CN117390141A - Agricultural socialization service quality user evaluation data analysis method - Google Patents

Agricultural socialization service quality user evaluation data analysis method

Info

Publication number
CN117390141A
CN117390141A CN202311690636.9A CN202311690636A CN117390141A CN 117390141 A CN117390141 A CN 117390141A CN 202311690636 A CN202311690636 A CN 202311690636A CN 117390141 A CN117390141 A CN 117390141A
Authority
CN
China
Prior art keywords
learner
layer
data
emotion
user evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311690636.9A
Other languages
Chinese (zh)
Other versions
CN117390141B (en)
Inventor
易文龙
黄暄
刘木华
程香平
殷华
徐亦璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Agricultural University
Institute of Applied Physics of Jiangxi Academy of Sciences
Original Assignee
Jiangxi Agricultural University
Institute of Applied Physics of Jiangxi Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Agricultural University, Institute of Applied Physics of Jiangxi Academy of Sciences filed Critical Jiangxi Agricultural University
Priority to CN202311690636.9A priority Critical patent/CN117390141B/en
Publication of CN117390141A publication Critical patent/CN117390141A/en
Application granted granted Critical
Publication of CN117390141B publication Critical patent/CN117390141B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the field of natural language processing and emotion analysis, and in particular relates to a method for analyzing user evaluation data on the quality of agricultural socialization services. The method applies unified label processing to user evaluation data and divides it into a training set and a test set; it sets up an emotion analysis layer (strong learner) comprising a coarse-grained emotion analysis layer integrated from component learners and a fine-grained emotion analysis layer integrated from several base learners; training yields decision weight lists for the component learners and the base learners. After the test set has been processed by the coarse-grained emotion analysis layer, it is passed into the corresponding positive, level or negative learner combinations, which output emotion tendency predictions; these predictions are then compared against the weight decision list to judge the user's service quality evaluation of the agricultural product in each emotion object dimension. By combining coarse- and fine-grained emotion analysis through ensemble learning, the method accurately predicts emotional tendencies across multiple fine-grained emotion objects and improves the generalization ability of emotion analysis.

Description

Agricultural socialization service quality user evaluation data analysis method
Technical Field
The invention belongs to the field of natural language processing emotion analysis, and particularly relates to an agricultural socialization service quality user evaluation data analysis method.
Background
Emotion analysis is an important branch of natural language processing, used mainly to identify and extract emotional tendencies, attitudes and emotions in text. Emotion analysis generally involves two analysis modes: text polarity classification and target (aspect-level) emotion analysis. The first directly judges, for a given piece of text, whether the expressed emotion is positive or negative; the second improves on text polarity classification by analyzing not only the text as a whole but also the emotion toward specific objects in the text — emotion object dimensions are identified and segmented through named entity recognition, and a model judges the emotional tendency toward each object. The two modes are commonly called coarse-grained and fine-grained emotion analysis. Where traditional coarse-grained classification predicts an emotional tendency inconsistent with the text content, fine-grained classification alleviates this "wrong label" problem to some extent; however, observation and analysis of past fine-grained emotion analysis experiments show that problems such as model overfitting, sample imbalance and prediction performance below the ideal level still remain.
Disclosure of Invention
To address the problem that single fine-grained emotion analysis cannot fully handle mismatches between text content and emotional tendency, the invention provides a method for analyzing user evaluation data on the quality of agricultural socialization services. The user evaluation text is processed by an emotion analysis method that combines coarse and fine granularity: rich and varied emotional features in the text are extracted through fused deep learning; the coarse- and fine-grained learners are combined into a single strong learner for analysis through a Bagging ensemble learning algorithm; decision weight lists are generated for the several learner groups; the test set is passed into the trained model; and finally the tendency of each emotion object is accurately determined based on fine-grained features.
The invention is realized by the following technical scheme.
The agricultural socialization service quality user evaluation data analysis method comprises the following steps:
step one: crawling user evaluation data of related website agricultural products through a crawler technology;
step two: performing data preprocessing operation on user evaluation data;
step three: performing unified label processing on the user evaluation data to obtain a processed user evaluation data set, and dividing the processed user evaluation data set into a training set and a testing set;
step four: set up the emotion analysis layer (strong learner), which comprises a coarse-grained emotion analysis layer and a fine-grained emotion analysis layer; the coarse-grained emotion analysis layer is provided with an ensemble of component learners formed from several different learners and classifies text into positive and negative emotional tendencies; the fine-grained emotion analysis layer is provided with an ensemble of base learners consisting of several identical learners;
step five: obtain the F1 values produced by training each learner in the emotion analysis layer and pass them into the decision weight layer; in the coarse-grained emotion analysis layer, each trained component learner receives its corresponding out-of-bag data to obtain an F1 value, and a decision weight list corresponding to each component learner is generated; in the fine-grained emotion analysis layer, each base learner analyzes emotion for every attribute in each piece of user evaluation data, receives out-of-bag data divided into positive and negative data for evaluation, and obtains the corresponding F1 values; positive learners, negative learners and level (horizontal) learners are divided according to the weights of the positive and negative data, and the weight decision layer establishes the decision weight list of the base learners;
step six: the fine-grained feature selection layer receives the data judged positive or negative by the coarse-grained emotion analysis layer on the test set and passes it into the level learner combination and the corresponding positive or negative learner combination; the emotion tendency predictions of each attribute object of the agricultural product, based on fine-grained feature division, are output; the prediction values within each attribute are then compared against the weight decision list of the base learners to judge the user's service quality evaluation of the agricultural product in each emotion object dimension, and the service quality evaluation result is output.
Further preferably, the component learner consists of a bi-directional coding characterization model (BERT) and a long-short term memory model (LSTM), the fine granularity emotion analysis layer comprising a plurality of bi-directional coding characterization models; the base learner is comprised of a pair of bi-directional coding characterization models.
Further preferably, the coarse-grained emotion analysis layer is integrated from L component learners; L sampling sets are drawn from the training set, and each sampling set is input into a component learner for training. The bidirectional coding characterization model in the component learner segments the user evaluation texts in the sampling set, obtains the dictionary-form index sequence codes corresponding to the token data of each user evaluation text, and automatically adds the classification character [CLS] and termination character [SEP] at the beginning and end of each sentence; then, in the standardization of multiple sentences, user evaluation texts of different lengths are unified to the same length, and a prompt mask is added to indicate to the bidirectional coding characterization model which returned digital codes are valid; when an evaluation text contains several sentences, sentence position codes of the tokens are added to indicate the sentence positions in the returned index sequence codes. The resulting user evaluation text token codes are passed into three embedding layers: the first is the word embedding layer, which multiplies each input token by an embedding matrix to raise its dimension; the second is a layer that judges the order relation between sentences, using 'A' to denote the first sentence and 'B' the second; the third is the position embedding layer, which initializes an embedding matrix and performs matrix multiplication according to the coded position value of each token. The three vectors obtained from the three embedding layers are added to obtain the final output word vector; finally, the output word vector is passed into a fully connected layer for binary classification, and the output prediction value processed by the Softmax() activation function is returned.
Further preferably, the long short-term memory model used in the component learner performs emotion classification of the user evaluation text through a word embedding layer, a long short-term memory layer and a fully connected layer; first, the segmented user evaluation text is preprocessed and passed into the word embedding model to obtain word vectors, the word vectors are passed into the long short-term memory layer to obtain a time series, a random inactivation layer is added during weight iteration, and a focal loss function is used during training.
Further preferably, the fine-grained emotion analysis layer is integrated from L base learners; the base learners perform extraction based on emotion triples, predicting all three elements: attribute, opinion, and attribute-opinion pair.
Further preferably, the internal model of the base learner consists of four parts: a pre-trained language model, an attribute and opinion sequence labeling layer, an attention mechanism layer and a classification layer. The bidirectional coding characterization model is used as the pre-trained language model encoder; a word segmentation processor is called to cut the user evaluation text, the classification character [CLS] and termination character [SEP] are added as the beginning and end of the sentence vector, each user evaluation text is processed through a prompt mask (attention mask) that identifies which positions of the fixed-length-n sentence vector are padding characters, and the contextual semantic representation R of the user evaluation text over different attributes is obtained through the internal coding layer (Encoder layer) of the bidirectional coding characterization model; the attribute and opinion sequence labeling layer extracts attribute-sequence and opinion-sequence features of the user evaluation text to obtain attribute-opinion pairs; the attribute-opinion pairs are passed into the attention mechanism layer, which produces the weighted representation after processing; and the classification layer obtains the emotion tendency prediction value corresponding to each attribute through a Softmax classifier.
Further preferably, the weight vectors in the attention mechanism layer are calculated as follows:
e_i = W_i·r_i, α_i = exp(e_i) / Σ_{j=1}^{n} exp(e_j);
s_i = tanh(α_i·r_i);
wherein α_i denotes the i-th weight vector of the attention mechanism layer, r_i denotes the i-th element of the contextual semantic representation R, W_i is the i-th linear-layer weight updated at each iteration, and e_i denotes the semantic code of the i-th evaluation text, so that different weights are assigned to attribute words of different importance in the user evaluation text; s_i is the weighted representation obtained by pooling the i-th attribute of the user evaluation text with the tanh function.
Further preferably, the specific process by which the classification layer obtains the emotion tendency prediction value corresponding to each attribute through the Softmax classifier is as follows:
p_i = Softmax(W_i·s_i + b_i);
wherein p_i denotes the emotion tendency prediction value of the i-th attribute in the user evaluation text and b_i denotes the i-th offset value obtained in the iterative process.
Further preferably, after training the L component learners and the L base learners in the emotion analysis layer, screening out data except for a sampling set by a for-loop method, dividing the data into positive data and negative data by supervised learning labels, transmitting the positive data and the negative data into a trained emotion analysis layer-strong learner, respectively generating a pair of positive F1 values and negative F1 values by each component learner in the coarse granularity emotion analysis layer, transmitting each pair of F1 values into a weight decision layer, and calculating reliability weights of different emotion tendencies on each component learner and the base learner by using the F1 values.
Further preferably, when the positive and negative F1 values of each base learner are obtained, the level (horizontal) learner, positive learner and negative learner are divided according to the positive and negative weights within the same base learner: when the weight of the positive category w_pos > the weight of the negative category w_neg, the base learner is a positive learner; when w_pos < w_neg, the base learner is a negative learner; and when w_pos = w_neg, the base learner is associated with the same weight for positive and negative data, has no biased emotional tendency, and is a level learner.
The invention uses the ensemble learning Bagging algorithm concept, combines coarse- and fine-grained emotion analysis under an improved weighting strategy, and improves the robustness and generalization ability of model training, thereby establishing a more accurate, fine-grained-feature-oriented model of emotion tendency analysis over evaluation attributes, so that the service quality of the provided agricultural products can be analyzed and predicted more accurately.
Drawings
FIG. 1 is a schematic diagram of a model structure of a method for analyzing agricultural socialization quality of service user evaluation data.
FIG. 2 is a schematic diagram of a coarse-grained BERT+long-short term memory model training process.
FIG. 3 is a schematic diagram of a fine-grained BERT+attention mechanism+classification model training process.
Detailed Description
The invention is further described in detail below with reference to the drawings and examples.
Referring to fig. 1, a method for analyzing agricultural socialization service quality user evaluation data includes the steps of:
step one: and crawling user evaluation data of related website agricultural products through a crawler technology. Each evaluation user name, evaluation content and evaluation score of each agricultural product detail page are respectively crawled and stored in a user evaluation data csv file;
step two: and carrying out data preprocessing operation on the user evaluation data. The data preprocessing operation includes removing duplicate rating content data; removing invalid contents such as punctuations, numbers, emoticons, line-feed symbols and the like in the text content by using a regular expression;
step three: unified label processing is carried out on the user evaluation data to obtain a processed user evaluation data set, and the processing is carried out according to 8: the scale of 2 is divided into training and test sets. The known crawled user evaluation data comprises five grades of 1-5 label data, the user evaluation data with the user score of 4-5 is marked as positive emotion label 1 score, 3-grade user evaluation data is also divided into negative labels due to the fact that the positive emotion labels are too many and unbalanced, and 1-3-grade user evaluation data is marked as negative emotion label 0 score, wherein the user evaluation data sample conditions are shown in table 1:
TABLE 1
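A minimal sketch of the label unification and 8:2 split of step three; the random seed and in-memory record format are assumptions:

```python
# Illustrative sketch of step three: 4-5 stars -> positive (1), 1-3 stars -> negative (0),
# then shuffle and split the labeled records 8:2 into training and test sets.
import random

def label_and_split(records, seed=42):
    """records: iterable of (text, score) pairs with score in 1..5."""
    labeled = [(text, 1 if int(score) >= 4 else 0) for text, score in records]
    random.Random(seed).shuffle(labeled)
    cut = int(0.8 * len(labeled))
    return labeled[:cut], labeled[cut:]
```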
Step four: the emotion analysis layer-strong learner is arranged and comprises a coarse granularity emotion analysis layer and a fine granularity emotion analysis layer, wherein the coarse granularity emotion analysis layer is provided with a component learner consisting of a plurality of different learners, and the fine granularity emotion analysis layer is provided with a base learner consisting of a plurality of identical learners and is trained. Under the concept of a Bagging basic algorithm framework, two parts of learner combinations are designed according to the emotion analysis characteristics of coarse granularity and fine granularity, and respectively form a coarse granularity emotion analysis layer and a fine granularity emotion analysis layer, and the coarse granularity emotion analysis layer and the fine granularity emotion analysis layer are combined into a strong learner to form an overall emotion analysis layer, which is shown in a specific figure 1. In the Bagging integrated learning algorithm, the same learner is set as a base learner, different learners are set as component learners, 5 bidirectional coding characterization models (BERT) and 5 long-short-term memory models (LSTM) are set in a coarse-granularity emotion analysis layer, 10 bidirectional coding characterization models are set in a fine-granularity emotion analysis layer, the coarse-granularity emotion analysis layer and the fine-granularity emotion analysis layer are trained by receiving L sampling sets (sampling set 1, sampling set 2, … and sampling set L) obtained by random sampling, training effects of each learner are tested by obtaining out-of-bag data (OOB 1, OOB2, …, OOBL), and distinguishing the out-of-bag data into positive out-of-bag data (OOB-POS) and negative out-of-bag data (OOB-NEG) according to emotion labels (0, 1), and decision weights corresponding to the learners are obtained in the whole emotion analysis layer.
The coarse-grained emotion analysis layer is integrated mainly from L component learners (the bidirectional coding characterization model and the long short-term memory model); the specific training flow is shown in fig. 2. First, the training set is passed to the choice() method of the numpy library to draw L sampling sets (L being the number of component learners), and each sampling set is input into a component learner for training. In the training process of the bidirectional coding characterization model within a component learner, the user evaluation texts in the sampling set are first segmented with the BertTokenizer method, the dictionary-form index sequence codes corresponding to the token data of each user evaluation text are obtained, and the classification character [CLS] and termination character [SEP] are automatically added at the beginning and end of each sentence. Then, in the standardization of multiple sentences, user evaluation texts of different lengths are unified to the same length; to avoid meaningless padding positions affecting the model, a prompt mask is added to indicate to the bidirectional coding characterization model which returned digital codes are valid. When an evaluation text contains several sentences, sentence position codes of the tokens are added to indicate the sentence positions in the returned index sequence codes. The resulting user evaluation text token codes are passed into three embedding layers: the first is the word embedding layer, which multiplies each input token by an embedding matrix to raise its dimension; the second is a layer that judges the order relation between sentences, using 'A' to denote the first sentence and 'B' the second; the third is the position embedding layer, which initializes an embedding matrix and performs matrix multiplication according to the coded position value of each token. The three vectors obtained from the three embedding layers are added to obtain the final output word vector H; the word vector processing is shown in formula (1) and the word vector composition in formula (2):
Q = [q_1, q_2, …, q_n] = [x_1, x_2, …, x_n]·W_q, K = [k_1, k_2, …, k_n] = [x_1, x_2, …, x_n]·W_k, V = [v_1, v_2, …, v_n] = [x_1, x_2, …, x_n]·W_v (1);
H = [h_1, h_2, …, h_n] = Q + K + V (2);
wherein W_q, W_k and W_v denote the embedding matrix of the word embedding layer, the embedding matrix of the sentence-order relation layer and the embedding matrix of the position embedding layer respectively; [x_1, x_2, …, x_n] denotes an input user evaluation text token-code vector, with x_1, x_2, …, x_n the 1st, 2nd, …, n-th token codes; [q_1, q_2, …, q_n] denotes the processed token embedding matrix, with q_1, q_2, …, q_n its 1st, 2nd, …, n-th parts; [k_1, k_2, …, k_n] denotes the processed sentence-order relation matrix, with k_1, k_2, …, k_n corresponding to the 1st, 2nd, …, n-th sentences; [v_1, v_2, …, v_n] denotes the processed position-coding matrix, with v_1, v_2, …, v_n the 1st, 2nd, …, n-th position codes; H = [h_1, h_2, …, h_n] denotes the output word vector matrix, with h_1, h_2, …, h_n the 1st, 2nd, …, n-th output word vectors; Q denotes the token matrix with the added [CLS] at the sentence start and [SEP] at the sentence end, K denotes the sentence-order relation matrix, V denotes the position-coding matrix, and n denotes the word vector dimension.
The final output word vector H is passed into the fully connected layer for binary classification, and the output prediction value processed by the Softmax() activation function is returned; the activation function is shown in formula (3):
ŷ = Softmax(y), Softmax(h_i) = exp(h_i) / Σ_{j=1}^{n} exp(h_j) (3);
wherein ŷ denotes the prediction label value of the bidirectional coding characterization model after processing by the activation function, y denotes the prediction label value before the activation function, h_i denotes the i-th element of the incoming word vector, and n denotes the number of word vectors in a piece of user evaluation text. Because of the sample imbalance problem, cross entropy is also introduced during training for balancing.
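As an illustrative, non-limiting sketch of the input preparation described above, the Hugging Face BertTokenizer adds [CLS]/[SEP], pads sentences to a common length and returns the prompt (attention) mask and sentence-order codes; the checkpoint name and example sentences are assumptions:

```python
# Illustrative sketch of the BERT-side input preparation for the component learner.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
encoded = tokenizer(
    ["包装很简单", "刀口不锋利，不推荐"],      # illustrative review sentences
    padding="max_length", truncation=True, max_length=64,
    return_tensors="pt",
)
# encoded["input_ids"]       -> index sequence codes with [CLS]/[SEP] added
# encoded["attention_mask"]  -> prompt mask marking valid (non-padding) positions
# encoded["token_type_ids"]  -> sentence-order codes ("A"/"B" sentences)
# These are fed into the embedding layers and the fully connected Softmax head
# described by formulas (1)-(3).
```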
The long short-term memory model used in the component learner performs emotion classification of the user evaluation text through a word embedding layer, a long short-term memory layer and a fully connected layer. First, the user evaluation text tokens W = {W_1, W_2, …, W_n} are preprocessed and passed into the word embedding model (Word2vec) to obtain the word vectors E = {e_1, e_2, …, e_n}, where W_1, W_2, …, W_n are the 1st, 2nd, …, n-th tokens of the user evaluation text and e_1, e_2, …, e_n the 1st, 2nd, …, n-th word vectors; the word vectors are passed into the long short-term memory layer to obtain the time series T = {t_1, t_2, …, t_n}, with t_1, t_2, …, t_n its 1st, 2nd, …, n-th elements. A random inactivation layer (Dropout) is added during weight iteration to prevent overfitting, and a focal loss function (Focal Loss) is used during training to address the data imbalance problem. When selecting the model for the single component learner, several models were tested in experiments, including tests on an open-source hotel review dataset; the specific results are shown in Table 2:
TABLE 2
Based on the data in Table 2, BERT and the long short-term memory model are finally selected as the component learners of the coarse-grained emotion analysis layer for model integration.
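A minimal PyTorch sketch of the long short-term memory component learner described above — word embedding layer, LSTM layer, dropout (random inactivation) and a fully connected head trained with a focal loss; dimensions and hyperparameters are illustrative assumptions:

```python
# Illustrative sketch of the LSTM component learner with dropout and focal loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMSentiment(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # could be initialised from Word2vec
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(0.5)                        # random inactivation layer
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)
        seq, _ = self.lstm(emb)                               # time-series output T
        return self.fc(self.dropout(seq[:, -1]))              # last time step -> classifier

def focal_loss(logits, targets, gamma=2.0):
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                                       # probability of the true class
    return ((1 - pt) ** gamma * ce).mean()                    # down-weight easy samples
```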
The fine-grained emotion analysis layer is integrated mainly from L base learners and is analyzed specifically with a BERT multi-classification model; the overall framework is shown in fig. 3. The base learners perform extraction based on emotion triples — attribute, opinion and attribute-opinion pair — and predict all three; after training, each base learner receives out-of-bag data (OOB), which is divided into positive and negative out-of-bag data for training.
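As a purely illustrative example of the emotion triples handled by the base learners, each review can be reduced to (attribute, opinion, polarity) items, with polarity following the four labels used later (-2 not mentioned, -1 negative, 0 neutral, 1 positive); the sample values are hypothetical:

```python
# Illustrative emotion triples extracted from a review (values are hypothetical).
triples = [
    ("包装", "简单", 0),     # attribute "packaging", opinion "simple", neutral
    ("刀口", "不锋利", -1),  # attribute "edge", opinion "not sharp", negative
    ("价格", "便宜", 1),     # attribute "price", opinion "cheap", positive
]
```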
The internal model of the base learner consists of four parts: a pre-trained language model, an attribute and opinion sequence labeling layer, an attention mechanism layer and a classification layer. First, the bidirectional coding characterization model is used as the pre-trained language model encoder; the word segmentation processor is called to cut the user evaluation text, and the resulting tokens, with the classification character [CLS] and termination character [SEP] added as the beginning and end, are processed through the prompt mask into a sentence vector X = [x_1, x_2, …, x_n] of the same fixed length n, where x_1, x_2, …, x_n are the 1st, 2nd, …, n-th elements of the sentence vector; the contextual semantic representation R of the user evaluation text over the different attributes is then obtained through the internal coding layer (Encoder layer) of the bidirectional coding characterization model, as shown in formula (4):
R = [r_1, r_2, …, r_n] = Encoder(X) (4);
wherein R, calculated through J coding-layer (Transformer) components, is the vector obtained after the sentence vector X of the user evaluation text enters the BERT semantic coding layer (Encoder); n denotes the length of the user evaluation text; and r_1, r_2, …, r_n are the 1st, 2nd, …, n-th elements of R.
The attribute and opinion sequence labeling layer extracts attribute-sequence and opinion-sequence features from the user evaluation text to obtain attribute-opinion pairs, i.e. the 1st, 2nd, …, m-th attribute-opinion prediction pairs, where the predicted value of each pair may be one of {[A-B], [A-I], [A-E], [O-B], [O-I], [O-E]}; A denotes an attribute, O denotes the opinion corresponding to the attribute, and B, I, E denote the specific position, B being the beginning, I the middle and E the end. The obtained attribute-opinion pairs are passed into the subsequent layers, comprising the attention mechanism layer and the classification layer. The attention mechanism layer reduces interference from other, irrelevant attribute information in the evaluation text; by assigning weights to attribute words of different importance within a sentence, the sentences and fragments most relevant to the specified attribute in each piece of evaluation data are located more accurately, which improves the rate of correct judgments of the attributes' emotional tendencies. The weight vectors in the attention mechanism layer are calculated as shown in formulas (5) and (6):
e_i = W_i·r_i, α_i = exp(e_i) / Σ_{j=1}^{n} exp(e_j) (5);
s_i = tanh(α_i·r_i) (6);
wherein α_i denotes the i-th weight vector of the attention mechanism layer, r_i denotes the i-th element of the contextual semantic representation R, W_i is the i-th linear-layer weight updated at each iteration, and e_i denotes the semantic code of the i-th evaluation text; in this way different weights are assigned to attribute words of different importance in the user evaluation text. s_i is the weighted representation obtained by pooling the i-th attribute of the user evaluation text with the tanh function.
After the weighted representation is obtained, the classification layer obtains the emotion tendency prediction value corresponding to each attribute through the Softmax classifier, as shown in formula (7):
p_i = Softmax(W_i·s_i + b_i) (7);
wherein p_i denotes the emotion tendency prediction value of the i-th attribute in the user evaluation text and b_i denotes the i-th offset value obtained in the iterative process. The specific results fall into four label indicators (-2 not mentioned, -1 negative, 0 neutral, 1 positive).
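A minimal PyTorch sketch of the attention mechanism layer and classification layer corresponding to formulas (5)-(7): per-token attention weights over the contextual representation R, tanh pooling into a weighted representation, and a Softmax classifier over the four tendency labels; dimensions are illustrative assumptions:

```python
# Illustrative sketch of attribute attention pooling followed by Softmax classification.
import torch
import torch.nn as nn

class AttributeAttentionClassifier(nn.Module):
    def __init__(self, hidden_dim=768, num_labels=4):
        super().__init__()
        self.attn = nn.Linear(hidden_dim, 1)          # per-token importance score
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, R, attention_mask):
        # R: (batch, seq_len, hidden_dim) contextual representation from the encoder
        scores = self.attn(R).squeeze(-1)                        # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)   # ignore padding positions
        alpha = torch.softmax(scores, dim=-1)                    # attention weights
        pooled = torch.tanh((alpha.unsqueeze(-1) * R).sum(dim=1))   # weighted representation
        return torch.softmax(self.classifier(pooled), dim=-1)       # tendency probabilities
```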
Step five: f1 values obtained by training of each learner in the emotion analysis layer are obtained, and the F1 values enter a decision weight layer; in the coarse granularity emotion analysis layer, each trained component learner receives corresponding out-of-bag data to obtain an F1 value, and a decision weight list corresponding to each component learner is generated; in a fine-granularity emotion analysis layer, a base learner analyzes emotion according to each attribute in each piece of user evaluation data, receives out-of-bag data, trains the out-of-bag data into positive data and negative data, obtains an F1 value, divides a positive learner, a negative learner and a horizontal learner according to weights of the positive data and the negative data, and establishes a decision weight list of the base learner by a weight decision layer.
After the L component learners and the L base learners in the emotion analysis layer have been trained, the out-of-bag data (OOB) lying outside each sampling set is screened out with a for-loop, divided by its supervised-learning labels into positive data (OOB-POS) and negative data (OOB-NEG), and passed into the trained emotion analysis layer (strong learner); each component learner in the coarse-grained emotion analysis layer generates a pair of positive and negative F1 values, each pair of F1 values is passed into the weight decision layer, and the F1 values are used to calculate the reliability weight of each emotional tendency for every component learner and base learner, as shown in formula (8):
w_jk = F1_jk / Σ_{j=1}^{L} F1_jk (8);
wherein w_jk denotes the weight of the positive or negative prediction of each learner, L denotes the number of learners, k indicates whether the class of the out-of-bag data is positive (1) or negative (0), and F1_jk denotes the F1 value of the j-th learner predicting class-k data; the normalization keeps the reliability weight differences between learners from becoming too large. The same procedure is adopted in the weight fusion of the base learners of the fine-grained emotion analysis layer; it differs from the coarse-grained layer in that, once the positive and negative F1 values of each base learner are obtained, the level learner, positive learner and negative learner are divided according to the positive and negative weights within the same base learner: when the weight of the positive category w_pos > the weight of the negative category w_neg, the base learner is a positive learner; when w_pos < w_neg, the base learner is a negative learner; and when w_pos = w_neg, the base learner is associated with the same weight for positive and negative data, has no biased emotional tendency, and is therefore named a level learner. When the weighting strategy algorithm of the fine-grained feature selection layer is executed, sample data judged as positive or negative is passed into these learners for attribute-level emotion tendency prediction. After the weight decision layer has established the decision weight lists of the component learners and the base learners of the emotion analysis layer respectively, the improved weighted evaluation strategy is used to predict the emotional tendency of the evaluation text.
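A minimal sketch of the weight decision layer under the assumption that formula (8) normalises each learner's OOB F1 values into reliability weights; the learner-type division follows the rule stated above:

```python
# Illustrative sketch: convert OOB F1 values to decision weights and tag learner types.
def decision_weights(f1_pos, f1_neg):
    """f1_pos / f1_neg: per-learner F1 values on OOB-POS / OOB-NEG data."""
    w_pos = [f / sum(f1_pos) for f in f1_pos]          # positive reliability weights
    w_neg = [f / sum(f1_neg) for f in f1_neg]          # negative reliability weights
    kinds = []
    for wp, wn in zip(w_pos, w_neg):
        if wp > wn:
            kinds.append("positive")                   # positive learner
        elif wp < wn:
            kinds.append("negative")                   # negative learner
        else:
            kinds.append("level")                      # level learner
    return w_pos, w_neg, kinds
```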
Step six: the fine-grained feature selection layer receives the data judged positive or negative by the coarse-grained emotion analysis layer on the test set and passes it into the level learner combination and the corresponding positive or negative learner combination; the emotion tendency predictions of each attribute object of the agricultural product, based on fine-grained feature division, are output; the prediction values within each attribute are then compared against the weight decision list of the base learners to judge the user's service quality evaluation of the agricultural product in each emotion object dimension, and the service quality evaluation result is output.
The specific process is shown in fig. 1. First, the test set data obtained in step three is input into the emotion analysis layer and enters the coarse-grained emotion analysis layer, which gives a preliminary emotional tendency for each test sample; this is input into the weight decision layer, which combines it with the weight list to judge whether the final emotional tendency of the test sample is positive or negative. The fine-grained feature selection layer then receives the data judged positive or negative in the coarse-grained prediction results, passes it into the level learner combination and the corresponding positive or negative learner combination, outputs the emotion tendency predictions of each attribute object of the agricultural product based on fine-grained feature division, performs the weighting strategy calculation on the per-attribute prediction values together with the weight list of the base learners, outputs the final emotional tendency, and judges the user's service quality evaluation of the agricultural product in each dimension. The process by which the coarse-fine-grained strong learner receives test set data is as follows. First a piece of sample data d is extracted: "The packaging is quite simple, but never mind that. The key point is that it has no edge — it is very blunt, rather hard and rather sticky; not recommended, it takes real effort! The price is low, the item is substantial, and the quality is good." The sample data d is input into the coarse-grained emotion analysis layer to obtain its overall emotional tendency, which is then analyzed together with the weight list in the weight decision layer; specifically, the weight values of 6 component learners are extracted for the prediction, as shown in Table 3:
TABLE 3
The prediction results of sample data d across the component learners are 1, 0, 1, 0, 0, 1; the positive weight value is 0.15+0.30+0.10=0.55 and the negative weight value is 0.15+0.25+0.20=0.60, so since 0.60 > 0.55 the final emotional tendency of sample data d in the coarse-grained emotion analysis layer is judged to be negative. The sample data d is then passed into the fine-grained feature selection layer for verification. In the weight decision layer, weights are assigned to each base learner of the fine-grained emotion analysis layer; according to the weights converted from the F1 values of the same learner, the learners are divided into level learners, positive learners and negative learners, and 6 base learners are selected to form a partial weight list, shown in Table 4:
TABLE 4
The sample data d judged negative in the coarse-grained emotion analysis layer is input into the fine-grained feature selection layer and enters the level learner group and the negative learner group for attribute-level emotion tendency prediction. The level learner group consists of learner No. 1 in Table 4, with weight w_1; the negative learner group consists of learners No. 4 and No. 6 in Table 4, with weights w_4 and w_6 respectively. The final result of the emotion tendency prediction takes one of 4 scores: 0, -1, 1 and -2, where 0 denotes neutral, -1 negative, 1 positive and -2 not mentioned. When the learner group's predictions for the same attribute object are identical, that prediction is output as the final result; when the predictions differ, the weight values of the selected learner group and the four emotion tendency prediction scores are normalized, as shown in formula (9):
x̃_i = (x_i − min(x)) / (max(x) − min(x)) (9);
wherein, when the selector takes 0, x_i denotes the weight value of the i-th learner of the selected learner group; when the selector takes 1, x_i denotes the i-th of the four emotion tendency prediction scores; x̃_i is the corresponding normalized value used in formula (10). The specific values are shown in Table 5:
TABLE 5
When the learners' predicted values differ, the weights of the learner group are multiplied by the corresponding normalized emotion tendency prediction scores and the results are added; the specific calculation is shown in formula (10):
M = w_1·s_1 + w_4·s_4 + w_6·s_6 (10);
wherein M denotes the decision value obtained after the normalized fusion of the learner group weights and the emotion tendency prediction scores; w_1, w_4 and w_6 denote the weight values of the 1st, 4th and 6th learners of the selected learner group; s_i denotes the normalized emotion tendency prediction score of the corresponding learner, the normalized scores lying in the range [0, 1]. When M falls in the first (lowest) of the four normalized intervals, the final emotional tendency is not mentioned (-2); when it falls in the second interval, the final emotional tendency is negative (-1); when it falls in the third interval, the final emotional tendency is neutral (0); and when it falls in the fourth (highest) interval, the final emotional tendency is positive (1). The specific results are shown in Table 6:
TABLE 6
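A minimal sketch of the weighting strategy applied to a test sample like d: the coarse-grained layer compares the summed positive and negative decision weights of the component learners, and the fine-grained layer fuses the selected learner group's normalised weights with the normalised tendency scores into the decision value M of formula (10); function names are illustrative:

```python
# Illustrative sketch of the coarse-grained weighted vote and fine-grained fusion.
def coarse_vote(predictions, w_pos, w_neg):
    """predictions: 0/1 outputs of the component learners; w_pos/w_neg: their decision weights."""
    pos = sum(w for p, w in zip(predictions, w_pos) if p == 1)
    neg = sum(w for p, w in zip(predictions, w_neg) if p == 0)
    return 1 if pos > neg else 0                     # overall coarse-grained tendency

def fine_decision(group_weights, normalised_scores):
    """group_weights: normalised weights of the selected learner group (e.g. w1, w4, w6);
    normalised_scores: normalised tendency score predicted by each of those learners."""
    return sum(w * s for w, s in zip(group_weights, normalised_scores))   # decision value M
```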
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. An agricultural socialization service quality user evaluation data analysis method is characterized by comprising the following steps:
step one: crawling user evaluation data of related website agricultural products through a crawler technology;
step two: performing data preprocessing operation on user evaluation data;
step three: performing unified label processing on the user evaluation data to obtain a processed user evaluation data set, and dividing the processed user evaluation data set into a training set and a testing set;
step four: the emotion analysis layer-strong learner is arranged, and comprises a coarse granularity emotion analysis layer and a fine granularity emotion analysis layer, wherein the coarse granularity emotion analysis layer is provided with a component learner integrated formed by a plurality of different learners and is divided into positive emotion tendencies and negative emotion tendencies; the fine granularity emotion analysis layer is provided with a base learner integration consisting of a plurality of identical learners;
step five: f1 values obtained by training of each learner in the emotion analysis layer are obtained, and the F1 values enter a decision weight layer; in the coarse granularity emotion analysis layer, each trained component learner receives corresponding out-of-bag data to obtain an F1 value, and a decision weight list corresponding to each component learner is generated; in a fine-granularity emotion analysis layer, a base learner analyzes emotion according to each attribute in each piece of user evaluation data, receives out-of-bag data, and trains the out-of-bag data into positive data and negative data to obtain an F1 value, a positive learner, a negative learner and a horizontal learner are divided according to weights of the positive data and the negative data, and a decision weight list of the base learner is established by a weight decision layer;
step six: the fine granularity feature selection layer receives data, which is judged to be positive or negative by the test set in the coarse granularity emotion analysis layer, and the data are respectively transmitted into the horizontal learner combination and the corresponding positive or negative learner combination, emotion tendency prediction of each attribute object of the agricultural product based on fine granularity feature division is output, then the emotion tendency prediction value in each attribute is compared according to a weight decision list of the base learner, service quality evaluation features of the user on the agricultural product in each emotion object dimension are judged, and a service quality evaluation result is output.
2. The agricultural socialization service quality user evaluation data analysis method according to claim 1, wherein the component learner is composed of a two-way coding characterization model and a long-short-term memory model, and the fine grain emotion analysis layer comprises a plurality of two-way coding characterization models; the base learner is comprised of a pair of bi-directional coding characterization models.
3. The agricultural socialization service quality user evaluation data analysis method according to claim 2, wherein the coarse granularity emotion analysis layer is integrated by adopting L component learners, L sampling sets are selected from the training sets, and each sampling set is input into the component learners for training; the method comprises the steps that user evaluation texts in a sampling set are segmented by a bidirectional coding representation model in a component learner, index sequence codes in dictionary forms corresponding to word segmentation data of each user evaluation text are obtained, and classification characters and termination characters are automatically added at the beginning and the end of each sentence; then, in the standardization processing of multiple sentences, user evaluation texts with different lengths are unified into the same length; adding a prompt mask to tell the bi-directional coding characterization model of the returned valid digital codes; when a certain evaluation text has a plurality of sentences, the sentence position codes of the word segmentation are added to indicate the sentence positions of the returned index sequence codes; transmitting the finally obtained user evaluation text word segmentation codes into three embedded layers, wherein the first layer is a word embedded layer, and multiplying each input user evaluation text word segmentation by an embedded matrix to improve the dimension; the second layer is a layer for judging the sequence relation among sentences, wherein 'A' is used for representing a first sentence, and 'B' is used for representing a second sentence; the third layer is a position embedding layer, and an embedding matrix is initialized to perform matrix multiplication according to the obtained coding position value of each user evaluation text word; adding the three vectors obtained by the three embedded layers to obtain a final output word vector; and finally, the output word vector is transmitted into a full-connection layer to be subjected to two-class processing, and an output predicted value processed by a Softmax () activation function is returned.
4. The agricultural socialization service quality user evaluation data analysis method according to claim 3, wherein the long-short term memory model used in the component learner performs emotion classification expression on the user evaluation text by setting a word embedding layer, a long-short term memory layer and a full connection layer; firstly, word vectors are obtained by data preprocessing operation of user evaluation text word segmentation and are transmitted into a word embedding model, a time sequence is obtained by transmitting the word vectors into a long-term and short-term memory layer, a random inactivation layer is added in a weight iteration process, and a focus loss function is used in a training process.
5. The method for analyzing agricultural socialization service quality user evaluation data according to claim 1, wherein the fine granularity emotion analysis layer is integrated by using L base learners, and the base learners extract based on emotion triples, and predict three of attribute, viewpoint and attribute-viewpoint.
6. The method for analyzing agricultural socialization service quality user evaluation data according to claim 5, wherein the internal model of the base learner consists of four parts, including a pre-trained language model, an attribute and viewpoint sequence labeling layer, an attention mechanism layer and a classification layer; the bidirectional coding characterization model is used as the pre-trained language model encoder, a word segmentation processor is called to cut the user evaluation text into tokens, classification characters and termination characters are added as the beginning and end of the sentence vector, each user evaluation text is processed through a prompt mask that identifies which positions of the fixed-length-n sentence vector are padding characters, and the contextual semantic representation R of the user evaluation text over different attributes is obtained through the internal coding layer of the bidirectional coding characterization model; the attribute and viewpoint sequence labeling layer extracts attribute-sequence and viewpoint-sequence features of the user evaluation text to obtain attribute-viewpoint pairs; the attribute-viewpoint pairs are passed into the attention mechanism layer, which produces the weighted representation after processing; and the classification layer obtains the emotion tendency prediction value corresponding to each attribute through a Softmax classifier.
7. The agricultural socialization service quality user evaluation data analysis method according to claim 6, wherein the weight vectors in the attention mechanism layer are calculated as follows:
e_i = W_i·r_i, α_i = exp(e_i) / Σ_{j=1}^{n} exp(e_j);
s_i = tanh(α_i·r_i);
wherein α_i denotes the i-th weight vector of the attention mechanism layer, r_i denotes the i-th element of the contextual semantic representation R, W_i is the i-th linear-layer weight updated at each iteration, and e_i denotes the semantic code of the i-th evaluation text, so that different weights are assigned to attribute words of different importance in the user evaluation text; s_i is the weighted representation obtained by pooling the i-th attribute of the user evaluation text with the tanh function.
8. The method for analyzing agricultural socialization service quality user evaluation data according to claim 7, wherein the specific process by which the classification layer obtains the emotion tendency prediction value corresponding to each attribute through the Softmax classifier is as follows:
p_i = Softmax(W_i·s_i + b_i);
wherein p_i denotes the emotion tendency prediction value of the i-th attribute in the user evaluation text and b_i denotes the i-th offset value obtained in the iterative process.
9. The agricultural socialization service quality user evaluation data analysis method according to claim 1, wherein after training of both the L component learners and the L base learners in the emotion analysis layer, the bag outside data except the sample set is screened out by a for-loop method, and the positive data and the negative data are divided into positive data and negative data by supervised learning labels, and are transmitted to the trained emotion analysis layer-strong learner, each component learner in the coarse granularity emotion analysis layer generates a pair of positive and negative F1 values, each pair of F1 values is transmitted to a weight decision layer, and reliability weights of different emotion tendencies on each component learner and the base learner are calculated by using the F1 values.
10. The agricultural socialization service quality user evaluation data analysis method according to claim 9, wherein, when the positive and negative F1 values of each base learner are obtained, the positive learner, negative learner and level learner are divided according to the positive and negative weights within the same base learner: when the weight of the positive category w_pos > the weight of the negative category w_neg, the base learner is a positive learner; when w_pos < w_neg, the base learner is a negative learner; and when w_pos = w_neg, the base learner is associated with the same weight for positive and negative data, has no biased emotional tendency, and is a level learner.
CN202311690636.9A 2023-12-11 2023-12-11 Agricultural socialization service quality user evaluation data analysis method Active CN117390141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311690636.9A CN117390141B (en) 2023-12-11 2023-12-11 Agricultural socialization service quality user evaluation data analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311690636.9A CN117390141B (en) 2023-12-11 2023-12-11 Agricultural socialization service quality user evaluation data analysis method

Publications (2)

Publication Number Publication Date
CN117390141A true CN117390141A (en) 2024-01-12
CN117390141B CN117390141B (en) 2024-03-08

Family

ID=89472464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311690636.9A Active CN117390141B (en) 2023-12-11 2023-12-11 Agricultural socialization service quality user evaluation data analysis method

Country Status (1)

Country Link
CN (1) CN117390141B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117973698A (en) * 2024-03-28 2024-05-03 中国汽车技术研究中心有限公司 Decision optimization system and method based on machine learning

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190378179A1 (en) * 2018-06-12 2019-12-12 Exxonmobil Upstream Research Company Method and System for Generating Contradiction Scores for Petroleum Geoscience Entities within Text using Associative Topic Sentiment Analysis.
CN109740154A (en) * 2018-12-26 2019-05-10 西安电子科技大学 A kind of online comment fine granularity sentiment analysis method based on multi-task learning
CN110532379A (en) * 2019-07-08 2019-12-03 广东工业大学 A kind of electronics information recommended method of the user comment sentiment analysis based on LSTM
CN110457480A (en) * 2019-08-16 2019-11-15 国网天津市电力公司 The construction method of fine granularity sentiment classification model based on interactive attention mechanism
WO2021164199A1 (en) * 2020-02-20 2021-08-26 齐鲁工业大学 Multi-granularity fusion model-based intelligent semantic chinese sentence matching method, and device
KR20220071059A (en) * 2020-11-23 2022-05-31 주식회사 셀바스에이아이 Method for evaluation of emotion based on emotion analysis model and device using the same
CN112667818A (en) * 2021-01-04 2021-04-16 福州大学 GCN and multi-granularity attention fused user comment sentiment analysis method and system
US20220237386A1 (en) * 2021-01-22 2022-07-28 Nec Laboratories America, Inc. Aspect-aware sentiment analysis of user reviews
CN113094502A (en) * 2021-03-22 2021-07-09 北京工业大学 Multi-granularity takeaway user comment sentiment analysis method
CN113688634A (en) * 2021-08-17 2021-11-23 中国矿业大学(北京) Fine-grained emotion analysis method
WO2023065619A1 (en) * 2021-10-21 2023-04-27 北京邮电大学 Multi-dimensional fine-grained dynamic sentiment analysis method and system
CN114722797A (en) * 2022-04-05 2022-07-08 东南大学 Multi-mode evaluation object emotion classification method based on grammar guide network
CN116737922A (en) * 2023-03-10 2023-09-12 云南大学 Tourist online comment fine granularity emotion analysis method and system
CN116362620A (en) * 2023-04-14 2023-06-30 广东梦鸢影业有限公司 Electronic commerce online customer service evaluation method

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CHEN, JIANFAN 等: "Ensemble Learning for Assessing Degree of Humor", 2022 INTERNATIONAL CONFERENCE ON BIG DATA, INFORMATION AND COMPUTER NETWORK (BDICN 2022), 20 April 2022 (2022-04-20), pages 492 - 498 *
OUYANG, YI 等: "SentiStory: multi-grained sentiment analysis and event summarization with crowdsourced social media data", PERSONAL AND UBIQUITOUS COMPUTING, 28 February 2017 (2017-02-28), pages 97 - 111 *
徐源音;柴玉梅;王黎明;刘箴;: "MF-CSEL: a multi-language text emotion analysis model", Journal of Chinese Computer Systems (小型微型计算机系统), no. 05, 14 May 2019 (2019-05-14), pages 1026 - 1033 *
方英兰;孙吉祥;韩兵;: "Research on text sentiment analysis methods based on BERT", Information Technology and Informatization (信息技术与信息化), no. 02, 28 February 2020 (2020-02-28), pages 108 - 111 *
王国薇;黄浩;周刚;胡英;: "Research on the application of ensemble learning in short text classification", Modern Electronics Technique (现代电子技术), no. 24, 15 December 2019 (2019-12-15), pages 140 - 145 *
金志刚;韩;朱琦;: "A sentiment analysis model combining deep learning and ensemble learning", Journal of Harbin Institute of Technology (哈尔滨工业大学学报), no. 11, 4 May 2018 (2018-05-04), pages 32 - 39 *
颜端武;杨雄飞;李铁军;: "Sentiment analysis of product reviews based on product feature tree and LSTM model", Information Studies: Theory & Application (情报理论与实践), no. 12, 16 August 2019 (2019-08-16), pages 134 - 138 *

Also Published As

Publication number Publication date
CN117390141B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN111198995B (en) Malicious webpage identification method
CN110119765A (en) A kind of keyword extracting method based on Seq2seq frame
CN111985239A (en) Entity identification method and device, electronic equipment and storage medium
CN111563166A (en) Pre-training model method for mathematical problem classification
CN117390141B (en) Agricultural socialization service quality user evaluation data analysis method
CN111597340A (en) Text classification method and device and readable storage medium
CN108614855A (en) A kind of rumour recognition methods
CN111460101B (en) Knowledge point type identification method, knowledge point type identification device and knowledge point type identification processor
CN112905739B (en) False comment detection model training method, detection method and electronic equipment
CN110110372B (en) Automatic segmentation prediction method for user time sequence behavior
CN109271513B (en) Text classification method, computer readable storage medium and system
CN109325125B (en) Social network rumor detection method based on CNN optimization
CN111538841B (en) Comment emotion analysis method, device and system based on knowledge mutual distillation
CN112416956A (en) Question classification method based on BERT and independent cyclic neural network
CN111145914B (en) Method and device for determining text entity of lung cancer clinical disease seed bank
CN116245110A (en) Multi-dimensional information fusion user standing detection method based on graph attention network
CN115391520A (en) Text emotion classification method, system, device and computer medium
CN111259115A (en) Training method and device for content authenticity detection model and computing equipment
CN112015760B (en) Automatic question-answering method and device based on candidate answer set reordering and storage medium
CN113360659A (en) Cross-domain emotion classification method and system based on semi-supervised learning
CN117094835A (en) Multi-target group classification method for social media content
CN114757183B (en) Cross-domain emotion classification method based on comparison alignment network
CN116450848A (en) Method, device and medium for evaluating computing thinking level based on event map
CN113051607B (en) Privacy policy information extraction method
CN115659990A (en) Tobacco emotion analysis method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant