CN111930940A - Text emotion classification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111930940A
Authority
CN
China
Prior art keywords
emotion
word
model
target text
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010748294.1A
Other languages
Chinese (zh)
Other versions
CN111930940B (en)
Inventor
吴双志
谢军
李沐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010748294.1A priority Critical patent/CN111930940B/en
Publication of CN111930940A publication Critical patent/CN111930940A/en
Application granted granted Critical
Publication of CN111930940B publication Critical patent/CN111930940B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical fields of natural language processing and machine learning, and in particular to a text emotion classification method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a target text to be classified, and processing the target text to obtain a word vector sequence of the target text; processing the word vector sequence with a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text; inputting the word vector sequence, the word sense vector sequence, and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text, the emotion generation model being an attention-based neural network model; and inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text. By introducing dynamically generated emotion vectors, the invention improves the accuracy of emotion classification.

Description

Text emotion classification method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of natural language processing and machine learning, in particular to a text emotion classification method and device, electronic equipment and a storage medium.
Background
With the development of internet technology, more and more network products are available. While using a network product or afterwards, a user usually inputs network text expressing his or her needs or viewpoints, for example entering a problem to be solved into an intelligent customer service system or a voice assistant, or publishing an evaluation of a service after using it. Because such web text contains rich emotion information corresponding to the user's psychological state, performing emotion analysis on the web text input by the user makes it possible to judge its emotion type and to determine an appropriate processing strategy according to the analysis result, so as to provide a network product that better meets the user's requirements. Emotion analysis technology is therefore widely applied in fields such as consumption decision-making, public opinion analysis, personalized recommendation, and human-computer interaction.
Traditional emotion classification methods generally use n-gram, lexical, and similar features, and classify with traditional models such as support vector machines and maximum entropy models. Recently, with the development of neural networks and deep learning, neural networks have also been applied in the emotion classification field. One existing emotion classification method uses an external dictionary to help emotion classification: it adds a self-attention network on top of a Long Short-Term Memory (LSTM) network and introduces the external dictionary into model training. Another is an emotion classification method using domain knowledge: it introduces domain knowledge on top of a pre-trained language model, aiming to extract the domain knowledge relevant to each emotion classification task so as to help the language model better adapt to new scenarios and new tasks. Existing emotion classification methods need to rely on an external emotion dictionary or on domain knowledge when training the classification model, which can lead to mismatch problems caused by inconsistent labeling, or to poor classification accuracy caused by insufficient dictionary coverage.
Disclosure of Invention
In view of the foregoing problems in the prior art, an object of the present invention is to provide a text emotion classification method, apparatus, electronic device and storage medium, which can improve the accuracy of emotion classification.
In order to solve the above problems, the present invention provides a text emotion classification method, including:
acquiring a target text to be classified, and processing the target text to obtain a word vector sequence of the target text;
processing the word vector sequence by utilizing a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text;
inputting the word vector sequence, the word meaning vector sequence and the semantic vector into a pre-trained emotion generating model to obtain an emotion vector of the target text, wherein the emotion generating model is a neural network model based on attention;
and inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
Another aspect of the present invention provides a text emotion classification apparatus, including:
the word vector sequence generating module is used for acquiring a target text to be classified, and processing the target text to obtain a word vector sequence of the target text;
the semantic vector generation module is used for processing the word vector sequence by utilizing a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text;
the emotion vector generation module is used for inputting the word vector sequence, the word sense vector sequence and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text, and the emotion generation model is a neural network model based on attention;
and the emotion classification module is used for inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
Another aspect of the present invention provides an electronic device, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the text emotion classification method as described above.
Another aspect of the present invention provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the text emotion classification method as described above.
Due to the technical scheme, the invention has the following beneficial effects:
(1) According to the text emotion classification method, the pre-trained emotion generation model dynamically generates context-based word emotion vectors for the words of the target text, and introducing these word emotion vectors into text emotion classification can effectively improve classification accuracy. Because a word emotion vector is dynamically generated for each word, the problem of one word carrying multiple emotions can be handled effectively, in contrast to fixed labels.
(2) The text emotion classification method does not need to rely on an external emotion dictionary during model training, avoiding both the mismatch problem caused by inconsistent labeling and the dictionary coverage problem. Because no external dictionary is needed, the text emotion classification method adapts well to new tasks and new data.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment of a text emotion classification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for classifying text emotion according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a neural network model according to an embodiment of the present invention;
FIG. 4 is a flowchart of a text emotion classification method according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of a user interface provided by one embodiment of the present invention;
FIG. 6 is a flow chart of a model training method provided by one embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a text emotion classification apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a text emotion classification apparatus according to another embodiment of the present invention;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies the theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science, and mathematics. Research in this field involves natural language, i.e., the language people use every day, so it is closely related to linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question-answering robots, knowledge graphs, and the like.
Specifically, the process of analyzing and processing the target text based on the semantic coding model, the emotion generation model and the emotion classification model to obtain the emotion classification result of the target text provided by the embodiment of the present invention relates to text processing, semantic understanding technology and the like in NLP.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
In order to make the objects, technical solutions and advantages disclosed in the embodiments of the present invention more clearly apparent, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the invention and are not intended to limit the embodiments of the invention. First, the embodiments of the present invention explain the following concepts:
Emotion classification: dividing text into predefined emotion classes (e.g., anger, joy, etc.) according to the text content.
BERT: short for Bidirectional Encoder Representations from Transformers, i.e., a bidirectional Transformer encoder, a method of training bidirectional language models on massive text. BERT is used to extract text features; it can fully describe character-level, word-level, sentence-level, and even inter-sentence relationship features, and is widely used for various natural language processing tasks.
LSTM: Long Short-Term Memory network, a hand-designed recurrent neural network structure that mainly addresses the vanishing and exploding gradient problems in long-sequence training. LSTM has found many applications in the field of natural language processing; LSTM-based systems are used for tasks such as machine translation, robot control, image analysis, document summarization, speech recognition, and image recognition.
MLP: Multi-Layer Perceptron, a multi-layer feedforward neural network. An MLP is a neural network composed of fully connected layers; it contains at least one hidden layer, and the output of each hidden layer is transformed through an activation function.
Referring to the specification and the accompanying fig. 1, a schematic diagram of an implementation environment of a text emotion classification method according to an embodiment of the present invention is shown. As shown in fig. 1, the implementation environment may include at least a terminal 110 and a server 120, and the terminal 110 and the server 120 may be directly or indirectly connected through wired or wireless communication, which is not limited in the present invention. For example, the terminal 110 may upload a corresponding target text that needs to be subjected to emotion classification to the server 120 through a wired or wireless communication manner, and the server 120 may display an emotion classification result of the target text to the terminal 110 through a wired or wireless communication manner.
Specifically, the terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The server 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services.
It should be noted that fig. 1 is only an example.
Referring to the specification, fig. 2 shows a flow of a text emotion classification method according to an embodiment of the present invention. The text emotion classification method provided by the embodiment of the invention can be applied to any scene needing emotion analysis on the text, for example, the method can be applied to emotion classification on the text input by the user in intelligent customer service and a voice assistant, and can also be applied to text emotion classification on user evaluation information. Specifically, as shown in fig. 2, the method may include:
s210: and acquiring a target text to be classified, and processing the target text to obtain a word vector sequence of the target text.
In the embodiment of the present invention, the target text may be any text, and the method for acquiring the target text is not specifically limited in the embodiment of the present invention, and for example, the target text may be acquired based on an application scenario.
Illustratively, if the text emotion classification method provided by the embodiment of the present invention is applied to scenarios such as intelligent question answering, intelligent customer service, or a voice assistant, such a scenario comprises a server and a terminal that implement the text emotion classification method. The terminal displays a user interface, obtains the text input by the user through that interface, and sends the text to the server; the text received by the server is the acquired target text.
It should be noted that the target text may be a passage input by the user, or a single sentence or word; the target text may be any content, for example a question or an expression of the user's opinion, and the embodiment of the present invention does not limit the content or length of the target text. In practical application, if the text requiring emotion analysis is a text paragraph comprising a plurality of sentences, the text paragraph can be used directly as the target text and classified with the text emotion classification method provided by the embodiment of the present invention to obtain the emotion classification result of the paragraph. Alternatively, the text paragraph may be split into sentences, each sentence after splitting taken as a target text and classified separately with the emotion classification method provided by the embodiment of the present invention, yielding an emotion classification result for each sentence, from which the emotion classification result of the text paragraph is then determined.
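The paragraph-level option described above — splitting a paragraph into sentences, classifying each, and deriving a paragraph-level result — can be sketched minimally as follows. The splitting rule and the majority-vote aggregation are illustrative assumptions; the patent does not prescribe either.

```python
import re
from collections import Counter

def split_sentences(paragraph: str) -> list[str]:
    """Split a paragraph on common sentence-ending punctuation
    (both ASCII and full-width forms), keeping non-empty pieces."""
    parts = re.split(r"[.!?\u3002\uff01\uff1f]+", paragraph)
    return [p.strip() for p in parts if p.strip()]

def aggregate_emotions(labels: list[str]) -> str:
    """One simple aggregation choice: majority vote over the
    per-sentence emotion classification results."""
    return Counter(labels).most_common(1)[0][0]

sentences = split_sentences("Great service! The food was cold. I will come back.")
# Per-sentence labels would come from the full classification pipeline;
# hard-coded here purely for illustration.
labels = ["joy", "anger", "joy"]
paragraph_emotion = aggregate_emotions(labels)
```

Majority voting is only one of several reasonable aggregation rules; averaging per-sentence class probabilities would be another.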
In a possible embodiment, the obtaining a target text to be classified, and processing the target text to obtain a word vector sequence of the target text may include:
performing word segmentation processing on the target text to obtain a plurality of words contained in the target text;
respectively encoding the words to obtain word vectors of the words;
and generating a word vector sequence of the target text according to the word vectors of the words.
In the embodiment of the present invention, any word segmentation method in the field of text processing may be used, as long as the words contained in the target text can be determined; the embodiment of the present invention does not limit the word segmentation method. For example, if the target text is "I am really depressed and unhappy today", word segmentation of the target text yields the words "I", "today", "really", "depressed", and "unhappy".
In the embodiment of the present invention, a word vector representing method of any word vector in the field of text processing may be used to determine the word vectors of a plurality of words included in the target text, which is not limited in the embodiment of the present invention. For example, word vectors for individual words may be determined by a pre-trained word vector model.
In the embodiment of the invention, the word vector sequence refers to the ordered word vectors, and the ordering of the word vectors corresponds to the front-back order of the words in the target text.
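The steps of S210 above — word segmentation, encoding each word as a vector, and assembling the ordered word vector sequence — can be sketched as follows. The toy vocabulary, the 3-dimensional vectors, and the `UNK` fallback are illustrative assumptions, since the patent does not fix a particular word vector model.

```python
# Hypothetical toy vocabulary; a real system would use a pre-trained
# word vector model with far higher dimensionality.
word_vectors = {
    "I":         [0.1, 0.0, 0.2],
    "today":     [0.0, 0.3, 0.1],
    "really":    [0.2, 0.1, 0.0],
    "depressed": [0.9, 0.1, 0.4],
    "unhappy":   [0.8, 0.2, 0.3],
}
UNK = [0.0, 0.0, 0.0]  # fallback for out-of-vocabulary words

def to_word_vector_sequence(words: list[str]) -> list[list[float]]:
    """Map segmented words to an ordered word vector sequence; the
    ordering preserves the words' order in the target text."""
    return [word_vectors.get(w, UNK) for w in words]

seq = to_word_vector_sequence(["I", "today", "really", "depressed", "unhappy"])
```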
S220: and processing the word vector sequence by utilizing a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text.
In the embodiment of the present invention, the word vector sequence may be input into a pre-trained semantic coding model, which generates a hidden layer vector for each word in the target text; each hidden layer vector is used as the word sense vector of the corresponding word, and a semantic vector of the target text is generated and output from the hidden layer vectors.
In one possible embodiment, the semantic coding model may include a third neural network based on self-attention;
the processing the word vector sequence by using the pre-trained semantic coding model to obtain the word sense vector sequence corresponding to the word vector sequence and the semantic vector of the target text may include:
processing the word vector sequence based on the third neural network to obtain word sense vectors of all words in the target text;
generating a word sense vector sequence corresponding to the word vector sequence according to the word sense vector of each word in the target text;
and aggregating the word meaning vector sequence to obtain the semantic vector of the target text.
In this embodiment of the present invention, the third neural network may be any of various neural networks, such as a Bidirectional Long Short-Term Memory (Bi-LSTM) network, a Recurrent Neural Network (RNN), a Convolutional Neural Network (CNN), or networks that mix RNN, CNN, and attention, which is not limited in this embodiment of the present invention. The word sense vector sequence refers to the ordered word sense vectors, whose ordering corresponds to the order of the words in the target text.
Illustratively, referring to FIG. 3 in conjunction with the description, the structure of a neural network model provided by an embodiment of the present invention is shown. The neural network model may include a semantic coding model 310, which may include a Bi-LSTM network. The word vector sequence {E_w1, E_w2, E_w3, …, E_wn} of the target text is input into the Bi-LSTM network, which outputs a word sense vector sequence {h_1, h_2, h_3, …, h_n} composed of the word sense vectors of all words in the target text, and a semantic vector h_s of the target text.
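The final step of S220 — aggregating the word sense vector sequence into a single semantic vector — can be sketched minimally. The patent leaves the concrete aggregation inside the encoder; mean pooling is an assumed stand-in here, and the 2-dimensional hidden states stand in for Bi-LSTM outputs.

```python
def mean_pool(word_sense_vectors: list[list[float]]) -> list[float]:
    """Aggregate word sense vectors h_1..h_n into one semantic vector
    h_s by averaging each dimension (mean pooling — an assumption;
    attention pooling or taking final hidden states are alternatives)."""
    n = len(word_sense_vectors)
    dim = len(word_sense_vectors[0])
    return [sum(v[i] for v in word_sense_vectors) / n for i in range(dim)]

h = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # stand-in hidden states
h_s = mean_pool(h)
```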
S230: and inputting the word vector sequence, the word meaning vector sequence and the semantic vector into a pre-trained emotion generating model to obtain an emotion vector of the target text, wherein the emotion generating model is a neural network model based on attention.
In the embodiment of the present invention, the emotion generation model may generate a context-based word emotion vector for each word of the target text according to the word vector sequence and the word sense vector sequence, and generate an emotion vector of the target text from the word emotion vectors and the semantic vector using an attention mechanism.
In practice, the same word may express different emotions in different sentences. Take "punishment" as an example: "The villain finally received his due punishment!" expresses the emotion of "anger", while "I broke the vase today and was punished by my mother." expresses the emotion of "sadness". The word emotion vectors generated in the embodiment of the invention are dynamic, and the emotion of a word can be adjusted according to its context, so the same word can yield different word emotion vectors in different scenarios.
In one possible embodiment, the emotion generation model may include a first neural network based on self-attention and an attention network;
with reference to fig. 4 in the description, the inputting the word vector sequence, the word sense vector sequence, and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text may include:
s410: analyzing the word vector sequence and the word sense vector sequence based on the first neural network to obtain word emotion vectors of all words in the target text.
S420: and generating a word emotion vector sequence of the target text according to the word emotion vectors of all words in the target text.
S430: analyzing the semantic vector and the word emotion vector sequence based on the attention network, and aggregating the word emotion vector sequence into an emotion vector of the target text.
In this embodiment of the present invention, the first neural network may be any of various neural networks, for example a bidirectional long short-term memory network or a convolutional neural network, which is not limited in this embodiment of the present invention. The word emotion vector sequence refers to the ordered word emotion vectors, whose ordering corresponds to the order of the words in the target text. The word emotion vectors of the words in the target text can also be used as byproducts for subsequent natural language processing tasks.
Illustratively, referring to FIG. 3 in conjunction with the description, the neural network model may further include an emotion generation model 320, which may include a Bi-LSTM network, a normalization layer, and an attention network. The word vector sequence {E_w1, E_w2, E_w3, …, E_wn} of the target text and the word sense vector sequence {h_1, h_2, h_3, …, h_n} are input into the Bi-LSTM network, which outputs the word emotion vectors e_1, e_2, e_3, …, e_n of the words in the target text. The word emotion vector sequence {e_1, e_2, e_3, …, e_n} of the target text is generated, normalized by the normalization layer, and then input together with the semantic vector h_s of the target text into the attention network to obtain the emotion vector e_s of the target text.
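The attention-based aggregation of S430 can be illustrated with dot-product attention, using the semantic vector h_s as the query over the word emotion vectors. This is a common instantiation of an attention network, assumed here for illustration; the patent does not specify the exact attention form.

```python
import math

def attention_pool(query: list[float], values: list[list[float]]) -> list[float]:
    """Dot-product attention: score each word emotion vector e_i against
    the query h_s, softmax the scores into weights, and return the
    weighted sum as the sentence-level emotion vector e_s."""
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in values]
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(values[0])
    return [sum(w * vec[i] for w, vec in zip(weights, values)) for i in range(dim)]

h_s = [1.0, 0.0]                       # stand-in semantic vector (query)
e_words = [[1.0, 0.0], [0.0, 1.0]]     # stand-in word emotion vectors
e_s = attention_pool(h_s, e_words)
```

With a zero query all words receive equal weight, so the result reduces to a plain average of the word emotion vectors.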
S240: and inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
In the embodiment of the present invention, an emotion set may be predetermined for the current application scenario, containing the emotion types common in that scenario; for example, the emotion set may include "happy", "sad", "surprised", "angry", "disgust", "fear", and the like. The emotion classification model can determine the target text's score for each emotion type in the emotion set according to the semantic vector, and finally determine the emotion classification result of the target text in combination with the emotion vector.
It should be noted that different emotion sets may exist in different application scenarios, and the emotion types and the number of emotion types of the emotion sets in different scenarios may also be different, which is not limited in the embodiment of the present invention.
In one possible embodiment, the sentiment classification model may include a second neural network and a classification network;
the inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text may include:
processing the semantic vector based on the second neural network to obtain a classification feature vector of the target text;
and analyzing the classification feature vector and the emotion vector based on the classification network to obtain an emotion classification result of the target text.
In this embodiment of the present invention, the second neural network may be any of various neural networks, for example a multilayer perceptron or a convolutional neural network, and the classification network may likewise be any of various networks, for example a Softmax classification network, which is not limited in this embodiment of the present invention.
Illustratively, referring to fig. 3 in conjunction with the description, the neural network model may further include an emotion classification model 330, which may include an MLP network and a Softmax classification network. Inputting the semantic vector h_s of the target text into the MLP network outputs the classification feature vector of the target text, and inputting the classification feature vector together with the emotion vector e_s of the target text into the Softmax classification network outputs the emotion classification result of the target text.
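A toy sketch of this classification step, assuming the classification feature vector and emotion vector e_s are concatenated and scored by a linear layer followed by Softmax; the emotion set, weight matrix, and input values are made up for illustration and do not come from the patent.

```python
import math

EMOTIONS = ["joy", "sadness", "anger"]  # example emotion set; scenario-specific

def softmax(x: list[float]) -> list[float]:
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def classify(features: list[float], emotion_vec: list[float],
             weights: list[list[float]]) -> str:
    """Score each emotion class with a linear layer over the concatenated
    [classification features; emotion vector], then softmax and argmax.
    The weight matrix stands in for the trained classification network."""
    x = features + emotion_vec
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in weights]
    probs = softmax(logits)
    return EMOTIONS[probs.index(max(probs))]

# Toy weights chosen so the second class dominates for this input.
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 2.0, 0.0, 1.0],
     [0.0, 0.0, 1.0, 0.0]]
label = classify([0.1, 0.9], [0.2, 0.5], W)
```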
In one possible embodiment, the method may further comprise determining a response text according to the emotion classification result of the target text and feeding the response text back to the terminal. For example, in an intelligent question-answering system, corresponding answer text sets can be constructed in advance for different emotion classification results; after the emotion classification result of a target text is obtained, a matched answer text can be selected from the corresponding answer text set based on the emotion classification result and fed back to the terminal as the response text. The selection may use, for example, a text matching algorithm; this is not particularly limited in the embodiment of the present invention.
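A minimal sketch of the answer-set lookup described above. The emotion keys and answer texts are hypothetical examples; a real system would select from larger sets, for instance with a text matching algorithm.

```python
import random

# hypothetical answer-text sets built in advance per emotion classification result
ANSWER_SETS = {
    "happy": ["Glad to hear that! Keep smiling."],
    "sad":   ["Don't cry; there is always sunshine after the rain."],
}
DEFAULT_ANSWERS = ["I see. Tell me more."]

def pick_response(emotion, rng=random):
    """Select a matched answer text from the set for this emotion,
    falling back to a default set for emotions without a dedicated set."""
    return rng.choice(ANSWER_SETS.get(emotion, DEFAULT_ANSWERS))
```

For example, `pick_response("sad")` returns a consoling answer text, while an emotion with no dedicated set falls through to the default answers.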
The following description takes an intelligent question-answering system as an example, where the intelligent question-answering system includes a terminal and a server, and when the intelligent question-answering system implements intelligent question-answering, the intelligent question-answering system may include the following steps:
1) The terminal obtains a target text input by a user through a client and sends a question-answer request to a server, where the question-answer request includes a text identifier and the target text, and the text identifier is used to mark the target text so that different target texts can be distinguished.
2) After receiving the question-answer request, the server performs emotion classification on the target text based on the semantic coding model, the emotion generation model and the emotion classification model, using the emotion classification method provided by the method embodiment of the present invention, obtains the emotion classification result, determines a corresponding answer text according to the emotion classification result, and returns the answer text to the terminal.
3) After receiving the answer text, the terminal can display the answer text through an interactive interface of the client.
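The server side of the three steps above can be sketched as a single handler. The field names such as `text_id`, and the stub classifier and response picker, are assumptions for illustration only.

```python
def handle_qa_request(request, classify_fn, pick_response_fn):
    """Step 2): classify the target text, choose an answer text, and
    echo the text identifier so the terminal can match the reply to
    the question it sent."""
    emotion = classify_fn(request["target_text"])
    return {
        "text_id": request["text_id"],       # distinguishes target texts
        "emotion": emotion,
        "answer_text": pick_response_fn(emotion),
    }

# usage with trivial stubs in place of the trained models
reply = handle_qa_request(
    {"text_id": "t-1", "target_text": "I am very happy today."},
    classify_fn=lambda text: "happy",
    pick_response_fn=lambda emotion: "Glad to hear that!",
)
```

Echoing the identifier is what lets the client of step 3) pair each displayed answer with the message the user sent.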
The interaction process based on the above intelligent question-answering system is described by taking the user interface shown in fig. 5 as an example. The user can input and send the target text in the session interface 51 through the operation control 52 for inputting a message text and the operation control 53 for triggering sending of the message text. After receiving the target text, the server performs emotion classification on the target text, determines a corresponding answer text according to the emotion classification result, and returns the answer text to the terminal. The terminal may present the answer text through the session interface 51. For example, when the target text input by the user is "I am unhappy today, full of worries.", after the emotion classification result is obtained by the emotion classification method provided by the embodiment of the present invention, the answer text obtained based on the emotion classification result may be "Don't cry; there is always sunshine after the rain."
In addition to the above examples, the method provided by the embodiment of the present invention can also be applied to application program review analysis, online product review analysis, intelligent customer service for online businesses, voice assistants, and the like, thereby endowing a robot with a certain emotional capability.
Referring to the specification, fig. 6 is a flow chart illustrating a model training method according to an embodiment of the invention. As shown in fig. 6, the text emotion classification method may further include the step of training the semantic coding model, the emotion generation model and the emotion classification model; the step of training the semantic coding model, the emotion generating model and the emotion classification model may include:
s610: the method comprises the steps of constructing a preset neural network model, wherein the preset neural network model comprises a preset semantic coding model, a preset emotion generating model and a preset emotion classification model, the preset semantic coding model comprises a third neural network based on self-attention, the preset emotion generating model comprises a first neural network and an attention network based on self-attention, and the preset emotion classification model comprises a second neural network and a classification network.
In one possible embodiment, the first neural network comprises a bidirectional long short-term memory network or a convolutional neural network, the second neural network comprises a multilayer perceptron network or a convolutional neural network, and the third neural network comprises a bidirectional long short-term memory network, a recurrent neural network or a convolutional neural network. Illustratively, the preset semantic coding model may include a Bi-LSTM network, the preset emotion generation model may include a Bi-LSTM network, a normalization layer and an attention network, and the preset emotion classification model may include an MLPs network and a Softmax classification network.
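A reduced sketch of step S610: each sub-model is represented only by randomly initialized parameter matrices so that the three-part structure is visible. All dimensions are hypothetical, and the Bi-LSTM/attention internals are deliberately omitted.

```python
import numpy as np

def build_preset_model(d_word=16, d_hidden=32, n_emotions=6, seed=0):
    """Construct the preset neural network model as three named
    sub-models holding untrained parameter matrices."""
    rng = np.random.default_rng(seed)

    def lin(n_out, n_in):
        return rng.normal(0, 0.1, size=(n_out, n_in))

    return {
        # preset semantic coding model: self-attention based third network
        "semantic_coding": {"W_q": lin(d_hidden, d_word),
                            "W_k": lin(d_hidden, d_word),
                            "W_v": lin(d_hidden, d_word)},
        # preset emotion generation model: first network + attention network
        "emotion_generation": {"W_rnn": lin(d_hidden, d_word + d_hidden),
                               "W_att": lin(1, d_hidden)},
        # preset emotion classification model: second network + classifier
        "emotion_classification": {"W_mlp": lin(d_hidden, d_hidden),
                                   "W_softmax": lin(n_emotions, d_hidden)},
    }

model = build_preset_model()
```

Training (S630) would then update exactly these three groups of parameters jointly.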
S620: and acquiring a training text set, wherein the training text set comprises a plurality of training texts and corresponding emotion labels.
In the embodiment of the present invention, the training texts in the training text set may be corpus texts labeled with emotion type labels in a target field (for example, an intelligent question and answer field), and the labels may be manually labeled labels for labeling the emotion types of the training texts. In practical application, public data sets can be used as training text sets for training the semantic coding model, the emotion generation model and the emotion classification model.
S630: and training the preset neural network model by using the training texts in the training text set, and adjusting the model parameters of the preset semantic coding model, the preset emotion generation model and the preset emotion classification model in the training process until the output result of the preset neural network model is matched with the emotion label of the training text to obtain the semantic coding model, the emotion generation model and the emotion classification model.
In the embodiment of the invention, machine learning training can be carried out based on a BERT model, an LSTM network and an MLPs network, model parameters of the preset semantic coding model, the preset emotion generation model and the preset emotion classification model are adjusted according to the value of a loss function in the training process until the loss function is converged, then the preset semantic coding model corresponding to the current model parameters is used as a trained semantic coding model, the preset emotion generation model corresponding to the current model parameters is used as a trained emotion generation model, and the preset emotion classification model corresponding to the current model parameters is used as a trained emotion classification model.
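The adjust-until-convergence loop of S630 can be sketched as follows, with the full three-part model replaced by a simple linear scorer on toy data; the learning rate and tolerance values are arbitrary illustrative choices.

```python
import numpy as np

def train_until_converged(X, y, n_classes, lr=0.5, tol=1e-5, max_epochs=1000):
    """Update the parameters from the value of the loss function each
    epoch and stop once the per-epoch loss change falls below tol
    (the 'loss function converged' stopping test)."""
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.01, size=(n_classes, X.shape[1]))
    prev_loss = np.inf
    for _ in range(max_epochs):
        logits = X @ W.T
        logits -= logits.max(axis=1, keepdims=True)      # stable softmax
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        loss = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
        if prev_loss - loss < tol:                       # converged
            break
        prev_loss = loss
        grad = p.copy()
        grad[np.arange(len(y)), y] -= 1.0                # d(loss)/d(logits)
        W -= lr * (grad.T @ X) / len(y)                  # adjust parameters
    return W, loss

# toy stand-in for a labeled training text set: 2-d features, 2 emotion labels
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
W, final_loss = train_until_converged(X, y, n_classes=2)
```

The parameters corresponding to the moment of convergence are kept as the trained model, just as the three trained sub-models are taken from the current parameters in the passage above.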
Verification on the public data set SemEval 2018 shows that the emotion classification method provided by the embodiment of the present invention clearly outperforms existing methods: the accuracy on the test set of this data set reaches 59.3%, an improvement of 5% over the method of Baziotis et al. (2018).
In summary, the text emotion classification method of the present invention dynamically generates a context-based word emotion vector for each word of the target text through the pre-trained emotion generation model and introduces these word emotion vectors into text emotion classification, which can effectively improve the accuracy of emotion classification. Because a word emotion vector is generated dynamically for each word, the problem of one word carrying different emotions in different contexts can be handled effectively, in contrast to fixed emotion labels. The method does not need to rely on an external emotion dictionary during model training, avoiding both the mismatch problem caused by inconsistent labeling and the dictionary coverage problem. Because no external dictionary is needed, the method also adapts well to new tasks and new data.
Referring to fig. 7 of the accompanying drawings, fig. 7 shows the structure of a text emotion classification apparatus according to an embodiment of the present invention. As shown in fig. 7, the apparatus may include:
the word vector sequence generating module 710 is configured to obtain a target text to be classified, and process the target text to obtain a word vector sequence of the target text;
a semantic vector generation module 720, configured to process the word vector sequence by using a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text;
an emotion vector generation module 730, configured to input the word vector sequence, the word sense vector sequence, and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text, where the emotion generation model is an attention-based neural network model;
and the emotion classification module 740 is configured to input the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
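The data flow between modules 710-740 can be sketched as plain function composition; the stub callables below stand in for the trained models and are purely illustrative.

```python
def classify_pipeline(text, word_vec_mod, semantic_mod, emotion_mod, classify_mod):
    """Each module's output feeds the next, mirroring modules 710-740."""
    word_seq = word_vec_mod(text)                    # 710: word vector sequence
    sense_seq, h_s = semantic_mod(word_seq)          # 720: word sense sequence + semantic vector
    e_s = emotion_mod(word_seq, sense_seq, h_s)      # 730: emotion vector
    return classify_mod(h_s, e_s)                    # 740: emotion classification result

# trivial stubs: one scalar 'vector' per word, everything summed
result = classify_pipeline(
    "so happy today",
    word_vec_mod=lambda t: [len(w) for w in t.split()],
    semantic_mod=lambda seq: (seq, sum(seq)),
    emotion_mod=lambda seq, sense, h: h % 3,
    classify_mod=lambda h, e: ("happy", h, e),
)
```

Replacing the stubs with the trained semantic coding, emotion generation, and emotion classification models yields the apparatus of fig. 7.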
In one possible embodiment, the apparatus may further include a model training module 750, the model training module 750 being configured to train the semantic coding model, the emotion generation model and the emotion classification model; referring to fig. 8, the model training module 750 may include:
the model construction unit 751 is used for constructing a preset neural network model, the preset neural network model comprises a preset semantic coding model, a preset emotion generation model and a preset emotion classification model, the preset semantic coding model comprises a third neural network based on self attention, the preset emotion generation model comprises a first neural network and an attention network based on self attention, and the preset emotion classification model comprises a second neural network and a classification network;
a training text set obtaining unit 752, configured to obtain a training text set, where the training text set includes a plurality of training texts and corresponding emotion labels;
a model training unit 753 configured to train the preset neural network model using the training texts in the training text set, and adjust model parameters of the preset semantic coding model, the preset emotion generating model, and the preset emotion classification model in a training process until an output result of the preset neural network model matches an emotion label of the training text, so as to obtain the semantic coding model, the emotion generating model, and the emotion classification model.
In one possible embodiment, the first neural network comprises a bidirectional long short-term memory network or a convolutional neural network, the second neural network comprises a multilayer perceptron network or a convolutional neural network, and the third neural network comprises a bidirectional long short-term memory network, a recurrent neural network or a convolutional neural network.
It should be noted that the embodiments of the present invention provide embodiments of apparatuses based on the same inventive concept as the embodiments of the method described above. In the device provided in the above embodiment, when the functions of the device are implemented, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above.
The embodiment of the present invention further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the text emotion classification method provided in the above method embodiment.
The memory can be used for storing software programs and modules, and the processor performs various functional applications and emotion classification by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by functions, and the like, and the data storage area may store data created according to use of the apparatus. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The method embodiments provided by the embodiments of the present invention may be executed in a terminal, a server, or a similar computing device, that is, the electronic device may include a terminal, a server, or a similar computing device. Taking the operation on the server as an example, as shown in fig. 9, it shows a schematic structural diagram of the server for operating the text emotion classification method according to an embodiment of the present invention. The server 900 may vary widely in configuration or performance, and may include one or more Central Processing Units (CPUs) 910 (e.g., one or more processors) and memory 930, one or more storage media 920 (e.g., one or more mass storage devices) storing applications 923 or data 922. Memory 930 and storage media 920 may be, among other things, transient or persistent storage. The program stored in the storage medium 920 may include one or more modules, each of which may include a series of instruction operations in a server. Still further, the central processor 910 may be configured to communicate with the storage medium 920, and execute a series of instruction operations in the storage medium 920 on the server 900. The server 900 may also include one or more power supplies 960, one or more wired or wireless network interfaces 950, one or more input-output interfaces 940, and/or one or more operating systems 921, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The input/output interface 940 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the server 900. In one example, the input/output Interface 940 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the input/output interface 940 may be a Radio Frequency (RF) module for communicating with the internet in a wireless manner, and the wireless communication may use any communication standard or protocol, including but not limited to Global System for mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 9 is merely illustrative and that the server 900 may include more or fewer components than shown in fig. 9 or have a different configuration than shown in fig. 9.
An embodiment of the present invention further provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the text emotion classification method provided in the above method embodiment.
Optionally, in an embodiment of the present invention, the computer-readable storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
An embodiment of the invention also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to enable the computer device to execute the text emotion classification method provided in various optional implementation manners in the method embodiment.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device, terminal and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A text emotion classification method is characterized by comprising the following steps:
acquiring a target text to be classified, and processing the target text to obtain a word vector sequence of the target text;
processing the word vector sequence by utilizing a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text;
inputting the word vector sequence, the word meaning vector sequence and the semantic vector into a pre-trained emotion generating model to obtain an emotion vector of the target text, wherein the emotion generating model is a neural network model based on attention;
and inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
2. The method of claim 1, wherein the emotion generation model comprises a first neural network and an attention network based on self-attention;
the step of inputting the word vector sequence, the word sense vector sequence and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text comprises:
analyzing the word vector sequence and the word sense vector sequence based on the first neural network to obtain word emotion vectors of all words in the target text;
generating a word emotion vector sequence of the target text according to the word emotion vectors of all words in the target text;
analyzing the semantic vector and the word emotion vector sequence based on the attention network, and aggregating the word emotion vector sequence into an emotion vector of the target text.
3. The method of claim 1 or 2, wherein the sentiment classification model comprises a second neural network and a classification network;
the step of inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text comprises the following steps:
processing the semantic vector based on the second neural network to obtain a classification feature vector of the target text;
and analyzing the classification feature vector and the emotion vector based on the classification network to obtain an emotion classification result of the target text.
4. The method of claim 1 or 2, wherein the semantic coding model comprises a self-attention based third neural network;
the processing the word vector sequence by using the pre-trained semantic coding model to obtain the word sense vector sequence corresponding to the word vector sequence and the semantic vector of the target text comprises:
processing the word vector sequence based on the third neural network to obtain word sense vectors of all words in the target text;
generating a word sense vector sequence corresponding to the word vector sequence according to the word sense vector of each word in the target text;
and aggregating the word meaning vector sequence to obtain the semantic vector of the target text.
5. The method according to claim 1 or 2, wherein the obtaining of the target text to be classified and the processing of the target text to obtain the word vector sequence of the target text comprises:
performing word segmentation processing on the target text to obtain a plurality of words contained in the target text;
respectively encoding the words to obtain word vectors of the words;
and generating a word vector sequence of the target text according to the word vectors of the words.
6. The method of claim 1, further comprising the step of training the semantic coding model, the emotion generation model, and the emotion classification model;
the training the semantic coding model, the emotion generating model and the emotion classification model comprises:
constructing a preset neural network model, wherein the preset neural network model comprises a preset semantic coding model, a preset emotion generation model and a preset emotion classification model, the preset semantic coding model comprises a third neural network based on self attention, the preset emotion generation model comprises a first neural network and an attention network based on self attention, and the preset emotion classification model comprises a second neural network and a classification network;
acquiring a training text set, wherein the training text set comprises a plurality of training texts and corresponding emotion labels thereof;
and training the preset neural network model by using the training texts in the training text set, and adjusting the model parameters of the preset semantic coding model, the preset emotion generation model and the preset emotion classification model in the training process until the output result of the preset neural network model is matched with the emotion label of the training text to obtain the semantic coding model, the emotion generation model and the emotion classification model.
7. The method of claim 6, wherein the first neural network comprises a bidirectional long short-term memory network or a convolutional neural network, wherein the second neural network comprises a multilayer perceptron network or a convolutional neural network, and wherein the third neural network comprises a bidirectional long short-term memory network, a recurrent neural network, or a convolutional neural network.
8. A text emotion classification device, comprising:
the word vector sequence generating module is used for acquiring a target text to be classified, and processing the target text to obtain a word vector sequence of the target text;
the semantic vector generation module is used for processing the word vector sequence by utilizing a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text;
the emotion vector generation module is used for inputting the word vector sequence, the word sense vector sequence and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text, and the emotion generation model is a neural network model based on attention;
and the emotion classification module is used for inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
9. An electronic device, comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the text emotion classification method according to any of claims 1-7.
10. A computer-readable storage medium, wherein at least one instruction or at least one program is stored in the computer-readable storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the text emotion classification method according to any one of claims 1 to 7.
CN202010748294.1A 2020-07-30 2020-07-30 Text emotion classification method and device, electronic equipment and storage medium Active CN111930940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010748294.1A CN111930940B (en) 2020-07-30 2020-07-30 Text emotion classification method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111930940A true CN111930940A (en) 2020-11-13
CN111930940B CN111930940B (en) 2024-04-16

Family

ID=73315432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010748294.1A Active CN111930940B (en) 2020-07-30 2020-07-30 Text emotion classification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111930940B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784048A (en) * 2021-01-26 2021-05-11 海尔数字科技(青岛)有限公司 Method, device and equipment for emotion analysis of user questions and storage medium
CN112820412A (en) * 2021-02-03 2021-05-18 东软集团股份有限公司 User information processing method and device, storage medium and electronic equipment
CN112836053A (en) * 2021-03-05 2021-05-25 三一重工股份有限公司 Man-machine conversation emotion analysis method and system for industrial field
CN113177994A (en) * 2021-03-25 2021-07-27 云南大学 Network social emoticon synthesis method based on image-text semantics, electronic equipment and computer readable storage medium
CN113469197A (en) * 2021-06-29 2021-10-01 北京达佳互联信息技术有限公司 Image-text matching method, device, equipment and storage medium
CN113569584A (en) * 2021-01-25 2021-10-29 腾讯科技(深圳)有限公司 Text translation method and device, electronic equipment and computer readable storage medium
CN113705243A (en) * 2021-08-27 2021-11-26 电子科技大学 Emotion analysis method
CN113780610A (en) * 2020-12-02 2021-12-10 北京沃东天骏信息技术有限公司 Customer service portrait construction method and device
CN113806541A (en) * 2021-09-16 2021-12-17 北京百度网讯科技有限公司 Emotion classification method and emotion classification model training method and device
CN114048319A (en) * 2021-11-29 2022-02-15 中国平安人寿保险股份有限公司 Attention mechanism-based humor text classification method, device, equipment and medium
WO2022142593A1 (en) * 2020-12-28 2022-07-07 深圳壹账通智能科技有限公司 Text classification method and apparatus, electronic device, and readable storage medium
WO2022141861A1 (en) * 2020-12-31 2022-07-07 平安科技(深圳)有限公司 Emotion classification method and apparatus, electronic device, and storage medium
CN116108859A (en) * 2023-03-17 2023-05-12 美云智数科技有限公司 Emotional tendency determination, sample construction and model training methods, devices and equipment
WO2023069017A3 (en) * 2021-10-19 2023-06-01 Grabtaxi Holdings Pte. Ltd. System and method for recognizing sentiment of user's feedback
CN116244440A (en) * 2023-02-28 2023-06-09 深圳市云积分科技有限公司 Text emotion classification method, device, equipment and medium
CN116631583A (en) * 2023-05-30 2023-08-22 华脑科学研究(珠海横琴)有限公司 Psychological dispersion method, device and server based on big data of Internet of things

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316654A (en) * 2017-07-24 2017-11-03 湖南大学 Emotion identification method based on DIS NV features
CN108536681A (en) * 2018-04-16 2018-09-14 腾讯科技(深圳)有限公司 Intelligent answer method, apparatus, equipment and storage medium based on sentiment analysis
CN109271627A (en) * 2018-09-03 2019-01-25 深圳市腾讯网络信息技术有限公司 Text analyzing method, apparatus, computer equipment and storage medium
CN110377740A (en) * 2019-07-22 2019-10-25 腾讯科技(深圳)有限公司 Feeling polarities analysis method, device, electronic equipment and storage medium
CN110390956A (en) * 2019-08-15 2019-10-29 龙马智芯(珠海横琴)科技有限公司 Emotion recognition network model, method and electronic equipment
CN110532554A (en) * 2019-08-26 2019-12-03 南京信息职业技术学院 Chinese abstract generation method, system and storage medium
CN110866398A (en) * 2020-01-07 2020-03-06 腾讯科技(深圳)有限公司 Comment text processing method and device, storage medium and computer equipment
CN111081280A (en) * 2019-12-30 2020-04-28 苏州思必驰信息科技有限公司 Text-independent speech emotion recognition method and device and emotion recognition algorithm model generation method
CN111368079A (en) * 2020-02-28 2020-07-03 腾讯科技(深圳)有限公司 Text classification method, model training method, device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孟仕林 (Meng Shilin), "Sentiment analysis method fusing sentiment and semantic information" (融合情感与语义信息的情感分析方法), 《信息科技》 (Information Science & Technology), vol. 39, no. 7, pp. 1931-1935 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780610A (en) * 2020-12-02 2021-12-10 Beijing Wodong Tianjun Information Technology Co., Ltd. Customer service portrait construction method and device
WO2022142593A1 (en) * 2020-12-28 2022-07-07 Shenzhen OneConnect Smart Technology Co., Ltd. Text classification method and apparatus, electronic device, and readable storage medium
WO2022141861A1 (en) * 2020-12-31 2022-07-07 Ping An Technology (Shenzhen) Co., Ltd. Emotion classification method and apparatus, electronic device, and storage medium
CN113569584A (en) * 2021-01-25 2021-10-29 Tencent Technology (Shenzhen) Co., Ltd. Text translation method and device, electronic equipment and computer readable storage medium
CN112784048A (en) * 2021-01-26 2021-05-11 Haier Digital Technology (Qingdao) Co., Ltd. Method, device, equipment and storage medium for sentiment analysis of user questions
CN112784048B (en) * 2021-01-26 2023-03-28 Haier Digital Technology (Qingdao) Co., Ltd. Method, device, equipment and storage medium for sentiment analysis of user questions
CN112820412B (en) * 2021-02-03 2024-03-08 Neusoft Corporation User information processing method and device, storage medium and electronic equipment
CN112820412A (en) * 2021-02-03 2021-05-18 Neusoft Corporation User information processing method and device, storage medium and electronic equipment
CN112836053A (en) * 2021-03-05 2021-05-25 Sany Heavy Industry Co., Ltd. Human-machine dialogue emotion analysis method and system for the industrial field
CN113177994A (en) * 2021-03-25 2021-07-27 Yunnan University Network social emoticon synthesis method based on image-text semantics, electronic equipment and computer readable storage medium
CN113177994B (en) * 2021-03-25 2022-09-06 Yunnan University Network social emoticon synthesis method based on image-text semantics, electronic equipment and computer readable storage medium
CN113469197A (en) * 2021-06-29 2021-10-01 Beijing Dajia Internet Information Technology Co., Ltd. Image-text matching method, device, equipment and storage medium
CN113469197B (en) * 2021-06-29 2024-03-22 Beijing Dajia Internet Information Technology Co., Ltd. Image-text matching method, device, equipment and storage medium
CN113705243A (en) * 2021-08-27 2021-11-26 University of Electronic Science and Technology of China Sentiment analysis method
CN113806541A (en) * 2021-09-16 2021-12-17 Beijing Baidu Netcom Science and Technology Co., Ltd. Emotion classification method and emotion classification model training method and device
WO2023069017A3 (en) * 2021-10-19 2023-06-01 Grabtaxi Holdings Pte. Ltd. System and method for recognizing sentiment of user's feedback
CN114048319A (en) * 2021-11-29 2022-02-15 Ping An Life Insurance Company of China, Ltd. Attention-mechanism-based humor text classification method, device, equipment and medium
CN114048319B (en) * 2021-11-29 2024-04-23 Ping An Life Insurance Company of China, Ltd. Attention-mechanism-based humor text classification method, device, equipment and medium
CN116244440A (en) * 2023-02-28 2023-06-09 Shenzhen Yunjifen Technology Co., Ltd. Text emotion classification method, device, equipment and medium
CN116244440B (en) * 2023-02-28 2024-02-13 Shenzhen Yunjifen Technology Co., Ltd. Text emotion classification method, device, equipment and medium
CN116108859A (en) * 2023-03-17 2023-05-12 Meiyun Zhishu Technology Co., Ltd. Emotional tendency determination, sample construction and model training methods, devices and equipment
CN116631583A (en) * 2023-05-30 2023-08-22 Huanao Science Research (Zhuhai Hengqin) Co., Ltd. Psychological counseling method, device and server based on Internet of Things big data

Also Published As

Publication number Publication date
CN111930940B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN111930940B (en) Text emotion classification method and device, electronic equipment and storage medium
CN109829039B (en) Intelligent chat method, intelligent chat device, computer equipment and storage medium
CN106407178B (en) Conversation summary generation method, apparatus, server device and terminal device
US20160162569A1 (en) Methods and systems for improving machine learning performance
CN109284399B (en) Similarity prediction model training method and device and computer readable storage medium
CN111428010B (en) Man-machine intelligent question-answering method and device
CN110019742B (en) Method and device for processing information
CN113127624B (en) Question-answer model training method and device
CN113590850A (en) Multimedia data searching method, device, equipment and storage medium
WO2017186050A1 (en) Segmented sentence recognition method and device for human-machine intelligent question-answer system
CN107291840B (en) User attribute prediction model construction method and device
WO2023108994A1 (en) Sentence generation method, electronic device and storage medium
CN108268450B (en) Method and apparatus for generating information
CN111539212A (en) Text information processing method and device, storage medium and electronic equipment
CN112131368B (en) Dialogue generation method and device, electronic equipment and storage medium
CN110633475A (en) Natural language understanding method, device and system based on computer scene and storage medium
CN111639162A (en) Information interaction method and device, electronic equipment and storage medium
CN112699686A (en) Semantic understanding method, device, equipment and medium based on task type dialog system
CN113505198A (en) Keyword-driven generating type dialogue reply method and device and electronic equipment
CN112632258A (en) Text data processing method and device, computer equipment and storage medium
CN116821290A (en) Training method and interaction method for multi-task dialogue-oriented large language models
KR20190074508A (en) Method for crowdsourcing data of chat model for chatbot
CN114491010A (en) Training method and device of information extraction model
CN110931002B (en) Man-machine interaction method, device, computer equipment and storage medium
CN109002498B (en) Man-machine conversation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant