CN111898377A - Emotion recognition method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111898377A
Authority
CN
China
Prior art keywords
emotion
participle
recognized
score
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010649253.7A
Other languages
Chinese (zh)
Inventor
刘鹏程
陈超
王岗
杜柏圣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suning Financial Technology Nanjing Co Ltd
Original Assignee
Suning Financial Technology Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Suning Financial Technology Nanjing Co Ltd
Priority to CN202010649253.7A
Publication of CN111898377A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities
    • G06F 40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30: Semantic analysis
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/26: Speech to text systems

Abstract

The invention discloses an emotion recognition method and apparatus, a computer device, and a storage medium. The method comprises the following steps: performing word segmentation on a text to be recognized to obtain a word segmentation result; traversing each word segment in the result and querying in turn whether it exists in a pre-constructed target emotion dictionary, directly acquiring the corresponding original score if it does, and otherwise acquiring similar words from the target emotion dictionary and determining the word segment's original score from the original scores of those similar words; calculating the emotion score of the text to be recognized from the original score and attribute of each word segment; and determining the emotion category of the text to be recognized from the emotion score and a preset threshold. When the text to be recognized contains words absent from the emotion dictionary, similar words are supplied on the basis of word vectors and the corresponding emotion values are calculated, so that the recognition effect is guaranteed.

Description

Emotion recognition method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an emotion recognition method and apparatus, a computer device, and a storage medium.
Background
The voice dialogue system is one of the most natural forms of human-computer interaction. With the accelerating adoption of intelligent customer service, smart speakers, and smart household appliances, conversation between machines and people has become one of the important ways in which artificial intelligence technology is put into practice. Existing tools such as intelligent customer service systems and collection robots effectively share the workload of human agents and save enterprises personnel costs. However, compared with human agents, the customer service robots at the present stage can only solve routine, repetitive problems and cannot yet provide targeted service for users. Moreover, communication between people conveys not only linguistic information but also the speaking habits, emotions, and other characteristics of both parties; if the two parties achieve resonance in manner of expression or even emotion within a conversation, their relationship can quickly grow closer. The replies of a customer service robot, by contrast, are usually mechanical and devoid of emotion.
In the prior art, emotion recognition on information such as text is usually performed on the basis of emotion dictionaries. However, in the current era of network information, new words spring up like bamboo shoots after rain, and a manually compiled emotion dictionary cannot completely contain even the existing emotion words, so the recognition effect is greatly affected.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide an emotion recognition method and apparatus, a computer device, and a storage medium, so as to overcome the problem in the prior art that, when emotion recognition is performed on information such as text on the basis of an emotion dictionary, recognition is poor because the dictionary cannot completely contain the existing emotion words.
In order to solve one or more technical problems, the invention adopts the technical scheme that:
in a first aspect, a method for emotion recognition is provided, which includes the following steps:
performing word segmentation on a text to be recognized to obtain a word segmentation result;
traversing each word segment in the word segmentation result and querying in turn whether the word segment exists in a pre-constructed target emotion dictionary; if so, directly acquiring the original score corresponding to the word segment; otherwise, acquiring similar words corresponding to the word segment from the target emotion dictionary and determining the original score of the word segment according to the original scores of the similar words;
calculating the emotion score of the text to be recognized according to the original score and the attribute of each word segment;
and determining the emotion category of the text to be recognized according to the emotion score and a preset threshold.
Further, the acquiring of similar words corresponding to the word segment from the target emotion dictionary and the determining of the original score of the word segment according to the original scores of the similar words include:
respectively calculating the similarity between the word segment and the words in the target emotion dictionary, and taking the words whose similarity meets a preset condition as the similar words of the word segment;
and calculating the original score of the word segment according to the original scores of the similar words and a preset calculation rule.
Further, the method also comprises an updating process for the target emotion dictionary, comprising:
adding word segments that do not exist in the target emotion dictionary, together with their corresponding original scores, to the sub-dictionary of the corresponding emotion category in the target emotion dictionary.
Further, before performing word segmentation processing on the text to be recognized, the method further includes:
preprocessing the text to be recognized to remove unnecessary information from it, wherein the unnecessary information at least comprises special symbols.
Further, the method further comprises a process for acquiring the text to be recognized, which comprises the following steps:
and carrying out voice recognition on the received voice information to be recognized, and converting the voice information to be recognized into a text to be recognized.
Further, the method also comprises a construction process of the target emotion dictionary, which comprises the following steps:
combining and de-duplicating a plurality of emotion dictionaries selected in advance to obtain a basic emotion dictionary;
and training the basic emotion dictionary by utilizing a pre-prepared training corpus to obtain a target emotion dictionary.
In a second aspect, an emotion recognition apparatus is provided, the apparatus including:
the word segmentation processing module, configured to perform word segmentation on a text to be recognized to obtain a word segmentation result;
the score determining module, configured to traverse each word segment in the word segmentation result and query in turn whether the word segment exists in a pre-constructed target emotion dictionary; if so, directly acquire the original score corresponding to the word segment; otherwise, acquire similar words corresponding to the word segment from the target emotion dictionary and determine the original score of the word segment according to the original scores of the similar words;
the score calculation module, configured to calculate the emotion score of the text to be recognized according to the original score and the attribute of each word segment;
and the category determining module, configured to determine the emotion category of the text to be recognized according to the emotion score and a preset threshold.
Further, the score determining module is specifically configured to:
respectively calculate the similarity between the word segment and the words in the target emotion dictionary, and take the words whose similarity meets a preset condition as the similar words of the word segment;
and calculate the original score of the word segment according to the original scores of the similar words and a preset calculation rule.
In a third aspect, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the following steps are implemented:
performing word segmentation on a text to be recognized to obtain a word segmentation result;
traversing each word segment in the word segmentation result and querying in turn whether the word segment exists in a pre-constructed target emotion dictionary; if so, directly acquiring the original score corresponding to the word segment; otherwise, acquiring similar words corresponding to the word segment from the target emotion dictionary and determining the original score of the word segment according to the original scores of the similar words;
calculating the emotion score of the text to be recognized according to the original score and the attribute of each word segment;
and determining the emotion category of the text to be recognized according to the emotion score and a preset threshold.
In a fourth aspect, there is provided a computer readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
performing word segmentation on a text to be recognized to obtain a word segmentation result;
traversing each word segment in the word segmentation result and querying in turn whether the word segment exists in a pre-constructed target emotion dictionary; if so, directly acquiring the original score corresponding to the word segment; otherwise, acquiring similar words corresponding to the word segment from the target emotion dictionary and determining the original score of the word segment according to the original scores of the similar words;
calculating the emotion score of the text to be recognized according to the original score and the attribute of each word segment;
and determining the emotion category of the text to be recognized according to the emotion score and a preset threshold.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
1. The emotion recognition method and apparatus, computer device, and storage medium provided by the embodiments of the present invention perform word segmentation on a text to be recognized to obtain a word segmentation result; traverse each word segment in the result, querying in turn whether it exists in a pre-constructed target emotion dictionary, directly acquiring the corresponding original score if it does, and otherwise acquiring similar words from the target emotion dictionary and determining the word segment's original score from the original scores of those similar words; calculate the emotion score of the text to be recognized from the original score and attribute of each word segment; and determine the emotion category of the text from the emotion score and a preset threshold. When the text to be recognized contains words absent from the emotion dictionary, similar words are supplied on the basis of word vectors and the corresponding emotion values are calculated, so that the recognition effect is guaranteed;
2. The emotion recognition method and apparatus, computer device, and storage medium provided by the embodiments of the present invention add word segments that do not exist in the target emotion dictionary, together with their corresponding original scores, to the sub-dictionary of the corresponding emotion category in the target emotion dictionary, thereby automatically expanding and optimizing the emotion dictionary and guaranteeing the timeliness of the model.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow diagram illustrating an emotion recognition method according to an exemplary embodiment;
FIG. 2 is a schematic structural diagram illustrating an emotion recognition apparatus according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating the internal architecture of a computer device according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For example, a customer service robot needs to identify the customer's emotion accurately in order to achieve emotional resonance with the customer, improve communication effectiveness and service quality, and show humanistic care. At present, emotion recognition on information such as text is usually performed on the basis of an emotion dictionary, but in the current era of network information, new words spring up like bamboo shoots after rain, and a manually compiled emotion dictionary cannot completely contain even the existing emotion words, so the recognition effect is greatly affected. It is therefore desirable to provide an emotion recognition method by which the emotion value of a word that is contained in the text to be recognized but absent from the emotion dictionary can still be calculated.
The method first performs word segmentation on the text to be recognized to obtain a word segmentation result; it then traverses each word segment in the result, querying in turn whether it exists in a pre-constructed target emotion dictionary; if so, the corresponding original score is acquired directly, and otherwise similar words are acquired from the target emotion dictionary and the word segment's original score is determined from the original scores of those similar words; the emotion score of the text to be recognized is then calculated from the original score and attribute of each word segment; and finally the emotion category of the text is determined from the emotion score and a preset threshold.
The emotion recognition method can automatically detect the emotional characteristics contained in everyday dialogue text, helping enterprises grasp the product experience more comprehensively and improve service quality. The service can classify dialogue into positive and negative emotion: for negative emotion it can give a prompt so that the dialogue strategy is adjusted in time or a human agent is reminded to intervene, improving conversation quality; for positive emotion it can help judge whether the robot has solved the problem, improving the user experience overall. The method is a user emotion recognition technique for customer service scenarios, implemented by combining machine learning, deep learning, natural language processing, big data analysis, and related technologies.
FIG. 1 is a flow diagram illustrating a method of emotion recognition, according to an exemplary embodiment, and referring to FIG. 1, the method includes the steps of:
s1: and performing word segmentation processing on the text to be recognized to obtain a word segmentation result.
Specifically, the emotion recognition method provided by the embodiment of the present invention is implemented based on an emotion dictionary, and in particular, in order to determine whether corresponding words in the emotion dictionary exist in a sentence, the sentence generally needs to be accurately cut into individual words, that is, automatic word segmentation of the sentence.
In order to improve the accuracy and availability of word segmentation (for example, availability on the Python platform), as a preferred embodiment, the jieba Chinese word segmentation tool may be selected as the word segmentation tool in the embodiment of the present invention. However, many words are domain-specific; for example, "anyhow pay" is a hit product of a certain finance company, but the jieba tool splits it into the two separate words "anyhow" and "pay", which clearly deviates from the intended meaning. Therefore, to improve segmentation accuracy, the embodiment of the invention can load a customized professional dictionary on top of the existing word segmentation tool according to actual requirements.
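The effect of loading a domain dictionary can be sketched with a minimal forward-maximum-matching segmenter; this is a stand-in for a full tool such as jieba (whose `load_userdict` serves the same purpose), and the toy vocabulary and the domain term "任性付" (an assumed back-translation of "anyhow pay" above) are illustrative only:

```python
def fmm_segment(text, vocab, max_len=4):
    """Forward maximum matching: at each position, greedily take the
    longest substring found in the vocabulary (single char as fallback)."""
    out, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab or j == i + 1:
                out.append(text[i:j])
                i = j
                break
    return out

base_vocab = {"任性", "付款", "很好"}
print(fmm_segment("任性付很好", base_vocab))            # domain term gets split
domain_vocab = base_vocab | {"任性付"}                   # load a custom professional dictionary
print(fmm_segment("任性付很好", domain_vocab))           # domain term stays intact
```

With the domain term loaded, the product name survives segmentation as one unit instead of being broken into meaningless pieces, which is exactly the problem the custom dictionary addresses.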
S2: Traverse each word segment in the word segmentation result and query in turn whether the word segment exists in a pre-constructed target emotion dictionary; if so, directly acquire the original score corresponding to the word segment; otherwise, acquire similar words corresponding to the word segment from the target emotion dictionary and determine the original score of the word segment according to the original scores of the similar words.
Specifically, in today's era of network information, new words spring up like bamboo shoots after rain, including both newly coined network words and existing words endowed with new meanings; on the other hand, a manually compiled emotion dictionary cannot completely contain even the existing emotion words. The word segmentation result of the text to be recognized may therefore contain words that do not exist in the emotion dictionary, in which case those word segments have no emotion score (i.e., no original score), which greatly affects subsequent emotion recognition. To avoid this situation, in the embodiment of the invention, when the word segmentation result contains a word segment not included in the target emotion dictionary, similar words corresponding to that word segment are acquired from the target emotion dictionary on the basis of word vectors, and the original score of the word segment is then determined according to the original scores of the similar words.
S3: Calculate the emotion score of the text to be recognized according to the original score and the attribute of each word segment.
Specifically, in the embodiment of the present invention, the emotion score of the text to be recognized is calculated according to the original score corresponding to each word segment in the word segmentation result and the attribute of each word segment, where the attributes of words (i.e., of word segments) may be divided into sentiment words, negation words, degree adverbs, and the like. The specific calculation method may be set by the user according to actual requirements and is not limited here.
As a preferred implementation, the emotion score of the text to be recognized may be the sum of the scores of all sentiment phrases (groups of word segments), with the calculation formula:
Score = sum[(-1)^t * K * Word(i)]
where the exponent t of (-1) is the number of negation words in the word segmentation result of the text to be recognized, K is the value of the degree word, and Word(i) is the original score of each word segment.
Generally, a sentiment phrase is a group of words formed by a sentiment word together with the negation words and degree adverbs preceding it, that is:
negation words + degree adverbs + sentiment word
For example, in "not very good", "not" is a negation word, "very" is a degree adverb, and "good" is a sentiment word, so the score of this sentiment phrase is:
Score = (-1)^1 * 1.25 * 0.7471
where 1 is the number of negation words, 1.25 is the value of the degree adverb, and 0.7471 is the sentiment score of "good".
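The worked example above can be reproduced directly. The negation set, the degree weight 1.25, and the score 0.7471 for "good" are taken from the description; the helper name and dictionaries are ours:

```python
NEGATION = {"not"}                # negation words
DEGREE = {"very": 1.25}           # degree adverbs and their values
SENTIMENT = {"good": 0.7471}      # original scores from the emotion dictionary

def phrase_score(tokens):
    """Score = (-1)^t * K * Word(i) for a single sentiment phrase."""
    t = sum(1 for w in tokens if w in NEGATION)          # number of negation words
    k = 1.0
    for w in tokens:
        k *= DEGREE.get(w, 1.0)                          # accumulate degree values
    senti = sum(SENTIMENT.get(w, 0.0) for w in tokens)   # sentiment-word score
    return (-1) ** t * k * senti

print(phrase_score(["not", "very", "good"]))   # (-1)^1 * 1.25 * 0.7471 = -0.933875
```

Summing `phrase_score` over every sentiment phrase in the segmented text yields the overall emotion score used in step S4.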
S4: Determine the emotion category of the text to be recognized according to the emotion score and a preset threshold.
Specifically, in the embodiment of the present invention, a preset threshold is set in advance according to actual requirements; after the emotion score of the text to be recognized has been calculated, it is compared with the preset threshold to determine the emotion category of the text, where the emotion categories include but are not limited to negative, positive, and so on.
The preset threshold may be set according to actual requirements and is not specifically limited here. For example, it may be set to the interval [-2.2, 2.2]: an emotion score within the interval is neutral, a score below it is negative, a score above it is positive, and negative emotion is given priority in recognition.
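The thresholding step can be sketched as follows, assuming the example interval [-2.2, 2.2] and the negative-first priority described above (the function and category labels are our own naming):

```python
def classify_emotion(score, low=-2.2, high=2.2):
    """Map an emotion score to a category; the negative branch is checked
    first, reflecting the priority given to recognizing negative emotion."""
    if score < low:
        return "negative"
    if score > high:
        return "positive"
    return "neutral"

print(classify_emotion(-3.1))   # negative
print(classify_emotion(0.93))   # neutral
print(classify_emotion(4.0))    # positive
```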
As a preferred implementation, in an embodiment of the present invention, the acquiring of similar words corresponding to the word segment from the target emotion dictionary and the determining of the original score of the word segment according to the original scores of the similar words include:
respectively calculating the similarity between the word segment and the words in the target emotion dictionary, and taking the words whose similarity meets a preset condition as the similar words of the word segment;
and calculating the original score of the word segment according to the original scores of the similar words and a preset calculation rule.
Specifically, for some word segments there may be no corresponding emotion value (i.e., original score) in the target emotion dictionary; that is, the word segmentation result of the text to be recognized may contain words that do not exist in the emotion dictionary. In this case, word vectors are used to compute similarity and obtain similar words, and the emotion value derived from those similar words is then assigned to the word segment.
In a specific implementation, the word vectors of the word segment and of the words in the target emotion dictionary can be obtained from a pre-trained word vector model (the vectors may, for example, be 200-dimensional); the similarity between the word segment and the dictionary words is then calculated from the word vectors, the words whose similarity meets the preset condition are taken as the similar words of the word segment, and finally the original score of the word segment is calculated from the original scores of the similar words according to the preset calculation rule. As an example, the words in the target emotion dictionary may be sorted in descending order of similarity, the top five then selected as the similar words of the word segment, and the average of their original scores taken as the original score (i.e., emotion value) of the word segment.
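The top-5 averaging described above might look as follows. Cosine similarity is an assumed similarity measure (the description does not fix one), and the toy 2-dimensional vectors stand in for the 200-dimensional ones from a trained word vector model:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def oov_original_score(oov_vec, dictionary, topk=5):
    """dictionary maps word -> (word vector, original score). Rank the
    dictionary words by similarity to the out-of-dictionary word segment,
    then average the original scores of the top-k most similar words."""
    ranked = sorted(dictionary.values(),
                    key=lambda vs: cosine(oov_vec, vs[0]), reverse=True)
    top = ranked[:topk]
    return sum(score for _, score in top) / len(top)

# Toy dictionary: two positive-ish words near the OOV vector, one negative.
lex = {"good": ([1.0, 0.0], 1.0),
       "bad":  ([0.0, 1.0], -1.0),
       "fine": ([1.0, 1.0], 0.5)}
print(oov_original_score([1.0, 0.1], lex, topk=2))   # averages "good" and "fine"
```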
As a preferred implementation, in an embodiment of the present invention, the method further includes an updating process for the target emotion dictionary, comprising:
adding word segments that do not exist in the target emotion dictionary, together with their corresponding original scores, to the sub-dictionary of the corresponding emotion category in the target emotion dictionary.
Specifically, automatically extending the emotion dictionary is a necessary condition for ensuring the timeliness of the emotion classification model. In the embodiment of the invention, when the word segmentation result of the text to be recognized contains a word that does not exist in the emotion dictionary, the original score of that word segment is calculated in the manner described above; after this score has been used in calculating the emotion value and determining the emotion category of the text to be recognized, the word segment and its original score are added to the sub-dictionary of the corresponding emotion category in the target emotion dictionary, thereby automatically expanding and optimizing the emotion dictionary and ensuring the timeliness of the model.
In addition, the target emotion dictionary can be optimized through active automatic expansion. For example, a large amount of comment data can be collected from microblogs and online communities by web crawlers and similar means; new words with emotional tendency are discovered in this data through unsupervised word-frequency statistics, and their original scores are then calculated according to the steps above, or by another original-score calculation method, and added to the target emotion dictionary.
In a specific implementation, unsupervised learning based on the existing preliminary model can complete the dictionary expansion and thereby strengthen the model, and the same procedure can then be iterated, giving the model a positive feedback loop. Although a large amount of comment data can be crawled from the network, it is unlabeled; it can be classified by emotion with the existing model, the frequency of each word counted within the comment set of each emotion polarity (positive or negative), and the word frequencies in the positive and negative comment sets compared. If a word's frequency is quite high in the positive comment set and quite low in the negative comment set, it can confidently be added to the positive emotion dictionary, or given a weight according to the original-score calculation described above.
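The word-frequency comparison can be sketched as below, under assumed thresholds (the description gives no concrete ratio or minimum count; `ratio` and `min_count` are our illustrative parameters). Comments are pre-segmented lists of word segments:

```python
from collections import Counter

def propose_new_words(pos_comments, neg_comments, ratio=5.0, min_count=10):
    """Flag words whose frequency is heavily skewed toward one polarity
    as candidates for the corresponding sub-dictionary."""
    pos = Counter(w for c in pos_comments for w in c)
    neg = Counter(w for c in neg_comments for w in c)
    proposals = {}
    for w in set(pos) | set(neg):
        p, n = pos[w], neg[w]
        if p + n < min_count:
            continue                                   # too rare to trust
        if p >= ratio * max(n, 1):
            proposals[w] = "positive"                  # high in positive set only
        elif n >= ratio * max(p, 1):
            proposals[w] = "negative"                  # high in negative set only
    return proposals
```

Candidates found this way would still receive an original score via the word-vector procedure before being added to the target emotion dictionary.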
As a preferred implementation, in the embodiment of the present invention, before word segmentation is performed on the text to be recognized, the method further includes:
preprocessing the text to be recognized to remove unnecessary information from it, wherein the unnecessary information at least comprises special symbols.
Specifically, the text to be recognized (for example, a message sent by a front-end system) usually contains some unnecessary information, and to improve the accuracy of subsequent calculation this information needs to be removed. Therefore, after the text to be recognized is obtained, it needs to be preprocessed, chiefly by means including but not limited to regular expressions and stop-word lists, to remove unnecessary information such as special symbols like "@".
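A minimal preprocessing pass, with an assumed regular expression and stop-word list (the description names only "regular expressions and stop words" as the main means; note that Python 3's `\w` already matches Unicode word characters, so the explicit CJK range is merely defensive):

```python
import re

STOPWORDS = {"the", "a", "of"}   # illustrative stop-word list

def preprocess(text):
    """Strip special symbols with a regular expression, then drop stop
    words; the pattern keeps letters, digits, CJK characters, whitespace."""
    cleaned = re.sub(r"[^\w\s\u4e00-\u9fff]", " ", text)
    tokens = [w for w in cleaned.split() if w.lower() not in STOPWORDS]
    return " ".join(tokens)

print(preprocess("the service is @great!!"))   # "service is great"
```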
As a preferred implementation manner, in the embodiment of the present invention, the method further includes a process of acquiring a text to be recognized, including:
and carrying out voice recognition on the received voice information to be recognized, and converting the voice information to be recognized into a text to be recognized.
Specifically, the method provided by the embodiment of the invention can be applied to various different service scenes, including but not limited to scenes such as customer service robots. The text to be recognized can be obtained in various ways, including but not limited to text information input by the user, voice information input by the user, and the like. When the user inputs voice information, the received voice information to be recognized can be subjected to voice recognition, and the voice information is converted into a text to be recognized through technologies such as semantic analysis and understanding.
As a preferred implementation manner, in the embodiment of the present invention, the method further includes a process of constructing a target emotion dictionary, including:
combining and de-duplicating a plurality of emotion dictionaries selected in advance to obtain a basic emotion dictionary;
and training the basic emotion dictionary by utilizing a pre-prepared training corpus to obtain a target emotion dictionary.
Specifically, the emotion dictionary is generally the most central part of text mining, and since the emotion recognition method in the embodiment of the present invention is based on an emotion word dictionary, choosing an appropriate emotion dictionary is very important. As a preferred example, the emotion dictionary in the embodiment of the invention may comprise four parts; a general emotion dictionary alone is better suited to ordinary chat scenarios, and a more domain-related dictionary is also needed in a customer service scenario.
In order to obtain a more complete emotion dictionary, in the embodiment of the present invention several emotion dictionaries may first be collected from the network, merged and de-duplicated, with part of the words adjusted to achieve the highest accuracy rate. For example, existing emotion dictionaries such as the Boson dictionary, the Taiwan University emotion dictionary and the HowNet emotion dictionary are merged and de-duplicated into the basic emotion dictionary.
Secondly, word segmentation is performed on the existing dialogue training corpus; the segmenter may also load a professional dictionary for the corresponding field according to actual requirements. Word frequencies are then counted and sorted, and after the results are aligned with the basic emotion dictionary, emotion words that are missing from the basic emotion dictionary are calibrated by a word vector method.
It should be noted that, in the embodiment of the present invention, the dictionaries collected from the network are not simply merged: words are removed and updated in a targeted manner, and some industry vocabulary is added to increase the hit rate during classification. The frequency of certain words can differ greatly between industries, and such words can be among the keywords for emotion classification.
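The word-vector calibration mentioned above can be sketched with a nearest-neighbour lookup over word embeddings. The vectors and scores below are toy values; in practice the vectors would come from a Word2Vec-style model trained on the dialogue corpus, and the assignment rule (copying the score of the most similar already-scored word) is one simple policy, not necessarily the patent's exact rule.

```python
import math

# Toy word vectors standing in for a trained embedding model.
vectors = {
    "生气": [0.9, 0.1, -0.3],
    "愤怒": [0.85, 0.15, -0.25],
    "满意": [-0.7, 0.6, 0.2],
}
scored = {"生气": -2.0, "满意": 1.5}  # words already present in the base dictionary

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def calibrate(word):
    """Give an unscored word the score of its most similar scored word."""
    best = max(scored, key=lambda w: cosine(vectors[word], vectors[w]))
    return scored[best]

print(calibrate("愤怒"))  # nearest scored neighbour is 生气
```

Here "愤怒" (anger) is far closer in vector space to "生气" than to "满意", so it inherits the negative score.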
Fig. 2 is a schematic structural diagram illustrating an emotion recognition apparatus according to an exemplary embodiment. Referring to fig. 2, the apparatus includes:
the word segmentation processing module is used for carrying out word segmentation processing on the text to be recognized to obtain a word segmentation result;
the score determining module is used for traversing each participle in the participle result, sequentially inquiring whether the participle exists in a pre-constructed target emotion dictionary, if so, directly acquiring an original score corresponding to the participle, otherwise, acquiring a similar word corresponding to the participle from the target emotion dictionary, and determining the original score of the participle according to the original score of the similar word;
the score calculation module is used for calculating and obtaining the emotion score of the text to be recognized according to the original score corresponding to each participle and the attribute of each participle;
and the category determining module is used for determining the emotion category of the text to be recognized according to the emotion score and a preset threshold value.
As a preferred implementation manner, in an embodiment of the present invention, the score determining module is specifically configured to:
respectively calculating the similarity between the participle and the words in the target emotion dictionary, and acquiring the words with the similarity meeting preset conditions as the similar words of the participle;
and calculating to obtain the original score of the participle according to the original score of the similar word and a preset calculation rule.
As a preferred implementation manner, in an embodiment of the present invention, the apparatus further includes:
and the dictionary updating module is used for adding the participles which do not exist in the target emotion dictionary and the original scores corresponding to the participles into a sub-dictionary corresponding to the emotion category of the target emotion dictionary.
As a preferred implementation manner, in an embodiment of the present invention, the apparatus further includes:
and the preprocessing module is used for preprocessing the text to be recognized and removing unnecessary information in the text to be recognized, wherein the unnecessary information at least comprises a special symbol.
As a preferred implementation manner, in an embodiment of the present invention, the apparatus further includes:
and the text acquisition module is used for carrying out voice recognition on the received voice information to be recognized and converting the voice information to be recognized into a text to be recognized.
As a preferred implementation manner, in an embodiment of the present invention, the apparatus further includes:
the dictionary construction module is used for carrying out merging and de-duplication processing on a plurality of emotion dictionaries selected in advance to obtain a basic emotion dictionary; and training the basic emotion dictionary by utilizing a pre-prepared training corpus to obtain a target emotion dictionary.
Fig. 3 is a schematic diagram illustrating an internal structure of a computer device according to an exemplary embodiment. As shown in fig. 3, the computer device includes a processor, a memory and a network interface connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement an emotion recognition method.
Those skilled in the art will appreciate that the configuration shown in fig. 3 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computer devices to which aspects of the present invention may be applied; a particular computer device may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
As a preferred implementation manner, in an embodiment of the present invention, the computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the following steps when executing the computer program:
performing word segmentation processing on a text to be recognized to obtain a word segmentation result;
traversing each participle in the participle result, sequentially inquiring whether the participle exists in a pre-constructed target emotion dictionary, if so, directly acquiring an original score corresponding to the participle, otherwise, acquiring a similar word corresponding to the participle from the target emotion dictionary, and determining the original score of the participle according to the original score of the similar word;
calculating to obtain the emotion score of the text to be recognized according to the original score corresponding to each word and the attribute of each word;
and determining the emotion type of the text to be recognized according to the emotion score and a preset threshold value.
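The four claimed steps above can be sketched end to end as follows. The dictionary contents, the similar-word mapping, the attribute weights (degree adverbs, negation) and the threshold are all illustrative assumptions used to make the sketch runnable, not values from the patent.

```python
# Toy target emotion dictionary and auxiliary tables (all illustrative).
emotion_dict = {"差": -2.0, "好": 1.5, "满意": 2.0}
similar_words = {"烂": "差"}               # OOV participle -> most similar in-dictionary word
degree_adverbs = {"太": 1.5, "很": 1.2}    # attribute: degree words scale the score
negations = {"不"}                          # attribute: negation flips the sign

def emotion_score(tokens):
    """Sum per-participle scores, applying degree and negation attributes."""
    score, weight, sign = 0.0, 1.0, 1.0
    for tok in tokens:
        if tok in degree_adverbs:
            weight *= degree_adverbs[tok]
        elif tok in negations:
            sign *= -1.0
        else:
            base = emotion_dict.get(tok)
            if base is None and tok in similar_words:
                base = emotion_dict[similar_words[tok]]  # fall back to similar word
            if base is not None:
                score += sign * weight * base
                weight, sign = 1.0, 1.0                  # reset modifiers per emotion word
    return score

def emotion_category(tokens, threshold=0.0):
    """Map the emotion score to a category via a preset threshold."""
    return "negative" if emotion_score(tokens) < threshold else "positive"

print(emotion_category(["这个", "产品", "太", "烂"]))
```

In this example "烂" is absent from the dictionary, so its score is taken from the similar word "差" and amplified by the degree adverb "太", yielding a negative category.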
As a preferred implementation manner, in the embodiment of the present invention, when the processor executes the computer program, the following steps are further implemented:
respectively calculating the similarity between the participle and the words in the target emotion dictionary, and acquiring the words with the similarity meeting preset conditions as the similar words of the participle;
and calculating to obtain the original score of the participle according to the original score of the similar word and a preset calculation rule.
As a preferred implementation manner, in the embodiment of the present invention, when the processor executes the computer program, the following steps are further implemented:
adding the participles which do not exist in the target emotion dictionary and original scores corresponding to the participles into a sub-dictionary corresponding to the emotion category of the target emotion dictionary.
As a preferred implementation manner, in the embodiment of the present invention, when the processor executes the computer program, the following steps are further implemented:
preprocessing the text to be recognized, and removing unnecessary information in the text to be recognized, wherein the unnecessary information at least comprises special symbols.
As a preferred implementation manner, in the embodiment of the present invention, when the processor executes the computer program, the following steps are further implemented:
and carrying out voice recognition on the received voice information to be recognized, and converting the voice information to be recognized into a text to be recognized.
As a preferred implementation manner, in the embodiment of the present invention, when the processor executes the computer program, the following steps are further implemented:
combining and de-duplicating a plurality of emotion dictionaries selected in advance to obtain a basic emotion dictionary;
and training the basic emotion dictionary by utilizing a pre-prepared training corpus to obtain a target emotion dictionary.
In an embodiment of the present invention, a computer-readable storage medium is further provided, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps:
performing word segmentation processing on a text to be recognized to obtain a word segmentation result;
traversing each participle in the participle result, sequentially inquiring whether the participle exists in a pre-constructed target emotion dictionary, if so, directly acquiring an original score corresponding to the participle, otherwise, acquiring a similar word corresponding to the participle from the target emotion dictionary, and determining the original score of the participle according to the original score of the similar word;
calculating to obtain the emotion score of the text to be recognized according to the original score corresponding to each word and the attribute of each word;
and determining the emotion type of the text to be recognized according to the emotion score and a preset threshold value.
As a preferred implementation manner, in the embodiment of the present invention, when executed by the processor, the computer program further implements the following steps:
respectively calculating the similarity between the participle and the words in the target emotion dictionary, and acquiring the words with the similarity meeting preset conditions as the similar words of the participle;
and calculating to obtain the original score of the participle according to the original score of the similar word and a preset calculation rule.
As a preferred implementation manner, in the embodiment of the present invention, when executed by the processor, the computer program further implements the following steps:
adding the participles which do not exist in the target emotion dictionary and original scores corresponding to the participles into a sub-dictionary corresponding to the emotion category of the target emotion dictionary.
As a preferred implementation manner, in the embodiment of the present invention, when executed by the processor, the computer program further implements the following steps:
preprocessing the text to be recognized, and removing unnecessary information in the text to be recognized, wherein the unnecessary information at least comprises special symbols.
As a preferred implementation manner, in the embodiment of the present invention, when executed by the processor, the computer program further implements the following steps:
and carrying out voice recognition on the received voice information to be recognized, and converting the voice information to be recognized into a text to be recognized.
As a preferred implementation manner, in the embodiment of the present invention, when executed by the processor, the computer program further implements the following steps:
combining and de-duplicating a plurality of emotion dictionaries selected in advance to obtain a basic emotion dictionary;
and training the basic emotion dictionary by utilizing a pre-prepared training corpus to obtain a target emotion dictionary.
In summary, the technical solution provided by the embodiment of the present invention has the following beneficial effects:
1. The emotion recognition method, apparatus, computer device and storage medium provided by the embodiments of the present invention perform word segmentation on a text to be recognized to obtain a word segmentation result; traverse each participle in the result and query in turn whether it exists in a pre-constructed target emotion dictionary; if so, directly obtain the original score corresponding to the participle, otherwise obtain a similar word corresponding to the participle from the target emotion dictionary and determine the participle's original score from the similar word's original score; calculate the emotion score of the text to be recognized according to each participle's original score and attributes; and determine the emotion category of the text to be recognized according to the emotion score and a preset threshold. When the text to be recognized contains words that do not exist in the emotion dictionary, similar words are supplied on the basis of word vectors and the corresponding emotion values are calculated, so that the recognition effect is guaranteed;
2. according to the emotion recognition method, the emotion recognition device, the computer equipment and the storage medium, the segmentation words which do not exist in the target emotion dictionary and the original scores corresponding to the segmentation words are added into the sub-dictionary corresponding to the emotion categories of the target emotion dictionary, so that the emotion word dictionary is automatically expanded, the emotion dictionary is optimized, and the timeliness of the model is guaranteed.
It should be noted that: in the emotion recognition apparatus provided in the above embodiment, when an emotion recognition service is triggered, the division into the above functional modules is used only as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the emotion recognition apparatus and the emotion recognition method provided by the above embodiments belong to the same concept: the apparatus is based on the emotion recognition method, and its specific implementation process is described in detail in the method embodiments and is not repeated here.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An emotion recognition method, characterized in that the method comprises the steps of:
performing word segmentation processing on a text to be recognized to obtain a word segmentation result;
traversing each participle in the participle result, sequentially inquiring whether the participle exists in a pre-constructed target emotion dictionary, if so, directly acquiring an original score corresponding to the participle, otherwise, acquiring a similar word corresponding to the participle from the target emotion dictionary, and determining the original score of the participle according to the original score of the similar word;
calculating to obtain the emotion score of the text to be recognized according to the original score corresponding to each word and the attribute of each word;
and determining the emotion type of the text to be recognized according to the emotion score and a preset threshold value.
2. The emotion recognition method of claim 1, wherein the obtaining of the similar word corresponding to the participle from the target emotion dictionary and the determining of the original score of the participle from the original score of the similar word comprise:
respectively calculating the similarity between the participle and the words in the target emotion dictionary, and acquiring the words with the similarity meeting preset conditions as the similar words of the participle;
and calculating to obtain the original score of the participle according to the original score of the similar word and a preset calculation rule.
3. The emotion recognition method according to claim 1 or 2, further comprising an update process of the target emotion dictionary, including:
adding the participles which do not exist in the target emotion dictionary and original scores corresponding to the participles into a sub-dictionary corresponding to the emotion category of the target emotion dictionary.
4. The emotion recognition method according to claim 1 or 2, wherein before performing word segmentation processing on a text to be recognized, the method further comprises:
preprocessing the text to be recognized, and removing unnecessary information in the text to be recognized, wherein the unnecessary information at least comprises special symbols.
5. The emotion recognition method according to claim 1 or 2, wherein the method further comprises a process of acquiring a text to be recognized, including:
and carrying out voice recognition on the received voice information to be recognized, and converting the voice information to be recognized into a text to be recognized.
6. The emotion recognition method according to claim 1 or 2, wherein the method further comprises a construction process of a target emotion dictionary, including:
combining and de-duplicating a plurality of emotion dictionaries selected in advance to obtain a basic emotion dictionary;
and training the basic emotion dictionary by utilizing a pre-prepared training corpus to obtain a target emotion dictionary.
7. An emotion recognition apparatus, characterized in that the apparatus comprises:
the word segmentation processing module is used for carrying out word segmentation processing on the text to be recognized to obtain a word segmentation result;
the score determining module is used for traversing each participle in the participle result, sequentially inquiring whether the participle exists in a pre-constructed target emotion dictionary, if so, directly acquiring an original score corresponding to the participle, otherwise, acquiring a similar word corresponding to the participle from the target emotion dictionary, and determining the original score of the participle according to the original score of the similar word;
the score calculation module is used for calculating and obtaining the emotion score of the text to be recognized according to the original score corresponding to each participle and the attribute of each participle;
and the category determining module is used for determining the emotion category of the text to be recognized according to the emotion score and a preset threshold value.
8. The emotion recognition device of claim 7, wherein the score determination module is specifically configured to:
respectively calculating the similarity between the participle and the words in the target emotion dictionary, and acquiring the words with the similarity meeting preset conditions as the similar words of the participle;
and calculating to obtain the original score of the participle according to the original score of the similar word and a preset calculation rule.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010649253.7A 2020-07-07 2020-07-07 Emotion recognition method and device, computer equipment and storage medium Pending CN111898377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010649253.7A CN111898377A (en) 2020-07-07 2020-07-07 Emotion recognition method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111898377A true CN111898377A (en) 2020-11-06

Family

ID=73191996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010649253.7A Pending CN111898377A (en) 2020-07-07 2020-07-07 Emotion recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111898377A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668330A (en) * 2020-12-31 2021-04-16 北京大米科技有限公司 Data processing method and device, readable storage medium and electronic equipment
CN112668330B (en) * 2020-12-31 2024-01-26 北京大米科技有限公司 Data processing method and device, readable storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN107451126B (en) Method and system for screening similar meaning words
CN112487173B (en) Man-machine conversation method, device and storage medium
CN112749344B (en) Information recommendation method, device, electronic equipment, storage medium and program product
CN111125334A (en) Search question-answering system based on pre-training
CN108287848B (en) Method and system for semantic parsing
CN110717021B (en) Input text acquisition and related device in artificial intelligence interview
CN110633464A (en) Semantic recognition method, device, medium and electronic equipment
CN111026840A (en) Text processing method, device, server and storage medium
CN112487824A (en) Customer service speech emotion recognition method, device, equipment and storage medium
CN113780007A (en) Corpus screening method, intention recognition model optimization method, equipment and storage medium
CN115481229A (en) Method and device for pushing answer call, electronic equipment and storage medium
CN115062718A (en) Language model training method and device, electronic equipment and storage medium
CN114625834A (en) Enterprise industry information determination method and device and electronic equipment
CN111898377A (en) Emotion recognition method and device, computer equipment and storage medium
TWI734085B (en) Dialogue system using intention detection ensemble learning and method thereof
CN113590774B (en) Event query method, device and storage medium
CN109977397A (en) Hot news extracting method, system and storage medium based on part of speech combination
CN114969195A (en) Dialogue content mining method and dialogue content evaluation model generation method
CN110428814B (en) Voice recognition method and device
CN113095073A (en) Corpus tag generation method and device, computer equipment and storage medium
JP3611913B2 (en) Similarity search method and apparatus
CN111382265A (en) Search method, apparatus, device and medium
CN116244413B (en) New intention determining method, apparatus and storage medium
CN115618968B (en) New idea discovery method and device, electronic device and storage medium
CN116244432B (en) Pre-training method and device for language model and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination