CN110263321B - Emotion dictionary construction method and system - Google Patents

Emotion dictionary construction method and system Download PDF

Info

Publication number
CN110263321B
CN110263321B (application CN201910372297.7A)
Authority
CN
China
Prior art keywords
emotion
word
vector
layer
words
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910372297.7A
Other languages
Chinese (zh)
Other versions
CN110263321A (en)
Inventor
罗镇权
练睿
唐远洋
刘世林
张发展
李焕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Business Big Data Technology Co Ltd
Original Assignee
Chengdu Business Big Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Business Big Data Technology Co Ltd filed Critical Chengdu Business Big Data Technology Co Ltd
Priority to CN201910372297.7A priority Critical patent/CN110263321B/en
Publication of CN110263321A publication Critical patent/CN110263321A/en
Application granted granted Critical
Publication of CN110263321B publication Critical patent/CN110263321B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/237Lexical tools
    • G06F40/242Dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a method and a system for constructing an emotion dictionary. The method comprises the following steps: segmenting a single-sentence text corpus into a plurality of words; inputting each segmented word into an emotion recognition model, which outputs the weight of each word and the emotion probability value of the whole sentence; and multiplying the weight of each word by the emotion probability value of the whole sentence to obtain an emotion score for each word, adding words whose emotion score is greater than or equal to a set threshold to an emotion dictionary. Compared with manual construction of an emotion dictionary, this construction method is novel and more efficient, and solves the problem of the high cost of tedious manual construction of emotion word dictionaries.

Description

Emotion dictionary construction method and system
Technical Field
The invention relates to the technical field of natural language processing, in particular to a method and a system for constructing an emotion dictionary.
Background
Emotion analysis refers to classifying text into two or more categories, such as commendatory and derogatory, according to the meaning and emotional information the text expresses; because it classifies the tendencies, opinions, and attitudes of text authors, it is also called tendency analysis. Emotion analysis is a special classification problem: it shares the common characteristics of general pattern classification, but also has its own particularities, such as the implicitness, ambiguity, and weak polarity of emotional expression. At present, there are two common approaches to emotion analysis: analysis based on an emotion word dictionary, and machine-learning-based methods. The dictionary-based approach depends on a pre-established emotion dictionary, and at present most emotion dictionaries are constructed by manual labeling. Manually constructing a dictionary is costly, consuming considerable manpower and material resources, and is inefficient.
Disclosure of Invention
The invention aims to provide a method and a system for constructing an emotion dictionary, which solve the problems of low efficiency and labor and material consumption in manually constructing the emotion dictionary.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
A method for constructing an emotion dictionary comprises the following steps:
dividing the text corpus of a single sentence into a plurality of words;
inputting each divided word into an emotion recognition model, and outputting the weight of each word and the emotion probability value of the whole sentence;
multiplying the weight of each word by the emotion probability value of the whole sentence to obtain an emotion score for each word, and adding words whose emotion score is greater than or equal to a set threshold to an emotion dictionary as emotion words.
In another aspect, an embodiment of the invention also provides an emotion dictionary construction system, which comprises the following modules:
a word segmentation module for segmenting the text corpus, dividing a single sentence into a plurality of words;
an emotion recognition module for inputting each segmented word into an emotion recognition model and outputting the weight of each word and the emotion probability value of the whole sentence;
a dictionary construction module for multiplying the weight of each word by the emotion probability value of the whole sentence to obtain an emotion score for each word, and adding words whose emotion score is greater than or equal to a set threshold to the emotion dictionary as emotion words.
In yet another aspect, embodiments of the present invention also provide a computer-readable storage medium comprising computer-readable instructions that, when executed, cause a processor to perform operations in the methods described in embodiments of the present invention.
In still another aspect, an embodiment of the present invention also provides an electronic device, including: a memory storing program instructions; and the processor is connected with the memory and executes program instructions in the memory to realize the steps in the method in the embodiment of the invention.
Compared with the prior art, the method of constructing an emotion word dictionary provided by the invention is novel, is more efficient than manual construction, and solves the problem of the high cost of tedious manual construction of emotion word dictionaries.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an emotion dictionary construction method according to a preferred embodiment of the present invention.
FIG. 2 is a flowchart of an emotion recognition model training method in a preferred embodiment of the present invention.
FIG. 3 is a block diagram of an emotion recognition model in a preferred embodiment of the present invention.
FIG. 4 is a schematic diagram of the generation of a new representation by Self-Attention in this embodiment.
Fig. 5 is a functional block diagram of an emotion dictionary construction system provided in the present embodiment.
Fig. 6 is a block diagram of an electronic device according to the present embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Referring to fig. 1, the embodiment schematically provides a method for creating an emotion dictionary, which includes the following steps:
s10, word segmentation is carried out on the text corpus. In this embodiment, the text corpus refers to a single sentence, and punctuation marks, such as a period and an exclamation mark, which represent the end of the sentence, are arranged at the end of the sentence. For example, "I are happy today-! "is classified as" I "," today "," very "," happy "", and-! "five words. If there is also a punctuation mark in the sentence, such as a comma, the comma is also divided into separate words. Since punctuation is an integral part of a sentence, in this embodiment, punctuation is also used as a word, and of course, punctuation may be omitted as another embodiment. The segmentation of text corpus into words is a common means in natural language processing technology, and the specific process is not described in detail here.
S20, inputting each divided word into an emotion recognition model, and outputting the weight of each word and the emotion probability value of the whole sentence.
S30, multiplying the weight of each word by the emotion probability value of the whole sentence to obtain the emotion score of each word, and adding words whose emotion score is greater than or equal to a set threshold to an emotion dictionary as emotion words. The threshold may be set according to the actual situation.
Taking the sentence "I am very happy today!" as an example: after emotion recognition is performed by the emotion recognition model, the weight obtained for the word "happy" is a = 0.56 and the emotion probability value of the whole sentence is p, so the emotion score of "happy" is 0.56 × p.
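As a minimal sketch of steps S10 to S30, the scoring and thresholding logic can be written as follows. The segmenter and the model outputs (per-word weights and the sentence probability p) are stand-ins; a real implementation would use a Chinese word segmenter and the trained self-attention model described below, and the threshold value is assumed.

```python
# Sketch of the S10-S30 pipeline with stand-in model outputs.

def segment(sentence_words):
    # Stand-in for word segmentation: the sentence is already provided
    # as a list of words, with punctuation kept as its own token.
    return list(sentence_words)

def score_words(words, weights, p, threshold=0.3):
    """Multiply each word's weight by the whole-sentence emotion
    probability p, and keep words whose score meets the threshold."""
    scores = {w: a * p for w, a in zip(words, weights)}
    dictionary_entries = [w for w, s in scores.items() if s >= threshold]
    return scores, dictionary_entries

# "I am very happy today!" segmented into five tokens, with the example
# weights from the text and an assumed sentence probability p = 0.9.
words = segment(["I", "today", "very", "happy", "!"])
weights = [0.02, 0.07, 0.11, 0.56, 0.24]
scores, entries = score_words(words, weights, p=0.9, threshold=0.3)
print(entries)  # "happy" is the only word with score >= threshold (0.56 * 0.9 = 0.504)
```

With these assumed numbers only "happy" clears the threshold and would be added to the dictionary.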
Referring to fig. 2, the present embodiment schematically provides a training method of the emotion recognition model, which includes the following steps:
s101, manually labeling the text corpus, and dividing the labeled text corpus into a training set and a testing set according to a certain proportion (for example, 8:2). It should be noted that, when model training is performed, all text corpus may or may not have emotion words, and the whole sentence is judged to be positive manually, marked as positive, and negative marked by manually judging that the whole sentence is negative. The positive and negative labels herein are just two different forms of labels to distinguish between them. By way of example only, sentences with positive emotion words are labeled 1 and sentences with negative emotion words are labeled 0.
S102, dividing each sentence in the training set and the testing set into a plurality of words respectively.
S103, initializing parameters of the emotion recognition model, and inputting all training sets into the initial emotion recognition model for training.
S104, input the whole test set into the emotion recognition model trained in step S103 for prediction, and compute the loss from the prediction results. If the change in loss is large, for example greater than a set threshold, optimize the model parameters, return to step S103, and repeat steps S103 to S104 in a loop; if the change in loss is small, for example smaller than the set threshold, training is complete and the final emotion recognition model is obtained.
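The stopping criterion in S103 and S104 can be sketched as follows. The model, training step, and loss function here are trivial placeholders; only the loss-change test is taken from the text, and the decaying "loss" is an assumed toy example.

```python
# Sketch of the S103-S104 loop: train, evaluate, and stop once the loss
# stops changing by more than a set threshold.

def train_until_converged(train_step, eval_loss, delta_threshold=1e-3,
                          max_rounds=100):
    prev_loss = None
    for _ in range(max_rounds):
        train_step()            # S103: train on the training set
        loss = eval_loss()      # S104: predict on the test set, compute loss
        if prev_loss is not None and abs(prev_loss - loss) < delta_threshold:
            return loss         # loss change is small: training is complete
        prev_loss = loss        # loss change is large: optimize and loop
    return prev_loss

# Toy stand-in: a "loss" that decays toward 0.1, halving the gap each round.
state = {"loss": 1.0}
def step():
    state["loss"] = 0.1 + 0.5 * (state["loss"] - 0.1)

final = train_until_converged(step, lambda: state["loss"])
```

The loop returns once consecutive test-set losses differ by less than the threshold, matching the text's "loss change smaller than a set threshold" condition.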
After the emotion recognition model is constructed, emotion recognition can be carried out on the text corpus through the emotion recognition model, and the weight of each word in the sentence and the emotion probability value of the whole sentence are obtained.
Referring to fig. 3, in the present embodiment, the emotion recognition model is divided into five layers from bottom to top.
The first layer receives word vectors, which are converted from the words obtained by segmenting the sentence. For example, "I", "today", "very", "happy", and "!" are 5 words in total; they are converted into 5 word vectors, and these 5 word vectors are input into the first layer.
The second layer generates a new representation of the input word vectors (denoted X) through Self-Attention:

Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V
Referring to FIG. 4 and taking the word vector of "I" as an example, the process of generating a new representation through Self-Attention is as follows:
The vector of "I" is used as the query vector of a search and is matched against the key vectors of all words in the sentence (including "I" itself) to measure how strongly they correlate. Let q1 denote the query vector corresponding to "I" and k1 the key vector corresponding to "I". To compute the attention score of "I", the dot product of q1 and k1 is calculated and then scaled by dividing by √d_k, where d_k is the dimension of the query and key vectors. The scaled results are normalized into a probability distribution by a Softmax operation and then multiplied by the matrix V (obtained by adding up all the word vectors in the sentence) to obtain the weighted-sum representation Z.
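A minimal pure-Python sketch of the scaled dot-product step described above. The 2-dimensional toy vectors are assumptions for illustration; the real model learns its query, key, and value representations.

```python
import math

def softmax(xs):
    # Normalize a list of scores into a probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(q, keys, values):
    """Scaled dot-product attention for a single query vector:
    score_i = (q . k_i) / sqrt(d_k), softmax-normalized, then used to
    form a weighted sum of the value vectors (the representation Z)."""
    d_k = len(q)
    scores = [sum(qj * kj for qj, kj in zip(q, k)) / math.sqrt(d_k)
              for k in keys]
    weights = softmax(scores)
    z = [sum(w * v[j] for w, v in zip(weights, values))
         for j in range(len(values[0]))]
    return weights, z

# Toy example: the query of "I" against the keys of three words.
q1 = [1.0, 0.0]
keys = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
weights, z = attend(q1, keys, values)
```

The key most similar to the query receives the largest attention weight, so Z leans toward that word's value vector.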
The third layer generates a new representation of the input word vector Z (the output of the second layer) through Self-Attention:

Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V
In this embodiment, the word vectors obtained by segmenting the text sentence are represented through Self-Attention twice, which improves the accuracy of the resulting representation.
The fourth layer is the value layer: Z is represented as W through Self-Attention, and the weight by which each Z is mapped to W is taken as the weight a of the corresponding word. For example, the weight of "I" is 0.02, "today" 0.07, "very" 0.11, "happy" 0.56, and "!" 0.24.
As shown in FIG. 3, in the third layer the word vector Z is represented as r through Self-Attention; in the process from r to W, W is obtained by adding the word vectors r1 to r5, and the Self-Attention operation is then performed on r1 to r5 and W to obtain the result of the fourth layer.
The fifth layer obtains the emotion recognition result p of the whole sentence through a sigmoid function.
Multiplying the emotion recognition result p of the whole sentence by the weight a obtained in the fourth layer gives the emotion score a × p of each word; for example, the emotion score of "happy" is 0.56 × p.
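The final sigmoid step and the per-word score can be sketched as follows; the final-layer logit value of 2.0 is a hypothetical number for illustration.

```python
import math

def sigmoid(x):
    # Squashes the fifth layer's output into a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical final-layer output for "I am very happy today!".
p = sigmoid(2.0)          # whole-sentence emotion probability value
score_happy = 0.56 * p    # emotion score a * p for "happy" (weight a = 0.56)
```

A word's emotion score thus combines how much the sentence's emotion depends on it (a) with how confidently the sentence is classified as emotional (p).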
Experimental example
Applying the method of this embodiment, emotion recognition was performed on review data of a certain hotel and an emotion dictionary was constructed.
For example: "Although the surface looks very ordinary, the interior finishing is still very exquisite, clean, and comfortable; it will be the first choice to go to from now on, still a happy hotel!"
After the sentence is segmented, the following words are obtained: "although", "surface", "look", "very", "general", ",", "but", "inside", "finishing", "still", "very", "exquisite", ",", "clean", ",", "comfortable", ";", "after", "go", "happy", "preferred", "still", "happy", "hotel", "!".
After each word is converted into a word vector and input into the emotion recognition model, the model outputs an emotion probability value p = 0.98 for the whole sentence, and the weight of each word is as shown in the following table:
[Table of per-word weights not reproduced in the source.]
Finally, words with a score greater than or equal to 0.09 are selected as emotion words, namely "exquisite", "clean", and "comfortable", and these emotion words are added to the emotion dictionary.
This experimental example shows that the method can extract emotion words accurately and thereby construct an emotion dictionary; compared with manual construction, it greatly improves efficiency and reduces labor cost.
Referring to fig. 5, based on the same inventive concept, an emotion dictionary construction system is provided in this embodiment, and arrows between the modules shown in fig. 5 indicate the transmission direction of data. Specifically, the emotion dictionary construction system comprises the following modules:
the word segmentation module is used for segmenting the text corpus and dividing a single sentence into a plurality of words.
And the emotion recognition module is used for inputting each divided word into the emotion recognition model and outputting the weight of each word and the emotion probability value of the whole sentence.
And the dictionary construction module is used for multiplying the weight of each word with the emotion probability value of the whole sentence to respectively obtain emotion scores of each word, and adding the words with emotion scores greater than or equal to a set threshold value into the emotion dictionary.
The emotion recognition module is also used to train and obtain the emotion recognition model. For the specific training process of the emotion recognition model, please refer to the flowchart shown in FIG. 2 and described above.
The emotion dictionary construction system and the emotion dictionary construction method in this embodiment are proposed based on the same concept, and therefore, reference is made to the related content in the foregoing method description for the point where the description of the system is not concerned.
As shown in fig. 6, the present embodiment also provides an electronic device that may include a processor 51 and a memory 52, wherein the memory 52 is coupled to the processor 51. It is noted that the figure is exemplary and that other types of structures may be used in addition to or in place of the structure to implement data extraction, report generation, communication, or other functions.
As shown in fig. 6, the electronic device may further include: an input unit 53, a display unit 54, and a power supply 55. It is noted that the electronic device need not necessarily include all of the components shown in fig. 6. In addition, the electronic device may further comprise components not shown in fig. 6, to which reference is made to the prior art.
The processor 51, sometimes also referred to as a controller or operational control, may include a microprocessor or other processor device and/or logic device, which processor 51 receives inputs and controls the operation of the various components of the electronic device.
The memory 52 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a nonvolatile memory, or other suitable devices, and may store information such as configuration information of the processor 51, instructions executed by the processor 51, and recorded table data. The processor 51 may execute programs stored in the memory 52 to realize information storage or processing, and the like. In one embodiment, a buffer memory, i.e., a buffer, is also included in memory 52 to store intermediate information.
The input unit 53 is for example used to provide the respective text reports to the processor 51. The display unit 54 is used to display various results in the processing, and may be, for example, an LCD display, but the present invention is not limited thereto. The power supply 55 is used to provide power to the electronic device.
Embodiments of the present invention also provide computer-readable instructions which, when executed in an electronic device, cause the electronic device to perform the operational steps of the method of the present invention.
Embodiments of the present invention also provide a storage medium storing computer-readable instructions that cause an electronic device to perform the operational steps involved in the methods of the present invention.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only memory (ROM), a random access memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.

Claims (6)

1. An emotion dictionary construction method, characterized by comprising the following steps:
dividing the text corpus of a single sentence into a plurality of words;
inputting each divided word into an emotion recognition model, and outputting the weight of each word and the emotion probability value of the whole sentence;
the emotion recognition model completes emotion recognition based on a Self-Attention mechanism;
the emotion recognition model is divided into five layers;
the first layer is used for receiving a word vector X, wherein the word vector X is obtained by converting a plurality of words obtained by word segmentation;
the second layer generates a new representation of the input word vector X through Self-Attention:
Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V
q1 denotes the query vector corresponding to X and k1 denotes the key vector corresponding to X; when the attention score of X is calculated, the dot product of q1 and k1 is computed and then scaled by dividing by √d_k, where d_k is the dimension of the query vector and the key vector; the scaled result is normalized into a probability distribution by a Softmax operation and multiplied by a matrix V to obtain the weighted-sum representation, the word vector Z, wherein the matrix V is obtained by adding all word vectors in the sentence;
the third layer generates a new representation of the input word vector Z through Self-Attention:
Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V
the fourth layer is a value layer: the word vector Z is represented as W through Self-Attention, and the weight by which each Z is mapped to W is taken as the weight a of the word; in the third layer, the word vector Z is represented as r through Self-Attention; in the process from r to W, the word vector W is obtained by adding the ri, where i denotes the i-th word vector, and the Self-Attention operation is then performed on ri and W to obtain the result of the fourth layer;
the fifth layer obtains the emotion probability value p of the whole sentence through a sigmoid function;
multiplying the weight of each word by the emotion probability value of the whole sentence to obtain emotion scores of each word, and adding the words with emotion scores greater than or equal to a set threshold value into an emotion dictionary as emotion words.
2. The method according to claim 1, wherein the emotion recognition model is trained by:
s101, manually labeling the text corpus of a single sentence, and dividing the labeled text corpus into a training set and a testing set according to the proportion of 8:2;
s102, dividing each sentence in a training set and a testing set into a plurality of words respectively;
s103, initializing parameters of the emotion recognition model, and inputting all training sets into the initial emotion recognition model for training;
s104, inputting all the test sets into the emotion recognition model trained in the step S103 to predict, carrying out loss calculation according to the prediction result, optimizing parameters of the model if the loss change is greater than a set threshold value, returning to the step S103, and circularly executing the steps S103 to S104; if the loss variation is smaller than the set threshold, training is ended.
3. An emotion dictionary construction system, characterized by comprising the following modules:
the word segmentation module is used for segmenting the text corpus and segmenting a single sentence into a plurality of words;
the emotion recognition module is used for inputting each divided word into an emotion recognition model and outputting the weight of each word and the emotion probability value of the whole sentence;
the emotion recognition model completes emotion recognition based on a Self-Attention mechanism;
the emotion recognition model is divided into five layers;
the first layer is used for receiving a word vector X, wherein the word vector X is obtained by converting a plurality of words obtained by word segmentation;
the second layer generates a new representation of the input word vector X through Self-Attention:
Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V
q1 denotes the query vector corresponding to X and k1 denotes the key vector corresponding to X; when the attention score of X is calculated, the dot product of q1 and k1 is computed and then scaled by dividing by √d_k, where d_k is the dimension of the query vector and the key vector; the scaled result is normalized into a probability distribution by a Softmax operation and multiplied by a matrix V to obtain the weighted-sum representation, the word vector Z, wherein the matrix V is obtained by adding all word vectors in the sentence;
the third layer generates a new representation of the input word vector Z through Self-Attention:
Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V
the fourth layer is a value layer: the word vector Z is represented as W through Self-Attention, and the weight by which each Z is mapped to W is taken as the weight a of the word; in the third layer, the word vector Z is represented as r through Self-Attention; in the process from r to W, the word vector W is obtained by adding the ri, where i denotes the i-th word vector, and the Self-Attention operation is then performed on ri and W to obtain the result of the fourth layer;
the fifth layer obtains the emotion probability value p of the whole sentence through a sigmoid function;
the dictionary construction module is used for multiplying the weight of each word by the emotion probability value of the whole sentence to obtain emotion scores of each word respectively, and adding the words with emotion scores greater than or equal to a set threshold value into the emotion dictionary as emotion words.
4. The system of claim 3, wherein the emotion recognition module is further configured to derive the emotion recognition model through machine learning training.
5. A computer readable storage medium comprising computer readable instructions which, when executed, cause a processor to perform the operations of any of the methods of claims 1-2.
6. An electronic device, said device comprising:
a memory storing program instructions;
a processor, coupled to the memory, for executing program instructions in the memory, for implementing the steps of the method of any of claims 1-2.
CN201910372297.7A 2019-05-06 2019-05-06 Emotion dictionary construction method and system Active CN110263321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910372297.7A CN110263321B (en) 2019-05-06 2019-05-06 Emotion dictionary construction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910372297.7A CN110263321B (en) 2019-05-06 2019-05-06 Emotion dictionary construction method and system

Publications (2)

Publication Number Publication Date
CN110263321A CN110263321A (en) 2019-09-20
CN110263321B true CN110263321B (en) 2023-06-09

Family

ID=67914276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910372297.7A Active CN110263321B (en) 2019-05-06 2019-05-06 Emotion dictionary construction method and system

Country Status (1)

Country Link
CN (1) CN110263321B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991167B (en) * 2019-12-05 2021-10-08 北京理工大学 Emotion dictionary construction method based on emotion hierarchy system
CN111767399B (en) * 2020-06-30 2022-12-06 深圳平安智慧医健科技有限公司 Method, device, equipment and medium for constructing emotion classifier based on unbalanced text set
CN115796158A (en) * 2023-02-07 2023-03-14 中国传媒大学 Emotion dictionary construction method and device, electronic equipment and computer readable medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130055429A (en) * 2011-11-18 2013-05-28 Samsung Electronics Co., Ltd. Apparatus and method for emotion recognition based on emotion segment
CN107679580A (en) * 2017-10-21 2018-02-09 Guilin University of Electronic Technology Heterogeneous transfer image sentiment polarity analysis method based on multi-modal deep latent correlation
JP2018073343A (en) * 2016-11-04 2018-05-10 Toyota Motor Corporation Emotion estimation method
CN109243494A (en) * 2018-10-30 2019-01-18 Nanjing Institute of Technology Child emotion recognition method based on a multi-attention long short-term memory network
CN109408823A (en) * 2018-10-31 2019-03-01 South China Normal University Target-specific sentiment analysis method based on a multi-channel model
CN109408680A (en) * 2018-10-08 2019-03-01 Tencent Technology (Shenzhen) Co., Ltd. Automatic question-answering method, device, equipment and computer readable storage medium
CN109657246A (en) * 2018-12-19 2019-04-19 Sun Yat-sen University Method for building an extractive machine reading comprehension model based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460009B (en) * 2017-12-14 2022-09-16 Sun Yat-sen University Text sentiment analysis method using a recurrent neural network with an emotion-dictionary-embedded attention mechanism
CN108563635A (en) * 2018-04-04 2018-09-21 Beijing Institute of Technology Fast sentiment dictionary construction method based on the emotion wheel model
CN109376251A (en) * 2018-09-25 2019-02-22 Nanjing University Chinese microblog sentiment dictionary construction method based on a word vector learning model
CN109543722A (en) * 2018-11-05 2019-03-29 Sun Yat-sen University Emotion trend prediction method based on a sentiment analysis model
CN109710761A (en) * 2018-12-21 2019-05-03 China National Institute of Standardization Sentiment analysis method using an attention-enhanced bidirectional LSTM model

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130055429A (en) * 2011-11-18 2013-05-28 Samsung Electronics Co., Ltd. Apparatus and method for emotion recognition based on emotion segment
JP2018073343A (en) * 2016-11-04 2018-05-10 Toyota Motor Corporation Emotion estimation method
CN107679580A (en) * 2017-10-21 2018-02-09 Guilin University of Electronic Technology Heterogeneous transfer image sentiment polarity analysis method based on multi-modal deep latent correlation
CN109408680A (en) * 2018-10-08 2019-03-01 Tencent Technology (Shenzhen) Co., Ltd. Automatic question-answering method, device, equipment and computer readable storage medium
CN109243494A (en) * 2018-10-30 2019-01-18 Nanjing Institute of Technology Child emotion recognition method based on a multi-attention long short-term memory network
CN109408823A (en) * 2018-10-31 2019-03-01 South China Normal University Target-specific sentiment analysis method based on a multi-channel model
CN109657246A (en) * 2018-12-19 2019-04-19 Sun Yat-sen University Method for building an extractive machine reading comprehension model based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chen Kan et al. AMC: Attention guided multi-modal correlation learning for image search. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, 2644-2652. *
Yang Peng. Sentiment analysis of Chinese reviews based on domain dictionaries and machine learning. China Master's Theses Full-text Database, Information Science and Technology. 2019, (No. 02), I138-2579. *
Huang Qiongxia. Research on text sentiment analysis based on deep learning. China Master's Theses Full-text Database, Social Sciences II. 2019, (No. 01), H123-774. *

Also Published As

Publication number Publication date
CN110263321A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN108363790B (en) Method, device, equipment and storage medium for evaluating comments
CN110245229B (en) Deep learning theme emotion classification method based on data enhancement
CN106502985B (en) Neural network modeling method and device for generating titles
CN109408823B (en) Target-specific sentiment analysis method based on a multi-channel model
CN111344779A (en) Training and/or determining responsive actions for natural language input using coder models
CN108304468A (en) Text classification method and text classification apparatus
CN111401084B (en) Method and device for machine translation and computer readable storage medium
CN110263321B (en) Emotion dictionary construction method and system
CN111428490B (en) Reference resolution weak supervised learning method using language model
CN113704416B (en) Word sense disambiguation method and device, electronic equipment and computer-readable storage medium
CN111027292B (en) Method and system for generating limited sampling text sequence
CN114818891A (en) Small sample multi-label text classification model training method and text classification method
CN109933792A (en) Opinion-type question reading comprehension method based on multi-layer bidirectional LSTM and a verification model
WO2019160096A1 (en) Relationship estimation model learning device, method, and program
CN111832278B (en) Document fluency detection method and device, electronic equipment and medium
CN114528398A (en) Emotion prediction method and system based on interactive double-graph convolutional network
CN107797981B (en) Target text recognition method and device
CN113361252B (en) Text depression tendency detection system based on multi-modal features and emotion dictionary
JP2012146263A (en) Language model learning device, language model learning method, language analysis device, and program
CN113326367A (en) Task type dialogue method and system based on end-to-end text generation
CN111723583B (en) Statement processing method, device, equipment and storage medium based on intention role
CN107783958B (en) Target statement identification method and device
CN110837730B (en) Method and device for determining unknown entity vocabulary
CN116795970A (en) Dialog generation method and its application in emotional companionship
CN116186219A (en) Man-machine dialogue interaction method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant