CN111079404A - Data analysis method, device and storage medium - Google Patents

Data analysis method, device and storage medium

Info

Publication number
CN111079404A
CN111079404A
Authority
CN
China
Prior art keywords
emotion
data
target data
multimedia
multimedia data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911115580.8A
Other languages
Chinese (zh)
Inventor
葛雨辰
杨帆
杨沛
张成松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911115580.8A
Publication of CN111079404A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data; Database structures therefor; File system structures therefor
    • G06F 16/35: Clustering; Classification

Abstract

The embodiment of the application discloses a data analysis method, a data analysis device, and a storage medium. The data analysis method includes: obtaining multimedia data; dividing the multimedia data to obtain at least one first target data and at least one second target data, where the first target data is at least data in the multimedia data that represents an emotion, and the second target data represents the emotion object at which the emotion represented by the first target data is directed; assigning each first target data a corresponding first parameter, where the first parameter is a proportion parameter indicating that the first target data is emotion data; analyzing the first target data assigned the first parameters with an analysis model to obtain a result of classifying the emotion represented by each first target data; and determining, according to the classification result, the emotion that the multimedia data expresses toward each emotion object.

Description

Data analysis method, device and storage medium
Technical Field
The present application relates to sentiment analysis technology, and in particular, to a data analysis method, device, and storage medium.
Background
In the related art, an emotion analysis model is used to analyze the emotion expressed by text and speech, so as to obtain the emotion expressed by the text or by the speaker, such as a positive, negative, or neutral emotion. In practice, text and speech express emotion in flexible and varied ways, so the emotion expressed by the text or the speaker may not always be analyzed accurately.
Disclosure of Invention
To solve the above technical problem, embodiments of the present application provide a data analysis method, a device, and a storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
an embodiment of the present application provides a data analysis method, including:
obtaining multimedia data;
dividing the multimedia data to obtain at least one first target data and at least one second target data, where the first target data is at least data in the multimedia data that represents an emotion, and the second target data represents the emotion object at which the emotion represented by each first target data is directed;
assigning each first target data a corresponding first parameter, where the first parameter is a proportion parameter indicating that the first target data is emotion data;
analyzing the first target data assigned the first parameters with an analysis model to obtain a result of classifying the emotion represented by each first target data;
and determining, according to the classification result, the emotion that the multimedia data expresses toward each emotion object.
In the above scheme, the method further comprises:
obtaining position information of each first target data and each second target data in the multimedia data;
determining the emotion change of the multimedia data toward each emotion object according to the position information of each first target data in the multimedia data, the position information of each second target data in the multimedia data, and the result of classifying the emotion represented by each first target data;
and determining the emotion that the multimedia data expresses toward each emotion object according to the emotion change of the multimedia data toward that emotion object.
In the above scheme, the method further comprises:
obtaining at least two first target data directed at the same emotion object;
obtaining position information of each of the at least two first target data in the multimedia data;
determining the emotion change of the multimedia data toward the same emotion object according to the positional relationship of each first target data in the multimedia data and the result of classifying the emotion represented by each first target data;
and determining the emotion of the multimedia data toward the same emotion object according to that emotion change.
In the above scheme, the method further comprises:
assigning each second target data a corresponding second parameter; the second parameter is a proportion parameter indicating that the second target data is an emotion object;
analyzing the second target data assigned the second parameters with the analysis model to obtain a result of classifying each second target data by emotion object;
correspondingly, determining the emotion of the multimedia data toward each emotion object according to the classification result includes:
determining the emotion of the multimedia data toward each emotion object according to the result of classifying the emotion represented by each first target data and the result of classifying each second target data.
In the foregoing solution, obtaining the result of classifying the emotion represented by each first target data includes:
obtaining, with the analysis model, the probability that the emotion represented by each first target data is at least one preset emotion;
determining the category of the emotion represented by each first target data according to the probability;
and determining the emotion change of the multimedia data toward each emotion object according to the category of the emotion represented by each first target data and the position information of each first target data in the multimedia data.
In the foregoing solution, before the first target data assigned the first parameters is analyzed with the analysis model, the method further includes:
re-extracting the data represented as emotion in the multimedia data;
correspondingly, analyzing the first target data assigned the first parameters with the analysis model includes:
in the case where the re-extracted data represented as emotion is consistent with the first target data, classifying the emotion represented by the first target data assigned the first parameters with the weight parameters in the analysis model;
in the case where the re-extracted data represented as emotion is inconsistent with the first target data, discarding the first target data and performing emotion analysis on the re-extracted data with the weight parameters in the analysis model; or dividing the multimedia data again until first target data consistent with the re-extracted data represented as emotion is obtained.
An embodiment of the present application provides a data analysis device, including:
an obtaining unit configured to obtain multimedia data;
a dividing unit, configured to divide the multimedia data to obtain at least one first target data and at least one second target data, where the first target data is at least data in the multimedia data that represents an emotion, and the second target data represents the emotion object at which the emotion represented by each first target data is directed;
an assigning unit, configured to assign each first target data a corresponding first parameter, the first parameter being a proportion parameter indicating that the first target data is emotion data;
an analysis unit, configured to analyze the first target data assigned the first parameters with an analysis model to obtain a result of classifying the emotion represented by each first target data;
and a determining unit, configured to determine, according to the classification result, the emotion that the multimedia data expresses toward each emotion object.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the foregoing data analysis method.
An embodiment of the present application provides a data analysis apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the data analysis method when executing the program.
The embodiment of the application provides a data analysis method, a device, and a storage medium. The data analysis method includes: obtaining multimedia data; dividing the multimedia data to obtain at least one first target data and at least one second target data, where the first target data is at least data in the multimedia data that represents an emotion, and the second target data represents the emotion object at which the emotion represented by the first target data is directed; assigning each first target data a corresponding first parameter, where the first parameter is a proportion parameter indicating that the first target data is emotion data; analyzing the first target data assigned the first parameters with an analysis model to obtain a result of classifying the emotion represented by each first target data; and determining, according to the classification result, the emotion that the multimedia data expresses toward each emotion object.
In the embodiments of the application, a corresponding proportion parameter is assigned to the data in the multimedia data that represents emotion, to highlight that it is emotion data, so that the analysis model can grasp the key point of the analysis, namely performing emotion analysis on the emotion data assigned the proportion parameter; this is equivalent to prompting the analysis model to pay more attention to that data. The analysis model can therefore analyze the emotion expressed by the text or the speaker more accurately, the situation in which that emotion cannot be analyzed accurately is avoided, and the accuracy of emotion analysis is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; for those skilled in the art, other drawings can be derived from the provided drawings without creative effort.
Fig. 1 is a schematic flow chart illustrating implementation of a first embodiment of a data analysis method provided in the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a second embodiment of the data analysis method provided in the present application;
fig. 3 is a schematic flow chart illustrating implementation of a third embodiment of the data analysis method provided in the present application;
fig. 4 is a first scenario diagram illustrating a method for implementing data analysis according to the present application;
fig. 5 is a schematic view of a second scenario for implementing the data analysis method provided in the present application;
FIG. 6 is a first schematic diagram of assigning the proportion parameter provided in the present application;
FIG. 7 is a second schematic diagram of assigning the proportion parameter provided in the present application;
FIG. 8 is a schematic diagram of the structure of the data analysis device provided in the present application;
fig. 9 is a schematic diagram of a hardware structure of the data analysis device provided in the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments derived by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other arbitrarily, provided there is no conflict. The steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions. Also, while a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
The present application provides an embodiment of a data analysis method, applied to a data analysis device, as shown in fig. 1, the method includes:
Step S101: obtaining multimedia data;
Here, the multimedia data may be audio data, video data, image data, text data, and the like. To facilitate the subsequent division of the multimedia data, text data is preferred. Audio data may be converted into text data. For video data and image data, the text portions are read out, and the read text portions are used as the text data.
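As an illustration of this preprocessing, the following is a minimal sketch that routes each media type to a text representation. It is not the patent's implementation: the helpers audio_to_text and image_to_text are hypothetical placeholders for a speech-to-text engine and an OCR engine respectively.

```python
# Minimal sketch of normalizing multimedia input to text before division.
# audio_to_text and image_to_text are hypothetical placeholders, not part of
# the patent; a real system would back them with speech-to-text and OCR engines.

def audio_to_text(data: bytes) -> str:
    """Hypothetical: transcribe speech to text."""
    raise NotImplementedError("plug in a speech-to-text engine")

def image_to_text(data: bytes) -> str:
    """Hypothetical: read out the text portion of an image or video frame."""
    raise NotImplementedError("plug in an OCR engine")

def to_text(data: bytes, media_type: str) -> str:
    """Route multimedia data to the preferred text-data form."""
    if media_type == "text":
        return data.decode("utf-8")
    if media_type == "audio":
        return audio_to_text(data)
    if media_type in ("image", "video"):
        return image_to_text(data)
    raise ValueError(f"unsupported media type: {media_type}")
```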
S102: dividing the multimedia data to obtain at least one first target data and at least one second target data, where the first target data is at least data in the multimedia data that represents an emotion, and the second target data represents the emotion object at which the emotion represented by the first target data is directed;
Here, the multimedia data is divided to obtain the data representing emotion and the data representing emotion objects in the multimedia data. It is understood that, in natural language, data represented as emotion is mostly adjectives, adverbs, and the like, while data represented as emotion objects is mostly nouns, pronouns, and other words that express objects.
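A minimal sketch of this division follows, with small hard-coded word lists standing in for the entity-recognition library and syntax analysis described in the detailed scenarios later; the lists and the sentence are illustrative assumptions.

```python
# Sketch of S102: split a sentence into first target data (emotion words)
# and second target data (emotion objects), keeping word positions.
EMOTION_WORDS = {"lovely", "gentle", "beautiful", "violent"}   # adjectives/adverbs
OBJECT_WORDS = {"dog", "cat", "it"}                            # nouns/pronouns

def divide(sentence):
    tokens = sentence.lower().replace(",", " ").split()
    first = [(pos, t) for pos, t in enumerate(tokens) if t in EMOTION_WORDS]
    second = [(pos, t) for pos, t in enumerate(tokens) if t in OBJECT_WORDS]
    return first, second

first, second = divide("I have a lovely dog and it is gentle")
# first  -> [(3, 'lovely'), (8, 'gentle')]
# second -> [(4, 'dog'), (6, 'it')]
```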
S103: assigning each first target data a corresponding first parameter, where the first parameter is a proportion parameter indicating that the first target data is emotion data;
Here, the data in the multimedia data that represents emotion is assigned a corresponding proportion parameter.
S104: analyzing the first target data assigned the first parameters with an analysis model to obtain a result of classifying the emotion represented by each first target data;
S105: determining, according to the classification result, the emotion that the multimedia data expresses toward each emotion object.
In S104 and S105, the analysis model may be any reasonable model for analyzing emotion. The emotion data assigned the proportion parameters is analyzed with the analysis model; specifically, the emotion represented by the data represented as emotion is classified to obtain a classification result, and the emotion that the multimedia data expresses toward each emotion object is determined according to the classification result.
In S101 to S105, a corresponding proportion parameter is assigned to the data in the multimedia data that represents emotion, to highlight that it is emotion data, and the emotion data assigned the proportion parameter is analyzed with the analysis model. In the embodiments of the application, assigning the proportion parameter to the data represented as emotion highlights its importance and amounts to preprocessing the multimedia data, so that the analysis model knows the key point of the analysis, namely performing emotion analysis on the emotion data assigned the proportion parameter, as sketched below. The situation in which the emotion expressed by the text or the speaker cannot be analyzed accurately is thus avoided, and the accuracy of emotion analysis is improved.
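The proportion-parameter assignment can be sketched as follows. The value 1.5 for emotion words is an illustrative assumption; the scheme only requires that marked words receive a value different from the 1 given to ordinary words (the scenarios below use values greater than 1).

```python
# Sketch of S103: build the "sentence vector" of (word, proportion parameter)
# pairs. Emotion words get a value above 1 (1.5 here, an assumed value);
# every other word keeps the default proportion parameter of 1.
def assign_proportions(tokens, emotion_positions, weight=1.5):
    weights = [1.0] * len(tokens)
    for pos in emotion_positions:
        weights[pos] = weight
    return list(zip(tokens, weights))

tokens = "i have a lovely dog".split()
assign_proportions(tokens, [3])
# -> [('i', 1.0), ('have', 1.0), ('a', 1.0), ('lovely', 1.5), ('dog', 1.0)]
```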
In practical applications, the multimedia data may include one or more emotion objects. In the case of one emotion object, each data represented as emotion in the multimedia data is emotion data directed at that object, and applying the schemes shown in S101 to S104 amounts to determining the emotion that the multimedia data ultimately expresses toward the object from the multiple pieces of emotion data expressed toward it. In the case of multiple objects, applying S101 to S104 obtains the final emotion expressed by the multimedia data toward each emotion object from the emotion data corresponding to that object.
In practical applications, two adjacent pieces of data representing emotion may express emotions toward the same emotion object or toward different emotion objects. The two adjacent pieces may both express a positive emotion, both a negative emotion, or both a neutral emotion; of course, they may also express different emotions, such as one expressing a positive emotion and the other a negative emotion. In all of these cases, the final emotion expressed by the multimedia data toward each emotion object can be obtained according to the schemes shown in S101 to S104.
In an optional embodiment, after obtaining each first target data and each second target data, as shown in fig. 2, after S104, the method further includes:
S106: obtaining position information of each first target data and each second target data in the multimedia data;
S107: determining the emotion change of the multimedia data toward each emotion object according to the position information of each first target data in the multimedia data, the position information of each second target data in the multimedia data, and the result of classifying the emotion represented by each first target data;
S108: determining the emotion that the multimedia data expresses toward each emotion object according to the emotion change of the multimedia data toward that emotion object.
In S106 to S108, the position information of each piece of data representing emotion in the multimedia data and the position information of each piece of data representing an emotion object are combined to determine, at each position, the emotion that the emotion data expresses toward its emotion object. Each piece of emotion data produces a corresponding emotion toward its object at its position, which yields the emotion change of the multimedia data toward each emotion object; the final emotion expressed toward each emotion object is then determined from that emotion change. In this solution, the emotion change is determined by combining the position information of the data represented as emotion in the multimedia data, and the final expressed emotion is determined from the emotion change. This is a brand-new scheme for emotion analysis of multimedia data: it determines not only the emotion changes of the multimedia data but also, from those changes, the emotion to be expressed finally. Because the data represented as emotion is assigned a proportion parameter before the analysis model performs the emotion analysis, that data is highlighted, so the model can grasp the key points of the analysis and analyze its emotion more accurately.
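The sketch below illustrates S106 to S108 for one emotion object: the per-position classification results are ordered by position to give the emotion change, and a final emotion is reduced from that trajectory. The majority-vote reduction is an assumption for illustration; the patent does not fix a particular fusion rule.

```python
# Sketch: derive the emotion change toward one object from positions, then
# reduce it to a final emotion. The reduction rule is an assumed example.
from collections import Counter

def emotion_trajectory(classified):
    """classified: list of (position, emotion_label) for one emotion object."""
    return [label for _, label in sorted(classified)]

def final_emotion(trajectory):
    counts = Counter(trajectory).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return trajectory[-1]        # tie: keep the last expressed emotion
    return counts[0][0]              # otherwise: majority emotion

traj = emotion_trajectory([(3, "positive"), (8, "positive")])
final_emotion(traj)                  # -> 'positive' (positive -> positive change)
```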
In an optional embodiment, for at least two first target data directed at the same emotion object in the multimedia data, the position information of each of the at least two first target data in the multimedia data is obtained; the emotion change of the multimedia data toward the same emotion object is determined according to the positional relationship of each first target data in the multimedia data and the result of classifying the emotion represented by each first target data; and the emotion of the multimedia data toward the same emotion object is determined according to that emotion change. In this optional embodiment, two or more pieces of emotion data directed at the same emotion object are extracted from the multimedia data, the emotion change produced toward that object is determined from the emotion classification results of the extracted pieces and their position information in the multimedia data, and the final emotion expressed by the multimedia data toward that object is determined from the emotion change. This scheme can determine both the emotion change of the multimedia data and, from it, the emotion to be expressed finally; it is a novel emotion analysis scheme.
In an optional scheme, obtaining the result of classifying the emotion represented by each first target data includes: obtaining, with the analysis model, the probability that the emotion represented by each first target data is at least one preset emotion; determining the category of the emotion represented by each first target data according to the probability; and determining the emotion change of the multimedia data toward each emotion object according to the category of the emotion represented by each first target data and the position information of each first target data in the multimedia data. The category of the emotion represented by the first target data may be positive, negative, or neutral. The analysis model is used to calculate the probability that the emotion represented by the first target data is a positive, negative, and/or neutral emotion, and the category is determined from these probabilities. The emotion change (emotional tendency) of the multimedia data toward each emotion object is then determined from the positions of the first target data in the multimedia data. Because the analysis model is robust and not easily disturbed from outside, using it to analyze the emotion category can greatly improve the accuracy of the analysis and thereby ensure the accuracy of emotion analysis.
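A sketch of the probability-based category decision follows; the threshold values mirror the examples given in the application scenarios below (0.7/0.8 and 0.75) and are illustrative, not prescribed.

```python
# Sketch: map the model's per-category probabilities to an emotion category.
# Threshold values follow the illustrative ones in the application scenarios.
def classify_emotion(p_positive, p_negative,
                     positive_threshold=0.7, negative_threshold=0.75):
    if p_positive > positive_threshold:
        return "positive"
    if p_negative > negative_threshold:
        return "negative"
    return "neutral"

classify_emotion(0.85, 0.05)  # -> 'positive'
classify_emotion(0.20, 0.80)  # -> 'negative'
```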
In an alternative embodiment, when S103 is performed, as shown in fig. 3, the method further includes:
S1031: assigning each second target data a corresponding second parameter; the second parameter is a proportion parameter indicating that the second target data is an emotion object;
Correspondingly, S104 and S105 become:
S1041: analyzing the first target data assigned the first parameters and the second target data assigned the second parameters with the analysis model, to obtain a result of classifying the emotion represented by each first target data and a result of classifying each second target data by object;
S1051: determining the emotion of the multimedia data toward each emotion object according to the result of classifying the emotion represented by each first target data and the result of classifying each second target data.
In S1031 to S1051, proportion parameters are assigned not only to the data represented as emotion but also to the data represented as emotion objects, to highlight the emotion objects so that the analysis model can grasp the key points of the analysis and analyze the emotion expressed toward the emotion objects in the multimedia data, thereby improving the accuracy of that analysis. The multimedia data may describe emotion objects with direct nouns or pronouns, such as "dog", or with indirect nouns or pronouns, such as "Husky" or "Golden Retriever". The analysis model is also used to classify the emotion objects represented by the second target data, for example into humans, animals, trees, flowers and plants, and the like. For example, indirect nouns such as "Husky" and "Golden Retriever" can be classified into the animal category, specifically dogs. In this way, it can be understood which object or objects the emotion expressed by the multimedia data is directed at.
In practical applications, assigning the second parameter to the second target data, on the one hand, highlights the importance of the data represented as an emotion object relative to the other data in the multimedia data (data other than that represented as emotion or as an emotion object), which helps the analysis model focus its analysis. On the other hand, the analysis of the emotion objects can be weighted more heavily, so that the analysis focuses on which emotion object or objects the emotion is expressed toward. In practice, the emotion objects may also be classified without an analysis model: the object category represented by the second target data is matched one by one against multiple preset object categories, and the preset category identical to the represented category is taken as the category of the object represented by the second target data.
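The model-free matching just mentioned can be sketched as a lookup against preset object categories; the table entries are illustrative assumptions.

```python
# Sketch: classify an emotion object by matching it one by one against preset
# object categories, with no analysis model. The mapping is an assumed example.
PRESET_CLASSES = {
    "husky": ("animal", "dog"),
    "golden retriever": ("animal", "dog"),
    "british shorthair": ("animal", "cat"),
    "rose": ("plant", "flower"),
}

def classify_object(obj):
    return PRESET_CLASSES.get(obj.lower())  # None when no preset category matches

classify_object("Golden Retriever")  # -> ('animal', 'dog')
```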
As an option, before the first target data assigned the first parameters is analyzed with the analysis model, the method further includes: re-extracting the data represented as emotion in the multimedia data;
Correspondingly, analyzing the first target data assigned the first parameters with the analysis model includes:
in the case where the re-extracted data represented as emotion is consistent with the first target data, classifying the emotion represented by the first target data assigned the first parameters with the weight parameters in the analysis model;
in the case where the re-extracted data represented as emotion is inconsistent with the first target data, discarding the first target data and performing emotion analysis on the re-extracted data with the weight parameters in the analysis model; or dividing the multimedia data again until first target data consistent with the re-extracted data represented as emotion is obtained.
In the foregoing alternative, to ensure the accuracy of the data represented as emotion obtained from the multimedia data, before the emotion analysis is performed with the analysis model, the data represented as emotion in the multimedia data is re-extracted. Whether the re-extracted data matches the first target data (the data represented as emotion obtained from the multimedia data the first time) is then determined; if they match, the emotion represented by the first target data assigned the first parameter is classified with the weight parameters in the analysis model. If they do not match, the first target data can be discarded and emotion analysis performed on the re-extracted data with the weight parameters in the analysis model; or the multimedia data is divided again until first target data consistent with the re-extracted data is obtained. Re-extracting the data represented as emotion from the multimedia data ensures the accuracy of the emotion data extraction and thereby the accuracy of the emotion analysis. The re-extraction can be performed by the analysis model, which re-extracts the data represented as emotion in the multimedia data and analyzes the emotion expressed by the multimedia data according to the weight parameters it assigns to that data and the proportion parameters assigned to the data before the analysis. In this way, the accuracy of emotion analysis can be guaranteed.
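A sketch of this consistency check is shown below. The extract, divide, and analyze callables stand for the steps described above and are assumptions of this sketch; the retry bound is likewise illustrative.

```python
# Sketch: verify the re-extracted emotion data against the first target data;
# on mismatch, either re-divide the data (up to a bound) or fall back to
# analyzing the re-extracted data. extract/divide/analyze are placeholders.
def check_and_analyze(text, first_target, extract, divide, analyze, retries=3):
    re_extracted = extract(text)
    if re_extracted == first_target:
        return analyze(first_target)      # consistent: classify as assigned
    for _ in range(retries):              # option 2: divide again until consistent
        first_target, _ = divide(text)
        if first_target == re_extracted:
            return analyze(first_target)
    return analyze(re_extracted)          # option 1: discard, use re-extracted data
```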
The embodiments of the present application will be described in further detail with reference to fig. 4 to fig. 7 and the following application scenarios.
In this application scenario, text data whose emotion needs to be analyzed, such as an input article, is analyzed. Referring to fig. 4 and 5, the main flow for analyzing the emotion expressed by the input article is as follows. The input article is divided into natural paragraphs, and each paragraph is split into sentences (which can be regarded as a hierarchical splitting of the article). The data represented as emotion and the data represented as emotion objects are extracted from each sentence, and the extracted data is assigned proportion parameters, to highlight the importance of the emotion data and emotion-object data within each sentence. Each sentence (including the emotion and emotion-object data assigned proportion parameters) is then input into the analysis model, which analyzes the emotion of each sentence. The emotions of all sentences belonging to the same paragraph are fused to obtain the emotion expressed by that paragraph. Finally, the emotions expressed by the respective paragraphs are classified to determine the emotion expressed by the whole article, and the emotions the article expresses toward its emotion objects are recorded in association and stored for convenient use. In application, a person searching for articles can, based on an entered keyword such as "likes pets", retrieve from the stored articles those that express a positive emotion toward pets, which facilitates the search. It should be appreciated that, in a specific implementation, the emotion of a sentence, paragraph, and/or article may be embodied in different emotion codes. Identical emotion codes are clustered and labeled by experts, so that a clustering of articles expressing the same emotion can be obtained for convenient use.
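The hierarchical flow above can be sketched as follows; the splitting callables and the majority-vote fusion rule are illustrative assumptions, since the concrete fusion method is left open here.

```python
# Sketch: fuse sentence emotions into paragraph emotions, then paragraph
# emotions into an article emotion. Majority vote is an assumed fusion rule.
from collections import Counter

def fuse(emotions):
    return Counter(emotions).most_common(1)[0][0] if emotions else "neutral"

def article_emotion(article, split_paragraphs, split_sentences, sentence_emotion):
    paragraph_emotions = []
    for paragraph in split_paragraphs(article):
        sentence_emotions = [sentence_emotion(s) for s in split_sentences(paragraph)]
        paragraph_emotions.append(fuse(sentence_emotions))
    return fuse(paragraph_emotions)
```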
The analysis model in this application scenario is any reasonable model capable of analyzing the emotion of multimedia data, such as a deep learning model or a neural network model, for example an LSTM (Long Short-Term Memory) model or a BERT (Bidirectional Encoder Representations from Transformers) model. Before the analysis model is used for emotion analysis, it must be trained; it is used after training is complete. The training process is as follows: sentences whose emotion classification results serve as the model output y are acquired, and in the acquired sentences the emotion represented by the words represented as emotion and the emotion objects represented by the words represented as emotion objects are manually labeled as the model input x. The analysis model can be represented by the function y = f(x), and training derives the mapping f between the output y and the input x from the known x and y. Further, the analysis model may be represented as y = f(x) = Σ_i ω_i·x_i + b, where i indexes the words of the sentence input to the model, x_i represents the i-th word in the sentence, ω_i represents the weight parameter corresponding to the i-th word, and b is a bias term. Training finds each ω_i and b from the known outputs y and inputs x. The ω_i and b obtained when the loss function or cost function of the analysis model is at its minimum are the desired ω_i and b, and the training of the analysis model is then complete. The loss function or cost function may be a square loss function or a logarithmic loss function; see the related description. After training is completed, the application stage can be entered, in which the emotion expressed by a sentence, a natural paragraph, or an article is analyzed.
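The scoring step of the reconstructed model y = f(x) = Σ_i ω_i·x_i + b can be sketched as below; the numeric word values and weights are toy inputs, not trained parameters.

```python
# Sketch of the linear scoring y = f(x) = sum_i(omega_i * x_i) + b.
# x_i is a numeric representation of the i-th word; omega_i its trained weight.
def score(word_values, trained_weights, bias):
    return sum(w * x for w, x in zip(trained_weights, word_values)) + bias

score([0.3, 0.7], [0.5, 1.2], 0.1)  # -> 0.15 + 0.84 + 0.1 = 1.09
```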
Take as the article to be analyzed one that mainly expresses emotions toward pets.
application scenario 1:
it is assumed that the sentence "i has a lovely golden hair" shown in fig. 6 is obtained by dividing the natural segments and dividing each sentence of one of the natural segments, and the sentence is more graceful. "regarding the above sentence as multimedia data (text data) to be analyzed, the vocabulary of the sentence is divided by using the preset database, as shown in fig. 6. The database comprises an entity recognition library, a knowledge graph, a syntax tree, a user dictionary library and the like. The entity recognition library records commonly used emotional subjects (objects) such as cats, dogs, people, flowers, and the like or terms such as golden hair and husky which can be indirectly expressed as emotional subjects. Knowledge-graph and syntax trees are used to help identify subjects, predicates, objects, determinants, subjects, complements of an input sentence. And combines the laws of natural language: idioms, idioms and complements are usually adjectives and may be able to embody the emotion of the speaker. The subject and object are usually pronouns, nouns denoted as names, and can serve as data characterizing the emotional object. The vocabulary of the subject component (subject, predicate, object, etc.) of the sentence is divided, and the entity recognition library is combined to obtain the vocabulary of the sentence characterized as the emotional object.
Through this division of the words of the sentence, the data represented as emotion in the sentence are "lovely" and "gentle", and the data represented as the emotion object is "Golden Retriever". The positions of "lovely" and "gentle" in the sentence and the position of "Golden Retriever" in the sentence are read. From these positions, the emotion subject targeted by "lovely" and by "gentle" is the same one: the Golden Retriever. According to the records of the user dictionary library, corresponding proportion parameters are assigned to the data represented as emotion in the sentence, namely "lovely" and "gentle", and to the word represented as the emotion object, "Golden Retriever". As shown in fig. 6, the proportion parameter assigned to "lovely" is wa, the proportion parameter assigned to "gentle" is wc, and the proportion parameter assigned to "Golden Retriever" is wb. Meanwhile, the proportion parameter of every other word in the sentence is set to 1. Each word of the sentence, together with the proportion parameter assigned to it, is taken as a sentence vector and input into the trained analysis model. It should be understood that, for the emotion analysis in this scenario, the proportion parameter set for each piece of data represented as emotion or as an emotion object must differ from the value 1 set for the other words. For example, the proportion parameters of data expressing a positive emotion, such as wa and wc, are usually greater than 1, and the proportion parameter set for emotion-object data, such as wb, is also usually greater than 1. To better distinguish data represented as emotion from data represented as emotion objects, a larger proportion parameter is usually set for the emotion data than for the emotion objects. Marking the data represented as emotion and as emotion objects with heavier proportion parameters than the other words of the sentence prompts the analysis model to focus its analysis on the marked data, so as to avoid as far as possible the situation in which the emotion cannot be analyzed.
It will be appreciated that the trained analysis model also holds a corresponding weight parameter ω_i for each word of the sentence input to it. The analysis model combines the weight parameter ω_i with the proportion parameter assigned to each word of the sentence and uses the result to analyze the emotion expressed by the sentence. Here, the analysis model adds, for each word, the trained weight parameter and the assigned proportion parameter, and uses the sum as the final weight parameter of that word when analyzing the emotion expressed by the sentence. Because the proportion parameters assigned to the data expressing a positive emotion and to the data representing the emotion object are greater than 1 and greater than the proportion parameters of the other words of the sentence, this assignment scheme highlights the words in the sentence that need focused attention; it amounts to reminding the analysis model which words of the currently input sentence deserve that attention. If assigning proportion parameters to the data represented as emotion and as emotion objects is regarded as a preprocessing scheme, this preprocessing lets the analysis model know the key points of the analysis, namely performing emotion analysis on the emotion data and the emotion-object data. The situation in which the emotion expressed by the text or the speaker cannot be analyzed accurately is thus avoided, and the accuracy of the emotion analysis is improved.
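A sketch of this weight update follows: the trained weight ω_i and the assigned proportion parameter are added per word, and the sums serve as the final weights. All numbers are toy values.

```python
# Sketch: per-word final weight = trained weight omega_i + assigned proportion
# parameter (1 for ordinary words, above 1 for marked emotion/object words).
def final_weights(trained, proportions):
    return [omega + w for omega, w in zip(trained, proportions)]

trained     = [0.2, 0.1, 0.3, 0.9, 0.8]   # omega_i from training (toy values)
proportions = [1.0, 1.0, 1.0, 1.5, 1.4]   # wa-style parameters for marked words
final_weights(trained, proportions)        # -> [1.2, 1.1, 1.3, 2.4, 2.2]
```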
In a specific implementation, the analysis model calculates the probability that "lovely" and "gentle" belong to the positive (commendatory) emotion vocabulary, to the negative (derogatory) emotion vocabulary, or to the neutral emotion vocabulary. If, for example, the probabilities of belonging to the positive emotion vocabulary are calculated and both are greater than a first threshold, such as 0.8 or 0.7, then "lovely" and "gentle" may each be classified as a positive emotion in the sentence. The analysis model also calculates the probability that "Golden Retriever" belongs to each emotion subject, such as the probability of belonging to the subject "dog"; if the probability is greater than a second threshold, such as 0.6, the emotion object is considered to be an animal, namely a dog. Combining the positions of "lovely" and "gentle" in the sentence, the result is that the sentence expresses a positive → positive emotion change toward the Golden Retriever. That is, for the aforementioned sentence, the first half expresses a positive emotion toward the dog and the second half also expresses a positive emotion toward the dog, so the sentence can be considered to express a positive emotion, specifically liking or fondness for the dog.
Application scenario 2:
suppose that the sentence "i has a beautiful golden hair, but it is violent in character, i likes my mild english-short" as shown in fig. 7 by dividing each sentence of a certain natural segment. The sentence is divided into words and phrases, and the data which are characterized as emotion in the sentence are 'beautiful', 'violent' and 'mild', and the data which are characterized as emotion objects are 'golden hair' and 'English short'. Combining the positions of the emotional data such as 'beautiful', 'violent' and 'mild' in the sentence and the positions of the emotional object data such as 'golden hair' and 'English short' in the sentence, the two emotional data of 'beautiful', 'violent' are known to be directed to the emotional object 'golden hair', and the emotional data of 'mild' is directed to the emotional object 'English short'. According to the record of the user dictionary library, corresponding specific gravity parameters such as wd, we and wf are allocated to 'beautiful', 'violent' and 'gentle' in the sentence. The "golden hair" in this sentence, the "it" and the "english short" which refer to the golden hair, are assigned corresponding specific gravity parameters such as wg, wh. Meanwhile, the specific gravity parameter of the words other than the above words in the sentence is assigned as 1. And inputting the vocabulary and the statement carrying the respective proportion parameters into a trained analysis model as a statement vector.
The analysis model calculates the probability that "beautiful", "violent", and "gentle" belong to the positive emotion vocabulary, the negative emotion vocabulary, or the neutral emotion vocabulary. If, for example, the probabilities of belonging to the positive emotion vocabulary are calculated, then where the probabilities for "beautiful" and "gentle" are both greater than a first threshold such as 0.8 or 0.7, "beautiful" and "gentle" are each classified as a positive emotion in the sentence. Where the calculated probability that "violent" belongs to the positive emotion vocabulary is not greater than the first threshold, the probability that it belongs to the negative emotion vocabulary is calculated; if that probability is greater than a third threshold such as 0.75, "violent" is classified as a negative emotion in the sentence. The analysis model calculates the probability that "Golden Retriever" belongs to each emotion subject, such as the subject "dog"; if the probability is greater than a second threshold such as 0.6, the emotion object is considered an animal, namely a dog. The probability that "British Shorthair" belongs to each emotion subject, such as the subject "cat", is likewise calculated; if it is greater than a fourth threshold such as 0.65, the emotion object is considered an animal, namely a cat. Combining the positions of "beautiful", "violent", and "gentle" in the sentence, the result is that the sentence first expresses a positive → negative emotion change toward the dog, and then shifts from that emotion toward the dog to a positive emotion toward the cat. That is, treating the sentence as three parts bounded by punctuation, the first part expresses a positive emotion toward the dog, the second part a negative emotion toward the dog, and the third part a positive emotion toward the cat. Taken together, the first and second parts may be considered to express a neutral emotion toward the dog, and the third part a positive emotion toward the cat. From the overall emotion expressed in the sentence, the pet owner prefers the cat to the dog. The first to fourth thresholds can be set flexibly according to the actual situation.
In the foregoing scenario, after the overall emotion of the aforementioned sentence in a certain natural paragraph is analyzed, if the emotions of the other sentences in that paragraph are analyzed according to the same scheme, the emotion the pet owner expresses toward the pets in that paragraph is obtained. Integrating the emotions expressed toward the pets in all the natural paragraphs shows the final overall emotion that the article the owner wrote expresses toward the pets. The emotion of each paragraph, or the overall emotion of the article, may be a single emotion or a complex emotion: a single emotion such as a purely positive, negative, or neutral emotion; a complex emotion such as a mixed one that first suppresses and then praises, or first praises and then suppresses.
In application scenarios 1 and/or 2 above, to ensure the accuracy of the data represented as emotion obtained from a sentence, before the emotion analysis is performed with the analysis model, the data represented as emotion in the sentence is re-extracted. Whether the re-extracted data is consistent with the data represented as emotion obtained the first time from the multimedia data is determined; if so, the emotion represented by the emotion data assigned the first parameter is classified with the weight parameters in the analysis model. If not, the first target data is discarded and emotion analysis is performed on the re-extracted data with the weight parameters in the analysis model; or the multimedia data is divided again until first target data consistent with the re-extracted data is obtained. Re-extracting the data represented as emotion from the multimedia data ensures the accuracy of the emotion data extraction and thereby the accuracy of the emotion analysis. The re-extraction may or may not be performed by the analysis model; when it is, the model re-extracts the data represented as emotion from the multimedia data and analyzes the expressed emotion according to the weight parameters it assigns to that data and the proportion parameters assigned before the analysis. In this way, the accuracy of emotion analysis can be guaranteed.
An embodiment of the present application further provides a data analysis device. As shown in fig. 8, the device includes: a first obtaining unit 11, a dividing unit 12, an assigning unit 13, an analyzing unit 14, and a determining unit 15, wherein:
a first obtaining unit 11 for obtaining multimedia data;
the dividing unit 12 is configured to divide the multimedia data to obtain at least one first target data and at least one second target data, where the first target data is at least data in the multimedia data that represents an emotion, and the second target data represents the emotion object at which the emotion represented by each first target data is directed;
the assigning unit 13 is configured to assign each first target data a corresponding first parameter, the first parameter being a proportion parameter indicating that the first target data is emotion data;
the analysis unit 14 is configured to analyze the first target data assigned the first parameters with an analysis model to obtain a result of classifying the emotion represented by each first target data;
and the determining unit 15 is configured to determine, according to the classification result, the emotion that the multimedia data expresses toward each emotion object.
In an optional scheme, the device further includes a second obtaining unit configured to obtain the position information of each first target data and each second target data in the multimedia data. The analysis unit 14 is configured to determine the emotion change of the multimedia data toward each emotion object according to the position information of each first target data in the multimedia data, the position information of each second target data in the multimedia data, and the result of classifying the emotion represented by each first target data. The determining unit 15 is configured to determine the emotion that the multimedia data expresses toward each emotion object according to that emotion change.
In an optional scheme, the second obtaining unit is configured to obtain at least two first target data directed at the same emotion object, and to obtain the position information of each of the at least two first target data in the multimedia data. The analysis unit 14 is configured to determine the emotion change of the multimedia data toward the same emotion object according to the positional relationship of each first target data in the multimedia data and the result of classifying the emotion represented by each first target data. The determining unit 15 is configured to determine the emotion of the multimedia data toward the same emotion object according to that emotion change.
In an optional scheme, the assigning unit 13 is configured to assign each second target data a corresponding second parameter, the second parameter being a proportion parameter indicating that the second target data is an emotion object. The analysis unit 14 is configured to analyze the second target data assigned the second parameters with the analysis model to obtain a result of classifying each second target data by emotion object. The determining unit 15 is configured to determine the emotion that the multimedia data expresses toward each emotion object according to the result of classifying the emotion represented by each first target data and the result of classifying each second target data.
In an optional scheme, the analysis unit 14 is configured to obtain, with the analysis model, the probability that the emotion represented by each first target data is at least one preset emotion; to determine the category of the emotion represented by each first target data according to the probability; and to determine the emotion change of the multimedia data toward each emotion object according to that category and the position information of each first target data in the multimedia data.
In an optional scheme, the device further includes an extraction unit configured to re-extract the data represented as emotion in the multimedia data. The analysis unit 14 is configured to classify the emotion represented by the first target data assigned the first parameters with the weight parameters in the analysis model when the re-extracted data represented as emotion matches the first target data; and, when it does not match, to discard the first target data and perform emotion analysis on the re-extracted data with the weight parameters in the analysis model, or to divide the multimedia data again until first target data consistent with the re-extracted data is obtained.
It is understood that, in practical applications, the first obtaining unit 11, the dividing unit 12, the assigning unit 13, the analyzing unit 14, and the determining unit 15 of the device may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), or a Field-Programmable Gate Array (FPGA) of a data analysis device.
It should be noted that, because the data analysis device of the embodiments of the present application solves its problem on a principle similar to that of the data analysis method, the implementation process and principle of the device can be understood from those of the method, and repeated details are not given again.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs at least the steps of the method shown in any one of fig. 1 to 7. The computer-readable storage medium may specifically be a memory, such as the memory 62 shown in fig. 9.
The embodiment of the application also provides a terminal. Fig. 9 is a schematic diagram of the hardware structure of a data analysis device according to an embodiment of the present application. As shown in fig. 9, the data analysis device includes: a communication component 63 for data transmission, at least one processor 61, and a memory 62 for storing computer programs capable of running on the processor 61. The components of the terminal are coupled together by a bus system 64, which is used to enable communication among them. In addition to a data bus, the bus system 64 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as the bus system 64 in fig. 9.
The processor 61 executes the computer program to perform at least the steps of the method of any one of fig. 1 to 7.
It will be appreciated that the memory 62 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk memory or tape memory. The volatile memory may be Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 62 described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the above embodiments of the present application may be applied to the processor 61, or implemented by the processor 61. The processor 61 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 61 or by instructions in the form of software. The processor 61 may be a general purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 61 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the memory 62; the processor 61 reads the information in the memory 62 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the data analysis device may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general purpose processors, controllers, MCUs, microprocessors, or other electronic components, for performing the aforementioned data analysis method.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other division ways are possible in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, if the integrated units described above in the present application are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk, an optical disk, or various other media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of data analysis, comprising:
obtaining multimedia data;
dividing multimedia data to obtain at least one first target data and at least one second target data, wherein the first target data are at least data which are characterized as emotions in the multimedia data, and the second target data are characterized as emotion objects aiming at the emotions which are characterized by the first target data;
allocating a corresponding first parameter to each first target data, wherein the first parameter is a proportion parameter which represents that the first target data is emotion data;
analyzing the first target data allocated with the first parameters by using an analysis model to obtain a result of classifying the emotions represented by each first target data;
and determining the emotion of the multimedia data expressed to each emotion object according to the classification result.
2. The method of claim 1, further comprising:
obtaining the position information of each first target data and each second target data in the multimedia data;
determining emotion changes of the multimedia data to the emotion objects according to the position information of the first target data in the multimedia data, the position information of the second target data in the multimedia data and the classification results of the emotions represented by the first target data;
and determining the emotion of the multimedia data expressed to each emotional object according to the emotion change of the multimedia data expressed to each emotional object.
3. The method of claim 2, further comprising:
obtaining at least two first target data for the same emotional object;
obtaining the position information of each first target data in the at least two first target data in the multimedia data;
determining the emotion change of the multimedia data to the same emotion object according to the position relation of each first target data in the multimedia data and the emotion classification result represented by each first target data;
and determining the emotion of the multimedia data to the same emotional object according to the emotion change of the multimedia data to the same emotional object.
4. The method of claim 1, further comprising:
allocating a corresponding second parameter to each second target data, wherein the second parameter is a proportion parameter which represents that the second target data is an emotion object;
analyzing the second target data distributed with the second parameters by using the analysis model to obtain a result of carrying out sentiment object classification on each second target data;
correspondingly, the determining the emotion of the multimedia data to each emotion object according to the classification result includes:
and determining the emotion of the multimedia data to each emotion object according to the result of classifying the emotion represented by each first target data and the result of classifying each second target data.
5. The method of claim 2, wherein obtaining the classification result of the emotion represented by each first target data comprises:
obtaining the probability that the emotion represented by each first target data is at least one preset emotion by using an analysis model;
determining the type of the emotion represented by each first target data according to the probability;
and determining the emotion change of the multimedia data to each emotion object according to the type of the emotion represented by each first target data and the position information of each first target data in the multimedia data.
6. The method of any one of claims 1 to 5, wherein, prior to analyzing the first target data allocated with the first parameters by using the analysis model, the method further comprises:
re-extracting data which are expressed as emotion in the multimedia data;
correspondingly, the analyzing the first target data assigned with the first parameter by using the analysis model includes:
in case the re-extracted data representing emotion coincides with the first target data,
and classifying the emotion represented by the first target data assigned with the first parameter by using the weight parameter in the analysis model.
7. The method of claim 6, wherein, in the case that the re-extracted data representing emotion does not coincide with the first target data,
discarding the first target data, and performing emotion analysis on the newly extracted data which are expressed as emotion by using the weight parameters in the analysis model;
or, the multimedia data is divided again until the first target data consistent with the re-extracted data representing emotion is obtained.
8. A data analysis device comprising:
an obtaining unit configured to obtain multimedia data;
the dividing unit is used for dividing the multimedia data to obtain at least one first target data and at least one second target data, wherein the first target data is at least data which are characterized as emotions in the multimedia data, and the second target data is characterized as an emotion object aiming at the emotion which is characterized by each first target data;
the allocation unit is used for allocating corresponding first parameters to the first target data, and the first parameters are proportion parameters which represent that the first target data are emotion data;
the analysis unit is used for analyzing the first target data assigned with the first parameters by using the analysis model to obtain a result of classifying the emotions represented by each first target data;
and the determining unit is used for determining the emotion of the multimedia data expressed to each emotion object according to the classification result.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
10. A data analysis device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are carried out when the program is executed by the processor.
CN201911115580.8A 2019-11-14 2019-11-14 Data analysis method, device and storage medium Pending CN111079404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911115580.8A CN111079404A (en) 2019-11-14 2019-11-14 Data analysis method, device and storage medium


Publications (1)

Publication Number Publication Date
CN111079404A true CN111079404A (en) 2020-04-28

Family

ID=70311150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911115580.8A Pending CN111079404A (en) 2019-11-14 2019-11-14 Data analysis method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111079404A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117428A (en) * 2015-08-04 2015-12-02 电子科技大学 Web comment sentiment analysis method based on word alignment model
CN106407236A (en) * 2015-08-03 2017-02-15 北京众荟信息技术有限公司 An emotion tendency detection method for comment data
CN109522412A (en) * 2018-11-14 2019-03-26 北京神州泰岳软件股份有限公司 Text emotion analysis method, device and medium
CN109960791A (en) * 2017-12-25 2019-07-02 上海智臻智能网络科技股份有限公司 Judge the method and storage medium, terminal of text emotion



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination