CN114579751A - Emotion analysis method and device, electronic equipment and storage medium

Info

Publication number
CN114579751A
Authority
CN
China
Prior art keywords
emotion, sentences, client, sentence, single sentence
Prior art date
Legal status
Pending
Application number
CN202210364877.3A
Other languages
Chinese (zh)
Inventor
Lin Shifeng (林仕锋)
Wen Bo (文博)
Liu Yunfeng (刘云峰)
Current Assignee
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Application filed by Shenzhen Zhuiyi Technology Co Ltd
Priority to CN202210364877.3A
Publication of CN114579751A
Legal status: Pending

Classifications

    • G06F16/353 Information retrieval of unstructured textual data; Clustering; Classification into predefined classes
    • G06F16/3329 Information retrieval of unstructured textual data; Querying; Natural language query formulation or dialogue systems
    • G06F18/214 Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Pattern recognition; Classification techniques
    • G06F40/35 Handling natural language data; Semantic analysis; Discourse or dialogue representation


Abstract

The application discloses an emotion analysis method and apparatus, an electronic device, and a storage medium. The emotion analysis method includes: obtaining a plurality of single sentences within a preset time period, the plurality of single sentences including single sentences carrying service classification labels; inputting the plurality of single sentences into an emotion analysis model for processing to obtain an emotion category corresponding to each single sentence; determining, among the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type of the single sentences under each label; and obtaining a client emotion index corresponding to each service classification label according to the emotion categories and the total number of the target client single sentences. Because the client emotion index accurately reflects the client's emotion toward the service corresponding to the service classification label, it also accurately reflects the client's real requirement for the corresponding service information.

Description

Emotion analysis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet information processing technologies, and in particular, to a method and an apparatus for emotion analysis, an electronic device, and a storage medium.
Background
Enterprises receive a large amount of client dialogue information every day, such as consultations and complaints, and this dialogue information relates to different services. An enterprise performs emotion analysis on the dialogue information and then determines the client's real requirements for different services according to the emotion analysis results.
Currently, traditional emotion analysis methods determine the emotion analysis result for each service according to the agent speech information corresponding to that service, and then infer the client's real requirements for different services from that result. However, the emotion analysis results obtained in this way are often inaccurate.
Disclosure of Invention
In view of the above, embodiments of the present application provide an emotion analysis method and apparatus, an electronic device, and a storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides an emotion analysis method, including: obtaining a plurality of single sentences within a preset time period, the plurality of single sentences including single sentences carrying service classification labels; inputting the plurality of single sentences into an emotion analysis model for processing to obtain an emotion category corresponding to each single sentence; determining, among the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type of the single sentences under each service classification label; and obtaining a client emotion index corresponding to each service classification label according to the emotion categories and the total number of the target client single sentences.
In a second aspect, an embodiment of the present application provides an emotion analysis apparatus, including: a single sentence obtaining module, configured to obtain a plurality of single sentences within a preset time period, the plurality of single sentences including single sentences carrying service classification labels; an emotion analysis module, configured to input the plurality of single sentences into an emotion analysis model for processing to obtain the emotion category corresponding to each single sentence; a client single sentence determining module, configured to determine, among the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type of the single sentences under each service classification label; and an index determining module, configured to obtain a client emotion index corresponding to each service classification label according to the emotion categories and the total number of the target client single sentences.
Optionally, the apparatus further comprises:
the model training module is used for acquiring a plurality of single sentence samples and emotion categories corresponding to the single sentence samples; and inputting the plurality of single sentence samples and the emotion types corresponding to the single sentence samples into a pre-training model for training to obtain an emotion analysis model.
Optionally, the model training module is further configured to encode the plurality of single sentence samples using a text pre-training model; inputting the coded result into a classifier for classification; and adjusting model parameters of the text pre-training model based on the target loss function to obtain an emotion classification model.
Optionally, the emotion analysis module is further configured to input the plurality of single sentences into the emotion analysis model for processing to obtain the confidences of the single sentences for different emotion categories, and to obtain the emotion categories of the single sentences according to the relation between the confidences and the confidence thresholds corresponding to the different emotion categories.
Optionally, the single sentence type includes at least one of a client single sentence and an agent single sentence;
the client single sentence determining module is further configured to determine the single sentence type of each single sentence under each service classification label; if the single sentence type is an agent single sentence, obtain the client single sentences immediately preceding and following the agent single sentence; if the single sentence type is a client single sentence, obtain the client single sentence; and aggregate the client single sentences and the preceding and following client single sentences of the agent single sentences to obtain the target client single sentences.
Optionally, the index determining module is further configured to count the number of client single sentences corresponding to each emotion category according to the emotion category of each client single sentence among the target client single sentences, and to obtain the client emotion index corresponding to each service classification label according to the total number of single sentences, the number of client single sentences, and the preset weight corresponding to each emotion category.
Optionally, the index determining module is further configured to calculate, for each emotion category, the product of the number of client single sentences and the corresponding preset weight; sum these products over all emotion categories of the target client single sentences; and determine the ratio of that sum to the total number of single sentences as the client emotion index corresponding to each service classification label.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium having program code stored therein, wherein the program code, when executed by a processor, performs the method described above.
In a fifth aspect, embodiments of the present application provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the above-described method.
According to the emotion analysis method and apparatus, electronic device, and storage medium provided by the present application, the target client single sentences corresponding to each service classification label are determined among the plurality of single sentences according to the single sentence type of the single sentences under each label, and the client emotion index corresponding to each service classification label is obtained according to the emotion categories and the total number of the target client single sentences. Because the client emotion index is obtained from the emotion categories of the target client single sentences and their total number, it accurately reflects the client's emotion toward the service corresponding to the service classification label and therefore accurately reflects the client's real requirement for the corresponding service information.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a flowchart of a method for emotion analysis according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for training an emotion analysis model in an embodiment of the present application;
FIG. 3 is a flow chart of a method for obtaining emotion classification in an embodiment of the present application;
FIG. 4 is a flow chart of a method for obtaining a sentiment index of a client in an embodiment of the present application;
fig. 5 is a block diagram illustrating a structure of an emotion analyzing apparatus provided in an embodiment of the present application;
fig. 6 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 7 shows a schematic structural diagram of a computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. The described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order or importance; where permissible, the objects they qualify may be interchanged, so that the embodiments of the present application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Enterprises receive a large amount of client dialogue information every day, such as consultations and complaints, and this dialogue information relates to different services. An enterprise performs emotion analysis on the dialogue information and then determines the client's real requirements for different services according to the emotion analysis results.
Currently, traditional emotion analysis methods determine the emotion analysis result for each service according to the agent speech information corresponding to that service, and then infer the client's real requirements for different services from that result. However, the emotion analysis results obtained in this way are often inaccurate.
In other approaches, the emotion analysis result of a single sentence is determined by an emotion dictionary, a rule-based method, or a word vector model, resulting in low accuracy of the emotion analysis result.
According to the emotion analysis method and apparatus, electronic device, and storage medium provided by the present application, a plurality of single sentences within a preset time period are obtained, the plurality of single sentences including single sentences carrying service classification labels; the plurality of single sentences are input into an emotion analysis model for processing to obtain the emotion category corresponding to each single sentence; the target client single sentences corresponding to each service classification label are determined among the plurality of single sentences according to the single sentence type of the single sentences under each label; and the client emotion index corresponding to each service classification label is obtained according to the emotion categories and the total number of the target client single sentences. Because the client emotion index is obtained from the emotion categories of the target client single sentences and their total number, it accurately reflects the client's emotion toward the service corresponding to the service classification label and therefore accurately reflects the client's real requirement for the corresponding service information.
Meanwhile, the emotion analysis model is obtained by training a pre-training model; since the pre-training model has good text representation capability, the emotion categories output by the emotion analysis model are highly accurate.
Referring to fig. 1, fig. 1 is a flowchart of an emotion analysis method proposed in an embodiment of the present application. The method is applied to an electronic device and includes:
S110: obtain a plurality of single sentences within a preset time period, the plurality of single sentences including single sentences carrying service classification labels.
In the present application, the preset time period may be one hour, three hours, five hours, or the like, and may also take the form of a preset interval, for example from 2 pm to 3 pm each day. The single sentences within the preset time period may also be divided, according to the geographical areas to which they belong, into single sentences corresponding to different areas, with the method then executed separately for the single sentences of each area.
In the present application, voice dialogue information within the preset time period is first obtained. The speaker roles of the voice dialogue information comprise an agent and a client: the agent may be a telephone specialist or customer service staff serving the client, and the client may be a user making a consultation, giving feedback, or lodging a complaint. The client may communicate with the agent by telephone, with a recording device recording the voice dialogue information of the call, or communicate with the agent on site, with a recording device recording the voice dialogue information of the on-site communication. In some embodiments, the recording device may be a voice recorder, a telephone, or another device dedicated to recording, and the electronic device obtains the voice dialogue information from the recording device. In other embodiments, the voice dialogue information may be recorded by the electronic device, i.e., the recording device may also be the electronic device itself.
After the voice dialogue information is obtained, it is converted into text, and the resulting text information includes a plurality of single sentences. Several sentences spoken consecutively by the same speaker role in the voice dialogue information correspond to one single sentence in the text information; in other words, one agent single sentence (a single sentence whose speaker role is the agent) lies between two client single sentences (single sentences whose speaker role is the client), and one client single sentence lies between two agent single sentences. That is, as long as the speaker role does not change during conversion, all consecutive sentences of that speaker role correspond to one single sentence. For example, if during a communication the client speaks 3 sentences in a row and the agent then speaks 4 sentences, two single sentences result: a client single sentence consisting of the client's 3 sentences, and an agent single sentence consisting of the agent's 4 sentences.
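As an illustration, this turn-merging step could be sketched as follows; the (role, text) utterance format and the role labels are assumptions made for the example, not details given by the application:

```python
# A minimal sketch of the turn-merging step described above; the
# (role, text) utterance format is an illustrative assumption.
from itertools import groupby

def merge_turns(utterances):
    """Merge consecutive utterances by the same speaker role into one
    single sentence, so client and agent single sentences alternate."""
    merged = []
    for role, group in groupby(utterances, key=lambda u: u[0]):
        merged.append((role, " ".join(text for _, text in group)))
    return merged

# The client speaks 3 sentences in a row, then the agent speaks 4:
dialog = [("client", "s1"), ("client", "s2"), ("client", "s3"),
          ("agent", "s4"), ("agent", "s5"), ("agent", "s6"), ("agent", "s7")]
print(merge_turns(dialog))  # one client single sentence, one agent single sentence
```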
In some embodiments, initial text information may be converted from the voice information by a speech conversion model, and the initial text information is then divided into single sentences by a voice role recognition model, where all consecutive sentences of the same speaker role correspond to one single sentence as long as the speaker role does not change.
After the single sentences are obtained, a technician may manually analyze each single sentence to determine whether it includes service information (for example, by checking whether the single sentence contains keywords related to the service information). If the single sentence includes service information, the corresponding service classification label is added to it; if not, no service classification label is added. Here, the service information may refer to a specific service name or service content, and the service classification label may be a keyword corresponding to the service information, or identification information with an identifying function such as the name of the service information.
For example, if a single sentence A includes service information A1 and service information A2, the technician adds the service classification labels A3 and A4 corresponding to A1 and A2 to the single sentence A; the single sentence A then carries the service classification labels A3 and A4.
In other embodiments, a service dictionary may be used to analyze each single sentence to obtain the service information it includes, and the service classification label corresponding to that service information is then added to the single sentence. In still other embodiments, the single sentences may be classified by a service classification model (a model obtained from a neural network, or built from a classification algorithm) to obtain a service classification result for each single sentence, and the service classification label corresponding to the service classification result is then added to the single sentence.
In a specific implementation of the present application, a plurality of candidate service classification labels corresponding one-to-one to a plurality of pieces of candidate service information may be determined (the candidate service information may refer to all service information involved in the analysis process). Each single sentence is then checked against the candidate service information to determine which candidate service information it includes, and the candidate service classification labels corresponding to the candidate service information included in the single sentence are added to it.
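As one way to realise the dictionary-based labelling described above, a keyword lookup could look like the following sketch; the label names and keywords are invented for illustration and are not taken from the application:

```python
# Hypothetical keyword-based labelling; labels and keywords are
# illustrative assumptions.
SERVICE_KEYWORDS = {
    "billing": ["invoice", "bill", "charge"],
    "delivery": ["shipping", "package", "courier"],
}

def service_labels(sentence: str) -> list[str]:
    """Return every candidate service classification label whose
    keywords occur in the single sentence (possibly none)."""
    lowered = sentence.lower()
    return [label for label, words in SERVICE_KEYWORDS.items()
            if any(w in lowered for w in words)]

print(service_labels("I was charged twice on my bill"))  # ['billing']
```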
It can be understood that the obtained plurality of single sentences may include both single sentences carrying service classification labels and single sentences carrying none, and a single sentence carrying service classification labels carries at least one such label.
In the present application, each single sentence may further include other related information such as a dialog ID, a speaker role (indicating whether the speaker of the single sentence is the client or the agent), and turn information. The dialog ID identifies the voice dialogue information from which the single sentence originates, and different voice dialogues have different dialog IDs. The turn information identifies the order of the single sentences sharing the same dialog ID; it may be a sequence number assigned in order of speaking time, so that adjacent turn numbers indicate adjacent single sentences. For example, among the single sentences of the same dialog ID, the single sentence with turn number 001 occurs immediately before the one with turn number 002, the two are adjacent single sentences, and the single sentence with turn number 001 is the first single sentence of the current voice dialogue information.
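Collecting these fields, one possible record for a single sentence is sketched below; the concrete schema is an assumption of this illustration:

```python
# A sketch of one possible single-sentence record; the field set follows
# the description above, but the exact schema is an assumption.
from dataclasses import dataclass, field

@dataclass
class SingleSentence:
    dialog_id: str      # identifies the source voice dialogue
    turn: int           # order within the dialogue; adjacent turns are adjacent sentences
    role: str           # "client" or "agent"
    text: str
    service_labels: list[str] = field(default_factory=list)  # possibly empty
```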
S120: input the plurality of single sentences into an emotion analysis model for processing to obtain the emotion category corresponding to each single sentence.
The emotion analysis model is a model that performs emotion analysis on single sentences: inputting a single sentence into the emotion analysis model yields the emotion category of that single sentence, one emotion category per single sentence. Emotion categories can be defined in different ways; for example, by emotional polarity they can be divided into three categories (negative, neutral, and positive), or by specific emotional content into five categories (anger, complaint, neutral, thanks, and happiness). Users may also define other emotion categories based on their needs, which the present application does not limit.
It can be understood that all of the single sentences obtained in S110 may be input into the emotion analysis model for processing to obtain the emotion category corresponding to each single sentence; both client single sentences and agent single sentences receive emotion categories.
Before the single sentences are input into the emotion analysis model, they can be preprocessed to remove filler words, stop words, and the like, reducing redundant information; identical texts can also be deduplicated to reduce the amount of data to be analyzed and thus the time consumed by the analysis task.
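A minimal preprocessing pass along these lines might look as follows; the filler-word and stop-word lists are illustrative assumptions:

```python
# Sketch of the preprocessing step: strip filler and stop words, then
# de-duplicate identical texts. The word lists are assumptions.
FILLER_WORDS = {"um", "uh", "well"}
STOP_WORDS = {"the", "a", "of"}

def preprocess(sentences: list[str]) -> list[str]:
    cleaned, seen = [], set()
    for s in sentences:
        tokens = [t for t in s.lower().split()
                  if t not in FILLER_WORDS and t not in STOP_WORDS]
        text = " ".join(tokens)
        if text and text not in seen:   # drop duplicates of the same text
            seen.add(text)
            cleaned.append(text)
    return cleaned

print(preprocess(["um the bill is wrong", "the bill is wrong"]))  # one entry
```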
S130: determine, among the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type of the single sentences under each service classification label.
In the present application, the single sentence type refers to the speaker role of a single sentence: a single sentence type may be a client single sentence (the speaker role is the client) or an agent single sentence (the speaker role is the agent).
First, the single sentences corresponding to each service classification label can be grouped into a single sentence set, yielding one set per service classification label; a single sentence carrying several service classification labels is shared by several sets, and single sentences carrying no service classification label form an unlabeled single sentence set.
For each service classification label, the single sentences whose type is client single sentence are then determined according to the single sentence type of each single sentence in the corresponding set, i.e., the target client single sentences corresponding to that label are determined. Specifically, determining the target client single sentences corresponding to each service classification label among the plurality of single sentences according to the single sentence type of the single sentences under each label includes: determining the single sentence type of each single sentence under the label; if the single sentence type is an agent single sentence, obtaining the client single sentences immediately preceding and following the agent single sentence; if the single sentence type is a client single sentence, obtaining the client single sentence; and aggregating the client single sentences and the preceding and following client single sentences of the agent single sentences to obtain the target client single sentences.
For each single sentence in the set under a service classification label: if it is a client single sentence, it is determined to be a target client single sentence corresponding to that label; if it is an agent single sentence, the single sentences immediately preceding and following it can be located among the plurality of single sentences obtained in S110 according to its dialog ID and turn information, and those two single sentences are two target client single sentences. Because agent single sentences and client single sentences alternate, the single sentences adjacent to an agent single sentence are necessarily client single sentences.
It can be understood that for an agent single sentence, the two adjacent single sentences must be located among all of the plurality of single sentences (including every labeled single sentence set and the unlabeled set), not merely within the corresponding single sentence set, because the adjacent single sentences may carry no label at all and would then not appear in that set.
For a single sentence set under a service classification label containing B1 client single sentences and B2 agent single sentences, the total number of target client single sentences determined in this way does not exceed (B1 + 2B2).
In a specific implementation, for the single sentence set under a service classification label, one client single sentence may itself be adjacent to an agent single sentence that also carries the label, i.e., an adjacent client single sentence and agent single sentence may both carry the service classification label, causing that client single sentence to be collected twice. Directly taking (B1 + 2B2) as the total number of target client single sentences would therefore be inaccurate; the duplicated single sentences must be removed to obtain the target client single sentences, whose number consequently does not exceed (B1 + 2B2).
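Under the SingleSentence record sketched earlier, this collection-and-deduplication step could be written as in the sketch below; it is an illustration under those assumptions, not the application's implementation:

```python
# Sketch of S130 for one service classification label. Client sentences
# are taken directly; for agent sentences the adjacent preceding and
# following sentences (necessarily client sentences) are taken; duplicates
# are removed by keying on (dialog_id, turn).
def target_client_sentences(labelled, all_sentences):
    """labelled: single sentences carrying this label.
    all_sentences: every single sentence from S110, since an agent
    sentence's neighbours may carry no label at all."""
    by_pos = {(s.dialog_id, s.turn): s for s in all_sentences}
    targets = {}
    for s in labelled:
        if s.role == "client":
            targets[(s.dialog_id, s.turn)] = s
        else:  # agent single sentence: take its neighbours
            for t in (s.turn - 1, s.turn + 1):
                n = by_pos.get((s.dialog_id, t))
                if n is not None:
                    targets[(n.dialog_id, n.turn)] = n
    return list(targets.values())  # at most B1 + 2*B2 sentences
```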
S140: obtain the client emotion index corresponding to each service classification label according to the emotion categories of the target client single sentences and their total number.
For one service classification label, after the corresponding target client single sentences are obtained, the client emotion index is calculated from the emotion category of each target client single sentence, the number of client single sentences under each emotion category, and the total number of target client single sentences. Illustratively, for the service classification label C1, the target client single sentences comprise C2 sentences in total, of which the negative emotion category corresponds to C3 sentences, the neutral category to C4 sentences, and the positive category to C5 sentences; the client emotion index is then determined from the C3 negative, C4 neutral, and C5 positive sentences and the total number C2.
In the present application, the emotion category accounting for the largest share of the target client single sentences can also be identified, and the ratio of its number of client single sentences to the total number of single sentences used as the client emotion index; the larger the client emotion index, the more strongly the client holds that emotion toward the service information corresponding to the service classification label. For example, for the service classification label C6, if the neutral emotion category accounts for the largest share, with C7 neutral client single sentences out of C8 target client single sentences in total, the client emotion index may be C7/C8: the larger the ratio C7/C8, the higher the probability of the client's neutral emotion toward the service information corresponding to C6, further indicating that the client's demand for that service information is indifferent.
In this embodiment, a plurality of single sentences within a preset time period are obtained, the plurality of single sentences including single sentences carrying service classification labels; the plurality of single sentences are input into an emotion analysis model for processing to obtain the emotion category corresponding to each single sentence; the target client single sentences corresponding to each service classification label are determined among the plurality of single sentences according to the single sentence type of the single sentences under each label; and the client emotion index corresponding to each service classification label is obtained according to the emotion categories of the target client single sentences and their total number. Because the client emotion index is obtained from the emotion categories and the total number of the target client single sentences, it accurately reflects the client's emotion toward the service corresponding to the service classification label and therefore accurately reflects the client's real requirement for the corresponding service information.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for training an emotion analysis model in an embodiment of the present application, where the method includes:
s210: and acquiring a plurality of single sentence samples and the emotion category corresponding to each single sentence sample.
In the present application, the obtaining manner of the single sentence sample refers to the obtaining manner of the single sentence in S110, which is not described herein again, and the obtaining manner of the emotion category corresponding to each single sentence sample refers to the description of S120, which is not described herein again.
S220: input the plurality of single sentence samples and their corresponding emotion categories into a pre-training model for training to obtain the emotion analysis model.
In the present application, the emotion analysis model is obtained by training pre-trained language models (PLMs). Because a pre-trained model has good text representation capability, the emotion categories output by the resulting emotion analysis model are highly accurate. The pre-training model may be an ELMo model, a GPT model, a BERT model, or the like.
Optionally, inputting the plurality of single sentence samples and their corresponding emotion categories into a pre-training model for training to obtain the emotion analysis model includes: encoding the plurality of single sentence samples using a text pre-training model; inputting the encoded results into a classifier for classification; and adjusting the model parameters of the text pre-training model based on a target loss function to obtain the emotion classification model.
The target loss function is a loss function whose objective is to minimize the difference between the model classification result output by the classifier for a single sentence sample and the emotion category annotated for that sample; when the target loss function converges and stabilizes, the emotion classification model is obtained.
In the present application, the pre-training model comprises a text pre-training model and a classifier, where the text pre-training model may be a WoBERT pre-training model based on word granularity. A single sentence sample is input into the WoBERT pre-training model for encoding, the encoded result output by the WoBERT model is input into the classifier to obtain a model classification result, and the parameters of the text pre-training model and the classifier are adjusted according to the model classification results of the single sentence samples, their annotated emotion categories, and the target loss function, until the target loss function converges and stabilizes, yielding the emotion classification model.
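A hedged sketch of this fine-tuning loop is given below, using a BERT-style encoder with a classification head and cross-entropy as the target loss. The Hugging Face transformers toolchain and the checkpoint name are assumptions of the illustration; WoBERT weights would be substituted where available:

```python
# Fine-tuning sketch: BERT-style encoder + classification head trained
# with cross-entropy. "bert-base-chinese" is a placeholder checkpoint,
# not the WoBERT model named in the application.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=3)  # e.g. negative / neutral / positive
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(texts: list[str], labels: list[int]) -> float:
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    out = model(**batch, labels=torch.tensor(labels))  # loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```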
Similarly, during the training of the pre-training model, the sample single sentences may be preprocessed to remove filler words, stop words, and the like, reducing redundant information; identical texts may also be deduplicated to reduce the amount of data to be trained on and thus the time consumed by the training task.
Referring to fig. 3, fig. 3 is a flowchart illustrating an emotion classification obtaining method in an embodiment of the present application, where the method includes:
and S310, inputting the multiple single sentences into the emotion analysis model for processing to obtain confidence coefficients of the multiple single sentences corresponding to different emotion types.
In the application, after a single sentence is input into the emotion analysis model, the emotion analysis model outputs the confidence degrees of different emotion categories corresponding to the single sentence. For example, the emotion classification corresponding to the emotion analysis model is neutral, negative, and positive, the single sentence D1 is input to the emotion analysis model, and the emotion analysis model outputs the confidence degrees of D1 corresponding to different emotion classifications: neutral (confidence 0.1), negative (confidence 0.4), positive (confidence 0.5).
In some embodiments, the single sentence may also be preprocessed to remove the tone words, stop words, and the like, so as to reduce redundant information, and at the same time, the same text may also be deduplicated so as to reduce the amount of data to be analyzed, thereby reducing the time consumption of the analysis task.
And S320, obtaining emotion types of a plurality of single sentences according to the confidence degrees and the magnitude relation of confidence degree thresholds corresponding to different emotion types.
In the present application, the confidence thresholds corresponding to different emotion categories may be 0.5, 0.6, 0.7, or the like. When a certain emotion type of a single sentence is larger than a corresponding confidence threshold, the emotion type corresponding to the confidence larger than the corresponding confidence threshold is used as the emotion type of the single sentence, and if the confidence larger than the confidence threshold does not exist, the emotion type of the single sentence is determined as the emotion classification of a neutral type (the neutral type refers to an emotion which is not positive or negative, or an emotion which does not exist, and is neutral expression).
In an example, confidence threshold values of all emotion classifications corresponding to the emotion analysis model are 0.6, emotion categories corresponding to the emotion analysis model are neutral, negative and positive, after the single sentence E1 is input into the emotion analysis model, the obtained confidence levels are neutral (0.5), negative (0.3) and positive (0.2), and at this time, if the confidence level is not greater than the confidence threshold value, the emotion category of the single sentence E1 is determined to be neutral; when the confidence degrees obtained after the single sentence E2 is input into the emotion analysis model are neutral (0.2), negative (0.1) and positive (0.7), and at this time, the confidence degree of the positive emotion category is greater than the confidence degree threshold value, the emotion category of the single sentence E1 is determined to be positive.
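The thresholding rule can be stated compactly as in the sketch below, which reproduces the two examples above; the neutral fallback follows the description in S320:

```python
# Sketch of the S320 thresholding rule: pick the emotion category whose
# confidence exceeds its threshold, otherwise fall back to neutral.
def classify(confidences: dict[str, float],
             thresholds: dict[str, float]) -> str:
    over = {c: p for c, p in confidences.items() if p > thresholds[c]}
    if not over:
        return "neutral"              # nothing exceeds its threshold
    return max(over, key=over.get)    # highest qualifying confidence

thr = dict.fromkeys(("neutral", "negative", "positive"), 0.6)
print(classify({"neutral": 0.5, "negative": 0.3, "positive": 0.2}, thr))  # neutral
print(classify({"neutral": 0.2, "negative": 0.1, "positive": 0.7}, thr))  # positive
```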
Obtaining the emotion category of a single sentence from the confidence thresholds and the single sentence's confidences for the different emotion categories makes the assigned emotion category fit more closely the emotion of the client (or agent) when speaking the single sentence, so the accuracy of the emotion categories is higher.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for obtaining a client emotion index in an embodiment of the present application, where the method includes:
and S410, counting the number of the client single sentences corresponding to each emotion type according to the emotion type of each client single sentence in the target client single sentence.
The target client single sentence under each service label can comprise a plurality of target client single sentences, the emotion types of the target client single sentences are different, the emotion types are divided into a group, the number of corresponding single sentences is counted, and the number of the client single sentences of the emotion types is determined.
For example, the emotion categories include negative direction, positive direction and neutral, and the corresponding target client sentences include 300 negative direction sentences, 350 positive direction sentences and 320 neutral sentences, in this case, the number of client sentences in the negative emotion category is 300, the number of client sentences in the positive emotion category is 350, and the number of client sentences in the neutral emotion category is 320. The total number of sentences for the corresponding target customer sentence is 970.
And S420, obtaining a client emotion index corresponding to each service classification label according to the total number of the single sentences, the number of the client single sentences and the preset weight corresponding to each emotion type.
In the present application, each emotion category corresponds to a corresponding preset weight, for example, when the emotion categories include negative direction, positive direction and neutral direction, their preset weights are negative direction (preset weight of 0), neutral direction (preset weight of 0.5) and positive direction (preset weight of 1), and for example, when the emotion categories include anger, complain, neutral, thank you and happiness, their preset weights are anger (preset weight of 0), complain (preset weight of 0.2), neutral direction (preset weight of 0.5), thank you (preset weight of 0.7) and happiness (preset weight of 1).
It can be understood that if the preset weight corresponding to the emotion classification is sequentially increased from negative direction to positive direction along with the emotion classification, the higher the emotion index of the client is, the higher the number of single sentences of the client indicating that the emotion classification is positive direction is, the more the proportion of positive emotion is, the more the emotion of the client is positive direction; if the preset weight corresponding to the emotion category is sequentially reduced from negative direction to positive direction along with the emotion category, the lower the emotion index of the client is, the higher the number of the single sentences of the client indicating that the emotion category is positive direction is, the more the proportion of positive emotion is, and the more the emotion of the client is positive direction.
For example, when the emotion categories include negative direction (preset weight is 0), neutral direction (preset weight is 0.5) and positive direction (preset weight is 1), the higher the emotion index of the client, the more the number of client sentences with preset weight of 1 is, that is, the higher the number of client sentences with emotion category of positive direction is, the more proportion of positive emotion is, the more positive emotion is; when the emotion categories include negative direction (preset weight is 1), neutral direction (preset weight is 0.5) and positive direction (preset weight is 0), the higher the emotion index of the client is, the larger the number of the client single sentences with the preset weight of 1 is, that is, the higher the number of the client single sentences with the emotion category of negative direction is, the more negative emotion is.
After a target customer single sentence corresponding to a business classification label is determined, a customer emotion index corresponding to the business classification label is calculated according to the number of the customer single sentences of each emotion type in the target customer single sentence, the preset weight corresponding to each emotion type and the total number of the single sentences of the target customer single sentence, wherein the higher the customer emotion index is, the higher the positive emotion of the customer to the business information corresponding to the business classification label is, and the larger the demand of the customer to the business information corresponding to the business classification label is.
Optionally, obtaining the client emotion index corresponding to each service classification label according to the total number of target client single sentences, the number of client single sentences per emotion category, and the preset weight per emotion category includes: calculating, for each emotion category, the product of its number of client single sentences and its preset weight; summing these products over all emotion categories of the target client single sentences; and determining the ratio of that sum to the total number of single sentences as the client emotion index corresponding to each service classification label.
Illustratively, for the service classification label F1, the corresponding emotion categories are anger, complaint, neutral, thanks, and happiness, with preset weights anger (0), complaint (0.2), neutral (0.5), thanks (0.7), and happiness (1). The target client single sentences corresponding to F1 total F2 sentences, of which anger corresponds to F3 sentences, complaint to F4 sentences, neutral to F5 sentences, thanks to F6 sentences, and happiness to F7 sentences; the client emotion index is then (0 × F3 + 0.2 × F4 + 0.5 × F5 + 0.7 × F6 + 1 × F7) / F2.
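This worked formula translates directly into code; the counts below are illustrative stand-ins for F3 through F7, summing to F2 = 100:

```python
# The weighted client emotion index from the example above; counts are
# illustrative stand-ins for F3..F7 (summing to F2 = 100).
def client_emotion_index(counts: dict[str, int],
                         weights: dict[str, float]) -> float:
    total = sum(counts.values())            # total target client sentences (F2)
    return sum(counts[c] * weights[c] for c in counts) / total

weights = {"anger": 0.0, "complaint": 0.2, "neutral": 0.5,
           "thanks": 0.7, "happiness": 1.0}
counts = {"anger": 10, "complaint": 20, "neutral": 40,
          "thanks": 20, "happiness": 10}
print(client_emotion_index(counts, weights))  # (0+4+20+14+10)/100 = 0.48
```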
In this embodiment, the client emotion index accurately reflects the client's emotion toward the service corresponding to the service classification label, so the accuracy of the client emotion index is high.
Referring to fig. 5, fig. 5 is a block diagram illustrating a structure of an emotion analyzing apparatus according to an embodiment of the present application, where the apparatus 500 includes:
a single sentence obtaining module 510, configured to obtain a plurality of single sentences within a preset time period, the plurality of single sentences including single sentences carrying service classification labels;
the emotion analysis module 520 is used for inputting the single sentences into the emotion analysis model for processing to obtain emotion categories corresponding to the single sentences;
a client single sentence determining module 530, configured to determine, according to the single sentence type corresponding to the single sentence under each service classification label, a target client single sentence corresponding to each service classification label in the multiple single sentences;
and an index determining module 540, configured to obtain the client emotion index corresponding to each service classification label according to the emotion categories of the target client single sentences and their total number.
Optionally, the apparatus 500 further comprises:
the model training module is used for acquiring a plurality of single sentence samples and emotion categories corresponding to the single sentence samples; and inputting the plurality of single sentence samples and the emotion types corresponding to the single sentence samples into a pre-training model for training to obtain an emotion analysis model.
Optionally, the model training module is further configured to encode the plurality of single sentence samples using a text pre-training model; inputting the coded result into a classifier for classification; and adjusting model parameters of the text pre-training model based on the target loss function to obtain an emotion classification model.
Optionally, the emotion analysis module 520 is further configured to input the plurality of single sentences into the emotion analysis model for processing to obtain the confidences of the single sentences for different emotion categories, and to obtain the emotion categories of the single sentences according to the relation between the confidences and the confidence thresholds corresponding to the different emotion categories.
Optionally, the single sentence type includes at least one of a client single sentence and an agent single sentence;
the client single sentence determining module 530 is further configured to determine the single sentence type of each single sentence under each service classification label; if the single sentence type is an agent single sentence, obtain the client single sentences immediately preceding and following the agent single sentence; if the single sentence type is a client single sentence, obtain the client single sentence; and aggregate the client single sentences and the preceding and following client single sentences of the agent single sentences to obtain the target client single sentences.
Optionally, the index determining module 540 is further configured to count the number of client single sentences corresponding to each emotion category according to the emotion category of each client single sentence among the target client single sentences, and to obtain the client emotion index corresponding to each service classification label according to the total number of single sentences, the number of client single sentences, and the preset weight corresponding to each emotion category.
Optionally, the index determining module 540 is further configured to calculate, for each emotion category, the product of the number of client single sentences and the corresponding preset weight; sum these products over all emotion categories of the target client single sentences; and determine the ratio of that sum to the total number of single sentences as the client emotion index corresponding to each service classification label.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 100 includes one or more processors 102 (only one is shown in the figure) and a memory 104 coupled to each other, wherein the memory 104 stores therein a program capable of executing the contents of the foregoing embodiments, and the processor 102 executes the program stored in the memory 104.
The processor 102 may include one or more processors, among others. The processor 102 interfaces with various components throughout the electronic device 100 using various interfaces and circuitry to perform various functions of the electronic device 100 and process data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 102, but may be implemented by a communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 100 in use (such as a phonebook, audio and video data, and chat log data), and the like.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure. The computer-readable storage medium 900 stores program code that can be called by a processor to execute the methods described in the above method embodiments.
The computer-readable storage medium 900 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 900 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 900 has storage space for program code 910 that performs any of the method steps described above. The program code 910 can be read from or written to one or more computer program products. The program code 910 may, for example, be compressed in a suitable form.
In summary, according to the emotion analysis method and apparatus, the electronic device, and the storage medium provided by the present application, a plurality of single sentences carrying service classification labels are acquired within a preset time length, the plurality of single sentences are input into an emotion analysis model to obtain the emotion category corresponding to each single sentence, the target client single sentences corresponding to each service classification label are determined according to the single sentence type corresponding to each single sentence under that label, and the client emotion index corresponding to each service classification label is obtained according to the emotion categories and the total number of the target client single sentences. In this way, the client emotion under each service classification can be quantified.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. An emotion analysis method, the method comprising:
acquiring a plurality of single sentences within a preset time length, wherein the plurality of single sentences comprise single sentences carrying service classification labels;
inputting the plurality of single sentences into an emotion analysis model for processing to obtain an emotion category corresponding to each single sentence;
determining, in the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type corresponding to each single sentence under each service classification label;
and obtaining a client emotion index corresponding to each service classification label according to the emotion categories and the total number of the target client single sentences.
2. The method of claim 1, wherein the training of the emotion analysis model comprises:
obtaining a plurality of single sentence samples and the emotion category corresponding to each single sentence sample;
and inputting the plurality of single sentence samples and the emotion categories corresponding to the single sentence samples into a pre-training model for training to obtain the emotion analysis model.
3. The method of claim 2, wherein the inputting the plurality of single sentence samples and the emotion categories corresponding to the single sentence samples into a pre-training model for training to obtain the emotion analysis model comprises:
encoding the plurality of single sentence samples using a text pre-training model;
inputting the encoded results into a classifier for classification;
and adjusting the model parameters of the text pre-training model based on a target loss function to obtain the emotion analysis model.
4. The method of claim 1, wherein the inputting the plurality of single sentences into an emotion analysis model for processing to obtain an emotion category corresponding to each single sentence comprises:
inputting the plurality of single sentences into the emotion analysis model for processing to obtain the confidence degrees of each single sentence with respect to different emotion categories;
and obtaining the emotion category of each single sentence according to the comparison between the confidence degrees and the confidence degree thresholds corresponding to the different emotion categories.
5. The method of claim 1, wherein the single sentence type comprises at least one of a client single sentence and an agent single sentence;
the determining, in the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type corresponding to each single sentence under each service classification label comprises:
determining the single sentence type corresponding to each single sentence under each service classification label;
if the single sentence type is an agent single sentence, acquiring the client single sentences immediately preceding and following the agent single sentence;
if the single sentence type is a client single sentence, acquiring the client single sentence;
and aggregating the client single sentences adjacent to the agent single sentences with the directly acquired client single sentences to obtain the target client single sentences.
6. The method of claim 1, wherein the obtaining a client emotion index corresponding to each service classification label according to the emotion categories and the total number of the target client single sentences comprises:
counting the number of client single sentences corresponding to each emotion category according to the emotion category of each client single sentence among the target client single sentences;
and obtaining the client emotion index corresponding to each service classification label according to the total number of single sentences, the number of client single sentences per emotion category, and the preset weight corresponding to each emotion category.
7. The method of claim 6, wherein the obtaining the client emotion index corresponding to each service classification label according to the total number of the target client single sentences, the number of client single sentences corresponding to each emotion category, and the preset weight corresponding to each emotion category comprises:
calculating, for each emotion category, the product of the number of client single sentences and the corresponding preset weight;
calculating the sum of these products over all emotion categories corresponding to the target client single sentences;
and determining the ratio of the sum of the products to the total number of single sentences as the client emotion index corresponding to each service classification label.
8. An emotion analysis apparatus, the apparatus comprising:
a single sentence acquisition module, configured to acquire a plurality of single sentences within a preset time length, wherein the plurality of single sentences comprise single sentences carrying service classification labels;
an emotion analysis module, configured to input the plurality of single sentences into an emotion analysis model for processing to obtain the emotion category corresponding to each single sentence;
a client single sentence determining module, configured to determine, in the plurality of single sentences, the target client single sentences corresponding to each service classification label according to the single sentence type corresponding to each single sentence under each service classification label;
and an index determining module, configured to obtain a client emotion index corresponding to each service classification label according to the emotion categories and the total number of the target client single sentences.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1-7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1-7.
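By way of illustration only, the following Python sketch shows one plausible realization of the structure recited in claims 2 to 4: a text pre-training model serving as the encoder, a classifier over the encoded representation, and per-category confidence thresholds applied at prediction time. The pre-trained model name, the choice of cross-entropy as the target loss function, the label set, the thresholds, and the fallback category are all assumptions made for the example and are not specified by the claims.

```python
# Hedged sketch of an emotion analysis model per claims 2-4; all concrete
# choices below (model name, loss, labels, thresholds) are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class EmotionClassifier(nn.Module):
    def __init__(self, pretrained_name="bert-base-chinese", num_emotions=3):
        super().__init__()
        # Text pre-training model used as the sentence encoder (claim 3, step 1).
        self.encoder = AutoModel.from_pretrained(pretrained_name)
        # Classifier over the encoded representation (claim 3, step 2).
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_emotions)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Classify from the [CLS] token representation.
        return self.classifier(hidden.last_hidden_state[:, 0])

# An assumed target loss for adjusting the model parameters (claim 3, step 3).
loss_fn = nn.CrossEntropyLoss()

def predict_emotion(model, tokenizer, sentence, labels, thresholds):
    """Claim 4: compare per-category confidences against per-category thresholds."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(inputs["input_ids"], inputs["attention_mask"])
        confidences = torch.softmax(logits, dim=-1)[0]
    candidates = [
        (conf.item(), label)
        for conf, label in zip(confidences, labels)
        if conf.item() >= thresholds[label]
    ]
    # Fall back to "neutral" if no category clears its threshold (an assumption).
    return max(candidates)[1] if candidates else "neutral"

# Example usage (hypothetical labels and thresholds):
# tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
# model = EmotionClassifier()
# predict_emotion(model, tokenizer, "The service is too slow",
#                 ["negative", "neutral", "positive"],
#                 {"negative": 0.5, "neutral": 0.5, "positive": 0.5})
```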