CN118035454A - Expression package classification recognition method, apparatus, computer device and storage medium - Google Patents


Publication number
CN118035454A
CN118035454A (application CN202410424210.7A)
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN202410424210.7A
Other languages
Chinese (zh)
Other versions
CN118035454B (en)
Inventor
薛云
刘俊希
冯燕燕
Current Assignee
South China Normal University
Original Assignee
South China Normal University
Priority date
Filing date
Publication date
Application filed by South China Normal University
Priority to CN202410424210.7A
Publication of CN118035454A
Application granted
Publication of CN118035454B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques


Abstract

The invention relates to the field of speech recognition, in particular to an expression package classification recognition method, an expression package classification recognition device, computer equipment and a storage medium.

Description

Expression package classification recognition method, apparatus, computer device and storage medium
Technical Field
The present invention relates to the field of speech recognition, and in particular, to a method and apparatus for classifying and recognizing expression packages, a computer device, and a storage medium.
Background
With the vigorous development of the internet industry, social media and online platforms have become the main channels through which people communicate, share information and express viewpoints. However, alongside the widespread popularity of social media platforms, a new type of multimodal entity, the expression package (meme), has also been growing; it is typically formed from a combination of an image and short text. Such entities are increasingly popular across large social media networks and are one of the means by which certain network users propagate speech.
To address this problem, early work used pre-trained models to detect the fused alignment between modalities in social media content, focusing on hateful memes. Given the complex reasoning and contextual background knowledge required to judge the meaning behind a meme, later methods attempted to combine external tools or add external knowledge on top of a visual language model framework to improve the classification accuracy of the model. However, current methods for classifying and identifying hateful expression packages are too focused on improving classification performance by introducing more external knowledge, and may overlook the irrelevant or redundant content that such external knowledge can contain. Taking picture entity identification information as an example, the additionally added entity information may include entities that are irrelevant or redundant to the expression; the irrelevant information thus introduced may interfere with the classification judgment of the model and make accurate classification and identification of the expression package difficult.
Disclosure of Invention
Based on text data, the invention provides an expression package classification and identification method, apparatus, computer device and storage medium. Feature extraction is performed on the expression package to be tested to obtain its text feature representation; combining the text data and tag data of the demonstration expression packages of a plurality of categories, an information perception method is adopted to carry out multi-view information perception on the text feature representation of the expression package to be tested, obtaining information perception feature representations of the plurality of categories and strengthening the entity information associated with the expression contained in the text feature representation of the expression package to be tested; feature fusion is then performed on the information perception feature representations of the plurality of categories, so that the classification and identification of the expression package to be tested are carried out more comprehensively and the accuracy of classification and identification of the expression package to be tested is improved. The technical scheme comprises the following steps:
In a first aspect, an embodiment of the present application provides an expression packet classification and identification method, including the following steps:
Obtaining text data of an expression package to be detected, text data of a plurality of categories of demonstration expression packages, tag data and a preset expression package classification and identification model, wherein the expression package classification and identification model comprises a coding module, a feature extraction module, a feature processing module and a classification and identification module;
Inputting the text data of the expression package to be tested, the text data of the demonstration expression packages of a plurality of categories and the tag data into the coding module for coding processing to obtain text coding representation of the expression package to be tested, the text coding representation of the demonstration expression packages of a plurality of categories and the tag coding representation;
Inputting the text coding representation of the expression package to be tested, the text coding representations of the demonstration expression packages of a plurality of categories and the tag coding representation into the feature extraction module for feature extraction to obtain the text feature representation of the expression package to be tested, the text feature representation of the demonstration expression packages of a plurality of categories and the tag feature representation;
inputting the text feature representation of the expression package to be detected, the text feature representations of the demonstration expression packages of a plurality of categories and the tag feature representations into the feature processing module for information perception to obtain information perception feature representations of a plurality of categories, and carrying out feature fusion on the information perception feature representations of a plurality of categories to obtain feature fusion representations;
Inputting the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories and the feature fusion representation into the classification and identification module for classification and identification to obtain classification and identification prediction probability data, obtaining from the classification and identification prediction probability data the target prediction probability corresponding to the dimension with the largest value, and taking the classification type of that dimension as the classification and identification result of the expression package to be detected.
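Taken together, the claimed steps form a four-stage pipeline (encoding, feature extraction, information perception with fusion, classification). The sketch below is a minimal stand-in with toy numpy substitutes for each module; the patent's actual modules are a pre-trained language model, an LSTM, fully connected perception networks and a linear classifier, and all function names, dimensions and combination rules here are illustrative assumptions.

```python
import hashlib
import numpy as np

DIM, NUM_CLASSES = 8, 2
W_CLS = np.random.default_rng(0).standard_normal((NUM_CLASSES, DIM))

def encode(text):
    # Stand-in for the coding module (the patent uses a pre-trained
    # language model): a deterministic pseudo-embedding of the text.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(DIM)

def extract(h):
    # Stand-in for the LSTM feature extraction module.
    return np.tanh(h)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def perceive(h_test, demos):
    # Stand-in for one category's information perception: blend the test
    # features with the mean features of that category's demonstrations.
    h_demo = np.mean([extract(encode(t) + encode(l)) for t, l in demos], axis=0)
    return np.tanh(h_test + h_demo)

def classify_meme(test_text, pos_demos, neg_demos):
    h = extract(encode(test_text))
    f_pos = perceive(h, pos_demos)
    f_neg = perceive(h, neg_demos)
    f_fuse = 0.5 * (f_pos + f_neg)   # placeholder for the soft-gate fusion
    # One linear classifier per view; sub-probabilities summed before arg-max.
    p = sum(softmax(W_CLS @ v) for v in (h, f_pos, f_neg, f_fuse))
    return p, int(np.argmax(p))

p, pred = classify_meme("a meme caption",
                        [("cheerful caption", "positive")],
                        [("hostile caption", "negative")])
```

The arg-max over the summed per-view probabilities mirrors the claim's "dimension with the largest value" selection rule.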
In a second aspect, an embodiment of the present application provides an expression packet classification and identification device, including:
The data acquisition module is used for obtaining text data of an expression package to be detected, text data of a plurality of categories of demonstration expression packages, tag data and a preset expression package classification recognition model, wherein the expression package classification recognition model comprises a coding module, a feature extraction module, a feature processing module and a classification recognition module;
The data coding module is used for inputting the text data of the expression package to be tested, the text data of the demonstration expression packages of a plurality of categories and the tag data into the coding module for coding processing to obtain text coding representation of the expression package to be tested, the text coding representation of the demonstration expression packages of a plurality of categories and the tag coding representation;
The data feature extraction module is used for inputting the text coding representation of the expression package to be detected, the text coding representations of the demonstration expression packages of the categories and the tag coding representations into the feature extraction module for feature extraction to obtain the text feature representation of the expression package to be detected, the text feature representations of the demonstration expression packages of the categories and the tag feature representations;
The data feature processing module is used for inputting the text feature representation of the expression package to be detected, the text feature representation of the demonstration expression package of a plurality of categories and the tag feature representation into the feature processing module for information perception to obtain information perception feature representations of a plurality of categories, and carrying out feature fusion on the information perception feature representations of a plurality of categories to obtain feature fusion representation;
The expression package classification recognition module is used for inputting the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories and the feature fusion representation into the classification recognition module for classification recognition to obtain classification recognition prediction probability data, obtaining from the classification recognition prediction probability data the target prediction probability corresponding to the dimension with the largest value, and taking the classification type of that dimension as the classification recognition result of the expression package to be detected.
In a third aspect, an embodiment of the present application provides a computer apparatus, including: a processor, a memory, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor implements the steps of the expression package classification and identification method as described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium storing a computer program, which when executed by a processor implements the steps of the expression package classification recognition method according to the first aspect.
In this embodiment, an expression package classification and identification method, apparatus, computer device and storage medium are provided. Based on text data, feature extraction is performed on the expression package to be tested to obtain its text feature representation; combining the text data and tag data of the demonstration expression packages of a plurality of categories, an information perception method is adopted to perform information perception on the text feature representation of the expression package to be tested, obtaining information perception feature representations of the plurality of categories, so that the entity information associated with the expression contained in the text feature representation of the expression package to be tested is strengthened; feature fusion is then performed on the information perception feature representations of the plurality of categories, so that the classification and identification of the expression package to be tested are carried out more comprehensively and the accuracy of classification and identification of the expression package to be tested is improved.
For a better understanding and implementation, the present invention is described in detail below with reference to the drawings.
Drawings
Fig. 1 is a flowchart of the multi-modal expression package classification method according to the first embodiment of the present application;
Fig. 2 is a schematic flowchart of S4 in the multi-modal expression package classification method according to the first embodiment of the present application;
Fig. 3 is a flowchart of S4 in the multi-modal expression package classification method according to the first embodiment of the present application;
Fig. 4 is a flowchart of S5 in the multi-modal expression package classification method according to the first embodiment of the present application;
Fig. 5 is a flowchart of S6 in the multi-modal expression package classification method according to the second embodiment of the present application;
Fig. 6 is a flowchart of S6 in the multi-modal expression package classification method according to the third embodiment of the present application;
Fig. 7 is a flowchart of S6 in the multi-modal expression package classification method according to the fourth embodiment of the present application;
Fig. 8 is a schematic structural diagram of the multi-modal expression package classification apparatus according to the fifth embodiment of the present application;
Fig. 9 is a schematic structural diagram of a computer device according to the sixth embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. The word "if" as used herein may be interpreted as "when", "upon" or "in response to determining", depending on the context.
Referring to fig. 1, fig. 1 is a flowchart of an expression packet classification and identification method according to a first embodiment of the present application, including the following steps:
S1: obtaining text data of an expression package to be detected, text data of a plurality of categories of demonstration expression packages, tag data and a preset expression package classification and identification model, wherein the expression package classification and identification model comprises a coding module, a feature extraction module, a feature processing module and a classification and identification module.
The execution subject of the expression package classification recognition method of the present application is a classification device (hereinafter referred to as the classification device).
In an alternative embodiment, the sorting device may be a computer device, may be a server, or may be a server cluster formed by combining multiple computer devices.
In this embodiment, the classification device may obtain the expression package to be detected and the demonstration expression packages of the plurality of categories input by the user, or may obtain them from a preset database. The classification device adopts an image-to-text tool (ClipCap) to extract text from the expression package to be detected and the demonstration expression packages of the plurality of categories, obtaining text data of the expression package to be detected, text data of the demonstration expression packages of the plurality of categories and tag data, wherein the tag data indicates the type of the text data, and the types comprise a positive type and a negative type.
The expression package classification recognition model is preset by the classification equipment, wherein the expression package classification recognition model comprises a coding module, a feature extraction module, a feature processing module and a classification recognition module.
S2: inputting the text data of the expression package to be tested, the text data of the demonstration expression packages of the categories and the tag data into the coding module for coding processing to obtain text coding representation of the expression package to be tested, the text coding representation of the demonstration expression packages of the categories and the tag coding representation.
The classification equipment inputs the text data of the expression package to be detected, the text data of the demonstration expression packages of the categories and the tag data into the coding module for coding processing, and obtains the text coding representation of the expression package to be detected, the text coding representation of the demonstration expression packages of the categories and the tag coding representation.
Specifically, the classification device adopts a pre-trained language model (RoBERTa-large) as the coding module. In an optional embodiment, the classification device constructs a sequence from the text data of the expression package to be detected and the text data and tag data of the demonstration expression packages of the plurality of categories, according to a preset inference-example region length and the demonstration-example region lengths of the plurality of categories, obtaining an input sequence for the coding module; this facilitates extracting the global information of each region and strengthens the pre-trained language model's understanding of the whole sequence. The classification device inputs the input sequence into the pre-trained language model for coding processing to obtain the text coding representation of the expression package to be detected, the text coding representations of the demonstration expression packages of the plurality of categories and the tag coding representation.
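The region-based sequence construction can be sketched as plain prompt assembly. The separator token, slot wording and region layout below are assumptions; the patent specifies only that demonstration-example regions and an inference-example region of preset lengths are concatenated into one input sequence:

```python
def build_input_sequence(test_text, demos, sep="</s>", mask="<mask>"):
    """demos: list of (text, label) pairs forming the demonstration regions.
    Each demonstration region pairs a text with its tag; the inference
    region pairs the test text with a masked label slot."""
    regions = [f"Text: {t} Label: {l}" for t, l in demos]
    regions.append(f"Text: {test_text} Label: {mask}")
    return f" {sep} ".join(regions)

seq = build_input_sequence("meme caption",
                           [("demo A", "positive"), ("demo B", "negative")])
```

The resulting string would then be tokenized and fed to the RoBERTa-large encoder in one pass.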
S3: inputting the text coding representation of the expression package to be tested, the text coding representations of the demonstration expression packages of a plurality of categories and the tag coding representation into the feature extraction module for feature extraction, and obtaining the text feature representation of the expression package to be tested, the text feature representation of the demonstration expression packages of a plurality of categories and the tag feature representation.
In this embodiment, the classification device uses a long short-term memory network (LSTM) as the feature extraction module, and inputs the text coding representation of the expression package to be detected, the text coding representations of the demonstration expression packages of the plurality of categories and the tag coding representation into the feature extraction module for feature extraction, obtaining the text feature representation of the expression package to be detected, the text feature representations of the demonstration expression packages of the plurality of categories and the tag feature representation, so as to extract the global information of the expression package to be detected and the demonstration expression packages.
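The recurrence the LSTM relies on to accumulate global sequence information can be written out directly. This toy numpy cell uses random weights and illustrative dimensions; a real implementation would use a deep-learning framework's LSTM layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: the four gates are computed jointly from the current
    input x and the previous hidden state h."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # cell state carries long-range information
    h = o * np.tanh(c)         # hidden state is the per-step feature
    return h, c

IN, HID = 6, 4
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * HID, IN)) * 0.1
U = rng.standard_normal((4 * HID, HID)) * 0.1
b = np.zeros(4 * HID)

h, c = np.zeros(HID), np.zeros(HID)
for x in rng.standard_normal((5, IN)):   # 5 encoded tokens
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarizes the whole sequence, i.e. a global feature representation
```

Running the cell over all token encodings and keeping the final hidden state is one simple way to obtain a single global feature vector per text.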
S4: inputting the text feature representation of the expression package to be detected, the text feature representations of the demonstration expression packages of a plurality of categories and the tag feature representations into the feature processing module for information perception to obtain information perception feature representations of the plurality of categories, and carrying out feature fusion on the information perception feature representations of the plurality of categories to obtain feature fusion representation.
In this embodiment, the classification device inputs the text feature representation of the expression package to be detected, the text feature representation of the presentation expression package of the plurality of categories, and the tag feature representation into the feature processing module to perform information sensing, obtain information sensing feature representations of the plurality of categories, and performs feature fusion on the information sensing feature representations of the plurality of categories to obtain feature fusion representation.
The feature processing module includes a fully connected network corresponding to the positive category and a fully connected network corresponding to the negative category. Referring to fig. 2, fig. 2 is a schematic flowchart of step S4 in the expression package classification and identification method provided by the first embodiment of the present application, including steps S41 to S43, specifically as follows:
S41: and constructing mask data of the text data of the expression package to be detected, and carrying out coding processing and feature extraction processing on the mask data of the text data of the expression package to be detected to obtain mask feature representation of the expression package to be detected.
In this embodiment, the classification device constructs mask data of text data of the expression package to be detected, where the mask data is used for classifying and identifying the expression package to be detected.
And the classifying equipment carries out coding processing and feature extraction processing on mask data of the text data of the expression package to be detected, and obtains mask feature representation of the expression package to be detected.
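The patent does not spell out how the mask data is constructed. One common prompt-style reading, sketched below purely as an assumption, pairs the test text with a masked label slot so that the subsequent encoding and feature extraction produce a representation at the mask position:

```python
def build_mask_data(test_text, mask_token="<mask>"):
    # Hypothetical construction: the label slot of the expression package
    # to be detected is filled with the mask token for prediction.
    return f"Text: {test_text} Label: {mask_token}"

mask_data = build_mask_data("meme caption")
```

The string would then go through the same coding and feature extraction steps as the ordinary text data to yield the mask feature representation.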
S42: and inputting the text feature representation, the mask feature representation, the text feature representation and the label feature representation of the presentation expression package corresponding to the active category into the fully-connected network corresponding to the active category, and extracting according to the preset first information perception feature to obtain the information perception feature representation corresponding to the active category.
The first information perception feature is extracted as follows:
\( F_{pos} = \mathrm{FFN}_{pos}\left( h_{t} \,\Vert\, h_{m} \,\Vert\, h_{pos} \,\Vert\, l_{pos} \right) \)

where \( F_{pos} \) is the information perception feature representation corresponding to the active category, \( \mathrm{FFN}_{pos} \) is the fully connected network corresponding to the active category, \( h_{t} \) is the text feature representation of the expression package to be tested, \( h_{m} \) is the mask feature representation of the expression package to be tested, \( h_{pos} \) is the text feature representation of the demonstration expression package corresponding to the active category, \( l_{pos} \) is the tag feature representation of the demonstration expression package corresponding to the active category, and \( \Vert \) is the splice (concatenation) symbol.
In this embodiment, the classification device inputs the text feature representation, the mask feature representation, the text feature representation and the tag feature representation of the presentation expression package corresponding to the active category into the fully connected network corresponding to the active category, extracts the information perception feature representation corresponding to the active category according to the preset first information perception feature, and strengthens global information in text data of the expression package to be detected and relevant positive type features in text data of the presentation expression package in an information perception manner, thereby improving accuracy of classification recognition of the expression package to be detected.
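The first information perception feature extraction amounts to one fully connected layer over the spliced features. A numpy sketch follows; the tanh activation, layer width and random weights are assumptions, since the patent only specifies a fully connected network over the concatenation:

```python
import numpy as np

def info_perception(h_t, h_m, h_demo, l_demo, W, b):
    """Splice (concatenate) the four feature representations and pass them
    through one fully connected layer: F = FFN([h_t || h_m || h_demo || l_demo])."""
    x = np.concatenate([h_t, h_m, h_demo, l_demo])
    return np.tanh(W @ x + b)

D = 4
rng = np.random.default_rng(2)
W_pos = rng.standard_normal((D, 4 * D))   # maps 4*D spliced dims back to D
b_pos = np.zeros(D)
f_pos = info_perception(*rng.standard_normal((4, D)), W_pos, b_pos)
```

The negative-category branch in S43 has the same shape, with its own weights and the negative demonstration features in place of the positive ones.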
S43: and inputting the text feature representation of the expression package to be detected, the mask feature representation, the text feature representation of the presentation expression package of the negative category and the tag feature representation into the fully-connected network corresponding to the negative category, and extracting according to the preset second information perception feature to obtain the information perception feature representation corresponding to the negative category.
The second information perception feature is extracted as follows:
\( F_{neg} = \mathrm{FFN}_{neg}\left( h_{t} \,\Vert\, h_{m} \,\Vert\, h_{neg} \,\Vert\, l_{neg} \right) \)

where \( F_{neg} \) is the information perception feature representation corresponding to the negative category, \( \mathrm{FFN}_{neg} \) is the fully connected network corresponding to the negative category, \( h_{neg} \) is the text feature representation of the demonstration expression package of the negative category, and \( l_{neg} \) is the tag feature representation of the demonstration expression package of the negative category.
In this embodiment, the classification device inputs the text feature representation of the expression package to be detected, the mask feature representation, the text feature representation of the presentation expression package of the negative category, and the tag feature representation into the fully connected network corresponding to the negative category, extracts according to the preset second information sensing feature, and obtains the information sensing feature representation corresponding to the negative category, and in an information sensing manner, enhances global information in text data of the expression package to be detected and relevant features of the negative type in the text data of the presentation expression package, thereby improving accuracy of classification identification of the expression package to be detected.
Referring to fig. 3, fig. 3 is a flowchart of step S4 in the expression packet classification and recognition method according to the first embodiment of the present application, including step S44, specifically as follows:
S44: and obtaining the feature fusion representation according to the information perception feature representation corresponding to the positive category, the information perception feature representation of the negative category and a preset feature fusion algorithm by adopting a soft gate mechanism.
The feature fusion algorithm is as follows:
\( F_{fuse} = g \odot F_{pos} + (1 - g) \odot F_{neg} \)

where \( F_{fuse} \) is the feature fusion representation and \( g \) is the soft gate, a gating vector with components in \( (0, 1) \) computed from \( F_{pos} \) and \( F_{neg} \).
In this embodiment, the classification device adopts a soft gate mechanism and obtains the feature fusion representation according to the information perception feature representation corresponding to the positive category, the information perception feature representation of the negative category and a preset feature fusion algorithm, extracting the relationship between the expression package to be detected and both the positive-type and negative-type demonstration expression packages, so as to improve the accuracy of classification and identification of the expression package to be detected.
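The soft gate can be sketched as a learned sigmoid gate over the two spliced perception features. The exact parameterization below is an assumption; the patent names only "a soft gate mechanism":

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_gate_fuse(f_pos, f_neg, W_g, b_g):
    # g in (0,1)^d weighs the positive view against the negative view,
    # giving an elementwise convex combination of the two features.
    g = sigmoid(W_g @ np.concatenate([f_pos, f_neg]) + b_g)
    return g * f_pos + (1 - g) * f_neg

D = 4
rng = np.random.default_rng(3)
W_g, b_g = rng.standard_normal((D, 2 * D)), np.zeros(D)
f_pos, f_neg = rng.standard_normal(D), rng.standard_normal(D)
f_fuse = soft_gate_fuse(f_pos, f_neg, W_g, b_g)
```

Because the gate is learned, the model can lean toward whichever category's evidence is more informative for a given input.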
S5: inputting the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories and the feature fusion representation into the classification recognition module for classification recognition to obtain classification recognition prediction probability data, obtaining from the classification recognition prediction probability data the target prediction probability corresponding to the dimension with the largest value, and taking the classification type of that dimension as the classification recognition result of the expression package to be detected.
In this embodiment, the classification device inputs the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories and the feature fusion representation into the classification recognition module for classification recognition to obtain classification recognition prediction probability data, obtains from the classification recognition prediction probability data the target prediction probability corresponding to the dimension with the largest value, and takes the classification type of that dimension as the classification recognition result of the expression package to be detected.
Based on text data, feature extraction is performed on the expression package to be detected to obtain its text feature representation; combining the text data and tag data of the demonstration expression packages of multiple categories, an information perception method is adopted to carry out multi-view information perception on the text feature representation of the expression package to be detected, obtaining information perception feature representations of multiple categories so as to strengthen the entity information associated with the expression contained in the text feature representation of the expression package to be detected; feature fusion is then performed on the information perception feature representations of the multiple categories, so that classification recognition of the expression package to be detected is carried out more comprehensively and its accuracy is improved.
Referring to fig. 4, fig. 4 is a flowchart of step S5 in the expression packet classification and recognition method according to the first embodiment of the present application, including step S51, specifically as follows:
S51: obtaining the classification recognition prediction probability data according to the text feature representation of the expression package to be detected, the information perception feature representation corresponding to the active category, the information perception feature representation corresponding to the passive category, the feature fusion representation and a preset classification recognition prediction probability calculation algorithm.
The classification recognition prediction probability calculation algorithm is as follows:
In the formula, the first sub-classification recognition prediction probability data is based on the text feature representation, the second sub-classification recognition prediction probability data is based on the information perception feature representation corresponding to the passive category, the third sub-classification recognition prediction probability data is based on the information perception feature representation corresponding to the active category, and the fourth sub-classification recognition prediction probability data is based on the feature fusion representation; a linear classification function maps each representation to its sub-prediction probability data, and the four together yield the classification recognition prediction probability data.
In this embodiment, the classification device obtains the classification recognition prediction probability data according to the text feature representation of the expression package to be detected, the information perception feature representation corresponding to the active category, the information perception feature representation of the passive category, the feature fusion representation and a preset classification recognition prediction probability calculation algorithm.
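The patent's combining formula is given only as an image; the sketch below assumes a shared linear classification function followed by a softmax, and an averaged combination of the four sub-prediction distributions — both are assumptions, not the patent's confirmed method:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def linear_classifier(W, b, h):
    """W: (num_classes, d) weights of the linear classification function."""
    return softmax(W @ h + b)

d, num_classes = 8, 3
W = rng.normal(size=(num_classes, d))
b = np.zeros(num_classes)

# Stand-ins for the four representations of the expression package to be detected.
h_text  = rng.normal(size=d)   # text feature representation
h_neg   = rng.normal(size=d)   # information perception feature, passive category
h_pos   = rng.normal(size=d)   # information perception feature, active category
h_fused = rng.normal(size=d)   # feature fusion representation

sub_probs = [linear_classifier(W, b, h) for h in (h_text, h_neg, h_pos, h_fused)]
# Assumed combination rule: average the four sub-prediction distributions.
pred_probs = np.mean(sub_probs, axis=0)
```

Averaging keeps the result a valid probability distribution, from which the largest dimension can be selected as in step S5.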
In an alternative embodiment, step S6 is further included: referring to fig. 5, fig. 5 is a schematic flow chart of step S6 in the expression packet classification and recognition method according to the second embodiment of the present application, including steps S611 to S612, specifically as follows:
S611: obtaining text data and mask data of a plurality of sample expression packages, inputting the text data and the mask data of the plurality of sample expression packages into an initial expression package classification and identification model for coding processing and feature extraction, and obtaining mask feature representations of the plurality of sample expression packages.
In this embodiment, the classification device obtains text data and mask data of a plurality of sample expression packages, inputs them into an initial expression package classification and identification model for encoding processing and feature extraction, and obtains mask feature representations of the plurality of sample expression packages; for the specific implementation, reference may be made to steps S2 to S3, which is not repeated here.
S612: obtaining category information of a plurality of sample expression packages, taking the plurality of sample expression packages of the same category as positive examples according to the category information, taking the plurality of sample expression packages of different categories as negative examples, adopting a contrast learning method, obtaining a first loss value according to mask feature representation of the plurality of sample expression packages and a preset first contrast learning loss function, and training the initial expression package classification recognition model according to the first loss value.
The first contrast learning loss function is:
In the formula, the first loss value is computed over M sample expression packages; the category judgment function equals 1 when the i-th sample expression package and the j-th sample expression package belong to the same category, and 0 otherwise; the mask feature representations of the i-th, j-th and k-th sample expression packages enter the similarity terms of the contrast learning loss; and the first temperature coefficient scales these similarities.
In this embodiment, the classification device obtains category information of the plurality of sample expression packages, takes sample expression packages of the same category as positive examples and sample expression packages of different categories as negative examples according to the category information, adopts a contrast learning method, obtains a first loss value according to the mask feature representations of the plurality of sample expression packages and a preset first contrast learning loss function, and trains the initial expression package classification recognition model according to the first loss value, enabling the model to better understand, at the feature level, the relationship between positive and negative sample expression packages, so as to improve the accuracy of the model's expression package classification recognition.
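Since the first contrast learning loss function is given only as an image, the sketch below assumes a standard supervised contrastive form over the mask feature representations: same-category samples as positives, cosine similarity, and a softmax-style normalisation — the similarity measure and normalisation are assumptions:

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Assumed form of the first contrast-learning loss: for each sample
    expression package, pull mask features of same-category samples together
    and push different-category samples apart."""
    feats = np.asarray(features, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # cosine similarity
    sim = feats @ feats.T / temperature
    M = len(labels)
    loss, count = 0.0, 0
    for i in range(M):
        # denominator runs over all other samples k != i
        denom = sum(np.exp(sim[i, k]) for k in range(M) if k != i)
        for j in range(M):
            if j != i and labels[j] == labels[i]:                 # positive pair
                loss += -np.log(np.exp(sim[i, j]) / denom)
                count += 1
    return loss / max(count, 1)
```

A lower value indicates that same-category mask features are already closer together than different-category ones.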
Referring to fig. 6, fig. 6 is a flowchart of step S6 in the expression packet classification and recognition method according to the third embodiment of the present application, including steps S621 to S622, specifically as follows:
S621: obtaining tag data of a plurality of categories of demonstration expression packages, inputting the tag data of the plurality of categories of demonstration expression packages into an initial expression package classification and identification model for coding processing and feature extraction, and obtaining tag feature representations of the plurality of categories of demonstration expression packages.
In this embodiment, the classification device obtains tag data of a plurality of types of presentation expression packages, inputs the tag data of the plurality of types of presentation expression packages into an initial expression package classification and identification model for encoding processing and feature extraction, and obtains tag feature representations of the plurality of types of presentation expression packages, and specific embodiments may refer to steps S2 to S3, which are not described herein again.
S622: according to label data of the demonstration expression packages of a plurality of categories, category information of the demonstration expression packages of the plurality of categories is obtained, the sample expression packages and the demonstration expression packages of the same category are used as positive examples, the sample expression packages and the demonstration expression packages of different categories are used as negative examples, a comparison learning method is adopted, and according to mask feature representations of the sample expression packages, label feature representations of the demonstration expression packages of the plurality of categories and a preset second comparison learning loss function, a second loss value is obtained, and according to the second loss value, the initial expression package classification recognition model is trained.
The second contrast learning loss function is:
In the formula, the second loss value is computed analogously to the first; the category judgment function equals 1 when the i-th sample expression package and the j-th demonstration expression package belong to the same category, and 0 otherwise; K is the number of demonstration expression packages; the mask feature representations of the i-th and k-th demonstration expression packages enter the similarity terms; and the second temperature coefficient scales them.
In this embodiment, the classification device obtains category information of the demonstration expression packages of the plurality of categories according to their label data, takes sample expression packages and demonstration expression packages of the same category as positive examples and those of different categories as negative examples, adopts a contrast learning method, obtains a second loss value according to the mask feature representations of the plurality of sample expression packages, the label feature representations of the demonstration expression packages of the plurality of categories and a preset second contrast learning loss function, and trains the initial expression package classification recognition model according to the second loss value, enabling the model to better understand, at the feature level, the relationship between the sample expression packages and the demonstration expression packages of positive and negative types, so as to improve the accuracy of the model's expression package classification recognition.
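The second contrast-learning loss pairs each sample expression package with the demonstration expression packages. As with the first loss, the formula itself is an image, so this sketch assumes dot products of L2-normalised features and a softmax-style normalisation over the demonstration packages:

```python
import numpy as np

def sample_vs_demo_contrastive_loss(sample_feats, sample_labels,
                                    demo_feats, demo_labels, temperature=0.1):
    """Assumed form of the second contrast-learning loss: a sample expression
    package and a demonstration expression package of the same category form a
    positive pair; different categories form negatives."""
    s = np.asarray(sample_feats, dtype=float)
    d = np.asarray(demo_feats, dtype=float)
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    sim = s @ d.T / temperature            # (M samples) x (K demonstrations)
    loss, count = 0.0, 0
    for i in range(len(sample_labels)):
        denom = np.exp(sim[i]).sum()       # over all K demonstration packages
        for j in range(len(demo_labels)):
            if demo_labels[j] == sample_labels[i]:   # positive pair
                loss += -np.log(np.exp(sim[i, j]) / denom)
                count += 1
    return loss / max(count, 1)
```

This drives each sample's mask feature toward the label feature of the demonstration package of its own category.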
Referring to fig. 7, fig. 7 is a flowchart of step S6 in the expression packet classification and recognition method according to the fourth embodiment of the present application, including steps S63 to S64, specifically as follows:
S63: obtaining classification recognition prediction probability data and real classification recognition results of a plurality of sample expression packages, obtaining predicted classification recognition results of the plurality of sample expression packages according to the classification recognition prediction probability data, and obtaining a third loss value by adopting a cross entropy learning method according to the classification recognition prediction probability data, the predicted classification recognition results and the real classification recognition results of the plurality of sample expression packages and a preset cross entropy loss function.
In this embodiment, the classification device obtains the classification recognition prediction probability data and the real classification recognition result of the plurality of sample expression packages, and obtains the prediction classification recognition result of the plurality of sample expression packages according to the classification recognition prediction probability data, and the specific embodiment may refer to step S5, which is not described herein.
The classification equipment adopts a cross entropy learning method, and obtains a third loss value according to classification recognition prediction probability data, prediction classification recognition results, real classification recognition results and a preset cross entropy loss function of a plurality of sample expression packages, wherein the cross entropy loss function is as follows:
In the formula, the third loss value is averaged over the sample expression packages; P is the number of categories; the judgment function equals 1 when the predicted classification recognition result of the i-th sample expression package and its real classification recognition result belong to the same category, and 0 otherwise; and the classification recognition prediction probability data of the i-th sample expression package enters the logarithmic term of the cross entropy.
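The cross entropy term can be sketched directly; the sketch below assumes the real classification recognition results are given as class indices and the prediction probability data as per-sample probability vectors:

```python
import numpy as np

def cross_entropy_loss(pred_probs, true_labels):
    """Standard cross entropy between each sample expression package's
    predicted classification probability distribution and its real
    classification recognition result (a class index)."""
    p = np.asarray(pred_probs, dtype=float)
    idx = np.arange(len(true_labels))
    # small epsilon guards against log(0)
    return float(-np.mean(np.log(p[idx, true_labels] + 1e-12)))
```

A perfect prediction drives the loss to zero; confidently wrong predictions are penalised heavily.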
S64: and according to the first loss value, the second loss value, the third loss value and a preset total loss function, obtaining a total loss value, and training the initial expression packet classification recognition model according to the total loss value to obtain the expression packet classification recognition model.
The total loss function is:
where Loss is the total loss value, and the total loss function combines the third loss value with the first loss value scaled by the first hyperparameter and the second loss value scaled by the second hyperparameter.
In this embodiment, the classification device obtains a total loss value according to the first loss value, the second loss value, the third loss value and a preset total loss function, trains the initial expression packet classification and identification model according to the total loss value, and obtains the expression packet classification and identification model, thereby improving the classification and identification efficiency and accuracy of the expression packet.
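The total loss function is rendered as an image in the original, so the linear combination and the default weights below are assumptions consistent with the surrounding description (cross entropy plus two hyperparameter-weighted contrast losses):

```python
def total_loss(l1, l2, l3, lambda1=0.1, lambda2=0.1):
    """Assumed form of the total loss: the third (cross entropy) loss value
    plus the first and second contrast-learning loss values scaled by the
    first and second hyperparameters. lambda1/lambda2 defaults are made up
    for illustration."""
    return l3 + lambda1 * l1 + lambda2 * l2
```

The hyperparameters trade off classification accuracy against the contrastive structure of the feature space.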
Referring to fig. 8, fig. 8 is a schematic structural diagram of an expression packet classification and identification device according to a fifth embodiment of the present application, the device may implement all or a part of an expression packet classification and identification method through software, hardware or a combination of the two, and the device 8 includes:
The data acquisition module 81 is configured to obtain text data of an expression package to be detected, text data of a plurality of types of presentation expression packages, tag data, and a preset expression package classification and identification model, where the expression package classification and identification model includes a coding module, a feature extraction module, a feature processing module, and a classification and identification module;
The data encoding module 82 is configured to input the text data of the expression package to be detected, the text data of the presentation expression packages of the plurality of categories, and the tag data into the encoding module for encoding, so as to obtain a text encoding representation of the expression package to be detected, a text encoding representation of the presentation expression packages of the plurality of categories, and a tag encoding representation;
The data feature extraction module 83 is configured to input the text code representation of the expression package to be detected, the text code representations of the demonstration expression packages of the plurality of categories, and the tag code representation into the feature extraction module to perform feature extraction, so as to obtain text feature representations of the expression package to be detected, the text feature representations of the demonstration expression packages of the plurality of categories, and the tag feature representations;
The data feature processing module 84 is configured to input the text feature representation of the expression package to be detected, the text feature representation of the presentation expression package of the plurality of categories, and the tag feature representation into the feature processing module for information sensing, obtain information sensing feature representations of the plurality of categories, and perform feature fusion on the information sensing feature representations of the plurality of categories to obtain feature fusion representation;
the expression package classification recognition module 85 is configured to input the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories, and the feature fusion representation into the classification recognition module for classification recognition, obtain classification recognition prediction probability data, obtain, according to the classification recognition prediction probability data, the target classification recognition prediction probability vector with the largest dimension, and take the classification type of the target classification recognition prediction probability vector as the classification recognition result of the expression package to be detected.
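How the device's four processing modules chain together can be sketched as a simple pipeline; the callables and their signatures here are stand-ins, not the patent's actual module interfaces:

```python
def run_classification_device(encode, extract, process, classify,
                              text, demo_texts, demo_labels):
    """Minimal sketch of the device's data flow: data acquisition is assumed
    done by the caller, then encoding (module 82) -> feature extraction
    (module 83) -> information perception and fusion (module 84) ->
    classification recognition (module 85)."""
    enc = encode(text, demo_texts, demo_labels)     # coding representations
    feats = extract(enc)                            # feature representations
    info_feats, fused = process(feats)              # info perception + fusion
    probs = classify(feats, info_feats, fused)      # prediction probabilities
    best = max(range(len(probs)), key=probs.__getitem__)  # largest dimension
    return best, probs[best]
```

Each stage consumes only the previous stage's outputs, matching the module boundaries in the apparatus embodiment.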
In the embodiment of the application, text data of an expression package to be detected, text data and label data of demonstration expression packages of a plurality of categories, and a preset expression package classification and identification model are obtained through the data acquisition module, wherein the expression package classification and identification model comprises a coding module, a feature extraction module, a feature processing module and a classification and identification module; the text data of the expression package to be detected and the text data and label data of the demonstration expression packages of the plurality of categories are input into the coding module through the data coding module for coding processing, so as to obtain the text coding representation of the expression package to be detected and the text coding representations and label coding representations of the demonstration expression packages of the plurality of categories; these coding representations are input into the feature extraction module through the data feature extraction module for feature extraction, so as to obtain the text feature representation of the expression package to be detected and the text feature representations and label feature representations of the demonstration expression packages of the plurality of categories; the feature representations are input into the feature processing module through the data feature processing module for information perception, so as to obtain information perception feature representations of the plurality of categories, on which feature fusion is performed to obtain the feature fusion representation; and the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories and the feature fusion representation are input into the classification recognition module through the expression package classification recognition module for classification recognition, so as to obtain classification recognition prediction probability data, from which the target classification recognition prediction probability vector with the largest dimension is obtained, and the classification type of the target classification recognition prediction probability vector is taken as the classification recognition result of the expression package to be detected. According to the method, feature extraction is performed on the expression package to be detected based on the text data to obtain its text feature representation; combined with the text data and label data of the demonstration expression packages of the plurality of categories, an information perception method performs multi-view information perception on the text feature representation of the expression package to be detected to obtain the information perception feature representations of the plurality of categories, strengthening the entity information associated with the expression contained in the text feature representation of the expression package to be detected; and feature fusion is performed on the information perception feature representations of the plurality of categories, so that the expression package to be detected is classified and recognized more comprehensively and the accuracy of its classification recognition is improved.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a computer device according to a sixth embodiment of the present application, where the computer device 9 includes: a processor 91, a memory 92, and a computer program 93 stored on the memory 92 and executable on the processor 91; the computer device may store a plurality of instructions adapted to be loaded and executed by the processor 91 to perform the method steps of the first to fourth embodiments, and the specific execution process may be referred to in the specific description of the first to fourth embodiments, which are not repeated herein.
Wherein the processor 91 may include one or more processing cores. The processor 91 performs various functions of the expression package classification recognition device 8 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 92 and invoking data in the memory 92, using various interfaces and lines to connect the various parts within the server. Alternatively, the processor 91 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 91 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen; and the modem handles wireless communications. It will be appreciated that the modem may also not be integrated into the processor 91 and may instead be implemented by a single chip.
The memory 92 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Optionally, the memory 92 includes a non-transitory computer-readable storage medium. The memory 92 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 92 may include a stored-program area and a stored-data area, wherein the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions), instructions for implementing the various method embodiments described above, and the like; the stored-data area may store the data referred to in the above method embodiments. The memory 92 may also optionally be at least one storage device located remotely from the aforementioned processor 91.
The embodiment of the present application further provides a storage medium, where the storage medium may store a plurality of instructions, where the instructions are suitable for being loaded and executed by a processor, where the specific execution process may refer to the specific descriptions of the first to fourth embodiments, and the descriptions are omitted herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc.
The present invention is not limited to the above-described embodiments; any modifications or variations that do not depart from the spirit and scope of the present invention are intended to fall within the scope of the claims and their equivalents.

Claims (10)

1. The expression packet classification and identification method is characterized by comprising the following steps of:
Obtaining text data of an expression package to be detected, text data of a plurality of categories of demonstration expression packages, tag data and a preset expression package classification and identification model, wherein the expression package classification and identification model comprises a coding module, a feature extraction module, a feature processing module and a classification and identification module;
Inputting the text data of the expression package to be tested, the text data of the demonstration expression packages of a plurality of categories and the tag data into the coding module for coding processing to obtain text coding representation of the expression package to be tested, the text coding representation of the demonstration expression packages of a plurality of categories and the tag coding representation;
Inputting the text coding representation of the expression package to be tested, the text coding representations of the demonstration expression packages of a plurality of categories and the tag coding representation into the feature extraction module for feature extraction to obtain the text feature representation of the expression package to be tested, the text feature representation of the demonstration expression packages of a plurality of categories and the tag feature representation;
inputting the text feature representation of the expression package to be detected, the text feature representations of the demonstration expression packages of a plurality of categories and the tag feature representations into the feature processing module for information perception to obtain information perception feature representations of a plurality of categories, and carrying out feature fusion on the information perception feature representations of a plurality of categories to obtain feature fusion representations;
Inputting the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories and the feature fusion representation into the classification recognition module for classification recognition to obtain classification recognition prediction probability data, obtaining, according to the classification recognition prediction probability data, the target classification recognition prediction probability vector with the largest dimension, and taking the classification type of the target classification recognition prediction probability vector as the classification recognition result of the expression package to be detected.
2. The expression package classification and identification method according to claim 1, characterized in that: the categories comprise an active category and a passive category, and the feature extraction module comprises a fully connected network corresponding to the active category and a fully connected network corresponding to the passive category;
Inputting the text feature representation of the expression package to be tested and the text feature representations of the demonstration expression packages of a plurality of categories into the feature extraction module for full connection processing to obtain information perception feature representations of the plurality of categories, wherein the method comprises the following steps:
Constructing mask data of text data of the expression package to be detected, and performing coding processing and feature extraction processing on the mask data of the text data of the expression package to be detected to obtain mask feature representation of the expression package to be detected;
Inputting the text feature representation and the mask feature representation of the expression package to be detected, together with the text feature representation and the label feature representation of the demonstration expression package corresponding to the active category, into the fully connected network corresponding to the active category, and performing extraction according to a preset first information perception feature extraction formula to obtain the information perception feature representation corresponding to the active category, wherein the first information perception feature extraction formula is as follows:
In the formula, the information perception feature representation corresponding to the active category is output by the fully connected network corresponding to the active category, whose input is the splicing (concatenation, denoted by the splicing symbol) of the text feature representation of the expression package to be detected, the mask feature representation of the expression package to be detected, the text feature representation of the demonstration expression package corresponding to the active category, and the label feature representation of the demonstration expression package corresponding to the active category;
inputting the text feature representation and the mask feature representation of the expression package to be tested, together with the text feature representation and the label feature representation of the demonstration expression package of the negative category, into the fully connected network corresponding to the negative category, and performing extraction according to a preset second information perception feature extraction algorithm to obtain the information perception feature representation corresponding to the negative category, wherein the second information perception feature extraction algorithm is as follows:
H_neg = FC_neg([T ; M ; T_neg ; L_neg])

where H_neg is the information perception feature representation corresponding to the negative category, FC_neg is the fully connected network corresponding to the negative category, T_neg is the text feature representation of the demonstration expression package of the negative category, and L_neg is the label feature representation of the demonstration expression package of the negative category.
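As a concrete illustration of claim 2's concatenate-then-project step, the sketch below builds one information perception feature representation from the four feature vectors. The single tanh layer, the dimension d = 8, and all the random inputs are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical feature dimension

def fully_connected(x, W, b):
    # Single dense layer with tanh, standing in for the per-category network.
    return np.tanh(W @ x + b)

# T: text features and M_feat: mask features of the package under test;
# T_pos, L_pos: text and label features of a positive-category demonstration.
T, M_feat, T_pos, L_pos = (rng.standard_normal(d) for _ in range(4))

W_pos, b_pos = rng.standard_normal((d, 4 * d)), np.zeros(d)

# Concatenate the four representations, then apply the positive-category network.
H_pos = fully_connected(np.concatenate([T, M_feat, T_pos, L_pos]), W_pos, b_pos)
```

The negative-category representation would be produced the same way with its own weights and the negative-category demonstration features.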
3. The expression package classification and identification method according to claim 2, wherein the feature fusion is performed on the information perception feature representations of the plurality of categories to obtain a feature fusion representation, and the method comprises the steps of:
The soft gate mechanism is adopted, and the feature fusion representation is obtained according to the information perception feature representation corresponding to the positive category, the information perception feature representation of the negative category and a preset feature fusion algorithm, wherein the feature fusion algorithm is as follows:
F = g ⊙ H_pos + (1 − g) ⊙ H_neg

where F is the feature fusion representation, g is the gate value produced by the soft gate mechanism, ⊙ denotes element-wise multiplication, and H_pos and H_neg are the information perception feature representations corresponding to the positive category and the negative category, respectively.
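The soft gate combination of claim 3 can be sketched as an element-wise convex mixture of the two category representations. The gate computation below (a sigmoid over the concatenated inputs with weights W_g, b_g) is an assumed form, since the claim text does not spell it out.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_gate_fuse(H_pos, H_neg, W_g, b_g):
    # Gate value in (0, 1), computed from both representations (assumed form).
    g = sigmoid(W_g @ np.concatenate([H_pos, H_neg]) + b_g)
    # Element-wise convex combination of the two information perception representations.
    return g * H_pos + (1.0 - g) * H_neg

rng = np.random.default_rng(1)
H_pos, H_neg = rng.standard_normal(4), rng.standard_normal(4)
W_g, b_g = rng.standard_normal((4, 8)), np.zeros(4)
F = soft_gate_fuse(H_pos, H_neg, W_g, b_g)
```

Because the gate stays strictly between 0 and 1, each fused component lies between the corresponding components of H_pos and H_neg.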
4. The expression package classification and recognition method according to claim 3, wherein the step of inputting the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories, and the feature fusion representation into the classification recognition module to obtain classification recognition prediction probability data comprises the steps of:
Obtaining the classification recognition prediction probability data according to text feature representation of the expression package to be detected, information perception feature representation corresponding to the positive category, information perception feature representation of the negative category, feature fusion representation and a preset classification recognition prediction probability calculation algorithm, wherein the classification recognition prediction probability calculation algorithm is as follows:
p1 = W(T), p2 = W(H_neg), p3 = W(H_pos), p4 = W(F), p = p1 + p2 + p3 + p4

where p1 is the first sub-classification recognition prediction probability data based on the text feature representation, p2 is the second sub-classification recognition prediction probability data based on the information perception feature representation corresponding to the negative category, p3 is the third sub-classification recognition prediction probability data based on the information perception feature representation corresponding to the positive category, p4 is the fourth sub-classification recognition prediction probability data based on the feature fusion representation, p is the classification recognition prediction probability data, and W is a linear classification function.
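A minimal reading of claim 4: one linear classifier per representation, with the four sub-probabilities combined into the final prediction. Averaging the sub-probabilities and using separate weight matrices per head are assumptions; the claim only states that the four are combined by a linear classification function.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(T, H_neg, H_pos, F, weights):
    # One linear classifier per representation; each W maps a feature vector
    # to num_classes logits.
    p1 = softmax(weights["text"] @ T)     # from the text feature representation
    p2 = softmax(weights["neg"] @ H_neg)  # from the negative-category representation
    p3 = softmax(weights["pos"] @ H_pos)  # from the positive-category representation
    p4 = softmax(weights["fuse"] @ F)     # from the feature fusion representation
    # Averaging the four sub-probabilities is one plausible combination.
    return (p1 + p2 + p3 + p4) / 4.0

rng = np.random.default_rng(2)
d, num_classes = 6, 2
weights = {k: rng.standard_normal((num_classes, d)) for k in ("text", "neg", "pos", "fuse")}
p = classify(*(rng.standard_normal(d) for _ in range(4)), weights)
```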
5. The expression package classification and recognition method according to claim 1 or 4, wherein before obtaining the text data of the expression package to be detected, the text data of the demonstration expression packages of the plurality of categories, and the preset expression package classification recognition model, the method further comprises the step of training the expression package classification recognition model, wherein the training comprises the following steps:
obtaining text data and mask data of a plurality of sample expression packages, inputting the text data and the mask data of the plurality of sample expression packages into an initial expression package classification and identification model for coding processing and feature extraction, and obtaining mask feature representations of the plurality of sample expression packages;
Obtaining category information of the plurality of sample expression packages; according to the category information, taking sample expression packages of the same category as positive examples and sample expression packages of different categories as negative examples; adopting a contrastive learning method, obtaining a first loss value according to the mask feature representations of the plurality of sample expression packages and a preset first contrastive learning loss function, and training the initial expression package classification recognition model according to the first loss value, wherein the first contrastive learning loss function is as follows:
L1 = −(1/M) Σ_{i=1..M} Σ_{j=1..M, j≠i} δ(x_i, x_j) · log( exp(m_i · m_j / τ1) / Σ_{k=1..M, k≠i} exp(m_i · m_k / τ1) )

where L1 is the first loss value, M is the number of sample expression packages, δ(x_i, x_j) is the category judgment function that judges whether the i-th sample expression package x_i and the j-th sample expression package x_j are of the same category, m_i, m_j, and m_k are the mask feature representations of the i-th, j-th, and k-th sample expression packages, and τ1 is the first temperature coefficient.
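The first contrastive loss of claim 5 matches the usual supervised-contrastive (InfoNCE-style) pattern: same-category sample pairs are pulled together, all other samples form the denominator. Dot-product similarity over L2-normalised features and the example temperature are assumed choices.

```python
import numpy as np

def first_contrastive_loss(feats, labels, tau=0.5):
    # Normalise the mask feature representations; dot product as similarity.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    M = len(labels)
    loss, n_pairs = 0.0, 0
    for i in range(M):
        # Denominator ranges over every other sample in the batch.
        denom = sum(np.exp(sim[i, k]) for k in range(M) if k != i)
        for j in range(M):
            if i != j and labels[i] == labels[j]:  # same category = positive pair
                loss -= np.log(np.exp(sim[i, j]) / denom)
                n_pairs += 1
    return loss / max(n_pairs, 1)

# Two tight feature clusters: labels that match the clusters should score a
# lower loss than labels that cut across them.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
good = first_contrastive_loss(feats, [0, 0, 1, 1])
bad = first_contrastive_loss(feats, [0, 1, 0, 1])
```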
6. The expression package classification recognition method of claim 5, wherein the training expression package classification recognition model comprises the steps of:
Obtaining tag data of a plurality of categories of demonstration expression packages, inputting the tag data of the plurality of categories of demonstration expression packages into an initial expression package classification and identification model for coding processing and feature extraction, and obtaining tag feature representations of the plurality of categories of demonstration expression packages;
According to the label data of the demonstration expression packages of the plurality of categories, obtaining category information of the demonstration expression packages; taking sample expression packages and demonstration expression packages of the same category as positive examples and those of different categories as negative examples; adopting a contrastive learning method, obtaining a second loss value according to the mask feature representations of the plurality of sample expression packages, the label feature representations of the demonstration expression packages of the plurality of categories, and a preset second contrastive learning loss function, and training the initial expression package classification recognition model according to the second loss value, wherein the second contrastive learning loss function is as follows:
L2 = −(1/M) Σ_{i=1..M} Σ_{j=1..K} δ(x_i, d_j) · log( exp(m_i · l_j / τ2) / Σ_{k=1..K} exp(m_i · l_k / τ2) )

where L2 is the second loss value, δ(x_i, d_j) judges whether the i-th sample expression package x_i and the j-th demonstration expression package d_j are of the same category, K is the number of demonstration expression packages, m_i is the mask feature representation of the i-th sample expression package, l_j and l_k are the label feature representations of the j-th and k-th demonstration expression packages, and τ2 is the second temperature coefficient.
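Claim 6's second loss compares each sample's mask feature representation against the label feature representations of the K demonstration packages, pulling each sample toward same-category demonstrations. The sketch below assumes the same dot-product similarity as the first loss.

```python
import numpy as np

def second_contrastive_loss(sample_feats, sample_labels, demo_feats, demo_labels, tau=0.5):
    # Anchors: sample mask feature representations.
    # Candidates: label feature representations of the K demonstration packages.
    s = sample_feats / np.linalg.norm(sample_feats, axis=1, keepdims=True)
    d = demo_feats / np.linalg.norm(demo_feats, axis=1, keepdims=True)
    sim = s @ d.T / tau
    loss, n_pairs = 0.0, 0
    for i in range(len(sample_labels)):
        denom = np.exp(sim[i]).sum()  # over all K demonstrations
        for j in range(len(demo_labels)):
            if sample_labels[i] == demo_labels[j]:  # same category = positive pair
                loss -= np.log(np.exp(sim[i, j]) / denom)
                n_pairs += 1
    return loss / max(n_pairs, 1)

samples = np.array([[1.0, 0.0], [0.0, 1.0]])
demos = np.array([[0.9, 0.1], [0.1, 0.9]])
# Category-aligned demonstrations should give a lower loss than crossed ones.
aligned = second_contrastive_loss(samples, [0, 1], demos, [0, 1])
crossed = second_contrastive_loss(samples, [0, 1], demos, [1, 0])
```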
7. The expression package classification recognition method of claim 6, wherein the training expression package classification recognition model further comprises the steps of:
Obtaining classification recognition prediction probability data and real classification recognition results of a plurality of sample expression packages, obtaining prediction classification recognition results of the plurality of sample expression packages according to the classification recognition prediction probability data, and obtaining a third loss value according to the classification recognition prediction probability data, the prediction classification recognition results, the real classification recognition results and a preset cross entropy loss function of the plurality of sample expression packages by adopting a cross entropy learning method, wherein the cross entropy loss function is as follows:
L3 = −(1/M) Σ_{i=1..M} Σ_{c=1..P} δ(y_i, c) · log(p_{i,c})

where L3 is the third loss value, M is the number of sample expression packages, P is the number of categories, δ(y_i, c) judges whether the true classification recognition result y_i of the i-th sample expression package belongs to category c, and p_{i,c} is the classification recognition prediction probability of the i-th sample expression package for category c;
Obtaining a total loss value according to the first loss value, the second loss value, the third loss value and a preset total loss function, training the initial expression packet classification and identification model according to the total loss value, and obtaining the expression packet classification and identification model, wherein the total loss function is as follows:
Loss = L3 + λ1 · L1 + λ2 · L2

where Loss is the total loss value, L1, L2, and L3 are the first, second, and third loss values, λ1 is the first hyperparameter, and λ2 is the second hyperparameter.
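The cross-entropy term and the weighted total of claim 7 can be sketched directly. The weighting of the two contrastive terms by the hyperparameters follows the claim; the example probabilities, labels, and weight values are made up.

```python
import numpy as np

def cross_entropy_loss(probs, true_labels):
    # Mean negative log-probability assigned to each sample's true category.
    return -np.mean([np.log(probs[i, y]) for i, y in enumerate(true_labels)])

def total_loss(l_ce, l_con1, l_con2, lam1=0.1, lam2=0.1):
    # Weighted sum of the classification loss and the two contrastive losses.
    return l_ce + lam1 * l_con1 + lam2 * l_con2

# Two samples, two categories; the true categories are 0 and 1.
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
l3 = cross_entropy_loss(probs, [0, 1])
total = total_loss(l3, 1.0, 2.0, lam1=0.5, lam2=0.25)
```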
8. An expression packet classification and identification device, comprising:
The data acquisition module is used for acquiring text data of the expression package to be detected, text data and label data of demonstration expression packages of a plurality of categories, and a preset expression package classification recognition model;
The data coding module is used for inputting the text data of the expression package to be tested, the text data of the demonstration expression packages of a plurality of categories and the tag data into the coding module for coding processing to obtain text coding representation of the expression package to be tested, the text coding representation of the demonstration expression packages of a plurality of categories and the tag coding representation;
The data feature extraction module is used for inputting the text coding representation of the expression package to be detected, the text coding representations of the demonstration expression packages of the categories and the tag coding representations into the feature extraction module for feature extraction to obtain the text feature representation of the expression package to be detected, the text feature representations of the demonstration expression packages of the categories and the tag feature representations;
The data feature processing module is used for inputting the text feature representation of the expression package to be detected, the text feature representation of the demonstration expression package of a plurality of categories and the tag feature representation into the feature processing module for information perception to obtain information perception feature representations of a plurality of categories, and carrying out feature fusion on the information perception feature representations of a plurality of categories to obtain feature fusion representation;
The expression package classification recognition module is used for inputting the text feature representation of the expression package to be detected, the information perception feature representations of the plurality of categories, and the feature fusion representation into the classification recognition module for classification recognition to obtain classification recognition prediction probability data, obtaining the target classification recognition prediction probability vector with the largest value in the classification recognition prediction probability data, and taking the category of the target classification recognition prediction probability vector as the classification recognition result of the expression package to be detected.
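The module decomposition of claim 8 maps naturally onto a small pipeline class. All interfaces here are hypothetical stand-ins, with stub callables used only to show the data flow from acquisition through fusion to the argmax decision.

```python
class ExpressionPackageClassifier:
    """Sketch of the four-module device; module interfaces are hypothetical."""

    def __init__(self, encoder, extractor, processor, classifier):
        self.encoder = encoder        # data encoding module
        self.extractor = extractor    # data feature extraction module
        self.processor = processor    # data feature processing module
        self.classifier = classifier  # classification recognition module

    def predict(self, text, demo_texts, demo_labels):
        encodings = self.encoder(text, demo_texts, demo_labels)
        features = self.extractor(encodings)
        fused = self.processor(features)
        probs = self.classifier(features, fused)
        # The classification result is the category with the largest probability.
        return max(range(len(probs)), key=probs.__getitem__)

# Stub modules just to exercise the data flow.
model = ExpressionPackageClassifier(
    encoder=lambda text, demos, labels: (text, demos, labels),
    extractor=lambda enc: enc,
    processor=lambda feats: feats,
    classifier=lambda feats, fused: [0.1, 0.7, 0.2],
)
result = model.predict("sample meme text", ["demo"], ["positive"])
```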
9. A computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the expression package classification recognition method of any one of claims 1 to 7 when executing the computer program.
10. A storage medium storing a computer program which, when executed by a processor, implements the steps of the expression package classification recognition method of any one of claims 1 to 7.
CN202410424210.7A 2024-04-10 2024-04-10 Expression package classification recognition method, apparatus, computer device and storage medium Active CN118035454B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410424210.7A CN118035454B (en) 2024-04-10 2024-04-10 Expression package classification recognition method, apparatus, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN118035454A true CN118035454A (en) 2024-05-14
CN118035454B CN118035454B (en) 2024-07-09

Family

ID=90991499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410424210.7A Active CN118035454B (en) 2024-04-10 2024-04-10 Expression package classification recognition method, apparatus, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN118035454B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112632996A (en) * 2020-12-08 2021-04-09 浙江大学 Entity relation triple extraction method based on comparative learning
US20210390288A1 (en) * 2020-06-16 2021-12-16 University Of Maryland, College Park Human emotion recognition in images or video
CN114419409A (en) * 2022-01-12 2022-04-29 大连海事大学 Multi-modal malicious fan map detection method based on face recognition and hierarchical fusion strategy
CN114580430A (en) * 2022-02-24 2022-06-03 大连海洋大学 Method for extracting fish disease description emotion words based on neural network
CN114781392A (en) * 2022-04-06 2022-07-22 西安电子科技大学 Text emotion analysis method based on BERT improved model
CN117743890A (en) * 2023-12-18 2024-03-22 大连理工大学 Expression package classification method with metaphor information based on contrast learning
Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XUE, YY: "TeenSensor: Gaussian Processes for Micro-blog based Teen's Acute and Chronic Stress Detection", Web of Science, vol. 34, no. 3, 31 May 2019 (2019-05-31), pages 151-164 *
FENG Chao: "Aspect-level sentiment analysis based on hierarchical attention mechanism and gate mechanism", Journal of Chinese Information Processing (《中文信息学报》), vol. 35, no. 10, 31 October 2021 (2021-10-31), pages 128-136 *
MA Zhuang: "Multimodal fusion emotion recognition based on EEG signals and peripheral physiological signals", Electronic Science and Technology (《电子科技》), no. 04, 9 April 2024 (2024-04-09) *

Also Published As

Publication number Publication date
CN118035454B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
CN112434721B (en) Image classification method, system, storage medium and terminal based on small sample learning
KR102266529B1 (en) Method, apparatus, device and readable storage medium for image-based data processing
CN114676704B (en) Sentence emotion analysis method, device and equipment and storage medium
CN116402063B (en) Multi-modal irony recognition method, apparatus, device and storage medium
CN108319888B (en) Video type identification method and device and computer terminal
CN110399547B (en) Method, apparatus, device and storage medium for updating model parameters
CN116089619B (en) Emotion classification method, apparatus, device and storage medium
CN113094533B (en) Image-text cross-modal retrieval method based on mixed granularity matching
CN116258145B (en) Multi-mode named entity recognition method, device, equipment and storage medium
CN113094478B (en) Expression reply method, device, equipment and storage medium
CN115659987B (en) Multi-mode named entity recognition method, device and equipment based on double channels
CN110969023B (en) Text similarity determination method and device
CN115587597B (en) Sentiment analysis method and device of aspect words based on clause-level relational graph
CN115168592A (en) Statement emotion analysis method, device and equipment based on aspect categories
CN117891940B (en) Multi-modal irony detection method, apparatus, computer device, and storage medium
CN113435531B (en) Zero sample image classification method and system, electronic equipment and storage medium
CN117407523A (en) Sentence emotion analysis method, sentence emotion analysis device, computer device and storage medium
CN113569118A (en) Self-media pushing method and device, computer equipment and storage medium
CN115906861B (en) Sentence emotion analysis method and device based on interaction aspect information fusion
CN117349402A (en) Emotion cause pair identification method and system based on machine reading understanding
CN118035454B (en) Expression package classification recognition method, apparatus, computer device and storage medium
JP7390442B2 (en) Training method, device, device, storage medium and program for document processing model
CN110851629A (en) Image retrieval method
CN115827878A (en) Statement emotion analysis method, device and equipment
CN115618884A (en) Language analysis method, device and equipment based on multi-task learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant