WO2023050470A1 - Event detection method and apparatus based on a multi-layer graph attention network - Google Patents

Event detection method and apparatus based on a multi-layer graph attention network

Info

Publication number
WO2023050470A1
WO2023050470A1 · PCT/CN2021/123249 · CN2021123249W
Authority
WO
WIPO (PCT)
Prior art keywords
information
vector
context
word
syntactic
Prior art date
Application number
PCT/CN2021/123249
Other languages
English (en)
Chinese (zh)
Inventor
包先雨
吴共庆
何俐娟
柯培超
陆振亚
王歆
程立勋
蔡伊娜
郑文丽
慕容灏鼎
蔡屹
Original Assignee
深圳市检验检疫科学研究院
合肥工业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市检验检疫科学研究院, 合肥工业大学 filed Critical 深圳市检验检疫科学研究院
Publication of WO2023050470A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present application relates to the field of natural language processing, in particular to an event detection method and device based on a multi-layer graph attention network.
  • Knowledge Graph describes concepts, entities and their relationships in the objective world in a structured form, and expresses Internet information in a form closer to the human cognitive world, providing a better way to organize, manage and understand the Internet.
  • the knowledge graph was proposed by Google in 2012 and successfully applied to search engines.
  • the knowledge graph belongs to knowledge engineering, an important research field of artificial intelligence, and is a flagship application of using knowledge engineering to build large-scale knowledge resources.
  • Typical examples are the knowledge graph launched by Google in 2012 after acquiring Freebase (a free knowledge database), the graph search of Facebook (a social-network service), Microsoft's Satori, and knowledge bases for specific fields such as business, finance, and the life sciences.
  • the event knowledge in the knowledge graph is implicit in Internet resources, including existing structured semantic knowledge, structured information in databases, semi-structured information resources, and unstructured resources; different types of resources call for different knowledge-acquisition methods. Event identification and extraction research studies how to identify and extract event information from text describing it and present it in a structured form, including the time, place, participating roles, and related actions or state changes.
  • the paper "No Trigger Word Event Detection Method Fused with Syntactic Information" proposes using syntactic information combined with an attention mechanism to link scattered event information in sentences and improve the accuracy of event detection;
  • the paper "Vietnamese News Event Detection by Fusion of Dependency Information and Convolutional Neural Network" uses convolutional codes fused with dependency syntactic information to encode features between discontinuous words, and then fuses the two parts of features as event codes to realize event detection.
  • this application is proposed to provide an event detection method based on a multi-layer graph attention network that overcomes the above problems, or at least partially solves them, comprising the steps of:
  • An event detection method based on a multi-layer graph attention network comprising the steps of:
  • the trigger word category of the context word is determined according to the aggregation information.
  • the step of acquiring the context words in the event text information and determining the syntactic information adjacency matrix and splicing vector corresponding to the context words includes:
  • the stitching vector is generated according to the word embedding vector of the context word.
  • the step of determining the syntactic information corresponding to the context word according to the context word includes:
  • the event text information is analyzed through syntactic dependence, and syntactic information corresponding to the context words is generated according to the analysis result of the event text information.
  • the step of using the adjacency matrix and the splicing vector as the input of the artificial neural network to obtain the output vector includes:
  • the tensor and the splicing vector are input into the artificial neural network for calculation, and the output vector is generated according to the calculation result of the artificial neural network.
  • the step of determining the trigger word category of the context word according to the aggregation information includes:
  • the trigger words of the context words are determined according to the aggregation information, and the trigger words are classified according to the classifier module.
  • An event detection device based on a multi-layer graph attention network comprising:
  • An acquisition module configured to acquire context words in the event text information, and determine a syntactic information adjacency matrix and splicing vectors corresponding to the context words;
  • a calculation module configured to use the adjacency matrix and the splicing vector as the input of the artificial neural network to obtain an output vector
  • An aggregation module configured to generate aggregation information according to the aggregation of the spliced vector and the output vector;
  • a classification module configured to determine the trigger word category of the context word according to the aggregation information.
  • the acquisition module includes:
  • An expression submodule configured to determine syntactic information corresponding to the context word according to the context word
  • a generating submodule configured to generate the syntax information adjacency matrix according to the syntax information
  • the splicing submodule is used to generate the splicing vector according to the word embedding vector of the context word.
  • the expression submodule includes:
  • the dependency analysis sub-module is configured to analyze the event text information through syntactic dependencies, and generate syntactic information corresponding to the context words according to the analysis result of the event text information.
  • calculation module includes:
  • the array conversion submodule is used to generate a tensor from the adjacency matrices of the same batch;
  • the artificial neural network calculation submodule is used to input the tensor and the splicing vector into the artificial neural network for calculation, and to generate the output vector according to the calculation result of the artificial neural network.
  • classification module includes:
  • the trigger word processing submodule is configured to determine the trigger word of the context word according to the aggregation information, and classify the trigger word according to the classifier module.
  • This application can effectively solve the problem of information loss and error propagation that easily arises when using syntactic analysis tools, by combining the syntactic information and context information of context words; and by adding skip-connection modules to the graph attention network layers, more of the original features can be retained, avoiding unsatisfactory classification of the final trigger words caused by over-propagation of some short-distance syntactic information, and effectively improving the precision, recall and F1 value of trigger-word classification.
  • Fig. 1 is a flow chart of the steps of an event detection method based on a multi-layer graph attention network provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a syntax dependency tree provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an adjacency matrix provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a graph attention network provided by an embodiment of the present application.
  • FIG. 5 is a schematic flow diagram of an event detection method based on a multi-layer graph attention network provided by an embodiment of the present application
  • Fig. 6 is a structural block diagram of an event detection device based on a multi-layer graph attention network provided by an embodiment of the present application.
  • FIG. 1 shows an event detection method based on a multi-layer graph attention network provided by an embodiment of the application.
  • the methods include:
  • step S110 the context words in the event text information are obtained, and the syntactic information adjacency matrix and splicing vector corresponding to the context words are determined.
  • the specific process of step S110, "obtaining the context words in the event text information and determining the syntactic information adjacency matrix and splicing vectors corresponding to the context words", can be further described in conjunction with the following description.
  • the event text information is analyzed through syntactic dependencies, and syntactic information corresponding to the context words is generated according to the analysis result of the event text information.
  • syntactic dependency analysis reveals the syntactic structure by analyzing the interdependence between the components of a language unit.
  • syntactic dependency analysis identifies grammatical components in the sentence, such as subject, predicate and object, and attributive, adverbial and complement, and emphasizes analyzing the relationships between words.
  • in syntactic dependency analysis, the core of the sentence is the predicate verb; other components are then found around the predicate, and the sentence is finally analyzed into a syntactic dependency tree.
  • the syntactic dependency tree can describe the dependency relationship between each word.
  • the event text information is obtained and identified, and syntactic dependency analysis is performed using Stanford CoreNLP (the Stanford natural language processing toolkit); each sentence in the event text is analyzed to identify the event trigger words in the sentence, with emphasis on analyzing the dependency relationships between event trigger words and event parameters, and/or between event parameters, to form a syntactic dependency tree.
  • the event trigger word refers to the word that best represents the occurrence of an event; it is the projection of the event concept at the word and phrase level.
  • event parameters refer to information describing the time, place, and person of an event.
  • FIG. 2 it shows a schematic diagram of a syntax dependency tree provided by an embodiment of the present application.
  • for the sentence "I went to Beijing Tiananmen Square to watch the sun rise", the constructed syntactic dependency tree shows that the core predicate of the sentence is "go", which is the root of the tree; the subject of "go" is "I", the object of "go" is "Beijing Tiananmen", and the object of the other verb "watch" is "sun".
  • the syntactic dependency tree can describe the dependency relationship between context words.
  • the adjacency matrix is a matrix representing the adjacency relationship between vertices.
  • adjacency matrices are divided into directed-graph adjacency matrices and undirected-graph adjacency matrices.
  • the adjacency matrix of a graph G with n vertices is an n-th order square matrix with the following properties: for an undirected graph, the adjacency matrix must be symmetric and its main diagonal must be zero; for a directed graph this is not necessarily so.
  • in an undirected graph, the degree of any vertex i is the number of non-zero elements in the i-th row (or i-th column);
  • the out-degree of a vertex i in a directed graph is the number of non-zero elements in the i-th row;
  • the in-degree is the number of non-zero elements in the i-th column.
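  • As an illustration of these degree properties (the 3-node graph below is a made-up example, not from the application), the in-degree and out-degree of a vertex can be read directly off a directed adjacency matrix in plain Python:

```python
# Sketch: in-degree and out-degree read off a directed adjacency matrix.
A = [
    [0, 1, 1],  # node 0 points to nodes 1 and 2
    [0, 0, 1],  # node 1 points to node 2
    [0, 0, 0],  # node 2 has no outgoing edges
]

def out_degree(A, i):
    """Out-degree = number of non-zero elements in row i."""
    return sum(1 for v in A[i] if v != 0)

def in_degree(A, i):
    """In-degree = number of non-zero elements in column i."""
    return sum(1 for row in A if row[i] != 0)

print(out_degree(A, 0), in_degree(A, 2))  # 2 2
```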
  • the adjacency matrix of the directed graph is used to store the syntactic dependency between two event parameters.
  • each sentence forms a syntactic dependency tree through syntactic dependency analysis, and then generates a corresponding adjacency matrix according to the syntactic dependency tree.
  • FIG. 3 shows a schematic diagram of an adjacency matrix provided by an embodiment of the present application.
  • the adjacency matrix shown in FIG. 3 corresponds to the syntax dependency tree shown in FIG. 2 .
  • the trigger word in Figure 2 is "go"; "Beijing" and "Tiananmen" are parallel objects, so in the corresponding adjacency matrix the value at the intersection of the row of "go" with the columns of "Beijing" and "Tiananmen" is 1.
  • each word is a node; "I", "go", "Beijing", "Tiananmen", "watch", "sun" and "rise" are seven words, so the matrix is a 7×7 square matrix.
  • the adjacency matrix of the directed graph stores the syntactic dependencies of the text: if there is a dependency between two words, the corresponding adjacency matrix element is 1; between words without a dependency relationship, the element is 0.
  • the dependency relationship between the context words can be represented by the adjacency matrix.
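  • The construction just described can be sketched in plain Python. The edge list below is one reading of the Fig. 2 example; the exact dependency edges (e.g. ("sun", "rise")) are assumptions for illustration, not taken from the application's figures:

```python
# Sketch: building a directed adjacency matrix from dependency edges,
# as for the Fig. 2 tree / Fig. 3 matrix. Edge list is illustrative.
words = ["I", "go", "Beijing", "Tiananmen", "watch", "sun", "rise"]
# (head, dependent) pairs; "go" is the root of the dependency tree.
edges = [("go", "I"), ("go", "Beijing"), ("go", "Tiananmen"),
         ("go", "watch"), ("watch", "sun"), ("sun", "rise")]

index = {w: i for i, w in enumerate(words)}
n = len(words)
A = [[0] * n for _ in range(n)]  # 7x7 square matrix, one row/column per word
for head, dep in edges:
    A[index[head]][index[dep]] = 1  # 1 where a dependency exists, else 0

print(A[index["go"]][index["Beijing"]])  # 1
print(A[index["I"]][index["sun"]])       # 0
```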
  • the splicing vector is generated according to the word embedding vectors of the context words.
  • the word-level information in the sentence needs to be converted into a real-valued vector as the input of the artificial neural network.
  • Let X = {x1, x2, x3, ..., xn} be a sentence of length n, where xi is the i-th word in the sentence.
  • the semantic information of a word is related to its position in the sentence, and the part-of-speech and entity type information can improve the recognition of trigger words and the understanding of semantics.
  • the concatenated vector formed by concatenating the meaning vector, entity vector, part-of-speech vector and position vector of the context word is used as the input of the artificial neural network.
  • four different word embedding vectors, namely the meaning vector, entity vector, part-of-speech vector and position vector of the context word, are spliced into the first spliced vector; the first spliced vector is then input to the Bi-LSTM neural network layer to generate a second spliced vector, which serves as one of the input vectors of the multi-layer graph attention network. The spliced vector can thus capture semantic information between context words.
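  • The splicing of the four word embedding vectors can be sketched as simple vector concatenation. The dimensions and values below are illustrative placeholders, and the subsequent Bi-LSTM step is omitted since it requires a deep-learning framework:

```python
# Sketch: forming the first spliced vector from four per-word embedding
# vectors (meaning, entity type, part of speech, position). Values and
# dimensions are made up for illustration; real embeddings would come
# from trained lookup tables.
word_vec     = [0.2, -0.1, 0.5]   # semantic (word meaning) embedding
entity_vec   = [1.0, 0.0]         # entity-type embedding
pos_tag_vec  = [0.0, 1.0]         # part-of-speech embedding
position_vec = [0.3]              # position embedding

# Splicing is plain concatenation of the four vectors.
first_spliced = word_vec + entity_vec + pos_tag_vec + position_vec
print(len(first_spliced))  # 8 = 3 + 2 + 2 + 1
```

In the described method this vector would then pass through a Bi-LSTM layer to yield the second spliced vector fed to the graph attention network.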
  • the adjacency matrix and the concatenated vector are used as the input of the artificial neural network to obtain an output vector.
  • the artificial neural network is a multi-layer graph attention network (Graph Attention Network, GAT). The traditional graph convolutional network has several limitations: it cannot handle directed graphs well, cannot be applied to inductive tasks (tasks in which the graph structures processed in the training and testing phases differ), and cannot handle dynamic graphs. The graph attention network remedies these defects: for each node, the attention mechanism can be used to calculate the similarity coefficient of node j to node i, so that it does not need to rely entirely on the graph structure and can also be applied to inductive tasks.
  • the operation method of the graph attention network is a vertex-by-vertex operation, and each operation needs to cycle through all the vertices on the graph to complete.
  • Vertex-by-vertex operation means getting rid of the constraints of the Laplacian matrix in the original graph structure, so that the directed graph problem can be easily solved.
  • the specific process of step S120, "using the adjacency matrix and the concatenated vector as the input of the artificial neural network to obtain the output vector", can be further described in conjunction with the following description.
  • the tensor and the concatenated vector are input to the artificial neural network for calculation, and the output vector is generated according to the calculation result of the artificial neural network.
  • FIG. 4 it shows a schematic diagram of a graph attention network provided by an embodiment of the present application, which is divided into two steps of calculating the attention coefficient and weighted summation.
  • the tensor and the second spliced vector are used as the input of the graph attention layer, expressed as h = {h_1, h_2, ..., h_N}, h_i ∈ R^F, where N is the number of nodes and F is the number of node features; the output is h' = {h'_1, h'_2, ..., h'_N}, h'_i ∈ R^{F'}, where F' is the new node-feature dimension.
  • a is a mapping R^{F'} × R^{F'} → R, and W ∈ R^{F'×F} is the weight matrix.
  • the graph attention network can use the attention mechanism to calculate the similarity coefficient weights between node i and neighbor node j for each node, so that it does not need to rely entirely on the graph structure.
  • the attention coefficient is normalized by softmax; the calculation formula is expressed as follows:
  • α_ij = softmax_j(e_ij) = exp(LeakyReLU(a^T [W h_i ‖ W h_j])) / Σ_{k ∈ N_i} exp(LeakyReLU(a^T [W h_i ‖ W h_k]))
  • where W is the weight matrix multiplied with the features, LeakyReLU is the nonlinear activation function, and the j traversed in j ∈ N_i ranges over all nodes adjacent to i.
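  • A minimal sketch of this normalization step, assuming the raw scores e_ij (i.e. a^T [W h_i ‖ W h_j]) have already been computed for the neighbors of node i; the scores below are illustrative values:

```python
import math

def leaky_relu(x, slope=0.2):
    """LeakyReLU nonlinearity used in the GAT attention formula."""
    return x if x > 0 else slope * x

def attention_coefficients(e, neighbors):
    """Softmax-normalize raw attention scores e[j] over the neighbors
    of node i, so the coefficients alpha_ij sum to 1."""
    exps = {j: math.exp(leaky_relu(e[j])) for j in neighbors}
    total = sum(exps.values())
    return {j: exps[j] / total for j in neighbors}

# Illustrative raw scores for a node i with neighbors 0, 1 and 2.
alpha = attention_coefficients({0: 1.0, 1: -2.0, 2: 0.5}, [0, 1, 2])
print(round(sum(alpha.values()), 6))  # 1.0 — coefficients sum to one
```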
  • aggregation information is generated according to the aggregation of the spliced vector and the output vector.
  • the aggregation of syntactic information is realized through a skip-connection module (Skip-Connection): the spliced vector skips over each graph attention layer through the skip-connection module and is aggregated with the output vector.
  • the skip-connection module prevents over-propagation of short-distance syntactic information and retains more of the original syntactic information, avoiding degradation of the final trigger-word classification.
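  • A hedged sketch of the skip-connection aggregation: element-wise addition is one common choice of aggregation operator, assumed here since the passage does not fix the operator:

```python
# Sketch: skip-connection aggregation. The spliced input vector bypasses
# the graph-attention layers and is aggregated with their output, so the
# original features survive deep propagation. Element-wise addition is
# an assumed, illustrative choice of aggregation.
def skip_aggregate(spliced_vec, gat_output):
    assert len(spliced_vec) == len(gat_output)
    return [s + o for s, o in zip(spliced_vec, gat_output)]

agg = skip_aggregate([1.0, 2.0, 3.0], [0.1, 0.2, 0.3])
print(agg)  # [1.1, 2.2, 3.3]
```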
  • the trigger word category of the context word is determined according to the aggregation information.
  • the specific process of step S140, "determining the trigger word category of the context word according to the aggregation information", can be further described in conjunction with the following description.
  • the trigger word of the context word is determined according to the aggregation information, and the trigger word is classified according to the classifier module.
  • the trigger words of the context words are determined according to the aggregation information, the trigger words are classified according to the preset conditions of the classifier module, and the event type of the corresponding event sentence is determined according to the classification categories of the trigger words.
  • the event types are pre-defined different types.
  • the preset condition of the classifier module is to aggregate the information of the different modules, pass it through a fully connected layer, and then map the outputs of the multiple neurons to the (0,1) interval through the softmax function (whose outputs can be understood as probabilities, enabling multi-class classification); the category with the largest probability for each context word is selected as the predicted label of the current trigger word.
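  • The classifier's softmax-and-argmax step can be sketched as follows; the logit values and event-type labels are hypothetical placeholders:

```python
import math

def softmax(logits):
    """Map raw scores to the (0,1) interval so they form probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_trigger_label(logits, labels):
    """Select the category with the largest probability as the
    predicted label of the current trigger word."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

labels = ["None", "Attack", "Transport"]  # hypothetical event types
print(predict_trigger_label([0.2, 2.1, -1.0], labels))  # Attack
```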
  • the division of the data set in this experiment is consistent with the division of the data set of other event detection methods.
  • the experimental results prove that the event detection method proposed in this embodiment is better than the traditional event detection method that only uses sentence-level features.
  • the F1-score is about 8% higher; compared with methods based on graph neural networks, the event detection method proposed in this embodiment also achieves the highest F1-score and recall.
  • FIG. 5 a schematic flow diagram of an event detection method based on a multi-layer graph attention network is shown
  • the event text information is acquired and analyzed with syntactic analysis technology to generate a syntactic dependency tree; an adjacency matrix corresponding to the context words is then generated according to the syntactic dependency tree, and a tensor is generated from the adjacency matrices of the same batch of sentences. The four different word embedding vectors of the context words are spliced into the first spliced vector, and the first spliced vector is input to the Bi-LSTM neural network layer to generate the second spliced vector; the adjacency matrix and the second spliced vector are input into the multi-layer graph attention network to generate an output vector that aggregates syntactic information of different depths. The spliced vector skips the multi-layer graph attention network through a skip-connection module for the aggregation operation; the output vector and the spliced vector are aggregated, and the trigger words of the context words are classified through the classifier module to determine the event type corresponding to the event sentence.
  • the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
  • FIG. 6 it shows an event detection device based on a multi-layer graph attention network provided by an embodiment of the present application
  • An acquisition module 610 configured to acquire context words in the event text information, and determine a syntactic information adjacency matrix and splicing vectors corresponding to the context words;
  • A calculation module 620 configured to use the adjacency matrix and the splicing vector as the input of the artificial neural network to obtain an output vector;
  • Aggregation module 630 configured to generate aggregation information according to the aggregation of the spliced vector and the output vector;
  • a classification module 640 configured to determine the trigger word category of the context word according to the aggregation information.
  • the acquisition module 610 includes:
  • An expression submodule configured to determine syntactic information corresponding to the context word according to the context word
  • a generating submodule configured to generate the syntax information adjacency matrix according to the syntax information
  • the splicing submodule is used to generate the splicing vector according to the word embedding vector of the context word.
  • the expression submodule includes:
  • the dependency analysis sub-module is configured to analyze the event text information through syntactic dependencies, and generate syntactic information corresponding to the context words according to the analysis result of the event text information.
  • the calculation module 620 includes:
  • the array conversion submodule is used to generate a tensor from the adjacency matrices of the same batch;
  • the artificial neural network calculation submodule is used to input the tensor and the splicing vector into the artificial neural network for calculation, and to generate the output vector according to the calculation result of the artificial neural network.
  • the classification module 640 includes:
  • the trigger word processing submodule is configured to determine the trigger word of the context word according to the aggregation information, and classify the trigger word according to the classifier module.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The present application relates to an event detection method and apparatus based on a multi-layer graph attention network, the method comprising: acquiring context words in event text information, and determining a syntactic information adjacency matrix and a splicing vector corresponding to the context words; using the adjacency matrix and the splicing vector as the input of an artificial neural network to acquire an output vector; generating aggregation information by aggregating the splicing vector and the output vector; and determining the trigger-word category of the context words according to the aggregation information. In the present application, by simultaneously combining the syntactic information and context information of context words, the problem of information loss and error propagation that easily arises when using a syntactic analysis tool can be effectively solved; by combining a skip-connection module in the graph attention network layer, the situation in which the classification of the final trigger word is unsatisfactory due to excessive propagation of some short-distance syntactic information can be avoided, effectively improving the precision, recall rate and F1 value of trigger-word classification.
PCT/CN2021/123249 2021-09-30 2021-10-12 Event detection method and apparatus based on a multi-layer graph attention network WO2023050470A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111164755.1A CN113887213A (zh) 2021-09-30 2021-09-30 一种基于多层图注意力网络的事件检测方法及装置
CN202111164755.1 2021-09-30

Publications (1)

Publication Number Publication Date
WO2023050470A1 2023-04-06

Family

ID=79005069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123249 WO2023050470A1 (fr) 2021-09-30 2021-10-12 Event detection method and apparatus based on a multi-layer graph attention network

Country Status (2)

Country Link
CN (1) CN113887213A (fr)
WO (1) WO2023050470A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116303996A (zh) * 2023-05-25 2023-06-23 江西财经大学 基于多焦点图神经网络的主题事件抽取方法
CN116629237A (zh) * 2023-07-25 2023-08-22 江西财经大学 基于逐步集成多层注意力的事件表示学习方法及系统
CN116701576A (zh) * 2023-08-04 2023-09-05 华东交通大学 无触发词的事件检测方法和系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259142A (zh) * 2020-01-14 2020-06-09 华南师范大学 基于注意力编码和图卷积网络的特定目标情感分类方法
US20200356628A1 (en) * 2019-05-07 2020-11-12 International Business Machines Corporation Attention-based natural language processing
CN112163416A (zh) * 2020-10-09 2021-01-01 北京理工大学 一种融合句法和实体关系图卷积网络的事件联合抽取方法
CN112347248A (zh) * 2020-10-30 2021-02-09 山东师范大学 一种方面级文本情感分类方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200356628A1 (en) * 2019-05-07 2020-11-12 International Business Machines Corporation Attention-based natural language processing
CN111259142A (zh) * 2020-01-14 2020-06-09 华南师范大学 基于注意力编码和图卷积网络的特定目标情感分类方法
CN112163416A (zh) * 2020-10-09 2021-01-01 北京理工大学 一种融合句法和实体关系图卷积网络的事件联合抽取方法
CN112347248A (zh) * 2020-10-30 2021-02-09 山东师范大学 一种方面级文本情感分类方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BAI XUEFENG; LIU PENGBO; ZHANG YUE: "Investigating Typed Syntactic Dependencies for Targeted Sentiment Classification Using Graph Attention Neural Network", IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, IEEE, USA, vol. 29, 2 December 2020 (2020-12-02), USA, pages 503 - 514, XP011829612, ISSN: 2329-9290, DOI: 10.1109/TASLP.2020.3042009 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116303996A (zh) * 2023-05-25 2023-06-23 江西财经大学 基于多焦点图神经网络的主题事件抽取方法
CN116303996B (zh) * 2023-05-25 2023-08-04 江西财经大学 基于多焦点图神经网络的主题事件抽取方法
CN116629237A (zh) * 2023-07-25 2023-08-22 江西财经大学 基于逐步集成多层注意力的事件表示学习方法及系统
CN116629237B (zh) * 2023-07-25 2023-10-10 江西财经大学 基于逐步集成多层注意力的事件表示学习方法及系统
CN116701576A (zh) * 2023-08-04 2023-09-05 华东交通大学 无触发词的事件检测方法和系统
CN116701576B (zh) * 2023-08-04 2023-10-10 华东交通大学 无触发词的事件检测方法和系统

Also Published As

Publication number Publication date
CN113887213A (zh) 2022-01-04

Similar Documents

Publication Publication Date Title
WO2023050470A1 (fr) Event detection method and apparatus based on a multi-layer graph attention network
Aggarwal et al. Classification of fake news by fine-tuning deep bidirectional transformers based language model
EP3958145A1 (fr) Procédé et appareil de recherche sémantique, dispositif et support d'enregistrement
WO2022001333A1 (fr) Procédé de reconnaissance d'entité à granularité fine basé sur une interaction de texte d'étiquette et une représentation d'espace hyperbolique
Sagnika et al. An attention-based CNN-LSTM model for subjectivity detection in opinion-mining
CN116304748B (zh) 一种文本相似度计算方法、系统、设备及介质
Niu et al. An Improved Method for Web Text Affective Cognition Computing Based on Knowledge Graph.
JP7369228B2 (ja) ユーザ興味画像の生成方法、装置、電子機器及び記憶媒体
Zhen et al. The research of convolutional neural network based on integrated classification in question classification
Mohan et al. Sarcasm Detection Using Bidirectional Encoder Representations from Transformers and Graph Convolutional Networks
CN112906368B (zh) 行业文本增量方法、相关装置及计算机程序产品
Chen et al. Learning a general clause-to-clause relationships for enhancing emotion-cause pair extraction
Lee et al. Detecting suicidality with a contextual graph neural network
CN113821588A (zh) 文本处理方法、装置、电子设备及存储介质
Gupta et al. Analysis of machine learning approaches for sentiment analysis of Twitter data
WO2023077562A1 (fr) Procédé et appareil de détection d'événement sur la base d'une stratégie de perturbation de graphe
KR102567896B1 (ko) 딥러닝을 이용한 종교 감성 분석 장치 및 방법
WO2023137903A1 (fr) Procédé et appareil de détermination de déclaration de réponse basés sur une sémantique grossière, et dispositif électronique
CN116257632A (zh) 基于图对比学习的未知目标立场检测方法、装置
Luo et al. A survey of transformer and GNN for aspect-based sentiment analysis
Xu et al. AHRNN: Attention‐Based Hybrid Robust Neural Network for emotion recognition
Lou Deep learning-based sentiment analysis of movie reviews
Aljebreen et al. Moth Flame Optimization with Hybrid Deep Learning based Sentiment Classification Towards ChatGPT on Twitter
Liu Emotional Analysis of Short Text Based on Multiscale Improved CNN Model Under Tenorflow
Sharma et al. 6 Natural Language

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21959034

Country of ref document: EP

Kind code of ref document: A1