CN113076744A - Cultural relic knowledge relation extraction method based on convolutional neural network - Google Patents

Cultural relic knowledge relation extraction method based on convolutional neural network Download PDF

Info

Publication number
CN113076744A
Authority
CN
China
Prior art keywords
sentence
cultural relic
neural network
convolutional neural
word
Prior art date
Legal status
Pending
Application number
CN202110410046.0A
Other languages
Chinese (zh)
Inventor
田侃
唐昌伦
赵卓
张殊
张晨
先兴平
游小琳
廖嘉欣
Current Assignee
Chongqing University of Post and Telecommunications
Three Gorges Museum
Original Assignee
Chongqing University of Post and Telecommunications
Three Gorges Museum
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications, Three Gorges Museum filed Critical Chongqing University of Post and Telecommunications
Priority to CN202110410046.0A priority Critical patent/CN113076744A/en
Publication of CN113076744A publication Critical patent/CN113076744A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to the field of natural language processing, in particular to a cultural relic knowledge relation extraction method based on a convolutional neural network, which comprises the following steps: acquiring a cultural relic data set and preprocessing the cultural relic data to obtain preprocessed cultural relic data; performing word vector conversion on the preprocessed cultural relic data through the Skip-gram model of Word2vec and extracting the vocabulary-level features of each word in the sentence; extracting the sentence-level features of each sentence in the cultural relic data; splicing the extracted vocabulary-level features and sentence-level features to obtain a spliced feature vector, and feeding the spliced feature vector into a fully connected layer as the feature data of the classification task; and linearly transforming the feature data in the fully connected layer and calculating the classification prediction through a Softmax classifier to obtain the confidence score of the relation corresponding to the sentence. The extracted features have higher confidence, and relation extraction efficiency is improved.

Description

Cultural relic knowledge relation extraction method based on convolutional neural network
Technical Field
The invention relates to the field of natural language processing, in particular to a cultural relic knowledge relationship extraction method based on a convolutional neural network.
Background
With the rapid development of communication and internet technology, it has been proposed to use information technology to present the functions of the traditional physical museum in digital form, so that cultural relic knowledge can be shared and used effectively, the public can learn about and come into contact with more of it, and museums can better serve society and the public. The relations between items of cultural relic knowledge can therefore be established by constructing a knowledge graph, achieving the goal of digital museum exhibition. Relation extraction is one of the important tasks in knowledge graph construction; it identifies entities and the relations between them. Relation extraction technology is an essential module for converting unstructured cultural relic information into structured information stored in a knowledge base, and it provides support and help for subsequent digital museum exhibition.
Generally, the conventional approach is rule-based relation extraction, which requires grammar and semantic rules to be constructed manually; preprocessed sentence fragments are then matched against the pattern rules to complete the relation classification. Because rule-based relation extraction depends on rules formulated in advance, it suffers from low coverage, high labor cost, poor portability, and the difficulty of designing rules that do not conflict or overlap, which makes it hard to apply to the complex relation extraction required for cultural relics. Considering the diversity and depth of cultural relic information, relation extraction based on a deep learning algorithm can automatically learn effective features of the information, and a convolutional deep neural network is combined to extract the word and sentence features of sentences.
Disclosure of Invention
The invention provides a cultural relic knowledge relation extraction method based on a convolutional neural network, aiming at the problems that rule-based relation extraction consumes a large amount of manpower to design rules and is difficult to apply to the extraction of complex and various cultural relic information relations.
A cultural relic knowledge relationship extraction method based on a convolutional neural network comprises the following steps:
s1, obtaining a cultural relic data set, and preprocessing the cultural relic data to obtain preprocessed cultural relic data;
s2, converting Word vectors of the preprocessed cultural relic data through a Skip-gram model of Word2vec, and extracting the vocabulary level characteristics of each Word in the sentence;
s3, extracting sentence level characteristics of each sentence in the cultural relic data;
s4, splicing the extracted vocabulary-level features and sentence-level features to obtain a spliced feature vector (the sentence representation), and feeding the spliced feature vector into a fully connected layer as the feature data of a classification task; and linearly transforming the feature data at the fully connected layer and calculating a classification prediction through a Softmax classifier to obtain a confidence score of the relation corresponding to the sentence, wherein the confidence score reflects the relation of the sentence.
Further, in step S3, the extracting sentence-level features of each sentence in the cultural relic data includes:
s31, extracting word features and position features aiming at each sentence in the cultural relic data, and combining and splicing the word features and the position features to obtain spliced feature vectors;
s32, sending the spliced feature vectors into a convolutional neural network to extract sentence-level features, and obtaining feature vectors output by the convolutional neural network;
and S33, performing down-sampling on the output feature vector of the convolution operation by utilizing maximum pooling to obtain more accurate sentence-level features.
Further, the convolutional neural network structure comprises an input layer, a pooling layer and a convolutional layer, wherein the pooling layer uses MaxPooling to select the strongest feature from the convolution results; the convolutional layer is used to extract features.
Further, the processing flow of the convolutional neural network comprises the following processes:
S321, inputting a feature vector obtained by combining and splicing the word features and the position features into a convolutional neural network, the input forming a k × n word vector matrix, wherein k is the dimension of a word vector and n is the number of words contained in a sentence;
s322, carrying out window interception on the input matrix, wherein the window size is l, and the intercepted window is represented as:
q_i = w_{i:i+l-1} ∈ R^{l×d}, 1 ≤ i ≤ m−l+1
where q_i denotes the sentence representation for a window of size l, w denotes the text embedding representation, and R^{l×d} indicates that this representation has dimension l × d;
S323, processing each phrase in the window by the convolutional layer and outputting a context feature vector corresponding to each word; the result of the k-th convolution kernel W_k acting on the i-th window is calculated as follows:
p_{k,i} = f(W_k q_i + b) ∈ R
where f(·) is a tangent function, W_k denotes the convolution kernel, q_i denotes the sentence representation for a window of size l, and b is the bias term;
the final output of the convolutional neural network is:
p_k = [p_{k,1}, …, p_{k,m−l+1}]^T ∈ R^{m−l+1}
where p_k is the output of the k-th convolution kernel, p_{k,1} denotes the convolution result of the first window, R^{m−l+1} gives the dimension of the output, m denotes the sentence length, and l denotes the window size.
Further, the maximum pooling is utilized to perform down-sampling on the output result of the convolutional neural network, redundant noise information contained in sentences is removed, and the most useful local feature information in the convolutional layer is screened out, wherein the expression is as follows:
p_{k,max} = max(p_k)
A maximum pooling operation is performed, the outputs are spliced together, and a nonlinear transformation with the hyperbolic tangent as the activation function is applied; the more accurate sentence-level features are then obtained as follows:
x = tanh(W · p_{k,max})
where x ∈ R^{d_c} denotes the more accurate sentence-level feature, W is the weight matrix to be learned, tanh is the activation function, and p_{k,max} is the pooled feature vector.
The invention has the following advantages:
(1) Position features are adopted to encode the relative distance between the current word in the sentence and the two tagged nouns. Structural information that word features alone cannot provide can thus be extracted, the degree of association between words is improved, and the extracted features have higher confidence.
(2) A convolutional neural network is adopted to extract features automatically, which avoids the problems that rule-based relation extraction consumes a large amount of manpower to design rules, which tend to conflict and overlap, and that such rules are poorly portable; features are learned automatically and relation extraction efficiency is improved.
(3) A maximum pooling operation is adopted to down-sample the output of the convolutional layer, so that the most useful local feature information in the convolutional layer can be screened out and used as the input of the classification model.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a diagram of a relationship extraction method model provided by the present invention;
FIG. 2 is a diagram of a Skip-gram model provided by the present invention;
fig. 3 is a schematic flow chart of the cultural relic knowledge relationship extraction method based on the convolutional neural network provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment provides a cultural relic knowledge relationship extraction method based on a convolutional neural network, as shown in fig. 1 to 3, the method is specifically implemented as follows:
the method comprises the steps of firstly, acquiring a cultural relic information data set, preprocessing the cultural relic data, aligning and storing each part of data according to a corresponding format, and obtaining preprocessed cultural relic data.
The cultural relic information data set is divided into 3 parts, namely a training set, a test set and a validation set, in the ratio 8:1:1. The relation category file contains 23 kinds of relations to be predicted in total; given this file of known categories, the method of the invention judges to which relation in the relation category file the relation contained in an input sentence belongs.
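As an illustration only, a minimal sketch of the 8:1:1 split described above is given below; the in-memory list of labelled sentences and the fixed random seed are assumptions, not details taken from the patent.

```python
# Minimal sketch of the 8:1:1 train/test/validation split described above.
# The in-memory list of labelled sentences and the fixed seed are assumptions.
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle labelled sentences and split them into train/test/validation sets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train, n_test = int(n * ratios[0]), int(n * ratios[1])
    return (samples[:n_train],
            samples[n_train:n_train + n_test],
            samples[n_train + n_test:])

train_set, test_set, valid_set = split_dataset(range(100))
print(len(train_set), len(test_set), len(valid_set))   # 80 10 10
```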
On the basis of named entity recognition, the cultural relic information data are preprocessed, which includes splitting long sentences, converting traditional characters to simplified characters, removing various punctuation marks and stop words, and the like.
The text of each sentence is segmented with a word segmentation tool, and each sentence is then converted into its original vectors using a trained Word2vec or similar model. With this component, a word-embedding lookup converts each input tokenised word into a vector, generating a word_embedding matrix for initialization. After the word vectors are encoded, the text data can be converted into numerical data.
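As a hedged sketch of this preprocessing step, the snippet below segments sentences with jieba and builds a word_embedding matrix from a gensim Word2vec model; both tools are assumptions (the patent only refers to a word segmentation tool and a trained Word2vec or similar model), and the toy corpus and vector size are illustrative.

```python
# Hedged sketch: segment sentences, train a Skip-gram Word2vec model, and
# build the word_embedding matrix used to initialise the network.
# jieba and gensim are assumed tools, not named in the patent.
import numpy as np
import jieba
from gensim.models import Word2Vec

sentences = ["陈子昂，字伯玉。"]                      # toy cultural-relic corpus
tokenised = [jieba.lcut(s) for s in sentences]        # word segmentation

# Skip-gram (sg=1) Word2vec; vector_size is the gensim 4.x keyword.
w2v = Word2Vec(tokenised, vector_size=50, sg=1, window=2, min_count=1)

vocab = {w: i for i, w in enumerate(w2v.wv.index_to_key)}
word_embedding = np.stack([w2v.wv[w] for w in w2v.wv.index_to_key])

# After encoding, each sentence becomes a sequence of row indices into the
# word_embedding matrix, i.e. text data converted to numerical data.
encoded = [[vocab[w] for w in sent] for sent in tokenised]
print(word_embedding.shape, encoded)
```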
Second, the vocabulary-level features of each word in the sentence are extracted.
The vocabulary-level features mainly comprise features such as the nouns, the types of the noun pairs, and the word sequence between the entities. The word vectors are converted according to the Skip-gram model of Word2vec to obtain the word features.
The Skip-gram model algorithm proceeds as follows: a vocabulary is first constructed from the training text data, and then a vector representation of the words is learned. The generated word vector file serves as the basic word-embedding feature. The Skip-gram algorithm model is shown in FIG. 2. To extract the vocabulary-level features, the Skip-gram algorithm predicts the context words (the other words in the window except the center word, where the window size is 2, i.e. two words on the left and right) given the target word (the center word).
Suppose the input sentence is "陈子昂，字伯玉。" (Chen Zi'ang, courtesy name Boyu). If the target word is "字" and the window size is 2, the context words are "子昂" and "伯玉".
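To make the windowing concrete, here is a small illustrative helper (not from the patent) that enumerates the (target word, context word) training pairs the Skip-gram objective uses for this example; the segmentation shown is an assumption.

```python
# Illustrative only: enumerate Skip-gram (target, context) pairs for a window
# of 2 words on either side of the centre word. The segmentation is assumed.
def skipgram_pairs(tokens, window=2):
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((target, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

tokens = ["陈", "子昂", "字", "伯玉"]
print(skipgram_pairs(tokens))
# For the target word "字" the context words include "子昂" and "伯玉".
```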
Third, word features and position features are extracted for each sentence in the cultural relic data and are combined and spliced to obtain the sentence-level features of each sentence in the cultural relic data.
The sentence-level features mainly include word features and position features.
The sentence level feature extraction process comprises the following steps:
s31, extracting word features and position features aiming at each sentence in the cultural relic data, and combining and splicing the word features and the position features to obtain spliced feature vectors;
s32, sending the spliced feature vectors into a convolutional neural network to extract sentence-level features, and obtaining feature vectors output by the convolutional neural network;
and S33, performing down-sampling on the feature vectors output by the convolutional neural network by utilizing maximum pooling so as to obtain more accurate sentence-level features.
According to the distributional hypothesis, words appearing in the same context tend to have similar meanings. Therefore, in a preferred embodiment, in order to accurately capture the word features in a text, a window size for the context of each word is set when extracting the word features, and the context features corresponding to that window size are extracted for the words in the sentence. The word feature WF represents the feature vector of a word together with its context.
In one specific embodiment, assume the word sequence of a sentence is [陈_1 子_2 昂_3 字_4 伯_5 玉_6]. All word tokens in the sentence are represented as a vector list (x_0, x_1, …, x_6), where x_i denotes the embedding of the i-th word in the sentence. Using w to denote the word-context window size and setting w = 2, the second word "子" is represented by the word features [x_0, x_1, x_2]. For the entire sentence, the word feature WF is therefore represented as:
{[x_s, x_0, x_1], [x_0, x_1, x_2], [x_1, x_2, x_3], [x_2, x_3, x_4], [x_3, x_4, x_5], [x_4, x_5, x_6], [x_5, x_6, x_e]}.
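A minimal sketch of building the word feature WF for this example follows; the zero-vector padding used for x_s and x_e is an assumption made for illustration.

```python
# Sketch: word feature WF_i = [x_{i-1}, x_i, x_{i+1}] for every word, with
# assumed zero-vector padding x_s / x_e at the sentence boundaries.
import numpy as np

def word_features(embeddings):
    dim = embeddings[0].shape[0]
    padded = [np.zeros(dim)] + list(embeddings) + [np.zeros(dim)]  # x_s ... x_e
    return [np.concatenate(padded[i:i + 3]) for i in range(len(embeddings))]

sentence = [np.random.randn(50) for _ in range(7)]   # x_0 .. x_6
WF = word_features(sentence)
print(len(WF), WF[0].shape)                          # 7 vectors of size 150
```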
the position feature of a word refers to a combination vector of relative distances between the word and two adjacent entities respectively. For example: [ Chenziang1Chinese character2Berry jade3。]The relative distances of the word in the sentence from the "Chenziang" and "Boyu" are 1 and-1, respectively. Converting the relative distance into a randomly initialized dimension vector deThen, the vector d of the relative distance is obtained1And d2Wherein PF is [ d ═ d1,d2]。
After the word feature WF and the position feature PF are extracted, they are combined, spliced and transposed, i.e. [WF, PF]^T, giving the spliced feature representation X = {w_1, w_2, …, w_m}. The matrix X formed from the sentence's feature vectors is used as the raw input of the convolutional neural network for sentence-level feature extraction.
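The sketch below, again illustrative, turns the relative distances into position features PF via a randomly initialized lookup table and splices them with the word features into the input matrix X; the distance clipping range and d_e = 5 are assumptions.

```python
# Sketch: position feature PF = [d_1, d_2] from the relative distances to the
# two entities, spliced with the word feature WF into the CNN input matrix X.
# The clipping range (+/-30) and d_e = 5 are illustrative assumptions.
import numpy as np

d_e, max_dist = 5, 30
pos_table = np.random.randn(2 * max_dist + 1, d_e)          # one row per distance

def position_feature(i, e1_idx, e2_idx):
    d1 = int(np.clip(i - e1_idx, -max_dist, max_dist)) + max_dist
    d2 = int(np.clip(i - e2_idx, -max_dist, max_dist)) + max_dist
    return np.concatenate([pos_table[d1], pos_table[d2]])    # PF = [d_1, d_2]

WF = [np.random.randn(150) for _ in range(7)]                # stand-in word features
e1_idx, e2_idx = 0, 4                                        # entity positions
X = np.stack([np.concatenate([wf, position_feature(i, e1_idx, e2_idx)])
              for i, wf in enumerate(WF)])
print(X.shape)                                               # (7, 150 + 2*d_e)
```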
The convolutional neural network adopted by the invention is an improved algorithm applied to relation extraction from text data. It avoids the problems of rule-based relation extraction, in which a large amount of manpower is consumed to design rules that tend to conflict and overlap and whose portability is poor; features are learned automatically and relation extraction efficiency is improved. The feature-vector processing of the traditional convolutional neural network is also modified. In the traditional process, the feature vector matrix is fed into the convolutional layer, which extracts features, and the extracted features are then fed into the fully connected layer for relation classification. The core improvements of this embodiment are twofold: position feature extraction is added to the feature extraction process, and a pooling layer is added after the convolutional layer. First, adding position features during text feature extraction captures structural information that word features alone cannot provide, raises the degree of association between words, and extracts the contextual information of each word. Second, the pooling layer keeps only the largest of the feature values extracted by each convolution kernel and discards the others; the maximum value represents the strongest of these features, and the weaker ones are dropped. Introducing a pooling layer into the traditional convolutional neural network reduces the model's parameters and selects better features.
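To summarize the improved architecture end to end, here is a compact PyTorch sketch under several assumptions: PyTorch itself, the hyper-parameter values, and the omission of the separate lexical-level features before the fully connected layer are all illustrative choices, not details taken from the patent.

```python
# Sketch of the improved CNN: word + position embeddings, a convolution over
# word windows, max pooling with tanh, and a fully connected softmax layer
# over the 23 relation categories. Hyper-parameters are illustrative; the
# splice with lexical-level features is omitted for brevity.
import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    def __init__(self, vocab_size, n_rel=23, k=50, d_e=5, d_c=230, l=3, max_dist=30):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, k)
        self.pos_emb1 = nn.Embedding(2 * max_dist + 1, d_e)     # distance to entity 1
        self.pos_emb2 = nn.Embedding(2 * max_dist + 1, d_e)     # distance to entity 2
        self.conv = nn.Conv1d(k + 2 * d_e, d_c, kernel_size=l)  # windows of size l
        self.fc = nn.Linear(d_c, n_rel)

    def forward(self, words, dist1, dist2):
        # words, dist1, dist2: (batch, m) index tensors
        x = torch.cat([self.word_emb(words),
                       self.pos_emb1(dist1),
                       self.pos_emb2(dist2)], dim=-1)            # (batch, m, d)
        p = self.conv(x.transpose(1, 2))                         # (batch, d_c, m-l+1)
        x_sent = torch.tanh(p.max(dim=2).values)                 # max pooling + tanh
        return torch.softmax(self.fc(x_sent), dim=-1)            # relation confidences

model = RelationCNN(vocab_size=1000)
probs = model(torch.randint(0, 1000, (2, 20)),                   # word indices
              torch.randint(0, 61, (2, 20)),                     # distances to entity 1
              torch.randint(0, 61, (2, 20)))                     # distances to entity 2
print(probs.shape)                                               # torch.Size([2, 23])
```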
The processing flow of the convolutional neural network specifically includes the following processes:
s321, inputting a feature vector obtained by combining and splicing word features and position features into a convolutional neural network, wherein the feature vector is a k x n word vector matrix, k is the dimension of a word vector, and n is the number of words contained in a sentence. The dimension of the convolution kernel is l × d.
And S322, before the convolution operation, window interception needs to be carried out on the input Embedding matrix. The window intercepting process mainly comprises the following implementation processes:
the window size is l, and the ith window can be expressed as:
q_i = w_{i:i+l-1} ∈ R^{l×d}, 1 ≤ i ≤ m−l+1
where q_i denotes the sentence representation for a window of size l, w denotes the text embedding representation, and R^{l×d} indicates that the representation has dimension l × d.
S323, the convolution layer processes each phrase in the window, and outputs a context feature vector corresponding to each word, wherein the context feature vector corresponds to only local features.
In the convolutional neural network, the set of d_c convolution kernels can be expressed as a tensor W ∈ R^{d_c×l×d}.
The result of the k-th convolution kernel W_k acting on the i-th window is calculated as follows:
p_{k,i} = f(W_k q_i + b) ∈ R
where f(·) is a tangent function, W_k denotes the convolution kernel, q_i denotes the sentence representation, and b is the bias term.
The convolution is computed over all windows i (1 ≤ i ≤ m−l+1) to obtain the final output of the convolutional neural network; the output of the k-th convolution kernel is:
p_k = [p_{k,1}, …, p_{k,m−l+1}]^T ∈ R^{m−l+1}
where p_k is the final output of the convolutional neural network for the k-th kernel, p_{k,1} denotes the convolution result of the first window, R^{m−l+1} gives the dimension of the output, m denotes the sentence length, l denotes the window size, and T denotes the transpose operation.
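The following numpy sketch transcribes the window and convolution formulas above directly; the random toy values are illustrative only.

```python
# Sketch: q_i = w_{i:i+l-1}, p_{k,i} = f(W_k · q_i + b), and p_k collecting
# all m-l+1 window responses of the k-th kernel. Toy sizes are illustrative.
import numpy as np

m, d, l, d_c = 6, 8, 3, 4          # sentence length, width, window, #kernels
w = np.random.randn(m, d)          # text embedding representation (one sentence)
W = np.random.randn(d_c, l, d)     # d_c convolution kernels of size l x d
b = 0.1                            # bias term

def conv_outputs(w, W, b):
    m = w.shape[0]
    d_c, l, _ = W.shape
    p = np.empty((d_c, m - l + 1))
    for i in range(m - l + 1):
        q_i = w[i:i + l]                                # window of size l
        for k in range(d_c):
            p[k, i] = np.tanh(np.sum(W[k] * q_i) + b)   # p_{k,i} = f(W_k q_i + b)
    return p                                            # row k is p_k

p = conv_outputs(w, W, b)
print(p.shape)                                          # (d_c, m - l + 1)
```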
Further, in one embodiment, maximum pooling is used to down-sample the output of each of the d_c convolution kernels, removing redundant noise information contained in the sentence and screening out the most useful local feature information in the convolutional layer. According to the formula p_{k,max} = max(p_k), a maximum pooling operation is applied to the output of each of the d_c convolution kernels; the results are spliced together and passed through a nonlinear transformation with the hyperbolic tangent as the activation function, yielding the more accurate sentence-level feature x ∈ R^{d_c}:
x = tanh(W · p_{k,max})
where W is the weight matrix to be learned, tanh is the activation function, and p_{k,max} is the pooled feature vector.
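Continuing the numpy sketch, the max pooling and tanh transform above can be written as follows; the random convolution outputs and weight matrix stand in for real learned values.

```python
# Sketch: p_{k,max} = max(p_k) per kernel, then x = tanh(W · p_max).
# Random arrays stand in for the convolution outputs and the learned W.
import numpy as np

d_c = 4
p = np.random.randn(d_c, 10)             # output of the convolution step
p_max = p.max(axis=1)                    # p_{k,max} = max(p_k), one value per kernel
W_pool = np.random.randn(d_c, d_c)       # weight matrix to be learned
x = np.tanh(W_pool @ p_max)              # more accurate sentence-level feature
print(x.shape)                           # (d_c,)
```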
Fourth, the vocabulary-level features and the sentence-level features are combined and spliced to obtain the spliced feature vector (the sentence representation), which is fed into the fully connected layer as the feature data of the classification task. The fully connected layer applies a linear transformation to the feature data, and a softmax activation function finally gives the classification prediction, i.e. the probability of each relation for the entity pair; the confidence score of each relation corresponding to the sentence is computed from these probabilities. The relation with the highest score is the relation extracted from the sentence.
For a sentence of a given entity pair, the probability of predicting the entity pair relationship is as follows:
O = M·x + d
p(r|x) = exp(O_r) / Σ_{k=1}^{n_r} exp(O_k)
where O denotes the relational probability modelling expression, r denotes the r-th relation among the relations, x denotes the sentence-level feature, M is the weight matrix to be learned, d is the bias term to be learned, O_k denotes the k-th element of O, and n_r is the number of relation categories.
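A short numpy sketch of this classification step is given below; the feature dimension and random parameters are placeholders.

```python
# Sketch of the final step: linear transform O = M·x + d followed by a softmax
# over the n_r relation categories. Dimensions and values are placeholders.
import numpy as np

n_r, dim = 23, 64                        # relation categories, feature size
x = np.random.randn(dim)                 # spliced lexical + sentence features
M = np.random.randn(n_r, dim)            # weight matrix to be learned
d = np.random.randn(n_r)                 # bias term to be learned
O = M @ x + d
p = np.exp(O - O.max()) / np.exp(O - O.max()).sum()   # softmax confidence scores
print(p.argmax(), p.max())               # predicted relation index and its score
```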
It should be noted that, as one of ordinary skill in the art would understand, all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware, where the computer program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The foregoing is directed to embodiments of the present invention and it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (5)

1. A cultural relic knowledge relationship extraction method based on a convolutional neural network is characterized by comprising the following steps:
s1, obtaining a cultural relic data set, and preprocessing the cultural relic data to obtain preprocessed cultural relic data;
s2, converting Word vectors of the preprocessed cultural relic data through a Skip-gram model of Word2vec, and extracting the vocabulary level characteristics of each Word in the sentence;
s3, extracting sentence level characteristics of each sentence in the cultural relic data;
s4, splicing the extracted vocabulary level features and sentence level features to obtain spliced feature vectors, and accessing the spliced feature vectors into a full-connection layer as feature data of a classification task; and linearly transforming the characteristic data at the full connection layer, and calculating a classification predicted value through a Softmax classifier to obtain a confidence score of the corresponding relation of the sentence, wherein the confidence score reflects the relation of the sentence.
2. The method for extracting knowledge relationship of cultural relics based on convolutional neural network as claimed in claim 1, wherein in step S3, extracting sentence-level features of each sentence in the cultural relic data comprises:
s31, extracting word features and position features aiming at each sentence in the cultural relic data, and combining and splicing the word features and the position features to obtain spliced feature vectors;
s32, sending the spliced feature vectors into a convolutional neural network to extract sentence-level features, and obtaining feature vectors output by the convolutional neural network;
and S33, performing down-sampling on the output feature vector of the convolution operation by utilizing maximum pooling to obtain more accurate sentence-level features.
3. The cultural relic knowledge relationship extraction method based on the convolutional neural network as claimed in claim 2, wherein the structure of the convolutional neural network comprises an input layer, a pooling layer and a convolutional layer, wherein the pooling layer uses MaxPooling to select the strongest feature from the convolution results; the convolutional layer is used to extract features.
4. The cultural relic knowledge relationship extraction method based on the convolutional neural network as claimed in claim 2, wherein the processing flow of the convolutional neural network comprises the following processes:
S321, inputting a feature vector obtained by combining and splicing the word features and the position features into a convolutional neural network, the input forming a k × n word vector matrix, wherein k is the dimension of a word vector and n is the number of words contained in a sentence;
s322, carrying out window interception on the input matrix, wherein the window size is l, and the intercepted window is represented as:
q_i = w_{i:i+l-1} ∈ R^{l×d}  (1 ≤ i ≤ m−l+1)
wherein q_i denotes the sentence representation for a window of size l, w denotes the text embedding representation, and R^{l×d} indicates that the representation has dimension l × d;
S323, processing each phrase in the window by the convolutional layer and outputting a context feature vector corresponding to each word; the result of the k-th convolution kernel W_k acting on the i-th window is calculated as follows:
p_{k,i} = f(W_k q_i + b) ∈ R
wherein f(·) is a tangent function, W_k denotes the convolution kernel, q_i denotes the sentence representation for a window of size l, and b is the bias term;
the final output of the convolutional neural network is:
p_k = [p_{k,1}, …, p_{k,m−l+1}]^T ∈ R^{m−l+1}
wherein p_k is the output of the k-th convolution kernel, p_{k,1} denotes the convolution result output by the first window, R^{m−l+1} gives the dimension of the output, m denotes the sentence length, and l denotes the window size.
5. The method for extracting cultural relic knowledge relationship based on the convolutional neural network as claimed in claim 4, wherein the maximum pooling is used for down-sampling the output result of the convolutional neural network, removing redundant noise information contained in sentences, and screening out the most useful local feature information in convolutional layers, wherein the expression is as follows:
p_{k,max} = max(p_k)
a maximum pooling operation is performed, the outputs are spliced together, and a nonlinear transformation with the hyperbolic tangent as the activation function is applied; the more accurate sentence-level features are obtained as follows:
x = tanh(W · p_{k,max})
wherein x denotes the more accurate sentence-level feature, x ∈ R^{d_c}, W is the weight matrix to be learned, tanh is the activation function, and p_{k,max} is the pooled feature vector.
CN202110410046.0A 2021-04-16 2021-04-16 Cultural relic knowledge relation extraction method based on convolutional neural network Pending CN113076744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110410046.0A CN113076744A (en) 2021-04-16 2021-04-16 Cultural relic knowledge relation extraction method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110410046.0A CN113076744A (en) 2021-04-16 2021-04-16 Cultural relic knowledge relation extraction method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN113076744A true CN113076744A (en) 2021-07-06

Family

ID=76617760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110410046.0A Pending CN113076744A (en) 2021-04-16 2021-04-16 Cultural relic knowledge relation extraction method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN113076744A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569049A (en) * 2021-08-10 2021-10-29 燕山大学 Multi-label text classification algorithm based on hierarchy Trans-CNN
CN117097674A (en) * 2023-10-20 2023-11-21 南京邮电大学 Sampling time insensitive frequency dimension configurable network feature extraction method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106855853A (en) * 2016-12-28 2017-06-16 成都数联铭品科技有限公司 Entity relation extraction system based on deep neural network
CN110196978A (en) * 2019-06-04 2019-09-03 重庆大学 A kind of entity relation extraction method for paying close attention to conjunctive word
CN111985245A (en) * 2020-08-21 2020-11-24 江南大学 Attention cycle gating graph convolution network-based relation extraction method and system
CN112084790A (en) * 2020-09-24 2020-12-15 中国民航大学 Relation extraction method and system based on pre-training convolutional neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106855853A (en) * 2016-12-28 2017-06-16 成都数联铭品科技有限公司 Entity relation extraction system based on deep neural network
CN110196978A (en) * 2019-06-04 2019-09-03 重庆大学 A kind of entity relation extraction method for paying close attention to conjunctive word
CN111985245A (en) * 2020-08-21 2020-11-24 江南大学 Attention cycle gating graph convolution network-based relation extraction method and system
CN112084790A (en) * 2020-09-24 2020-12-15 中国民航大学 Relation extraction method and system based on pre-training convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAOJIAN ZENG et al.: "Relation Classification via Convolutional Deep Neural Network", Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569049A (en) * 2021-08-10 2021-10-29 燕山大学 Multi-label text classification algorithm based on hierarchy Trans-CNN
CN113569049B (en) * 2021-08-10 2024-03-29 燕山大学 Multi-label text classification method based on hierarchical Trans-CNN
CN117097674A (en) * 2023-10-20 2023-11-21 南京邮电大学 Sampling time insensitive frequency dimension configurable network feature extraction method

Similar Documents

Publication Publication Date Title
US11631007B2 (en) Method and device for text-enhanced knowledge graph joint representation learning
CN108733792B (en) Entity relation extraction method
WO2022007823A1 (en) Text data processing method and device
CN111931506B (en) Entity relationship extraction method based on graph information enhancement
CN106599032B (en) Text event extraction method combining sparse coding and structure sensing machine
CN107818164A (en) A kind of intelligent answer method and its system
CN110222163A (en) A kind of intelligent answer method and system merging CNN and two-way LSTM
CN111966812B (en) Automatic question answering method based on dynamic word vector and storage medium
CN110276396B (en) Image description generation method based on object saliency and cross-modal fusion features
CN111125367A (en) Multi-character relation extraction method based on multi-level attention mechanism
CN110532395B (en) Semantic embedding-based word vector improvement model establishing method
CN114091450B (en) Judicial domain relation extraction method and system based on graph convolution network
CN115221846A (en) Data processing method and related equipment
CN113076744A (en) Cultural relic knowledge relation extraction method based on convolutional neural network
CN110750646A (en) Attribute description extracting method for hotel comment text
CN114997288A (en) Design resource association method
CN115935959A (en) Method for labeling low-resource glue word sequence
CN115687609A (en) Zero sample relation extraction method based on Prompt multi-template fusion
CN115017879A (en) Text comparison method, computer device and computer storage medium
CN113191150B (en) Multi-feature fusion Chinese medical text named entity identification method
CN114707517A (en) Target tracking method based on open source data event extraction
CN111159405B (en) Irony detection method based on background knowledge
CN114169447B (en) Event detection method based on self-attention convolution bidirectional gating cyclic unit network
CN111813927A (en) Sentence similarity calculation method based on topic model and LSTM
CN113434698B (en) Relation extraction model establishing method based on full-hierarchy attention and application thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210706

RJ01 Rejection of invention patent application after publication