CN115545031A - Entity identification method, device, equipment and storage medium for multiple attention mechanisms - Google Patents
Entity identification method, device, equipment and storage medium for multiple attention mechanisms
- Publication number
- CN115545031A (Application number CN202211255256.8A)
- Authority
- CN
- China
- Prior art keywords
- vector
- entity
- feature vector
- attention
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F40/00—Handling natural language data > G06F40/20—Natural language analysis > G06F40/279—Recognition of textual entities > G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking > G06F40/295—Named entity recognition
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F40/00—Handling natural language data > G06F40/30—Semantic analysis
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N3/00—Computing arrangements based on biological models > G06N3/02—Neural networks > G06N3/08—Learning methods
Abstract
The invention relates to artificial intelligence technology and discloses an entity identification method of multiple attention mechanisms, which comprises the following steps: constructing an entity recognition model architecture; inputting a semantic vector into the entity recognition model architecture to obtain a first feature vector; inputting the semantic vector into a convolutional neural network layer, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer to obtain a third feature vector; inputting the semantic vector into an attention mechanism layer to obtain an attention weight vector; fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector; and calculating the feature probability of the fused feature vector and identifying the entity sentence by using the feature probability. In addition, the invention relates to blockchain technology, and the entity sentence can be stored in a node of a blockchain. The invention also provides an entity identification device, equipment and medium of multiple attention mechanisms. The invention can improve the efficiency of entity identification.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an entity identification method and device of multiple attention mechanisms, electronic equipment and a computer readable storage medium.
Background
In natural language processing, named entity recognition (NER) is the task of identifying the positions and categories of entities in text. For regular data, template- and rule-based processing methods can be adopted; irregular data requires a model for recognition.
Most existing named entity recognition methods build a statistical learning model or a deep learning model and train it to finally obtain the named entity recognition result. In practical application and production, however, named entity recognition faces high real-time requirements and high hardware costs; considering only the accuracy of the recognition result tends to impose excessive hardware requirements on entity recognition, and the efficiency of entity recognition is low.
Disclosure of Invention
The invention provides a method and a device for entity identification of multiple attention mechanisms and a computer readable storage medium, and mainly aims to solve the problem of low efficiency of entity identification.
In order to achieve the above object, the present invention provides an entity identification method for multiple attention mechanisms, including:
constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
acquiring a preset entity statement, and performing vector conversion on the entity statement to obtain a semantic vector;
inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
inputting the semantic vector into a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector;
and calculating the feature probability of the fusion feature vector by using a preset activation function, and identifying the entity sentence by using the feature probability.
Optionally, the constructing an entity recognition model architecture by using a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer, and a preset attention mechanism layer includes:
taking the bidirectional long-short term memory network layer, the convolutional neural network layer and the attention mechanism layer as a first level;
connecting the convolutional neural network layer with the gating mechanism layer to obtain a second level;
and constructing the entity recognition model architecture according to the first level and the second level.
Optionally, the inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector includes:
inputting the semantic vector into the bidirectional long-short term memory network layer according to a forward-order rule to obtain a forward-order feature vector;
inputting the semantic vector into the bidirectional long-short term memory network layer according to a reverse-order rule to obtain a reverse-order feature vector;
and fusing the forward-order feature vector and the reverse-order feature vector to obtain the first feature vector.
Optionally, the inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector includes:
calculating an output feature vector of the gating mechanism layer from the second feature vector using the following formula:
h_l(X) = (X * W + b) ⊗ σ(X * V + c)
wherein h_l(X) is the output feature vector of the l-th layer, X is the second feature vector, W and V are convolution kernels in the gating mechanism layer, b and c are parameters learned by the gating mechanism layer, σ is the sigmoid function, and ⊗ denotes element-wise multiplication;
and taking the output feature vector as the third feature vector.
Optionally, the inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector includes:
dividing the semantic vector into a query vector, a matching vector and a value vector;
calculating the attention weight output by the attention mechanism layer from the query vector, the matching vector and the value vector using the following attention formula:
Attention(Q, K, V) = Q(K^T V)
wherein Attention(Q, K, V) represents the attention weight, Q represents the query vector, K represents the matching vector, the superscript T denotes transposition, and V represents the value vector;
the attention weight is assembled into the attention weight vector.
Optionally, the fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector includes:
counting vector lengths of the first feature vector, the third feature vector and the attention weight vector;
determining the maximum value in the vector lengths as a target length;
extending all vector lengths to the target length by using preset parameters;
and merging the column dimensions of all the length-extended vectors to obtain the fused feature vector.
Optionally, the identifying the entity statement by using the feature probability includes:
selecting the maximum feature probability as a classification label corresponding to the entity sentence;
and identifying the entity sentence according to the classification label.
In order to solve the above problems, the present invention further provides an entity recognition apparatus with multiple attention mechanisms, the apparatus including:
the entity recognition model construction module is used for constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
the vector conversion module is used for acquiring a preset entity statement and performing vector conversion on the entity statement to obtain a semantic vector;
the first feature vector extraction module is used for inputting the semantic vector to a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
the second feature vector extraction module is used for inputting the semantic vector into a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
the attention weight vector output module is used for inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
a feature vector fusion module, configured to fuse the first feature vector, the third feature vector, and the attention weight vector to obtain a fusion feature vector;
and the entity sentence identification module is used for calculating the characteristic probability of the fusion characteristic vector by using a preset activation function and identifying the entity sentence by using the characteristic probability.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the entity identification method of multiple attention mechanisms described above.
In order to solve the above problems, the present invention also provides a computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the entity identification method of multiple attention mechanisms described above.
The embodiment of the invention constructs the entity recognition model by combining the respective strengths of the bidirectional long-short term memory network and the convolutional neural network. The entity words are converted into their vector representations through a word vector table; the semantic vectors pass through the bidirectional long-short term memory network, the convolutional layer and the global attention mechanism respectively, and the output data of the convolutional neural network additionally passes through a gating mechanism. The entity recognition model can thus mine complete, extractable features from the data, and the attention mechanism prevents long-distance information from being lost. The first feature vector, the third feature vector and the attention weight vector are fused into a fused feature vector, the probability values of the fused feature vector are calculated with an activation function, and the entity sentence is recognized according to these probability values. Compared with existing single-model entity recognition, the recognition effect is improved. Therefore, the entity identification method and apparatus, electronic device and computer readable storage medium of multiple attention mechanisms provided by the invention can solve the problem of low efficiency in entity identification.
Drawings
FIG. 1 is a schematic flowchart of an entity identification method of multiple attention mechanisms according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of constructing an entity recognition model architecture according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of extracting a first feature vector according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of an entity recognition apparatus of multiple attention mechanisms according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device for implementing the entity identification method of multiple attention mechanisms according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment of the application provides an entity identification method of multiple attention mechanisms. The execution subject of the method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the application. In other words, the method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Referring to FIG. 1, a schematic flowchart of an entity identification method of multiple attention mechanisms according to an embodiment of the present invention is shown. In this embodiment, the entity identification method of multiple attention mechanisms includes:
S1, constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
In the embodiment of the invention, the bidirectional long-short term memory network layer is a Bi-LSTM. The Bi-LSTM model consists of two independent LSTMs; the input sequence is fed into the two LSTM models in forward order and reverse order respectively for feature extraction, and the word vector formed by splicing the two output vectors is used as the final feature representation of a word, so that the feature data obtained at time t contain information from both the past and the future. The convolutional neural network layer is a CNN used for feature extraction, but a gating mechanism (GLU) is introduced into the CNN, which effectively replaces the original activation function. The attention mechanism layer lets each input interact with the others, finding the inputs that deserve more attention.
In an embodiment of the present invention, referring to FIG. 2, the constructing an entity recognition model architecture by using a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer includes:
S21, taking the bidirectional long-short term memory network layer, the convolutional neural network layer and the attention mechanism layer as a first level;
S22, connecting the convolutional neural network layer with the gating mechanism layer to obtain a second level;
S23, constructing the entity recognition model architecture according to the first level and the second level.
In detail, the entity recognition model architecture is used for recognizing named entities. The vectors provided by the input layer are fed into the bidirectional long-short term memory network (Bi-LSTM) layer, the convolutional layer (CNN) and the global attention layer (self-attention) respectively, and the data output by the convolutional neural network is used as the input of the gating mechanism, i.e. the convolutional neural network layer connected to the gating mechanism serves as the second level of the entity recognition model architecture.
Specifically, entity sentences can be identified through the entity recognition model, where entity recognition refers to the process of identifying names or symbols of specific types of objects in a document collection. The entity sentence is first converted into word embedding vectors, which makes it convenient for the entity recognition model to identify.
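To make the two-level structure concrete, the following is a minimal PyTorch sketch of the architecture. The dimensions (embed_dim, hidden_dim, five labels), the kernel size of 3 and the 1/seq_len scaling of the linear attention are illustrative assumptions rather than values fixed by the patent, and the fusion step is simplified to a concatenation of same-length outputs (the padding-based fusion of the patent is detailed under S6 below):

```python
import torch
import torch.nn as nn

class MultiAttentionNER(nn.Module):
    """Level one: Bi-LSTM, CNN and linear attention read the semantic vectors
    in parallel; level two: the CNN output passes through a gated linear unit.
    The outputs are fused and turned into feature probabilities."""

    def __init__(self, embed_dim=128, hidden_dim=128, num_labels=5):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2, batch_first=True,
                              bidirectional=True)         # -> first feature vector
        # Two parallel convolutions realize the GLU: (X*W + b) gated by sigmoid(X*V + c)
        self.conv_w = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)
        self.conv_v = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)
        self.classifier = nn.Linear(2 * hidden_dim + embed_dim, num_labels)

    def forward(self, x):                     # x: (batch, seq_len, embed_dim)
        first, _ = self.bilstm(x)             # spliced forward/reverse passes
        c = x.transpose(1, 2)                 # Conv1d expects (batch, dim, seq_len)
        third = (self.conv_w(c) * torch.sigmoid(self.conv_v(c))).transpose(1, 2)
        # Linear attention Q(K^T V) with Q = K = V = x; 1/seq_len is a scaling assumption
        attn = x @ (x.transpose(1, 2) @ x) / x.size(1)
        fused = torch.cat([first, third, attn], dim=-1)   # column-dimension merge
        return torch.softmax(self.classifier(fused), dim=-1)  # feature probabilities

model = MultiAttentionNER()
probs = model(torch.randn(2, 7, 128))         # (batch=2, seq_len=7) -> (2, 7, 5)
```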
S2, acquiring a preset entity statement, and performing vector conversion on the entity statement to obtain a semantic vector;
In the embodiment of the invention, the entity sentence contains entities labeled by name, such as person names, organization names and place names; broader entities also include numbers, dates, currencies, addresses and the like.
In detail, when entity recognition is performed, the entity sentence is converted into a vector to be recognized in the entity recognition model; that is, the entity sentence is encoded and represented as a vector.
In the embodiment of the invention, the entity statement can be subjected to vector conversion through a preset vector conversion model to obtain a semantic vector, wherein the vector conversion model comprises but is not limited to a word2vec model and a Bert model.
In detail, performing vector conversion on the entity sentence means representing the whole sentence and its semantic information as vectors, so that the machine can understand the context, intent and other nuances hidden in the text. For example, if the embedding matrix is 8000 × 732 (a dictionary capacity of 8000 and an embedding vector dimension of 732), a sentence of length s is represented as an s × 732 matrix; the machine can understand the sentence context through this vector matrix and thereby obtain the complete meaning of the sentence.
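As a small illustration of this step, a sketch of the embedding lookup with the dimensions from the example above follows (the token ids and the use of nn.Embedding are hypothetical; the patent also allows models such as word2vec or Bert for the conversion):

```python
import torch
import torch.nn as nn

# Dictionary capacity 8000, embedding vector dimension 732, as in the example
embedding = nn.Embedding(num_embeddings=8000, embedding_dim=732)

# Hypothetical token ids for an entity sentence of length s = 5
token_ids = torch.tensor([[12, 845, 3, 1999, 7]])
semantic_vector = embedding(token_ids)   # shape (1, 5, 732): an s x 732 matrix
```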
Furthermore, vector representation of the entity statement is obtained through an input layer of the entity recognition model, and comprehensive feature extraction is carried out through a bidirectional long-short term memory network layer and a convolutional neural network layer respectively, so that the extracted features are as complete as possible.
S3, inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
In the embodiment of the invention, the first feature vector is the word vector formed by splicing the forward-order and reverse-order output vectors produced when the semantic vector passes through the bidirectional long-short term memory network.
In an embodiment of the present invention, referring to FIG. 3, the inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector includes:
S31, inputting the semantic vector into the bidirectional long-short term memory network layer according to a forward-order rule to obtain a forward-order feature vector;
S32, inputting the semantic vector into the bidirectional long-short term memory network layer according to a reverse-order rule to obtain a reverse-order feature vector;
S33, fusing the forward-order feature vector and the reverse-order feature vector to obtain the first feature vector.
In detail, the bidirectional long-short term memory network layer consists of two independent LSTMs, and the input sequence is fed into the two LSTM neural networks in forward order and reverse order respectively for feature extraction. Bidirectional semantic dependencies can be better captured through the bidirectional long-short term memory network layer. Each LSTM comprises a forgetting gate, an input gate and an output gate: the forgetting gate discards part of the past information, the input gate remembers part of the present information, and the past and present memories are merged and passed through the output gate, which outputs the feature vector representation.
Illustratively, entity recognition is in fact a sequence labeling problem in which the words of a sentence are assigned entity labels, for example B-Person for the beginning of a person name, I-Person for the middle of a person name, B-Organization for the beginning of an organization, I-Organization for the middle of an organization, and O for non-entity information. When Bi-LSTM is used for feature extraction, the input entity sentence is converted into the vectors (x1, x2, x3, x4, x5), where (x1, x2) is a person name, x3 is an organization and (x4, x5) is non-entity information. These vectors are input into the Bi-LSTM, which outputs a score for each category for every word, so the output label sequence should be B-Person, I-Person, B-Organization, O, O. For example, if the Bi-LSTM output for x1 is 1.5 (B-Person), 0.9 (I-Person), 0.1 (B-Organization), 0.08 (I-Organization) and 0.05 (O), the first feature vector corresponding to x1 is (1.5, 0.9, 0.1, 0.08, 0.05).
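A minimal sketch of the forward-order and reverse-order extraction with a bidirectional LSTM follows; the hidden size is an illustrative assumption, and nn.LSTM performs the splicing of the two directions internally:

```python
import torch
import torch.nn as nn

embed_dim, hidden = 732, 256                 # illustrative sizes
bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)

x = torch.randn(1, 5, embed_dim)             # semantic vectors for (x1, ..., x5)
out, _ = bilstm(x)                           # (1, 5, 2 * hidden)
# The first half of the last dimension is the forward-order pass, the second
# half the reverse-order pass; their splice gives each time step t information
# from both the past and the future.
forward_feat, reverse_feat = out[..., :hidden], out[..., hidden:]
```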
S4, inputting the semantic vector into a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
In the embodiment of the invention, the second feature vector is the vector representation obtained when the semantic vector of the entity sentence passes through the convolutional neural network for feature extraction.
In detail, features can be extracted from the semantic vector by the convolution kernels in the convolutional neural network. A convolution kernel, also called a filter, is in essence a set of weights used to extract features. The vector is convolved with the convolution kernel to obtain feature values, and the feature values are fused to obtain the feature vector.
In this embodiment of the present invention, the third feature vector is the feature vector output by the gating mechanism. In the gating mechanism input layer, the output of the convolutional neural network is used as the input vector; on the same-layer input, two convolution operations A and B are performed, where A is the output of a convolutional layer without a nonlinear function and B is the output of a convolutional layer with a nonlinear activation function, and the element-wise product of A and B is used as the input of the gating mechanism layer.
In this embodiment of the present invention, the inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector includes:
calculating an output feature vector of the gating mechanism layer from the second feature vector using the following formula:
h_l(X) = (X * W + b) ⊗ σ(X * V + c)
wherein h_l(X) is the output feature vector of the l-th layer, X is the second feature vector, W and V are convolution kernels in the gating mechanism layer, b and c are parameters learned by the gating mechanism layer, σ is the sigmoid function, and ⊗ denotes element-wise multiplication;
and taking the output feature vector as the third feature vector.
In detail, the output feature vector is the output vector of a hidden layer, where the hidden layer abstracts the features of the input data into another dimensional space to exhibit more abstract features, which can be separated linearly more easily.
Specifically, the gating mechanism is used to control the information transferred in the hierarchical structure; the gating mechanism is a gated linear unit, and the final vector representation of each word output by each layer is the fusion of the output vectors of the hidden layers, namely the third feature vector.
Furthermore, the gating mechanism is beneficial to deep network modeling of the text, can reduce gradient dispersion and accelerate convergence.
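The gating computation itself is compact. A sketch under the formula above, with branch A as a plain convolution and branch B sigmoid-gated (the channel count and kernel size are assumptions):

```python
import torch
import torch.nn as nn

class GLULayer(nn.Module):
    """Gated linear unit: h_l(X) = (X * W + b) * sigmoid(X * V + c), where the
    sigmoid branch gates the plain convolution branch element-wise."""
    def __init__(self, channels=256, kernel=3):
        super().__init__()
        self.conv_a = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)  # X*W + b
        self.conv_b = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)  # X*V + c

    def forward(self, x):                    # x: (batch, channels, seq_len)
        return self.conv_a(x) * torch.sigmoid(self.conv_b(x))

second_feature = torch.randn(1, 256, 5)      # CNN output taken as the input vector
third_feature = GLULayer()(second_feature)   # (1, 256, 5)
```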
S5, inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
In the embodiment of the present invention, the attention weight marks the parts of the entity sentence that the attention mechanism layer attends to most. The attention mechanism is used to prevent the loss of long-distance information.
In an embodiment of the present invention, the inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector includes:
dividing the semantic vector into a query vector, a matching vector and a value vector;
calculating the attention weight output by the attention mechanism layer from the query vector, the matching vector and the value vector using the following attention formula:
Attention(Q, K, V) = Q(K^T V)
wherein Attention(Q, K, V) represents the attention weight, Q represents the query vector, K represents the matching vector, the superscript T denotes transposition, and V represents the value vector;
the attention weight is assembled into the attention weight vector.
In detail, the attention mechanism is used to dynamically generate the weights of different connections. Its calculation consists of two stages: in the first stage, a weight coefficient corresponding to the semantic vector is calculated from the Query and the Key; in the second stage, the Values are weighted and summed according to the weight coefficient. The attention model thus yields the list of attention weights corresponding to the semantic vector.
Specifically, in order to reduce computational complexity and meet the performance requirement, the core formula of the original attention module is replaced by linear attention, i.e. softmax(QK^T)V is simplified to Q(K^T V), where Q, K and V correspond to the query, key and value of classical self-attention. Since K^T V ∈ R^(d×d) whereas QK^T ∈ R^(T×T), using linear attention reduces the computational dimensionality.
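The dimensionality argument can be checked directly: linear attention first forms the d × d product K^T V, whereas classical attention forms a T × T score matrix. A sketch with illustrative sizes (the 1/sqrt(d) scaling in the classical variant is the usual convention, not taken from the patent):

```python
import torch

T, d = 5, 64                                 # sequence length, feature dimension
Q, K, V = (torch.randn(1, T, d) for _ in range(3))

# Classical attention: softmax(Q K^T) V builds a T x T score matrix
classical = torch.softmax(Q @ K.transpose(1, 2) / d ** 0.5, dim=-1) @ V

# Linear attention: Q (K^T V) only ever builds a d x d intermediate
linear = Q @ (K.transpose(1, 2) @ V)
```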
Further, in order to embody the features of the entity sentence more completely, the vectors can be merged to form a fused feature vector for entity recognition.
S6, fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector;
In the embodiment of the invention, the fused feature vector is obtained by fusing the data output by the bidirectional long-short term memory network layer (Bi-LSTM), the convolutional neural network layer (CNN) and the attention mechanism layer (attention).
In this embodiment of the present invention, the fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector includes:
counting vector lengths of the first feature vector, the third feature vector and the attention weight vector;
determining the maximum value in the vector lengths as a target length;
extending all vector lengths to the target length by using preset parameters;
and merging the column dimensions of all the length-extended vectors to obtain the fused feature vector.
In detail, since the lengths of the first feature vector, the third feature vector and the attention weight vector may not be the same, in order to fuse the vectors it is necessary to unify their lengths.
Specifically, the vector lengths of all vectors may be compared, and the vector having a shorter vector length may be vector-extended so that the vector lengths of all vectors are the same.
Illustratively, suppose the first feature vector contains a vector A: [23, 36, 86], the third feature vector contains a vector B: [56, 89, 57, 86], and the attention weight vector contains a vector C: [36, 89, 35]. The vector length of A is 3 and the vector length of B is 4, so the length of B is the largest and becomes the target length. Vector A can then be extended with a preset parameter (for example 0) until its length equals that of B, giving the extended vector [23, 36, 86, 0]; similarly, C is extended to [36, 89, 35, 0]. The column elements of the vectors are aligned in parallel and the column dimensions between the vectors are merged, i.e. the vectors are stacked into the matrix [[23, 36, 86, 0], [56, 89, 57, 86], [36, 89, 35, 0]], and this merged vector matrix is taken as the fused feature vector.
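A sketch of this length unification and column-dimension merge on the example vectors, using 0 as the preset extension parameter:

```python
import torch
import torch.nn.functional as F

A = torch.tensor([23., 36., 86.])            # from the first feature vector
B = torch.tensor([56., 89., 57., 86.])       # from the third feature vector
C = torch.tensor([36., 89., 35.])            # from the attention weight vector

target = max(v.numel() for v in (A, B, C))   # target length = 4
padded = [F.pad(v, (0, target - v.numel())) for v in (A, B, C)]  # extend with 0
fused = torch.stack(padded)                  # 3 x 4 fused feature matrix
# tensor([[23., 36., 86.,  0.],
#         [56., 89., 57., 86.],
#         [36., 89., 35.,  0.]])
```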
S7, calculating the feature probability of the fused feature vector by using a preset activation function, and identifying the entity sentence by using the feature probability.
In the embodiment of the invention, the activation function is softmax. The softmax function is an activation function for multi-class problems: it compresses an arbitrary real vector of length K into a real vector of length K whose elements lie in the range [0, 1] and sum to 1.
In the embodiment of the invention, the fused feature vector is computed with the softmax function to obtain the probability value of the classification result in each group of vectors, representing the probability of belonging to each category.
In this embodiment of the present invention, the identifying the entity statement by using the feature probability includes:
selecting the maximum feature probability as a classification label corresponding to the entity sentence;
and identifying the entity sentence according to the classification label.
In detail, the entity sentence is first segmented into words, the feature probability of each word for each entity type is calculated, the maximum feature probability of each word over the entity types is selected as its classification label, the classification labels of the words are output, and the classification labels are collected to obtain the entity labels corresponding to the entity sentence.
Illustratively, the entity sentence is "In the morning, Xiao Ming goes to school", where the entity label corresponding to the time "morning" is T, the label corresponding to the person name "Xiao Ming" is B, the label corresponding to the institution "school" is I, and non-entity information is labeled O, so the label sequence should be T B I O. For the person name, if the feature probabilities of the labels are T: 0.3, B: 0.8, I: 0.2 and O: 0.1, the label B with the maximum feature probability is selected as the output corresponding to the person name.
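For this example, the label decision reduces to a softmax over the scores of one word followed by an argmax (the scores are the hypothetical values above; softmax preserves the ordering, so B remains the maximum):

```python
import torch

labels = ["T", "B", "I", "O"]
scores = torch.tensor([0.3, 0.8, 0.2, 0.1])  # example scores for the person name
probs = torch.softmax(scores, dim=-1)        # compressed into [0, 1], summing to 1
print(labels[int(probs.argmax())])           # -> "B", selected as the output label
```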
The embodiment of the invention constructs the entity recognition model by combining the respective strengths of the bidirectional long-short term memory network and the convolutional neural network. The entity words are converted into their vector representations through the word vector table; the semantic vectors pass through the bidirectional long-short term memory network, the convolutional layer and the global attention mechanism respectively, and the output data of the convolutional neural network additionally passes through the gating mechanism. The entity recognition model can thus mine complete, extractable features from the data, and the attention mechanism prevents long-distance information from being lost. The first feature vector, the third feature vector and the attention weight vector are fused into a fused feature vector, the probability values of the fused feature vector are calculated with an activation function, and the entity sentence is recognized according to these probability values. Compared with existing single-model entity recognition, the recognition effect is improved. Therefore, the entity identification method and apparatus, electronic device and computer readable storage medium of multiple attention mechanisms provided by the invention can solve the problem of low efficiency in entity identification.
Fig. 4 is a functional block diagram of an entity recognition apparatus with multiple attention mechanisms according to an embodiment of the present invention.
The entity identification apparatus 100 of multiple attention mechanisms of the present invention may be installed in an electronic device. According to the implemented functions, the entity identification apparatus 100 of multiple attention mechanisms may include an entity identification model building module 101, a vector conversion module 102, a first feature vector extraction module 103, a second feature vector extraction module 104, an attention weight vector output module 105, a feature vector fusion module 106 and an entity sentence identification module 107. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device, can perform a fixed function, and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the entity identification model building module 101 is configured to build an entity identification model framework by using a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
the vector conversion module 102 is configured to obtain a preset entity statement, and perform vector conversion on the entity statement to obtain a semantic vector;
the first feature vector extraction module 103 is configured to input the semantic vector to a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction, so as to obtain a first feature vector;
the second feature vector extraction module 104 is configured to input the semantic vector to a convolutional neural network layer in the entity recognition model architecture for feature extraction, output a second feature vector, input the second feature vector to a gating control mechanism layer in the entity recognition model architecture, and obtain a third feature vector;
the attention weight vector output module 105 is configured to input the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
the feature vector fusion module 106 is configured to fuse the first feature vector, the third feature vector, and the attention weight vector to obtain a fusion feature vector;
the entity sentence recognition module 107 is configured to calculate a feature probability of the fusion feature vector by using a preset activation function, and recognize the entity sentence by using the feature probability.
In detail, when the modules in the entity identification apparatus 100 of multiple attention mechanisms according to the embodiment of the present invention are used, they adopt the same technical means as the entity identification method of multiple attention mechanisms described in FIG. 1 to FIG. 3 and can produce the same technical effects, which will not be repeated here.
Fig. 5 is a schematic structural diagram of an electronic device for implementing an entity identification method with multiple attention mechanisms according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as an entity identification program of multiple attention mechanisms, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., entity identification programs for executing various attention mechanisms, etc.) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of entity identification programs of various attention mechanisms, etc., but also to temporarily store data that has been output or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
The figure shows only an electronic device with certain components; those skilled in the art will appreciate that the structure shown in the figure does not constitute a limitation on the electronic device, which may include fewer or more components than shown, or combine some components, or use a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
The entity identification program of multiple attention mechanisms stored in the memory 11 of the electronic device 1 is a combination of instructions which, when executed in the processor 10, can realize:
constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
acquiring a preset entity statement, and performing vector conversion on the entity statement to obtain a semantic vector;
inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
inputting the semantic vector into a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector;
and calculating the feature probability of the fusion feature vector by using a preset activation function, and identifying the entity sentence by using the feature probability.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
acquiring a preset entity statement, and performing vector conversion on the entity statement to obtain a semantic vector;
inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
inputting the semantic vector into a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector;
and calculating the feature probability of the fusion feature vector by using a preset activation function, and identifying the entity sentence by using the feature probability.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application model of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, each data block containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the same, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A method for entity identification in multiple attention mechanisms, the method comprising:
constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
acquiring a preset entity statement, and performing vector conversion on the entity statement to obtain a semantic vector;
inputting the semantic vector into a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
inputting the semantic vector into a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector into a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
fusing the first feature vector, the third feature vector and the attention weight vector to obtain a fused feature vector;
and calculating the feature probability of the fusion feature vector by using a preset activation function, and identifying the entity sentence by using the feature probability.
2. The method according to claim 1, wherein the constructing an entity recognition model architecture by using a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer comprises:
taking the bidirectional long-short term memory network layer, the convolutional neural network layer and the attention mechanism layer as a first level;
connecting the convolutional neural network layer with the gating mechanism layer to obtain a second level;
and constructing the entity recognition model architecture according to the first level and the second level.
3. The method according to claim 1, wherein inputting the semantic vector into the bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain the first feature vector comprises:
inputting the semantic vector into the bidirectional long-short term memory network layer in forward order to obtain a forward-order feature vector;
inputting the semantic vector into the bidirectional long-short term memory network layer in reverse order to obtain a reverse-order feature vector;
and fusing the forward-order feature vector and the reverse-order feature vector to obtain the first feature vector.
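A minimal sketch of the forward- and reverse-order passes in claim 3, using two unidirectional LSTMs; the hidden size and fusion by concatenation are assumptions:

```python
import torch
import torch.nn as nn

emb_dim, hidden = 128, 64                        # assumed dimensions
fwd_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
bwd_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

x = torch.randn(2, 10, emb_dim)                  # (batch, time, emb) semantic vectors
fwd_out, _ = fwd_lstm(x)                         # forward-order feature vector
bwd_out, _ = bwd_lstm(torch.flip(x, dims=[1]))   # pass over the reversed sequence
bwd_out = torch.flip(bwd_out, dims=[1])          # realign to forward time order
first_feature = torch.cat([fwd_out, bwd_out], dim=-1)  # fused first feature vector
```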
4. The method for entity recognition with multiple attention mechanisms according to claim 1, wherein inputting the second feature vector into the gating mechanism layer in the entity recognition model architecture to obtain the third feature vector comprises:
calculating an output feature vector of the gating mechanism layer from the second feature vector using the following formula:
h_l(X) = (X ∗ W + b) ⊗ σ(X ∗ V + c)
wherein h_l(X) is the output feature vector of the l-th layer, X is the second feature vector, W and V are convolution kernels in the gating mechanism layer, b and c are parameters learned by the gating mechanism layer, σ is the sigmoid function, and ⊗ denotes element-wise multiplication;
and taking the output feature vector as the third feature vector.
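As a sketch, the gate of claim 4 is two parallel convolutions whose second branch is squashed by σ and multiplied element-wise into the first; the kernel width and channel sizes are assumptions:

```python
import torch
import torch.nn as nn

channels, hidden = 64, 64                            # assumed dimensions
conv_w = nn.Conv1d(channels, hidden, 3, padding=1)   # kernel W with learned bias b
conv_v = nn.Conv1d(channels, hidden, 3, padding=1)   # kernel V with learned bias c

x = torch.randn(2, channels, 10)                     # second feature vector X, (B, C, T)
third = conv_w(x) * torch.sigmoid(conv_v(x))         # h_l(X) = (X∗W+b) ⊗ σ(X∗V+c)
```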
5. The method for entity recognition with multiple attention mechanisms according to claim 1, wherein inputting the semantic vector into the attention mechanism layer in the entity recognition model architecture to obtain the attention weight vector comprises:
dividing the semantic vector into a query vector, a matching vector and a value vector;
calculating the attention weight output by the attention mechanism layer from the query vector, the matching vector and the value vector using the following attention formula:
Attention(Q, K, V) = Q(KᵀV)
wherein Attention(Q, K, V) represents the attention weight, Q represents the query vector, K represents the matching vector, the superscript T denotes transposition, and V represents the value vector;
and assembling the attention weights into the attention weight vector.
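Note that the claim 5 formula evaluates KᵀV first and then left-multiplies by Q, so the T×T score matrix of standard scaled dot-product attention is never formed. A sketch, where taking Q, K and V as identical copies of the semantic vector is an assumption:

```python
import torch

B, T, d = 2, 10, 128
semantic = torch.randn(B, T, d)      # semantic vectors for one batch
Q = K = V = semantic                 # assumed split: identical views
kv = K.transpose(1, 2) @ V           # K^T V, shape (B, d, d)
attn_weight = Q @ kv                 # Attention(Q, K, V) = Q(K^T V), shape (B, T, d)
```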
6. The method according to any one of claims 1 to 5, wherein fusing the first feature vector, the third feature vector and the attention weight vector to obtain the fusion feature vector comprises:
obtaining the vector lengths of the first feature vector, the third feature vector and the attention weight vector;
determining the maximum value among the vector lengths as a target length;
extending all vector lengths to the target length by using preset parameters;
and merging the column dimensions of all the length-extended vectors to obtain the fusion feature vector.
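A sketch of the claim 6 fusion, with zero padding standing in for the preset parameters used to extend the shorter vectors:

```python
import torch
import torch.nn.functional as F

def fuse(vectors):
    target_len = max(v.shape[0] for v in vectors)           # maximum vector length
    padded = [F.pad(v, (0, 0, 0, target_len - v.shape[0]))  # extend rows to target
              for v in vectors]
    return torch.cat(padded, dim=1)                         # merge column dimensions

first = torch.randn(10, 128)        # first feature vector
third = torch.randn(8, 64)          # third feature vector (shorter)
attn = torch.randn(10, 128)         # attention weight vector
fused = fuse([first, third, attn])  # shape (10, 320)
```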
7. The method for entity recognition with multiple attention mechanisms according to claim 1, wherein identifying the entity sentence by using the feature probability comprises:
selecting the class corresponding to the maximum feature probability as the classification label of the entity sentence;
and identifying the entity sentence according to the classification label.
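A sketch of claim 7 with softmax as the preset activation function; the tag set and logit values are assumptions:

```python
import torch

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # assumed classification labels
logits = torch.randn(10, len(labels))      # fusion features projected per token
probs = torch.softmax(logits, dim=-1)      # feature probabilities
pred = probs.argmax(dim=-1)                # index of the maximum feature probability
tags = [labels[i] for i in pred.tolist()]  # classification labels for the sentence
```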
8. An entity recognition apparatus with multiple attention mechanisms, the apparatus comprising:
the entity recognition model construction module is used for constructing an entity recognition model architecture by utilizing a preset bidirectional long-short term memory network layer, a preset convolutional neural network layer, a preset gating mechanism layer and a preset attention mechanism layer;
the vector conversion module is used for acquiring a preset entity sentence and performing vector conversion on the entity sentence to obtain a semantic vector;
the first feature vector extraction module is used for inputting the semantic vector to a bidirectional long-short term memory network layer in the entity recognition model architecture for feature extraction to obtain a first feature vector;
the second feature vector extraction module is used for inputting the semantic vector to a convolutional neural network layer in the entity recognition model architecture for feature extraction, outputting a second feature vector, and inputting the second feature vector to a gating mechanism layer in the entity recognition model architecture to obtain a third feature vector;
the attention weight vector output module is used for inputting the semantic vector to an attention mechanism layer in the entity recognition model architecture to obtain an attention weight vector;
a feature vector fusion module, configured to fuse the first feature vector, the third feature vector, and the attention weight vector to obtain a fusion feature vector;
and the entity sentence identification module is used for calculating the feature probability of the fusion feature vector by using a preset activation function and identifying the entity sentence by using the feature probability.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method for entity recognition with multiple attention mechanisms as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the method for entity recognition with multiple attention mechanisms according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211255256.8A CN115545031A (en) | 2022-10-13 | 2022-10-13 | Entity identification method, device, equipment and storage medium for multiple attention mechanisms |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115545031A (en) | 2022-12-30 |
Family
ID=84733574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211255256.8A Pending CN115545031A (en) | 2022-10-13 | 2022-10-13 | Entity identification method, device, equipment and storage medium for multiple attention mechanisms |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115545031A (en) |
Similar Documents
Publication | Title |
---|---|
CN113822494B (en) | Risk prediction method, device, equipment and storage medium |
CN113157927B (en) | Text classification method, apparatus, electronic device and readable storage medium |
CN113360654B (en) | Text classification method, apparatus, electronic device and readable storage medium |
CN113378970B (en) | Sentence similarity detection method and device, electronic equipment and storage medium |
CN113704429A (en) | Semi-supervised learning-based intention identification method, device, equipment and medium |
CN115392237B (en) | Emotion analysis model training method, device, equipment and storage medium |
CN112988963A (en) | User intention prediction method, device, equipment and medium based on multi-process node |
CN113821622A (en) | Answer retrieval method and device based on artificial intelligence, electronic equipment and medium |
CN114399775A (en) | Document title generation method, device, equipment and storage medium |
CN116821373A (en) | Map-based prompt recommendation method, device, equipment and medium |
CN112269875A (en) | Text classification method and device, electronic equipment and storage medium |
CN113886708A (en) | Product recommendation method, device, equipment and storage medium based on user information |
CN116245097A (en) | Method for training entity recognition model, entity recognition method and corresponding device |
CN115309865A (en) | Interactive retrieval method, device, equipment and storage medium based on double-tower model |
CN115510188A (en) | Text keyword association method, device, equipment and storage medium |
CN113344125B (en) | Long text matching recognition method and device, electronic equipment and storage medium |
CN114595321A (en) | Question marking method and device, electronic equipment and storage medium |
CN114840684A (en) | Map construction method, device and equipment based on medical entity and storage medium |
CN113918704A (en) | Question-answering method and device based on machine learning, electronic equipment and medium |
CN113658002A (en) | Decision tree-based transaction result generation method and device, electronic equipment and medium |
CN114757154A (en) | Job generation method, device and equipment based on deep learning and storage medium |
CN115114408A (en) | Multi-modal emotion classification method, device, equipment and storage medium |
CN115221274A (en) | Text emotion classification method and device, electronic equipment and storage medium |
CN114610854A (en) | Intelligent question and answer method, device, equipment and storage medium |
CN114219367A (en) | User scoring method, device, equipment and storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |