CN108021544A - Method, apparatus and electronic device for classifying the semantic relation of entity words - Google Patents
Method, apparatus and electronic device for classifying the semantic relation of entity words
- Publication number: CN108021544A
- Application number: CN201610929103.5A
- Authority
- CN
- China
- Prior art keywords
- matrix
- word
- text sequence
- predetermined quantity
- update
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/295—Named entity recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Abstract
An embodiment of the present application provides a method, an apparatus and an electronic device for classifying the semantic relation between entity words in a text sequence. The apparatus includes: a first obtaining unit, which represents each word in the text sequence as a word vector so as to construct a first matrix; a second obtaining unit, which processes the first matrix using a deep learning model so as to obtain a second matrix; a third obtaining unit, which processes the second matrix using two or more attention models so as to determine the degree of attention paid to the words in the text sequence, and obtains a third matrix of the text sequence based on that degree of attention; and a classification unit, which determines the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a pre-stored classification model. According to this embodiment, the efficiency of classification can be improved.
Description
Technical field
The present application relates to the field of information technology, and in particular to a method, an apparatus and an electronic device for classifying the semantic relation between entity words in a text sequence.
Background technology
Semantic relation classification of entity words refers to determining which of a set of predetermined semantic relations the relation between the entity words in a text sequence belongs to. The semantic relation may be, for example, a hypernym-hyponym relation, a verb-object relation, and so on. For example, in the sentence "<e1>Machine</e1> generates a large amount of <e2>noise</e2>", the relation between entity word e1 and entity word e2 is determined to be Cause-Effect(e1, e2).
In the field of natural language processing, semantic relation classification of entity words receives considerable attention, because it has important application value in tasks such as information extraction, information retrieval, machine translation, question answering, knowledge base construction and semantic disambiguation.
In existing semantic relation classification methods for entity words, classification may be performed using a recurrent neural network (Recurrent Neural Network, RNN) model based on long short-term memory (Long-Short Term Memory, LSTM) units. Such a model can effectively exploit long-distance dependency information in sequence data, and is therefore highly effective for processing text sequence data.
It should be noted that the above introduction to the technical background is merely intended to facilitate a clear and complete explanation of the technical solution of the present application, and to facilitate the understanding of those skilled in the art. It should not be considered that the above technical solution is well known to those skilled in the art merely because it is set forth in the background section of the present application.
Summary of the invention
The inventors of the present application found that in a semantic relation classification task, the other words in a sentence differ in their degree of importance to the entity words, and also differ in their influence on the classification result. When the number of words in a text sequence is small, existing semantic relation classification methods for entity words can classify efficiently; but when the number of words in the text sequence is large, there exist a large number of words with little influence on the classification result, so the efficiency of classification decreases.
Embodiments of the present application provide a method, an apparatus and an electronic device for classifying the semantic relation of entity words, which introduce attention models (Attention Model) to determine the degree of attention paid to the words in a text sequence, and then classify the semantic relation between the entity words based on that degree of attention, thereby improving the efficiency of classification.
According to a first aspect of the embodiments of the present application, there is provided an apparatus for classifying the semantic relation between entity words in a text sequence, the apparatus including:
a first obtaining unit, which represents each word in the text sequence as a word vector so as to construct a first matrix;
a second obtaining unit, which processes the first matrix using a deep learning model so as to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence;
a third obtaining unit, which processes the second matrix using two or more attention models so as to determine the degree of attention paid to the words in the text sequence, and obtains a third matrix of the text sequence based on that degree of attention; and
a classification unit, which determines the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a pre-stored classification model.
According to a second aspect of the embodiments of the present application, there is provided a method for classifying the semantic relation between entity words in a text sequence, the method including:
representing each word in the text sequence as a word vector so as to construct a first matrix;
processing the first matrix using a deep learning model so as to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence;
processing the second matrix using two or more attention models so as to determine the degree of attention paid to the words in the text sequence, and obtaining a third matrix of the text sequence based on that degree of attention; and
determining the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a pre-stored classification model.
According to a third aspect of the embodiments of the present application, there is provided an electronic device including the apparatus for classifying the semantic relation between entity words in a text sequence according to the first aspect of the embodiments of the present application.
The beneficial effect of the present application is that the efficiency of classifying the semantic relation between entity words is improved.
With reference to the following description and drawings, particular embodiments of the present invention are disclosed in detail, specifying the manner in which the principles of the present invention may be employed. It should be understood that the embodiments of the present invention are not thereby limited in scope. Within the spirit and scope of the appended claims, the embodiments of the present invention include many changes, modifications and equivalents.
Features described and/or illustrated with respect to one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, refers to the presence of a feature, integer, step or component, but does not preclude the presence or addition of one or more other features, integers, steps or components.
Brief description of the drawings
The included drawings are provided for a further understanding of the embodiments of the present invention and constitute a part of the specification; they illustrate embodiments of the present invention and, together with the written description, serve to explain the principles of the present invention. It should be evident that the drawings described below are only some embodiments of the present invention, and that those of ordinary skill in the art may obtain other drawings from them without creative labor. In the drawings:
Fig. 1 is a schematic diagram of the classification method of Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of the method for obtaining the third matrix in Embodiment 1 of the present application;
Fig. 3 is a schematic diagram of the method for selecting a predetermined quantity of words in Embodiment 1 of the present application;
Fig. 4 is a schematic diagram of the classification apparatus of Embodiment 2 of the present application;
Fig. 5 is a schematic diagram of the third obtaining unit of Embodiment 2 of the present application;
Fig. 6 is a schematic diagram of the selecting unit of Embodiment 2 of the present application;
Fig. 7 is a schematic diagram of the composition of the electronic device of Embodiment 3 of the present application.
Embodiments
The foregoing and other features of the present invention will become apparent from the following specification, with reference to the drawings. In the specification and drawings, particular embodiments of the present invention are specifically disclosed, showing some of the embodiments in which the principles of the present invention may be employed. It should be understood that the invention is not restricted to the described embodiments; on the contrary, the present invention includes all modifications, variations and equivalents falling within the scope of the appended claims.
Embodiment 1
Embodiment 1 of the present application provides a classification method for classifying the semantic relation between entity words in a text sequence.
Fig. 1 is a schematic diagram of the classification method of Embodiment 1. As shown in Fig. 1, the method includes:
S101, representing each word in the text sequence as a word vector so as to construct a first matrix (that is, a word embedding matrix);
S102, processing the first matrix using a deep learning model so as to obtain a second matrix (that is, the output of a BLSTM), wherein the rows or columns of the second matrix correspond to the words in the text sequence;
S103, processing the second matrix using two or more attention models (Attention Model) so as to determine the degree of attention paid to the words in the text sequence, and obtaining a third matrix of the text sequence (the result of attention) based on that degree of attention;
S104, determining the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a pre-stored classification model.
In this embodiment, at least two attention models (Attention Model) are introduced to determine the degree of attention paid to the words in the text sequence, and the semantic relation between the entity words is then classified based on that degree of attention, whereby the efficiency of classification can be improved.
In step S101 of this embodiment, a word can be represented as a word vector (word embedding) according to the features of the word, and the word vector can be a multidimensional floating-point vector.
The features of a word can include the word itself, the position feature of the word in the text sequence, and so on. For example, the feature of the word itself can be represented as a 50-dimensional or 100-dimensional vector, and the position feature of the word can be represented as a 5-dimensional vector, etc. Of course, this embodiment is not limited thereto: in addition to the feature of the word itself and the position feature of the word, features such as hypernyms, part of speech, named entities and the syntactic analysis tree can also be considered in constructing the word vector of the word.
In this embodiment, each word in the text sequence is represented as a word vector; thus the word vectors of all the words in the whole text sequence are built into the first matrix, which corresponds to the text sequence. For example, a row or a column of the first matrix corresponds to the word vector of one word in the text sequence.
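As a concrete illustration of this step, the following numpy sketch builds a first matrix for the example sentence from the background section, concatenating a 50-dimensional word feature with two 5-dimensional position features (relative to the entity words e1 and e2). All lookup tables are randomly initialized stand-ins, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

sentence = ["machine", "generates", "a", "large", "amount", "of", "noise"]
e1_idx, e2_idx = 0, 6  # positions of the entity words "machine" and "noise"

WORD_DIM, POS_DIM = 50, 5  # 50-dim word feature, 5-dim position feature (as in the text)

vocab = {w: i for i, w in enumerate(sorted(set(sentence)))}
word_table = rng.standard_normal((len(vocab), WORD_DIM))
pos_table = rng.standard_normal((2 * len(sentence) + 1, POS_DIM))  # relative offsets

def word_vector(i, word):
    # word feature plus position features relative to the two entity words
    w = word_table[vocab[word]]
    p1 = pos_table[i - e1_idx + len(sentence)]
    p2 = pos_table[i - e2_idx + len(sentence)]
    return np.concatenate([w, p1, p2])

# first matrix M1: one row per word of the text sequence
M1 = np.stack([word_vector(i, w) for i, w in enumerate(sentence)])
print(M1.shape)  # (7, 60)
```

Each row of M1 is the word vector of one word, matching the row-per-word layout described above.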
In step S102 of this embodiment, the first matrix can be processed using a deep learning model so as to obtain the second matrix. For example, a bidirectional long short-term memory (Bi-LSTM) model can be used to process the first matrix obtained in step S101. In addition, another deep learning model, such as a long short-term memory (LSTM) model, can also be used to process the first matrix.
In this embodiment, the row or column vectors of the second matrix can correspond to the words in the text sequence. For example, the second matrix M2 can be represented as M2 = {F1, ..., Fi, ..., Ft}, where i and t are integers, 1 ≤ i ≤ t, t represents the number of words in the text sequence, and Fi is the vector corresponding to the i-th word in the text sequence. Suppose the entity words e1 and e2 in the text sequence are the ie1-th and ie2-th words of the text sequence respectively; then the vectors F_ie1 and F_ie2 are the vectors corresponding to the entity words e1 and e2 in the sequence respectively.
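The following is a minimal sketch of this step, with a plain tanh recurrence standing in for the full Bi-LSTM gating (a deliberate simplification); it shows how a forward pass and a backward pass are concatenated so that row i of the second matrix M2 is the vector Fi for the i-th word. The dimensions and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

T, D_IN, D_H = 7, 60, 8  # 7 words, 60-dim input vectors, 8 hidden units per direction
M1 = rng.standard_normal((T, D_IN))  # stand-in for the first matrix from step S101

Wx = rng.standard_normal((D_IN, D_H)) * 0.1
Wh = rng.standard_normal((D_H, D_H)) * 0.1

def rnn_pass(X):
    # one directional recurrent pass; a full LSTM would add gating here
    h = np.zeros(D_H)
    out = []
    for x in X:
        h = np.tanh(x @ Wx + h @ Wh)
        out.append(h)
    return np.stack(out)

fwd = rnn_pass(M1)               # left-to-right pass
bwd = rnn_pass(M1[::-1])[::-1]   # right-to-left pass, realigned to word order

# second matrix M2: row i is the vector Fi for the i-th word
M2 = np.concatenate([fwd, bwd], axis=1)
print(M2.shape)  # (7, 16)
```

The rows for the entity words (F_ie1, F_ie2) are simply the rows of M2 at the entity positions.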
In step S103 of this embodiment, two or more attention models (Attention Model) are used to determine the degree of attention paid to the words in the text sequence, and the second matrix is processed so as to obtain the third matrix of the text sequence. The degree of attention paid to a word can reflect the importance of that word to the entity words in the text sequence; thus the third matrix can embody the words in the second matrix to which a higher degree of attention is paid, so that the classification in step S104 is more efficient. Moreover, since two or more attention models are employed, the words with a higher degree of attention can be selected more effectively.
Fig. 2 is a schematic diagram of the method for obtaining the third matrix in this embodiment. As shown in Fig. 2, the method can include:
S201, using two or more attention models, determining the degree of attention paid to each word in the text sequence, and selecting a predetermined quantity of words from the text sequence based on that degree of attention; and
S202, merging the vectors in the second matrix corresponding to the selected predetermined quantity of words so as to form the third matrix.
In this embodiment, through step S201 and step S202, a predetermined quantity of words can be extracted from the text sequence to form the third matrix; thus the scale of the third matrix can be smaller than that of the second matrix.
Fig. 3 is a schematic diagram of the method for selecting a predetermined quantity of words in this embodiment, which is used to realize step S201. As shown in Fig. 3, the method includes:
S301, merging the vectors in the second matrix corresponding to the entity words with the second matrix to form a fourth matrix;
S302, using an attention model (Attention Model) corresponding to the scale of the fourth matrix, performing at least nonlinear processing on the fourth matrix so as to determine the degree of attention paid to each word in the text sequence, and selecting a first predetermined quantity of words from the text sequence based on that degree of attention; and
S303, merging the vectors in the second matrix corresponding to the selected first predetermined quantity of words with the fourth matrix to form an updated fourth matrix, using an attention model corresponding to the scale of the updated fourth matrix to perform at least nonlinear processing on the updated fourth matrix, and again selecting a first predetermined quantity of words from the text sequence, wherein the total of all the selected first predetermined quantities of words is equal to the predetermined quantity.
In this embodiment, since the information contained in the entity words themselves is of the greatest importance for the classification of the semantic relation, in step S301 the vectors in the second matrix corresponding to the entity words are merged with the second matrix to form the fourth matrix. The fourth matrix can thus consist of two parts: one part is the second matrix M2, and the other part is an added unit U formed from the vectors corresponding to the entity words that are merged into the second matrix. The added unit U can be a vector, or a matrix composed of multiple vectors; for example, the added unit U can be the matrix {F_ie1, F_ie2} composed of the vector F_ie1 corresponding to entity word e1 and the vector F_ie2 corresponding to entity word e2. Therefore, the fourth matrix M4 can be represented as M4 = M2 + U = {F1, ..., Fi, ..., Ft, F_ie1, F_ie2}.
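A brief numpy sketch of this construction, with random stand-in values for M2 and assumed entity-word positions; the fourth matrix is the second matrix with the entity-word rows appended as the added unit U:

```python
import numpy as np

rng = np.random.default_rng(2)

T, D = 7, 16
M2 = rng.standard_normal((T, D))      # second matrix, one row per word
ie1, ie2 = 0, 6                       # row indices of the entity words e1 and e2

U = M2[[ie1, ie2]]                    # added unit U = {F_ie1, F_ie2}
M4 = np.concatenate([M2, U], axis=0)  # fourth matrix M4 = M2 + U
print(M4.shape)  # (9, 16)
```

The scale of M4 grows with U, which is what later lets each round use an attention model matched to the current scale.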
In step S302, the fourth matrix can be processed so as to determine the degree of attention paid to each word in the text sequence. The processing includes at least nonlinear processing, and the nonlinear processing can include neural network (Neural Network) processing based on the attention (Attention) mechanism, such as sigma (σ) processing, tangent processing, relu processing, sigmoid processing, etc.; but this embodiment is not limited thereto, and the nonlinear processing can also include other processing modes. In addition, the processing can also include linear processing.
For example, in this embodiment, the following formula (1), which includes both linear processing and nonlinear processing, can be used to process the fourth matrix M4:

y = W · X(Wf · M2 + Wu · U)    (1)

where X is the function corresponding to the nonlinear processing, Wf is the linear coefficient corresponding to the second matrix M2, Wu is the linear coefficient corresponding to the added unit U, and W is the linear coefficient corresponding to the fourth matrix M4.
After the fourth matrix is processed, a weight corresponding to each word in the text sequence can be obtained, and this weight represents the degree of attention paid to that word.
In step S302, a first predetermined quantity of words is selected from the text sequence based on the degree of attention paid to each word; for example, the first predetermined quantity of words ranked highest in descending order of the degree of attention is selected, where the first predetermined quantity can be, for example, 1 or 2.
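The following sketch illustrates formula (1) with the nonlinearity X taken to be tanh and the contribution of the added unit U pooled (averaged) over its rows so that each word receives a single scalar weight; the pooling choice and all weight values are illustrative assumptions, not the trained parameters of the application:

```python
import numpy as np

rng = np.random.default_rng(3)

T, D, D_A = 7, 16, 8               # 7 words, 16-dim rows, 8-dim attention space
M2 = rng.standard_normal((T, D))
U = M2[[0, 6]]                     # added unit built from the entity-word vectors
M4 = np.concatenate([M2, U])

Wf = rng.standard_normal((D, D_A)) * 0.1  # linear coefficient for the M2 part
Wu = rng.standard_normal((D, D_A)) * 0.1  # linear coefficient for the added unit U
W = rng.standard_normal(D_A) * 0.1        # linear coefficient for the fourth matrix

# y = W · X(Wf·M2 + Wu·U), with X = tanh and U's contribution pooled and
# broadcast over the words so each word receives one scalar weight
hidden = np.tanh(M2 @ Wf + (U @ Wu).mean(axis=0))
y = hidden @ W                     # one weight per word: its degree of attention

first_predetermined_quantity = 2
selected = np.argsort(-y)[:first_predetermined_quantity]
print(sorted(selected.tolist()))
```

The final two lines perform the selection described above: the words are ranked by weight in descending order and the top first predetermined quantity is taken.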
In step S303, the vectors in the second matrix corresponding to the first predetermined quantity of words chosen in step S302 can be merged with the fourth matrix to form the updated fourth matrix. For example, if the j-th and k-th words in the text sequence were selected in step S302, then in step S303 the vectors Fj and Fk in the second matrix M2 corresponding to the j-th and k-th words are merged with the fourth matrix M4 to form the updated fourth matrix M4'. The updated fourth matrix M4' can be expressed as M4' = {F1, ..., Fi, ..., Ft, F_ie1, F_ie2, Fj, Fk} = M2 + U', where {F_ie1, F_ie2, Fj, Fk} can be regarded as the updated added unit U'. That is to say, the vectors in the second matrix corresponding to the first predetermined quantity of words selected in step S302 can be merged with the original added unit U to form the updated added unit U', and the updated added unit U' is merged with the second matrix M2 to form the updated fourth matrix M4'.
In step S303, the same method as in step S302 can be used to perform nonlinear processing on the updated fourth matrix, so as to redetermine the degree of attention paid to the words in the text sequence, and to again select a first predetermined quantity of words from the text sequence based on the redetermined degree of attention.
In this embodiment, step S303 can be carried out at least once, until the total of the first predetermined quantity of words selected in step S302 and the first predetermined quantities of words selected in step S303 is equal to the predetermined quantity of words required by step S201.
It should be noted that the vectors corresponding to the first predetermined quantity of words selected each time step S303 is performed can form the updated fourth matrix the next time step S303 is performed.
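Steps S302 and S303 together can be sketched as the following loop, which re-scores the words each round with a fresh set of coefficients matched to the current added unit, masks the words already added, and stops once the predetermined quantity has been selected. The tanh nonlinearity, the mean-pooling of the added unit and the random coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

T, D, D_A = 7, 16, 8
M2 = rng.standard_normal((T, D))
entity_idx = [0, 6]
U = M2[entity_idx]                    # initial added unit U0: the entity-word vectors

def attend(U, m):
    # one attention model per round m; its coefficients match the scale of U
    r = np.random.default_rng(100 + m)
    Wf = r.standard_normal((D, D_A)) * 0.1
    Wu = r.standard_normal((D, D_A)) * 0.1
    W = r.standard_normal(D_A) * 0.1
    # y_m = Wm · Xm(Wfm·M2 + Wum·Um), with Xm = tanh and Um pooled over its rows
    return np.tanh(M2 @ Wf + (U @ Wu).mean(axis=0)) @ W

predetermined_quantity, first_predetermined_quantity = 4, 2
chosen = []
m = 0
while len(chosen) < predetermined_quantity:
    y = attend(U, m)
    y[entity_idx + chosen] = -np.inf  # words already in the added unit are never re-added
    picks = np.argsort(-y)[:first_predetermined_quantity].tolist()
    chosen += picks
    U = np.concatenate([U, M2[picks]])  # updated added unit U'
    m += 1

print(len(chosen))  # 4
```

Each pass through the loop corresponds to one application of formula (2) below, with the round index m selecting a different set of coefficients.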
In this embodiment, the processing of the fourth matrix M4 carried out in step S302, and the processing of the updated fourth matrix M4' carried out at least once in step S303, can be collectively expressed, for example, as the following formula (2):

y_m = Wm · Xm(Wfm · M2 + Wum · Um)    (2)

where m is an integer, 0 ≤ m ≤ N, and N, a natural number, is the number of times step S303 is carried out. The case 1 ≤ m ≤ N corresponds to the m-th performance of step S303, and each time step S303 is performed, the updated fourth matrix M4' is processed; the case m = 0 corresponds to the processing of the initial fourth matrix M4 in step S302. For 1 ≤ m ≤ N, Wfm, Wum and Wm are, at the m-th performance of step S303, the linear coefficient corresponding to the second matrix M2, the linear coefficient corresponding to the updated added unit Um, and the linear coefficient corresponding to the updated fourth matrix M4', respectively, and Xm is the function corresponding to the nonlinear processing. For m = 0, Wf0, Wu0 and W0 are, at the performance of step S302, the linear coefficient corresponding to the second matrix M2, the linear coefficient corresponding to the initial added unit U0, and the linear coefficient corresponding to the initial fourth matrix M4, respectively, and X0 is the function corresponding to the nonlinear processing. The type of nonlinear processing may be the same or different each time the processing is performed; that is to say, the type of Xm may or may not change as m changes.
In formula (2) above, the scale of the fourth matrix differs from that of the updated fourth matrix, and the scale of the updated fourth matrix also differs after each update. For example, when m = 0 the scale of the fourth matrix is smallest, and as m gradually increases, the scale of the updated fourth matrix increases with it, and the Wfm, Wum and Wm corresponding to m change with it. Thus the attention model jointly determined by the function Xm and the linear coefficients Wfm, Wum and Wm also changes; thereby, the degree of attention paid to the words in the text sequence can be determined repeatedly, using an attention model corresponding to the scale of the fourth matrix or of the updated fourth matrix, and the accuracy is improved. Furthermore, it should be noted that formula (2) above is merely illustrative, and this embodiment can use formulas of other forms.
Furthermore, it should be noted that the parameters in the attention models, such as the linear coefficients Wfm, Wum and Wm, can be obtained by training on a large number of training samples.
In addition, in step S302 and step S303 of this embodiment, there is no duplicate word between the first predetermined quantity of words selected each time and the first predetermined quantities of words already selected; that is to say, a word already present in the added unit or the updated added unit will not be added to the added unit or the updated added unit again.
In this embodiment, the quantity of the two or more attention models can be determined according to the number of words in the text sequence; for example, the more words there are in the text sequence, the greater the quantity of attention models used.
In this embodiment, the quantity of attention models can be set before step S301 according to the number of words in the text sequence. For example, a correspondence between the number of words in a text sequence and the quantity of attention models can be preset, and the quantity of attention models can then be set according to that correspondence and the number of words in the text sequence to be detected.
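A preset correspondence of this kind can be as simple as a lookup table from word-count intervals to attention-model counts; the interval bounds and counts below are purely illustrative, not values taken from the application:

```python
# quantity intervals of words -> quantity of attention models (illustrative values)
CORRESPONDENCE = [(10, 2), (20, 3), (40, 5)]  # (upper bound of interval, model count)

def attention_model_quantity(word_count: int) -> int:
    # look up the preset correspondence for the text sequence to be detected
    for upper, quantity in CORRESPONDENCE:
        if word_count <= upper:
            return quantity
    return CORRESPONDENCE[-1][1]

print(attention_model_quantity(7), attention_model_quantity(25))  # 2 5
```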
In this embodiment, the correspondence between the number of words in a text sequence and the quantity of attention models can be preset by training on a large number of training samples. For example, the text sequences serving as training samples are divided into multiple training sets according to the number of words they contain, so that the number of words contained in each training sample in a given training set belongs to a specific quantity interval; for example, the number of words contained in each training sample in the first training set may belong to the quantity interval 1-10, etc. Multiple semantic relation classifications are then carried out on each training sample in each training set, with a different quantity of attention models employed in each classification; thus, according to the results of each classification, the optimal quantity of attention models for that training set can be determined, and thereby the correspondence between the quantity interval corresponding to the training set and the optimal quantity of attention models can be determined. Of course, this embodiment is not limited thereto, and the correspondence between the number of words in a text sequence and the quantity of attention models can also be determined in other ways.
According to step S201 of this embodiment, at least two attention models can be used to determine the degree of attention paid to the words in the text sequence, whereby the judgment of the degree of attention can be made more accurate.
In step S202 of this embodiment, the vectors in the second matrix corresponding to the predetermined quantity of words selected through step S201 are merged to form the third matrix. For example, if the j-th, k-th, l-th, m-th, n-th and o-th words in the text sequence were selected in step S201 as the predetermined quantity of words, then in step S202 the vectors Fj, Fk, Fl, Fm, Fn and Fo in the second matrix M2 corresponding to the above predetermined quantity of words are merged to form the third matrix M3, and the third matrix M3 can be represented as M3 = {Fj, Fk, Fl, Fm, Fn, Fo}.
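In matrix terms, forming the third matrix is a row-gathering operation on the second matrix; a brief sketch with assumed indices and a random stand-in for M2:

```python
import numpy as np

rng = np.random.default_rng(5)

M2 = rng.standard_normal((12, 16))  # second matrix for a 12-word sequence
selected = [1, 3, 4, 7, 9, 10]      # predetermined quantity of words from step S201

# third matrix M3: the rows of M2 for the selected words, merged together
M3 = M2[selected]
print(M3.shape)  # (6, 16)
```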
In Fig. 2 and Fig. 3 above, a method of extracting the vectors of a predetermined quantity of words from the second matrix to form the third matrix is shown; but this embodiment is not limited thereto, and the third matrix can also be formed using other methods.
The above describes, with reference to Fig. 2 and Fig. 3, the method by which step S103 obtains the third matrix; of course, this embodiment is not limited thereto, and the third matrix can also be obtained using a method different from that of Fig. 2 and Fig. 3.
In step S104 of this embodiment, the semantic relation between the entity words in the text sequence is determined according to at least the third matrix obtained in step S103 and a pre-stored classification model. For example, no matter how many words there are in the text sequence, hidden layer processing can be carried out on the third matrix to generate a feature vector, and the feature vector is classified according to the pre-stored classification model to obtain the category of the semantic relation; the method of carrying out the hidden layer processing may refer to the prior art and is not explained further here.
In step S104 of this embodiment, the semantic relation can also be determined according to both the third matrix M3 and the second matrix M2, based on the pre-stored classification model.
In this embodiment, the classification model used in step S104 can include softmax, maximum entropy, Bayes, a support vector machine, or the like. The classification model can be obtained by training and stored for use in step S104. In this embodiment, the method corresponding to steps S101-S104 can be applied to the training samples of a training set so as to train the classification model; the explanation of the training process is not repeated here.
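A minimal sketch of this step: the third matrix is pooled into a feature vector by a hidden layer and classified with a softmax model. The relation inventory, the mean-pooling choice and the random weights are illustrative assumptions, not the trained classification model of the application:

```python
import numpy as np

rng = np.random.default_rng(6)

RELATIONS = ["Cause-Effect", "Hypernym-Hyponym", "Verb-Object", "Other"]
D, D_HID = 16, 10

M3 = rng.standard_normal((6, D))  # third matrix from step S103 (stand-in)

Wh = rng.standard_normal((D, D_HID)) * 0.1
Wo = rng.standard_normal((D_HID, len(RELATIONS))) * 0.1

# hidden layer processing: pool the third matrix into one feature vector
feature = np.tanh(M3.mean(axis=0) @ Wh)

# softmax classification model over the predetermined semantic relations
logits = feature @ Wo
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(RELATIONS[int(np.argmax(probs))])
```

With trained weights, the argmax of the softmax probabilities gives the category of the semantic relation, e.g. Cause-Effect(e1, e2) for the example sentence.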
In this embodiment, two or more attention models (Attention Model) are introduced to determine the degree of attention paid to the words in the text sequence, and the semantic relation between the entity words is then classified based on that degree of attention, whereby the efficiency of classification can be improved.
Embodiment 2
Embodiment 2 of the present application provides an apparatus for classifying the semantic relation between entity words in a text sequence, corresponding to the method of Embodiment 1.
Fig. 4 is a schematic diagram of the classification apparatus of Embodiment 2. As shown in Fig. 4, the apparatus 400 includes a first obtaining unit 401, a second obtaining unit 402, a third obtaining unit 403 and a classification unit 404.
The first obtaining unit 401 represents each word in the text sequence as a word vector so as to construct the first matrix; the second obtaining unit 402 processes the first matrix using a deep learning model so as to obtain the second matrix; the third obtaining unit 403 processes the second matrix using two or more attention models so as to determine the degree of attention paid to the words in the text sequence, and obtains the third matrix of the text sequence based on that degree of attention; and the classification unit 404 determines the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a pre-stored classification model.
Fig. 5 is a schematic diagram of the third obtaining unit of Embodiment 2. As shown in Fig. 5, the third obtaining unit 403 can include a selecting unit 501 and a combining unit 502.
The selecting unit 501 uses two or more attention models to determine the degree of attention paid to each word in the text sequence, and selects a predetermined quantity of words from the text sequence based on that degree of attention; the combining unit 502 merges the vectors in the second matrix corresponding to the selected predetermined quantity of words so as to form the third matrix.
Fig. 6 is a schematic diagram of the selecting unit of the present embodiment 2, as shown in fig. 6, selecting unit 501 can include the
One merges subelement 601, the first processing subelement 602 and second processing subelement 603.
Wherein, first merge subelement 601 be used for the entity word in second matrix is corresponding vectorial with described the
Two matrixes merge, and form the 4th matrix;First processing subelement 602 carries out Nonlinear Processing to the 4th matrix, to determine
The concerned degree of each word in the text sequence, and the concerned degree is based on, selected from the text sequence
The word of first predetermined quantity;Second processing subelement 603 by second matrix with first predetermined quantity selected
Word it is corresponding it is vectorial merge with second matrix, the 4th matrix after renewal is formed, and based on the 4th matrix after renewal
Select the word of the first predetermined quantity again from the text sequence, wherein, the word of all the first predetermined quantities selected
Summation be equal to the predetermined quantity.
In this embodiment, two or more attention models (Attention Model) are introduced to determine the degree of attention received by each word in the text sequence, and the semantic relations between entity words are then classified based on that degree, which can improve classification efficiency.
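The selection-and-merge scheme above can be sketched as follows. This is a minimal illustration, not the patented implementation: NumPy, a single dot-product attention score, the helper names `attention_select` and `build_third_matrix`, and the random data are all assumptions introduced here.

```python
import numpy as np

def attention_select(second_matrix, query, k):
    """Score each word vector (row of the second matrix) against a query,
    softmax the scores into degrees of attention, and return the indices
    of the k most attended words."""
    scores = second_matrix @ query            # one score per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # degree of attention per word
    return np.argsort(weights)[::-1][:k]

def build_third_matrix(second_matrix, query, k):
    """Merge the vectors of the k selected words into the third matrix."""
    idx = attention_select(second_matrix, query, k)
    return second_matrix[np.sort(idx)]        # keep the original word order

rng = np.random.default_rng(0)
second = rng.normal(size=(7, 4))              # 7 words, 4-dim hidden states
entity = second[2]                            # an entity word's vector as the query
third = build_third_matrix(second, entity, k=3)
print(third.shape)                            # (3, 4)
```

The third matrix thus has one row per selected word, regardless of the original sequence length, which is what makes a fixed-size classifier applicable afterwards.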
Embodiment 3
Embodiment 3 of the present application provides an electronic device, which includes the apparatus for classifying the semantic relation between entity words in a text sequence described in Embodiment 2.
Fig. 7 is a schematic diagram of the composition of the electronic device of Embodiment 3. As shown in Fig. 7, the electronic device 700 may include a central processing unit (CPU) 701 and a memory 702, the memory 702 being coupled to the central processing unit 701. The memory 702 may store various data, and additionally stores a program for classifying the semantic relation between entity words in a text sequence; the program is executed under the control of the central processing unit 701.
In one embodiment, the functions of the classification apparatus may be integrated into the central processing unit 701.
The central processing unit 701 may be configured to:
represent each word in a text sequence by a word vector, to build a first matrix; process the first matrix using a deep learning model to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence; process the second matrix using the two or more attention models (Attention Model) described above, to determine the degree of attention received by each word in the text sequence, and obtain a third matrix of the text sequence (the result of attention) based on that degree; and determine the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a prestored classification model.
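The processing pipeline the central processing unit 701 is configured to perform can be sketched end to end. Everything concrete below is an assumption for illustration: a random linear map with `tanh` stands in for the deep learning model (e.g. a BLSTM), a single dot-product attention stands in for the two or more attention models, and a random linear classifier stands in for the prestored classification model.

```python
import numpy as np

rng = np.random.default_rng(1)
words = ["the", "drug", "caused", "an", "allergic", "reaction"]  # toy sequence

# First matrix: one word vector (row) per word in the text sequence.
first = rng.normal(size=(len(words), 8))

# Second matrix: a random linear map with tanh stands in for the deep
# learning model; its rows still correspond to the words.
W_model = rng.normal(size=(8, 6))
second = np.tanh(first @ W_model)

# Attention: softmax scores give each word's degree of attention; the
# third matrix keeps the vectors of the two most attended words.
query = rng.normal(size=6)
weights = np.exp(second @ query)
weights /= weights.sum()
third = second[np.sort(np.argsort(weights)[::-1][:2])]

# Prestored classification model: here a random linear classifier over
# the flattened third matrix, scoring 3 hypothetical relation classes.
W_cls = rng.normal(size=(third.size, 3))
relation = int(np.argmax(third.reshape(-1) @ W_cls))
print(relation)
```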
The central processing unit 701 may be further configured to:
use the two or more attention models to determine the degree of attention received by each word in the text sequence and, based on that degree, select a predetermined number of words from the text sequence; and merge the vectors in the second matrix corresponding to the selected predetermined number of words to form the third matrix.
The central processing unit 701 may be further configured to:
merge the vector in the second matrix corresponding to an entity word with the second matrix to form a fourth matrix;
use an attention model (Attention Model) corresponding to the scale of the fourth matrix to perform at least nonlinear processing on the fourth matrix, to determine the degree of attention received by each word in the text sequence and, based on that degree, select a first predetermined number of words from the text sequence;
merge the vectors in the second matrix corresponding to the selected first predetermined number of words with the fourth matrix (the selected words are merged with the BLSTM output and the previous context information) to form an updated fourth matrix; and
use an attention model (Attention Model) corresponding to the scale of the updated fourth matrix to perform at least nonlinear processing on the updated fourth matrix, and select a further first predetermined number of words from the text sequence;
wherein the steps of updating the fourth matrix according to the vectors corresponding to the previously selected first predetermined number of words, and of selecting a further first predetermined number of words from the updated fourth matrix using an attention model, are performed at least once, and the total number of words selected across all rounds equals the predetermined number.
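The iterative fourth-matrix update can be sketched as a loop. The function name `iterative_select`, the mean of the selected vectors as the merged context, and the `tanh` nonlinearity are assumptions introduced here; the sketch only illustrates the round structure (merge, nonlinear processing, select, update) and the constraint that the rounds together yield the predetermined number of distinct words.

```python
import numpy as np

def iterative_select(second, entity_vec, per_round, total):
    """Round structure sketched from the text: merge the current context
    with the second matrix to form the fourth matrix, apply nonlinear
    processing, select per_round new words, and update the context,
    until total (the predetermined number of) words are selected."""
    chosen = []
    context = entity_vec                       # initial merge uses the entity word
    while len(chosen) < total:
        fourth = np.tanh(second + context)     # merge + nonlinear processing
        scores = fourth.sum(axis=1)
        scores[chosen] = -np.inf               # no word is selected twice
        picks = list(np.argsort(scores)[::-1][:per_round])
        chosen += picks
        context = second[picks].mean(axis=0)   # update with the selected vectors
    return sorted(int(i) for i in chosen[:total])

rng = np.random.default_rng(2)
second = rng.normal(size=(9, 5))               # 9 words, 5-dim hidden states
selected = iterative_select(second, second[0], per_round=2, total=6)
print(len(selected))                           # 6: sum over rounds equals total
```

Masking already-chosen indices with `-inf` before each selection enforces the no-repetition condition of Note 4 / claim 4.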
The central processing unit 701 may be further configured such that:
the words of the first predetermined number selected in each round do not repeat any of the words of the first predetermined number already selected.
The central processing unit 701 may be further configured such that:
the semantic relation is determined according to the third matrix, the second matrix, and the classification model.
The central processing unit 701 may be further configured such that:
the number of the two or more attention models is determined according to the number of words in the text sequence.
In addition, as shown in Fig. 7, the electronic device 700 may further include an input/output unit 703, a display unit 704, and the like, whose functions are similar to those in the prior art and are not described again here. It is worth noting that the electronic device 700 does not necessarily include all of the components shown in Fig. 7; furthermore, the electronic device 700 may include components not shown in Fig. 7, for which reference may be made to the prior art.
An embodiment of the present application further provides a computer-readable program, wherein, when the program is executed in a classification apparatus or an electronic device, the program causes the classification apparatus or electronic device to perform the classification method described in Embodiment 2.
An embodiment of the present application further provides a storage medium storing a computer-readable program, wherein the computer-readable program causes a classification apparatus or an electronic device to perform the classification method of Embodiment 2.
The classification apparatus described with reference to the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, one or more of the functional blocks shown in Figs. 4-6, and/or one or more combinations of those functional blocks, may correspond to software modules of a computer program flow or to hardware modules. The software modules may correspond respectively to the steps shown in Embodiment 1. The hardware modules may be implemented, for example, by solidifying the software modules in a field-programmable gate array (FPGA).
A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium; alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in a memory of a mobile terminal, or in a memory card insertable into the mobile terminal. For example, if the device (such as a mobile terminal) employs a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in that MEGA-SIM card or flash memory device.
One or more of the functional blocks shown in Figs. 4-6, and/or one or more combinations of those functional blocks, may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described in this application. They may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
The application has been described above in connection with specific embodiments, but those skilled in the art will appreciate that these descriptions are exemplary and do not limit the protection scope of the application. Those skilled in the art may make various variants and modifications to the application according to its principles, and such variants and modifications also fall within the scope of the application.
With respect to the embodiments including the above examples, the following supplementary notes are further disclosed:
Note 1. An apparatus for classifying the semantic relation between entity words in a text sequence, the apparatus including:
a first obtaining unit configured to represent each word in a text sequence by a word vector, to build a first matrix;
a second obtaining unit configured to process the first matrix using a deep learning model to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence;
a third obtaining unit configured to process the second matrix using two or more attention models, to determine the degree of attention received by each word in the text sequence, and to obtain a third matrix of the text sequence based on that degree; and
a classification unit configured to determine the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a prestored classification model.
Note 2. The apparatus according to Note 1, wherein the third obtaining unit includes:
a selecting unit configured to use the two or more attention models to determine the degree of attention received by each word in the text sequence and, based on that degree, to select a predetermined number of words from the text sequence; and
a combining unit configured to merge the vectors in the second matrix corresponding to the selected predetermined number of words to form the third matrix.
Note 3. The apparatus according to Note 2, wherein the selecting unit includes:
a first merging subunit configured to merge the vector in the second matrix corresponding to an entity word with the second matrix to form a fourth matrix;
a first processing subunit configured to use an attention model corresponding to the scale of the fourth matrix to perform at least nonlinear processing on the fourth matrix, to determine the degree of attention received by each word in the text sequence and, based on that degree, to select a first predetermined number of words from the text sequence; and
a second processing subunit configured to merge the vectors in the second matrix corresponding to the selected first predetermined number of words with the fourth matrix to form an updated fourth matrix, and to use an attention model corresponding to the scale of the updated fourth matrix to perform at least nonlinear processing on the updated fourth matrix, so as to select a further first predetermined number of words from the text sequence,
wherein the second processing subunit performs, at least once, the steps of updating the fourth matrix according to the vectors corresponding to the words of the first predetermined number already selected and of selecting a further first predetermined number of words from the updated fourth matrix using an attention model, and the total number of words selected across all rounds equals the predetermined number.
Note 4. The apparatus according to Note 3, wherein the words of the first predetermined number selected by the first processing subunit or the second processing subunit in each round do not repeat any of the words of the first predetermined number already selected.
Note 5. The apparatus according to Note 1, wherein the number of the two or more attention models is determined according to the number of words in the text sequence.
Note 6. The apparatus according to Note 1, wherein the classification unit determines the semantic relation according to the third matrix, the second matrix, and the classification model.
Note 7. An electronic device including the apparatus according to any one of Notes 1-6.
Note 8. A method for classifying the semantic relation between entity words in a text sequence, the method including:
representing each word in a text sequence by a word vector, to build a first matrix;
processing the first matrix using a deep learning model to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence;
processing the second matrix using two or more attention models, to determine the degree of attention received by each word in the text sequence, and obtaining a third matrix of the text sequence based on that degree; and
determining the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a prestored classification model.
Note 9. The method according to Note 8, wherein obtaining the third matrix using the two or more attention models includes:
using the two or more attention models to determine the degree of attention received by each word in the text sequence and, based on that degree, selecting a predetermined number of words from the text sequence; and
merging the vectors in the second matrix corresponding to the selected predetermined number of words to form the third matrix.
Note 10. The method according to Note 9, wherein selecting a predetermined number of words from the text sequence includes:
merging the vector in the second matrix corresponding to an entity word with the second matrix to form a fourth matrix;
using an attention model corresponding to the scale of the fourth matrix to perform at least nonlinear processing on the fourth matrix, to determine the degree of attention received by each word in the text sequence and, based on that degree, selecting a first predetermined number of words from the text sequence; and
merging the vectors in the second matrix corresponding to the selected first predetermined number of words with the fourth matrix to form an updated fourth matrix, using an attention model (Attention Model) corresponding to the scale of the updated fourth matrix to perform at least nonlinear processing on the updated fourth matrix, and selecting a further first predetermined number of words from the text sequence,
wherein the steps of updating the fourth matrix according to the vectors corresponding to the previously selected first predetermined number of words and of selecting a further first predetermined number of words from the updated fourth matrix using an attention model are performed at least once, and the total number of words selected across all rounds equals the predetermined number.
Note 11. The method according to Note 10, wherein the words of the first predetermined number selected in each round do not repeat any of the words of the first predetermined number already selected.
Note 12. The method according to Note 8, wherein the number of the two or more attention models is determined according to the number of words in the text sequence.
Note 13. The method according to Note 8, wherein determining the semantic relation according to at least the third matrix and the classification model includes determining the semantic relation according to the third matrix, the second matrix, and the classification model.
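The classification step of Note 13 (using the third matrix together with the second matrix and the classification model) can be sketched as follows; the max-pooled summary of the second matrix, the softmax output, and all names are assumptions introduced for illustration, not the patented implementation.

```python
import numpy as np

def classify(third, second, W):
    """Concatenate the flattened third matrix with a pooled summary of the
    second matrix and score the relation classes with a linear model."""
    pooled = second.max(axis=0)                # summary of the whole sequence
    features = np.concatenate([third.reshape(-1), pooled])
    logits = features @ W
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                 # distribution over relations

rng = np.random.default_rng(3)
second = rng.normal(size=(6, 4))               # 6 words, 4-dim hidden states
third = second[:2]                             # two selected word vectors
W = rng.normal(size=(third.size + 4, 5))       # 5 hypothetical relation classes
p = classify(third, second, W)
print(p.shape)                                 # (5,)
```

Feeding the second matrix back in alongside the third matrix lets the classifier see the whole sequence, not only the attended words, which is the point of Note 13 over Note 8 alone.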
Claims (10)
1. An apparatus for classifying the semantic relation between entity words in a text sequence, the apparatus comprising:
a first obtaining unit configured to represent each word in a text sequence by a word vector, to build a first matrix;
a second obtaining unit configured to process the first matrix using a deep learning model to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence;
a third obtaining unit configured to process the second matrix using two or more attention models, to determine the degree of attention received by each word in the text sequence, and to obtain a third matrix of the text sequence based on that degree; and
a classification unit configured to determine the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a prestored classification model.
2. The apparatus according to claim 1, wherein the third obtaining unit includes:
a selecting unit configured to use the two or more attention models to determine the degree of attention received by each word in the text sequence and, based on that degree, to select a predetermined number of words from the text sequence; and
a combining unit configured to merge the vectors in the second matrix corresponding to the selected predetermined number of words to form the third matrix.
3. The apparatus according to claim 2, wherein the selecting unit includes:
a first merging subunit configured to merge the vector in the second matrix corresponding to an entity word with the second matrix to form a fourth matrix;
a first processing subunit configured to use an attention model corresponding to the scale of the fourth matrix to perform at least nonlinear processing on the fourth matrix, to determine the degree of attention received by each word in the text sequence and, based on that degree, to select a first predetermined number of words from the text sequence; and
a second processing subunit configured to merge the vectors in the second matrix corresponding to the selected first predetermined number of words with the fourth matrix to form an updated fourth matrix, and to use an attention model corresponding to the scale of the updated fourth matrix to perform at least nonlinear processing on the updated fourth matrix, so as to select a further first predetermined number of words from the text sequence,
wherein the second processing subunit performs, at least once, the steps of updating the fourth matrix according to the vectors corresponding to the words of the first predetermined number already selected and of selecting a further first predetermined number of words from the updated fourth matrix using an attention model, and the total number of words selected across all rounds equals the predetermined number.
4. The apparatus according to claim 3, wherein the words of the first predetermined number selected by the first processing subunit or the second processing subunit in each round do not repeat any of the words of the first predetermined number already selected.
5. The apparatus according to claim 1, wherein the number of the two or more attention models is determined according to the number of words in the text sequence.
6. The apparatus according to claim 1, wherein the classification unit determines the semantic relation according to the third matrix, the second matrix, and the classification model.
7. An electronic device, comprising the apparatus according to any one of claims 1-6.
8. A method for classifying the semantic relation between entity words in a text sequence, the method comprising:
representing each word in a text sequence by a word vector, to build a first matrix;
processing the first matrix using a deep learning model to obtain a second matrix, wherein the rows or columns of the second matrix correspond to the words in the text sequence;
processing the second matrix using two or more attention models, to determine the degree of attention received by each word in the text sequence, and obtaining a third matrix of the text sequence based on that degree; and
determining the semantic relation between the entity words in the text sequence according to at least the third matrix of the text sequence and a prestored classification model.
9. The method according to claim 8, wherein obtaining the third matrix using the two or more attention models includes:
using the two or more attention models to determine the degree of attention received by each word in the text sequence and, based on that degree, selecting a predetermined number of words from the text sequence; and
merging the vectors in the second matrix corresponding to the selected predetermined number of words to form the third matrix.
10. The method according to claim 9, wherein selecting a predetermined number of words from the text sequence includes:
merging the vector in the second matrix corresponding to an entity word with the second matrix to form a fourth matrix;
using an attention model corresponding to the scale of the fourth matrix to perform at least nonlinear processing on the fourth matrix, to determine the degree of attention received by each word in the text sequence and, based on that degree, selecting a first predetermined number of words from the text sequence; and
merging the vectors in the second matrix corresponding to the selected first predetermined number of words with the fourth matrix to form an updated fourth matrix, using an attention model corresponding to the scale of the updated fourth matrix to perform at least nonlinear processing on the updated fourth matrix, and selecting a further first predetermined number of words from the text sequence,
wherein the steps of updating the fourth matrix according to the vectors corresponding to the previously selected first predetermined number of words and of selecting a further first predetermined number of words from the updated fourth matrix using an attention model are performed at least once, and the total number of words selected across all rounds equals the predetermined number.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610929103.5A CN108021544B (en) | 2016-10-31 | 2016-10-31 | Method and device for classifying semantic relation of entity words and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108021544A true CN108021544A (en) | 2018-05-11 |
CN108021544B CN108021544B (en) | 2021-07-06 |
Family
ID=62069665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610929103.5A Active CN108021544B (en) | 2016-10-31 | 2016-10-31 | Method and device for classifying semantic relation of entity words and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108021544B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376222A (en) * | 2018-09-27 | 2019-02-22 | 国信优易数据有限公司 | Question and answer matching degree calculation method, question and answer automatic matching method and device |
CN111177383A (en) * | 2019-12-24 | 2020-05-19 | 上海大学 | Text entity relation automatic classification method fusing text syntactic structure and semantic information |
CN112085837A (en) * | 2020-09-10 | 2020-12-15 | 哈尔滨理工大学 | Three-dimensional model classification method based on geometric shape and LSTM neural network |
CN112417156A (en) * | 2020-11-30 | 2021-02-26 | 百度国际科技(深圳)有限公司 | Multitask learning method, device, equipment and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040088726A1 (en) * | 2002-11-01 | 2004-05-06 | Yu-Fei Ma | Systems and methods for generating a comprehensive user attention model |
CN1716921A (en) * | 2004-06-30 | 2006-01-04 | 微软公司 | When-free messaging |
CN101044470A (en) * | 2003-06-30 | 2007-09-26 | 微软公司 | Positioning and rendering notification heralds based on user's focus of attention and activity |
CN102111601A (en) * | 2009-12-23 | 2011-06-29 | 大猩猩科技股份有限公司 | Content-based adaptive multimedia processing system and method |
US20110159467A1 (en) * | 2009-12-31 | 2011-06-30 | Mark Peot | Eeg-based acceleration of second language learning |
US20140278359A1 (en) * | 2013-03-15 | 2014-09-18 | Luminoso Technologies, Inc. | Method and system for converting document sets to term-association vector spaces on demand |
CN104298651A (en) * | 2014-09-09 | 2015-01-21 | 大连理工大学 | Biomedicine named entity recognition and protein interactive relationship extracting on-line system based on deep learning |
US20150095017A1 (en) * | 2013-09-27 | 2015-04-02 | Google Inc. | System and method for learning word embeddings using neural language models |
US20150154284A1 (en) * | 2013-11-29 | 2015-06-04 | Katja Pfeifer | Aggregating results from named entity recognition services |
CN104834747A (en) * | 2015-05-25 | 2015-08-12 | 中国科学院自动化研究所 | Short text classification method based on convolution neutral network |
CN105183720A (en) * | 2015-08-05 | 2015-12-23 | 百度在线网络技术(北京)有限公司 | Machine translation method and apparatus based on RNN model |
Non-Patent Citations (4)
Title |
---|
HUANG SUI et al.: "Sentiment analysis of Chinese micro-blog using semantic sentiment space model", Proceedings of 2012 2nd International Conference on Computer Science and Network Technology *
MAOFU LIU et al.: "Recognizing entailment in Chinese texts with feature combination", 2015 International Conference on Asian Language Processing (IALP) *
ZHOU Jiayi: "Research on English semantic relation classification based on tensor recursive neural networks", Modern Computer (Professional Edition) *
JIA Zhaohong et al.: "A fuzzy web page text classification algorithm combining improved non-negative matrix factorization", Journal of Chongqing University *
Also Published As
Publication number | Publication date |
---|---|
CN108021544B (en) | 2021-07-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||