CN109977234A - A knowledge graph completion method based on topic keyword filtering - Google Patents
A knowledge graph completion method based on topic keyword filtering
- Publication number: CN109977234A (application CN201910245584.1A)
- Authority: CN (China)
- Prior art keywords: entity, topic, description, word, score
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/24 — Physics; computing; electric digital data processing; pattern recognition; analysing; classification techniques
- G06N3/045 — Computing arrangements based on specific computational models; computing arrangements based on biological models; neural networks; architecture; combinations of networks
- G06N3/08 — Computing arrangements based on specific computational models; computing arrangements based on biological models; neural networks; learning methods
Abstract
A knowledge graph completion method based on topic keyword filtering, belonging to the field of knowledge graphs. In existing knowledge graph completion methods, the text of entity descriptions is complex and redundant, so completion cannot be targeted at a specific completion task. To address the complexity and redundancy of entity description information, the proposed method incorporates an attention mechanism. A topic keyword score function is proposed to evaluate and improve the usability of entity description text, solving the problem that description text contains a large amount of noise. To further reflect the semantic relation between entity descriptions and triples, a topic semantic space model improves the semantic specificity of entity descriptions. Through this text filtering method, the present invention can complete specific completion tasks in a targeted way.
Description
Technical field
The present invention relates to knowledge graph completion methods, and in particular to a knowledge graph completion method that filters entity description text using the topic keywords in the entity descriptions.
Background
Knowledge graph technology is widely used in intelligent question answering and search. Although the knowledge bases built with knowledge graph technology are large, they are far from complete: most entities in a graph lack birthplace and nationality information, and half of the entities participate in no more than 5 relations, so knowledge graph completion is necessary. Knowledge graph completion methods can be divided into two classes: non-translation-based methods and translation-based methods. Compared with non-translation-based methods, algorithms that use a translation model involve fewer parameters and have lower algorithmic complexity. Researchers have also considered completing knowledge graphs by fusing multiple information sources. In fact, a knowledge base contains not only a large number of triples composed of entities and relations, but also a large amount of text describing the entities in those triples, and existing methods have considered completing knowledge graphs by combining translation models with entity descriptions. However, entity descriptions come from many sources; most of the text is drawn from encyclopedias and web pages, and this text is complex and redundant, so completion cannot be targeted at a specific completion task. The present invention is proposed against this background. Scholars at home and abroad are actively studying the knowledge graph completion task and have proposed a variety of models and corresponding algorithms, each targeting different network models and specific practical problems, each with its own characteristics. Building on these earlier models and viewpoints, the present invention proposes a knowledge graph completion method based on topic keyword filtering.
Summary of the invention
The purpose of the present invention is to solve the problem that, in existing knowledge graph completion methods, the text of entity descriptions is complex and redundant, so completion cannot be targeted at a specific completion task, and to propose a knowledge graph completion method based on topic keyword filtering.
A knowledge graph completion method based on topic keyword filtering, realized by the following steps:
Step 1: Let the knowledge graph be G = (E, R, T), where E is the set of entities in the knowledge graph, R the set of relations, and T the set of triples to be completed.
Step 2: Let the set of incomplete triple elements in the knowledge graph G be the completion task set H. Elements of H take two forms, (h, r, ?) and (h, ?, t), where head entity h ∈ E, relation r ∈ R, and tail entity t ∈ E.
Step 3: Train h and r from the triple set T to be completed with a word-vector tool; a (h, r, ?) task yields h′ and r′, and a (h, ?, t) task yields h′ and t′.
Step 4: Process the entity descriptions of the entities in the triple set T with the word-vector tool to obtain the topic-computation word-vector matrices: the matrix D_e of the head entity description and the matrix D_t of the tail entity description.
Step 5: Process the descriptions of the head entity h and the tail entity t with an NMF model to obtain the topic vectors s_h and s_t of the head and tail entities.
Step 6: Use the topic vectors s_h and s_t obtained in step 5 to compute the topic semantic space s(s_h, s_t), where the vector s is the normal vector of the topic semantic space.
Step 7: Compute attention scores for the word-vector matrices D_e and D_t, select topic words according to the attention scores, and assign attention scores to D_e and D_t. The scores are applied as attention(D) = a ⊙ D, where ⊙ denotes row-wise multiplication (each row of the word-vector matrix D of the entity description is multiplied by its attention score) and a_i is the attention score of the i-th word in the entity description.
Step 8: Use a convolutional neural network to extract a feature vector from the attention score matrix attention(D) computed in step 7.
Step 9: Define the loss function E(h, r, t) and the objective function l, where:
Loss function: E(h, r, t) = E′_s + E′_d + E_s + E_d, with e_s = h + r - t, E_s its energy under the L_1 or L_2 norm (L_1/L_2 denotes the L_1 or L_2 norm), and E_d the energy of e_d = h_d + r - t_d, where h_d is the feature vector of the head entity h's description and t_d is the feature vector of the tail entity t's description; s^T denotes the transpose of s.
Objective function: l = l_embed + μ·l_topic, with S′ = {(h′, r, t)} ∪ {(h, r′, t)} ∪ {(h, r, t′)}, where l_embed is the objective over word vectors and l_topic the objective over topics; μ is a hyperparameter determined from training results; S is the set of correct triples; S′ is the set of wrong triples obtained by negative sampling, formed by randomly replacing an entity or relation in a correct triple. The term max(0, γ + E(h, r, t) - E(h′, r′, t′)) returns the larger of the two quantities; γ is a hyperparameter giving the margin between the scores of correct and wrong triples.
l_topic is defined over: E, the entity set; D_e, the set of words in the description of entity e; c_{e,w}, the number of times word w occurs in the description of entity e; s_e, the topic vector of the description of entity e; and θ, the topic distribution of word w. The whole training process is trained with stochastic gradient descent.
Step 10: Take all elements of E or R as the candidate set for the missing entity or relation, and learn the wrong-triple set T′ by negative sampling.
Step 11: For each element of H, feed the correct triples obtained in step 8 and the wrong triples obtained in step 10 into the loss function and compute the corresponding scores.
Step 12: Train and tune parameters to optimize the objective function l so that the objective value reaches a minimum.
Step 13: Sort the candidate entity set by the scores computed in step 9 and output the selection list.
Repeat steps 9 to 11 until the output result is obtained.
The invention has the following benefits:
1) The present invention combines the topic semantic plane space with a translation model to enhance the discrimination of triples.
2) It proposes a topic-keyword-based text filtering method that filters knowledge graph entity description text for completion; specific completion tasks can be completed in a targeted way through this text filtering method.
By combining the structural information of triples in the knowledge graph with entity descriptions and deep-learning methods, the present invention proposes a new method for knowledge graph prediction and completion, improving on existing translation models that incorporate entity descriptions. It is realized as follows:
First, entity descriptions come from many sources; most of the text is drawn from encyclopedias and web pages, and this text is complex and redundant, so an attention mechanism is introduced. The invention proposes a topic keyword score function that evaluates and improves the usability of entity description text, solving the problem that description text contains a large amount of noise.
Then, to further reflect the semantic relation between entity descriptions and triples, the present invention improves the semantic specificity of entity descriptions through a topic semantic space model.
Finally, for the triple link prediction and classification tasks, the model is trained and tested on the relevant data sets.
Detailed description of the invention
Fig. 1 is flow chart of the method for the present invention;
Fig. 2 is that head entity of the present invention and tail entity pass through completion and obtain the process schematic of entity sets.
Specific embodiments
Specific embodiment 1:
The knowledge graph completion method based on topic keyword filtering of this embodiment comprises the following steps:
Step 1: Let the knowledge graph be G = (E, R, T), where E is the set of entities in the knowledge graph, R the set of relations, and T the set of triples to be completed.
Step 2: Let the set of incomplete triple elements in the knowledge graph G set in step 1 be the completion task set H. Elements of H take two forms, (h, r, ?) and (h, ?, t), where head entity h ∈ E, relation r ∈ R, and tail entity t ∈ E.
The completion task set contains many tasks, for example triples such as (China, capital, ?), (Yao Ming, birthplace, ?), and (Liu Dehua, birthday, ?). The set formed by such incomplete triples is the completion task set.
A knowledge base is filled with massive numbers of triples, for example (United States, president, Obama). A triple expresses a fact about the objective world, namely: the president of the United States is Obama. In this triple, the United States is entity 1, president is the relation, and Obama is entity 2. In (h, r, ?), the "?" is the missing entity, and the completion task is to fill in the "?". Take the task (China, capital, ?): what is the capital of China? The completion task is to predict the missing entity, i.e., to complete it, so that the triple is complete.
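The triples and the two task forms above can be sketched in Python (a minimal illustration, not code from the patent; the data and the `None` convention for the missing slot are assumptions):

```python
# Toy knowledge base: each fact is a (head, relation, tail) triple.
triples = [("United States", "president", "Obama"),
           ("China", "capital", "Beijing")]

# Completion tasks leave one slot unknown, written here as None.
tasks = [("China", "capital", None),       # tail missing: (h, r, ?)
         ("Yao Ming", None, "Shanghai")]   # relation missing: (h, ?, t)

def task_form(task):
    """Classify a task as tail-completion '(h, r, ?)' or relation-completion '(h, ?, t)'."""
    h, r, t = task
    return "(h, r, ?)" if t is None else "(h, ?, t)"
```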
Step 3: Train h and r from the triple set T to be completed with a word-vector tool; a (h, r, ?) task yields h′ and r′, and a (h, ?, t) task yields h′ and t′.
The word-vector tool is a previously proposed algorithmic model through which a word can be expressed in the form of a word vector. For example, the word "Yao Ming" might become (0.1, 0.2, 0.4, 0.6) after word-vector training; that vector then represents Yao Ming. The concrete implementation is realized in a program: preprocessing the text of entity descriptions with this tool produces a large number of word vectors.
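The output of such a word-vector tool can be illustrated with a toy lookup table (a sketch, not the actual tool; the vocabulary, dimension, and random initialization are assumptions standing in for trained vectors):

```python
import numpy as np

# A word-vector table of the kind a word2vec-style tool produces:
# each vocabulary word maps to a row of a dense embedding matrix.
rng = np.random.default_rng(0)
vocab = {"Yao Ming": 0, "birthplace": 1, "Shanghai": 2}
embeddings = rng.normal(size=(len(vocab), 4))  # 4-dimensional word vectors

def vec(word):
    """Return the dense word vector for a vocabulary word."""
    return embeddings[vocab[word]]

# Each word is now expressed as a vector, e.g. vec("Yao Ming").
```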
Step 4: Process the entity descriptions of the entities in the triple set T with the word-vector tool to obtain the topic-computation word-vector matrices: the matrix D_e of the head entity description and the matrix D_t of the tail entity description.
An entity description is a passage of text describing the entity in a triple. For example, search "Yao Ming" on Baidu Baike: Yao Ming is an entity in the knowledge base, and the text introducing Yao Ming on Baidu Baike is the "entity description".
Step 5: Process the descriptions of the head entity h and the tail entity t with an NMF model to obtain the topic vectors s_h and s_t of the head and tail entities.
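Obtaining topic vectors with NMF can be sketched with an off-the-shelf implementation (an assumed setup using scikit-learn, not the patent's own code; the word-count matrix is toy data):

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows are entity descriptions, columns are word counts (toy data).
counts = np.array([[3.0, 0.0, 1.0, 0.0],
                   [0.0, 2.0, 0.0, 1.0],
                   [2.0, 1.0, 0.0, 0.0]])

nmf = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = nmf.fit_transform(counts)  # document-topic matrix: one topic vector per description
H = nmf.components_            # topic-word matrix

s_h = W[0]  # topic vector of the head entity's description
s_t = W[1]  # topic vector of the tail entity's description
```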
Step 6: Use the topic vectors s_h and s_t obtained in step 5 to compute the topic semantic space s(s_h, s_t), where all vectors perpendicular to s form the topic semantic space, i.e., the vector s is the normal vector of the topic semantic space.
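Since the topic semantic space is described as the set of vectors orthogonal to the normal vector s, projecting a word vector onto that space can be sketched as follows (this projection is an assumption about how the space is used; the original formula is not reproduced in the source):

```python
import numpy as np

def project_to_topic_space(d, s):
    """Remove from d its component along the normal vector s,
    leaving the part of d that lies in the topic semantic space."""
    s_unit = s / np.linalg.norm(s)
    return d - np.dot(d, s_unit) * s_unit

s = np.array([1.0, 0.0, 0.0])   # normal vector of the topic semantic space
d = np.array([2.0, 3.0, 4.0])   # a word vector
p = project_to_topic_space(d, s)
# p is orthogonal to s, i.e. it lies in the topic semantic space.
```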
Step 7: Compute attention scores for the word-vector matrices D_e and D_t, select topic words according to the attention scores, and assign attention scores to D_e and D_t. The scores are applied as attention(D) = a ⊙ D, where ⊙ denotes row-wise multiplication (each row of the word-vector matrix D of the entity description is multiplied by its attention score) and a_i is the attention score of the i-th word in the entity description.
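The attention computation of this step, together with the cosine-similarity definition of a_i given in step 7.1, can be sketched as follows (toy matrices; the inference matrix W(T) is assumed to be given):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def attention_scores(D, WT):
    """a_i = maximum cosine similarity between word vector d_i
    and all row vectors of the inference matrix W(T)."""
    return np.array([max(cosine(d, w) for w in WT) for d in D])

def attention(D, a):
    """attention(D) = a ⊙ D: multiply each row of D by its attention score."""
    return a[:, None] * D

D = np.array([[1.0, 0.0],      # word-vector matrix of a description (2 words)
              [0.0, 1.0]])
WT = np.array([[1.0, 0.0]])    # assumed inference matrix W(T)
a = attention_scores(D, WT)
A = attention(D, a)
```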
Step 8: Use a convolutional neural network to extract a feature vector from the attention score matrix attention(D) computed in step 7.
Step 9: Define the loss function E(h, r, t) and the objective function l, where:
Loss function: E(h, r, t) = E′_s + E′_d + E_s + E_d, with e_s = h + r - t, E_s its energy under the L_1 or L_2 norm (L_1 and L_2 denote norms, and L_1/L_2 means the L_1 or L_2 norm), and E_d the energy of e_d = h_d + r - t_d, where h_d is the feature vector of the head entity h's description and t_d is the feature vector of the tail entity t's description; s^T denotes the transpose of s.
Objective function: l = l_embed + μ·l_topic, with S′ = {(h′, r, t)} ∪ {(h, r′, t)} ∪ {(h, r, t′)}, where l_embed is the objective over word vectors and l_topic the objective over topics; μ is a hyperparameter determined from training results; S is the set of correct triples; S′ is the set of wrong triples obtained by negative sampling, formed by randomly replacing an entity or relation in a correct triple. The term max(0, γ + E(h, r, t) - E(h′, r′, t′)) returns the larger of the two quantities; γ is a hyperparameter giving the margin between the scores of correct and wrong triples.
l_topic is defined over: E, the entity set; D_e, the set of words in the description of entity e; c_{e,w}, the number of times word w occurs in the description of entity e; s_e, the topic vector of the description of entity e; and θ, the topic distribution of word w.
The entire training process is trained with stochastic gradient descent.
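The margin term max(0, γ + E(h,r,t) - E(h′,r′,t′)) with a translation-style energy e = h + r - t can be sketched as follows (toy vectors, not trained embeddings; only the structural energy term is shown):

```python
import numpy as np

def energy(h, r, t, norm=1):
    """Translation-style energy: the L1 (or L2) norm of e = h + r - t."""
    return float(np.linalg.norm(h + r - t, ord=norm))

def margin_loss(pos, neg, gamma=1.0):
    """Sum of max(0, gamma + E(correct) - E(corrupted)) over paired triples."""
    return sum(max(0.0, gamma + energy(*p) - energy(*n))
               for p, n in zip(pos, neg))

h = np.array([0.0, 0.0])
r = np.array([1.0, 0.0])
t = np.array([1.0, 0.0])       # correct tail: h + r - t = 0
t_bad = np.array([5.0, 5.0])   # corrupted tail with large energy

loss = margin_loss([(h, r, t)], [(h, r, t_bad)], gamma=1.0)
# The corrupted triple is already farther than the margin, so the loss is zero.
```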
Step 10: Take all elements of E or R as the candidate set for the missing entity or relation, and learn the wrong-triple set T′ by negative sampling.
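Negative sampling, i.e. building wrong triples by randomly replacing an entity in a correct triple, can be sketched as follows (the entity list is toy data):

```python
import random

entities = ["Beijing", "Shanghai", "Obama", "United States"]
correct = ("China", "capital", "Beijing")

def corrupt(triple, entities, rng):
    """Replace the head or the tail with a random entity to build a wrong triple."""
    h, r, t = triple
    while True:
        e = rng.choice(entities)
        cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if cand != triple:   # keep sampling until the triple actually changed
            return cand

rng = random.Random(0)
T_wrong = [corrupt(correct, entities, rng) for _ in range(3)]
```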
Step 11: For each element of H, feed the correct triples obtained in step 8 and the wrong triples obtained in step 10 into the loss function and compute the corresponding scores.
Step 12: Train and tune parameters to optimize the objective function l so that the objective value reaches a minimum.
Step 13: Sort the candidate entity set by the scores computed in step 9 and output the selection list.
Repeat steps 9 to 11 until the output result is obtained.
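Scoring candidate entities and outputting a sorted selection list, as in steps 11 to 13, can be sketched as follows (toy vectors; only the structural energy is used, and lower energy means a better candidate):

```python
import numpy as np

def rank_candidates(h, r, candidates, norm=1):
    """Score each candidate tail vector with ||h + r - v|| and sort ascending."""
    scored = [(name, float(np.linalg.norm(h + r - v, ord=norm)))
              for name, v in candidates.items()]
    return sorted(scored, key=lambda x: x[1])

h = np.array([0.0, 0.0])
r = np.array([1.0, 0.0])
candidates = {"Beijing": np.array([1.0, 0.0]),
              "Shanghai": np.array([3.0, 2.0])}

ranking = rank_candidates(h, r, candidates)
# Beijing ranks first because h + r exactly matches its vector.
```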
Effects of this embodiment:
(1) For the data sparsity and high computational complexity of current knowledge representation learning in the field of knowledge graph completion, the present invention analyzes the features of representation learning models based on triple structural information and on entity description information, and proposes a topic keyword filter function that effectively solves the noise and redundancy of entity description text.
(2) Current representation models that fuse entity descriptions only model the words in the text and fail to reflect the semantic topics the vocabulary carries. In fact, the same word can convey different semantics in different semantic environments. The present invention therefore performs completion through the topic semantic space: the description text is mapped onto the topic semantic space, and topic vocabulary is evaluated there to extend the semantic meaning of words, which effectively relieves the modeling of complex relations and the data sparsity problem.
(3) The model is trained on the FreeBase and DBpedia data sets, and its training results on the two tasks of triple prediction and classification are examined to evaluate the algorithm. The experimental results show that the algorithm outperforms previous algorithms in top-10 accuracy and mean rank.
Specific embodiment 2:
Different from specific embodiment 1, in step 7 of the knowledge graph completion method based on topic keyword filtering of this embodiment, the process of computing attention scores for the word-vector matrices D_e and D_t, selecting topic words according to the attention scores, and assigning attention scores to D_e and D_t is specifically:
Step 7.1: Obtain the attention function score of the entity description, described by the attention scores a_i: attention(D) denotes the resulting attention function score of the entity description, and the attention score a_i of the i-th word in the entity description text is the maximum cosine similarity between the i-th word vector d_i and all row vectors of the inference matrix W(T).
Step 7.2: In the matrix W, for document doc_i, choose the m topics with the largest probabilities according to the topic probability distribution, with m ≤ K, where K is the number of topics.
Step 7.3: In the matrix H, for each of these m topics, choose the n words that best fit the topic according to the topic-word probability distribution as "topic words", giving m × n topic words in total.
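Selecting the m × n topic words from the document-topic matrix W and the topic-word matrix H (steps 7.2 and 7.3) can be sketched as follows (toy matrices and vocabulary; m = 2, n = 1 are assumed values):

```python
import numpy as np

W = np.array([[0.7, 0.2, 0.1]])              # document-topic probabilities (1 doc, K = 3)
H = np.array([[0.5, 0.3, 0.1, 0.1],          # topic-word distributions (K = 3 topics)
              [0.1, 0.1, 0.5, 0.3],
              [0.25, 0.25, 0.25, 0.25]])
vocab = ["basketball", "Shanghai", "film", "song"]
m, n = 2, 1

# Step 7.2: the m topics with the largest probability for this document.
top_topics = np.argsort(W[0])[::-1][:m]

# Step 7.3: for each chosen topic, the n words that best fit it.
topic_words = [vocab[int(np.argsort(H[k])[::-1][j])]
               for k in top_topics
               for j in range(n)]             # m * n topic words in total
```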
Step 7.4: Define a window of size 2A. If the i-th word of the entity description is a topic word with attention score a_i, then any of the A words before it and the A words after it whose attention score is less than a_i is assigned the score a_i. Topic words have an indicative function: the answer to the completion task often appears around topic words, and the topic words in an entity description reflect the gist of the text. Raising the importance of these "topic words" and the words around them, by assigning them higher attention function scores in this way, helps knowledge graph completion; here a_i denotes the attention score of the i-th word of the entity description, the i-th word being a topic word.
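The window assignment of step 7.4 can be sketched as follows (toy scores; A = 1 and a single topic word are assumed):

```python
def boost_window(scores, topic_idx, A):
    """Within A words before and after each topic word, raise any attention
    score smaller than the topic word's score up to that score."""
    out = list(scores)
    for i in topic_idx:
        lo, hi = max(0, i - A), min(len(scores), i + A + 1)
        for j in range(lo, hi):
            if out[j] < scores[i]:
                out[j] = scores[i]
    return out

scores = [0.1, 0.2, 0.9, 0.3, 0.1]
boosted = boost_window(scores, topic_idx=[2], A=1)  # word 2 is the topic word
# The immediate neighbours of the topic word inherit its score.
```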
Specific embodiment 3:
Different from specific embodiments 1 and 2, in step 12 of the knowledge graph completion method based on topic keyword filtering of this embodiment, the training methods for training and tuning parameters include stochastic gradient descent, the Adam training method, and so on.
Simulation experiment data:
In each table, the rows labeled Topic-ADRL_j are the experimental data of the method of the present invention. The experimental results show that the algorithm outperforms previous algorithms in top-10 accuracy and mean rank.
Table 1: FB15K entity prediction comparison results
Table 2: WN18 entity prediction comparison results
Table 3: FB15K relation prediction evaluation results
Table 4: FB15K evaluation results by mapped relation attribute
The present invention can also have various other embodiments. Without departing from the spirit and substance of the present invention, those skilled in the art can make various corresponding changes and modifications in accordance with the present invention, and all such corresponding changes and modifications shall fall within the protection scope of the appended claims of the present invention.
Claims (3)
1. A knowledge graph completion method based on topic keyword filtering, characterized in that the method comprises the following steps:
Step 1: Let the knowledge graph be G = (E, R, T), where E is the set of entities in the knowledge graph, R the set of relations in the knowledge graph, and T the set of triples to be completed.
Step 2: Let the set of incomplete triple elements in the knowledge graph G be the completion task set H. Elements of H take two forms, (h, r, ?) and (h, ?, t), where head entity h ∈ E, relation r ∈ R, and tail entity t ∈ E.
Step 3: Train h and r from the triple set T to be completed with a word-vector tool; a (h, r, ?) task yields h′ and r′, and a (h, ?, t) task yields h′ and t′.
Step 4: Process the entity descriptions of the entities in the triple set T with the word-vector tool to obtain the topic-computation word-vector matrices: the matrix D_e of the head entity description and the matrix D_t of the tail entity description.
Step 5: Process the descriptions of the head entity h and the tail entity t with an NMF model to obtain the topic vectors s_h and s_t of the head and tail entities.
Step 6: Use the topic vectors s_h and s_t obtained in step 5 to compute the topic semantic space s(s_h, s_t), where the vector s is the normal vector of the topic semantic space.
Step 7: Compute attention scores for the word-vector matrices D_e and D_t, select topic words according to the attention scores, and assign attention scores to D_e and D_t. The scores are applied as attention(D) = a ⊙ D, where ⊙ denotes row-wise multiplication (each row of the word-vector matrix D of the entity description is multiplied by its attention score) and a_i is the attention score of the i-th word in the entity description.
Step 8: Use a convolutional neural network to extract a feature vector from the attention score matrix attention(D) computed in step 7.
Step 9: Define the loss function E(h, r, t) and the objective function l, where:
Loss function: E(h, r, t) = E′_s + E′_d + E_s + E_d, with e_s = h + r - t, E_s its energy under the L_1 or L_2 norm (L_1 and L_2 denote norms, and L_1/L_2 means the L_1 or L_2 norm), and E_d the energy of e_d = h_d + r - t_d, where h_d is the feature vector of the head entity h's description and t_d is the feature vector of the tail entity t's description, both extracted by the convolutional neural network of step 8; s^T denotes the transpose of s.
Objective function: l = l_embed + μ·l_topic, with S′ = {(h′, r, t)} ∪ {(h, r′, t)} ∪ {(h, r, t′)}, where l_embed is the objective over word vectors and l_topic the objective over topics; μ is a hyperparameter determined from training results; S is the set of correct triples; S′ is the set of wrong triples obtained by negative sampling, formed by randomly replacing an entity or relation in a correct triple.
max(0, γ + E(h, r, t) - E(h′, r′, t′)) returns the larger of the two quantities; γ is a hyperparameter giving the margin between the scores of correct and wrong triples.
l_topic is defined over: E, the entity set; D_e, the set of words in the description of entity e; c_{e,w}, the number of times word w occurs in the description of entity e; s_e, the topic vector of the description of entity e; and θ, the topic distribution of word w. The whole training process is trained with stochastic gradient descent.
Step 10: Take all elements of E or R as the candidate set for the missing entity or relation, and learn the wrong-triple set T′ by negative sampling.
Step 11: For each element of H, feed the correct triples obtained in step 8 and the wrong triples obtained in step 10 into the loss function and compute the corresponding scores.
Step 12: Train and tune parameters to optimize the objective function l so that the objective value reaches a minimum.
Step 13: Sort the candidate entity set by the scores computed in step 9 and output the selection list.
Repeat steps 9 to 11 until the output result is obtained.
2. The knowledge graph completion method based on topic keyword filtering according to claim 1, characterized in that in step 7, the process of computing attention scores for the word-vector matrices D_e and D_t, selecting topic words according to the attention scores, and assigning attention scores to D_e and D_t is specifically:
Step 7.1: Obtain the attention function score of the entity description, where attention(D) denotes the attention function score of the entity description, and the attention score a_i of the i-th word in the entity description text is the maximum cosine similarity between the i-th word vector d_i and all row vectors of the inference matrix W(T).
Step 7.2: In the matrix W, for document doc_i, choose the m topics with the largest probabilities according to the topic probability distribution, with m ≤ K, where K is the number of topics.
Step 7.3: In the matrix H, for each of these m topics, choose the n words that best fit the topic according to the topic-word probability distribution as "topic words", giving m × n topic words in total.
Step 7.4: Define a window of size 2A; within the A words before and after a topic word, a word whose attention score is less than a_i has its attention score set to a_i, thereby raising the importance of topic words and the words around them, where a_i is the attention score of the i-th word of the entity description and the i-th word is a topic word.
3. The knowledge graph completion method based on topic keyword filtering according to claim 2, characterized in that in step 12, the training methods for training and tuning parameters include stochastic gradient descent and the Adam training method.
Priority Application (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910245584.1A | 2019-03-28 | 2019-03-28 | A knowledge graph completion method based on topic keyword filtering |
Publication (1)
Publication Number | Publication Date |
---|---|
CN109977234A (A) | 2019-07-05 |
Family ID: 67081347
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170083817A1 (en) * | 2015-09-23 | 2017-03-23 | Isentium, Llc | Topic detection in a social media sentiment extraction system |
WO2018006469A1 (en) * | 2016-07-07 | 2018-01-11 | 深圳狗尾草智能科技有限公司 | Knowledge graph-based human-robot interaction method and system |
US20180204111A1 (en) * | 2013-02-28 | 2018-07-19 | Z Advanced Computing, Inc. | System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform |
CN108694469A (en) * | 2018-06-08 | 2018-10-23 | 哈尔滨工程大学 | Relationship prediction method based on knowledge graph |
CN108874782A (en) * | 2018-06-29 | 2018-11-23 | 北京寻领科技有限公司 | Multi-turn dialogue management method based on hierarchical attention LSTM and knowledge graph |
US20190034780A1 (en) * | 2017-07-31 | 2019-01-31 | Microsoft Technology Licensing, Llc | Knowledge Graph For Conversational Semantic Search |
- 2019-03-28 CN CN201910245584.1A patent/CN109977234A/en active Pending
Non-Patent Citations (2)
Title |
---|
Han Xiao, Minlie Huang, Lian Meng, Xiaoyan Zhu: "SSP: Semantic Space Projection", https://arxiv.org/abs/1604.04835 * |
Shuohang Wang, Jing Jiang: "A Compare-Aggregate Model for Matching", https://arxiv.org/abs/1611.01747 * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110427524A (en) * | 2019-08-05 | 2019-11-08 | 北京百度网讯科技有限公司 | Knowledge graph completion method, apparatus, electronic device and storage medium |
CN110543574B (en) * | 2019-08-30 | 2022-05-17 | 北京百度网讯科技有限公司 | Knowledge graph construction method, device, equipment and medium |
CN110543574A (en) * | 2019-08-30 | 2019-12-06 | 北京百度网讯科技有限公司 | Knowledge graph construction method, device, equipment and medium |
CN110851620A (en) * | 2019-10-29 | 2020-02-28 | 天津大学 | Knowledge representation method based on combination of text embedding and structure embedding |
CN110851620B (en) * | 2019-10-29 | 2023-07-04 | 天津大学 | Knowledge representation method based on text embedding and structure embedding combination |
CN110929047A (en) * | 2019-12-11 | 2020-03-27 | 中国人民解放军国防科技大学 | Knowledge graph reasoning method and device attending to neighbor entities |
CN111291139B (en) * | 2020-03-17 | 2023-08-22 | 中国科学院自动化研究所 | Knowledge graph long-tail relation completion method based on attention mechanism |
CN111291139A (en) * | 2020-03-17 | 2020-06-16 | 中国科学院自动化研究所 | Attention mechanism-based knowledge graph long-tail relation completion method |
CN111488462B (en) * | 2020-04-02 | 2023-09-19 | 中国移动通信集团江苏有限公司 | Recommendation method, device, equipment and medium based on knowledge graph |
CN111488462A (en) * | 2020-04-02 | 2020-08-04 | 中国移动通信集团江苏有限公司 | Recommendation method, device, equipment and medium based on knowledge graph |
CN111462282A (en) * | 2020-04-02 | 2020-07-28 | 哈尔滨工程大学 | Scene graph generation method |
CN111462282B (en) * | 2020-04-02 | 2023-01-03 | 哈尔滨工程大学 | Scene graph generation method |
CN111814480A (en) * | 2020-07-21 | 2020-10-23 | 润联软件系统(深圳)有限公司 | Knowledge graph completion method and device, computer equipment and storage medium |
CN111814480B (en) * | 2020-07-21 | 2024-04-16 | 华润数字科技有限公司 | Knowledge graph completion method and device, computer equipment and storage medium |
CN112035672A (en) * | 2020-07-23 | 2020-12-04 | 深圳技术大学 | Knowledge graph completion method, device, equipment and storage medium |
CN112035672B (en) * | 2020-07-23 | 2023-05-09 | 深圳技术大学 | Knowledge graph completion method, device, equipment and storage medium |
CN111967263A (en) * | 2020-07-30 | 2020-11-20 | 北京明略软件系统有限公司 | Domain named entity denoising method and system based on entity topic relevance |
CN112132444B (en) * | 2020-09-18 | 2023-05-12 | 北京信息科技大学 | Identification method for cultural innovation enterprise knowledge gap in Internet+environment |
CN112132444A (en) * | 2020-09-18 | 2020-12-25 | 北京信息科技大学 | Method for identifying knowledge gap of cultural innovation enterprise in Internet + environment |
CN112560477A (en) * | 2020-12-09 | 2021-03-26 | 中科讯飞互联(北京)信息科技有限公司 | Text completion method, electronic device and storage device |
CN112560477B (en) * | 2020-12-09 | 2024-04-16 | 科大讯飞(北京)有限公司 | Text completion method, electronic equipment and storage device |
CN112667824A (en) * | 2021-01-17 | 2021-04-16 | 北京工业大学 | Knowledge graph completion method based on multi-semantic learning |
CN112667824B (en) * | 2021-01-17 | 2024-03-15 | 北京工业大学 | Knowledge graph completion method based on multi-semantic learning |
CN113360664B (en) * | 2021-05-31 | 2022-03-25 | 电子科技大学 | Knowledge graph complementing method |
CN113360664A (en) * | 2021-05-31 | 2021-09-07 | 电子科技大学 | Knowledge graph completion method |
CN113360670B (en) * | 2021-06-09 | 2022-06-17 | 山东大学 | Knowledge graph completion method and system based on fact context |
CN113360670A (en) * | 2021-06-09 | 2021-09-07 | 山东大学 | Knowledge graph completion method and system based on fact context |
CN113360675A (en) * | 2021-06-25 | 2021-09-07 | 中关村智慧城市产业技术创新战略联盟 | Knowledge graph specific relation completion method based on Internet open world |
CN113360675B (en) * | 2021-06-25 | 2024-02-13 | 中关村智慧城市产业技术创新战略联盟 | Knowledge graph specific relationship completion method based on Internet open world |
CN117743601A (en) * | 2024-02-05 | 2024-03-22 | 中南大学 | Natural resource knowledge graph completion method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109977234A (en) | A kind of knowledge mapping complementing method based on subject key words filtering | |
CN112214610B (en) | Entity relationship joint extraction method based on span and knowledge enhancement | |
CN108984745A (en) | Neural network text classification method fusing multiple knowledge graphs | |
CN107944559B (en) | Method and system for automatically identifying entity relationship | |
CN109992779B (en) | Emotion analysis method, device, equipment and storage medium based on CNN | |
CN105787557B (en) | Deep neural network architecture design method for intelligent computer recognition | |
CN109558487A (en) | Document classification method based on hierarchical multi-attention networks | |
CN110021439A (en) | Medical data classification method, device and computer equipment based on machine learning | |
CN110222349A (en) | Deep dynamic contextual word representation model, method and computer | |
CN110502753A (en) | Deep learning sentiment analysis model based on semantic enhancement and its analysis method | |
CN107818164A (en) | Intelligent question answering method and system | |
CN110134946B (en) | Machine reading understanding method for complex data | |
CN106547735A (en) | Construction and use of context-aware dynamic word and character vectors based on deep learning | |
CN107330011A (en) | Named entity recognition method and device based on multi-strategy fusion | |
CN109885670A (en) | Interactive attention encoding sentiment analysis method for topic text | |
CN106557462A (en) | Named entity recognition method and system | |
CN108549658A (en) | Deep learning video question answering method and system based on attention mechanism over syntactic parse trees | |
CN109885824A (en) | Hierarchical Chinese named entity recognition method, device and readable storage medium | |
CN111400470A (en) | Question processing method and device, computer equipment and storage medium | |
CN108427665A (en) | Automatic text generation method based on LSTM-type RNN models | |
CN112052684A (en) | Named entity identification method, device, equipment and storage medium for power metering | |
CN110085215A (en) | Language model data augmentation method based on generative adversarial networks | |
CN109783794A (en) | Text classification method and device | |
CN109977199A (en) | Reading comprehension method based on attention pooling mechanism | |
CN109886021A (en) | Malicious code detection method based on API global word vectors and hierarchical recurrent neural network | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190705 |