CN112069408A - Recommendation system and method for fusion relation extraction - Google Patents
- Publication number
- CN112069408A CN112069408A CN202010931994.4A CN202010931994A CN112069408A CN 112069408 A CN112069408 A CN 112069408A CN 202010931994 A CN202010931994 A CN 202010931994A CN 112069408 A CN112069408 A CN 112069408A
- Authority
- CN
- China
- Prior art keywords
- entity
- module
- vector
- text
- sentence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/186—Templates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/237—Lexical tools
- G06F40/242—Dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/284—Lexical analysis, e.g. tokenisation or collocates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to a recommendation system and method for fusion relation extraction, and belongs to the technical field of content recommendation. An article feature construction module in the system constructs a text feature matrix, a basic entity feature matrix and an enhanced entity feature matrix, and from these obtains feature vectors of the target article and of the interactive articles in the user's historical behaviors; a user interest construction module obtains a user interest vector; the multilayer perceptron module obtains the probability of the user clicking the target article. The method comprises: 1) the relation extraction submodule predicts the relations between entities; 2) a word embedding set, a basic entity embedding set and an enhanced entity embedding set are acquired through knowledge extraction; 3) a KCNN is adopted to construct article features; 4) user interests are constructed with an attention mechanism; 5) the feature vectors are spliced and the probability of the user clicking the target article is predicted. The system and method can effectively reduce the influence of entities on templates, and the accuracy is higher than that of the prior art.
Description
Technical Field
The invention relates to a recommendation system and method for extracting fusion relations, and belongs to the technical field of artificial intelligence, network big data and content recommendation.
Background
Content-based recommendation systems are widely applied in various fields and have broad development prospects. A recommendation system provides personalized service for users, shortens the distance between users and network platforms, and greatly improves user experience. In addition, the research concepts and processing methods of content-based recommendation systems have important reference value in fields such as advertisement placement and search engines.
For example, consider a news scenario. With the development of the World Wide Web (WWW), the vast majority of people read news through the Internet. Online news websites such as Google News and Bing News abroad, and Sina News and Tencent News in China, collect news from various sources and provide readers with an aggregated view. Because the number of news items is huge, providing personalized news lists for different users is very important to alleviate information overload. News recommendation scenarios mainly adopt content-based recommendation methods, and in recent years better recommendation results have been obtained by fusing knowledge.
In addition, the invention relates to relation extraction technology, which aims to automatically extract the relations between entities from unstructured text and provide deep text analysis for users. It replaces traditional manual processing and can greatly improve efficiency and accuracy. The task can also provide key semantic information for dialogue systems, recommendation systems and the like, and technical support for natural language processing tasks such as semantic labeling, machine translation and sentiment analysis, so it has great research significance.
Conventional content-based recommendation systems often employ an existing knowledge graph to supplement knowledge, so the supplemented knowledge lacks pertinence. In addition, traditional relation extraction models use WordNet dictionary information only at the word level; the dictionary information is therefore insufficiently utilized and deep relations between entities are missed.
Disclosure of Invention
The invention aims to solve the problem that the knowledge adopted by existing content-based recommendation systems lacks pertinence when knowledge is supplemented, to further improve the prediction accuracy of the recommendation system, and to provide a recommendation system and method for fusion relation extraction.
The invention is realized by the following technical scheme.
The recommendation system and method for extracting the fusion relationship comprise a recommendation system for extracting the fusion relationship and a recommendation method for extracting the fusion relationship;
the recommendation system for extracting the fusion relationship comprises a knowledge extraction module, an article feature construction module, a user interest construction module and a multilayer perceptron module;
the knowledge extraction module comprises a word embedding module, a basic entity module and an enhanced entity module;
the basic entity module comprises a basic entity link sub-module, a filtering map sub-module and a knowledge representation learning sub-module; the enhanced entity module comprises an enhanced entity link sub-module, a relation extraction sub-module and a knowledge representation learning sub-module;
the relation extraction submodule comprises a sentence feature extractor, a template feature extractor and a threshold fusion device;
the connection relations of the modules in the recommendation system for fusion relation extraction are as follows:
the knowledge extraction module is connected with the article characteristic construction module; the article characteristic construction module is connected with the user interest construction module; the article characteristic construction module, the user interest construction module and the multilayer perceptron module are connected;
in the knowledge extraction module, a word embedding module, a basic entity module and an enhanced entity module are in parallel relation;
a basic entity link submodule in the basic entity module is connected with a filtering map submodule, and the filtering map submodule is connected with a knowledge representation learning submodule; an enhanced entity link sub-module in the enhanced entity module is connected with a relationship extraction sub-module, and the relationship extraction sub-module is connected with a knowledge representation learning sub-module;
in the relation extraction submodule, a sentence feature extractor and a template feature extractor are respectively connected with a threshold fusion device;
the functions of the modules in the recommendation system for fusion relation extraction are as follows:
the knowledge extraction module has the function of extracting knowledge required by the system;
the word embedding module receives a large-scale corpus to obtain a word embedding set; the basic entity module receives the text description information of the interactive articles in the historical behaviors to obtain a basic entity embedding set; the enhanced entity module receives the text description information of the interactive articles in the historical behaviors to obtain an enhanced entity embedding set;
in the basic entity module, the basic entity link submodule receives the text description information of the interactive articles in the historical behaviors to obtain an entity set; the filtering map submodule receives an external knowledge graph and removes the nodes that are not in the entity set; the knowledge representation learning submodule receives the filtered knowledge graph to obtain the basic entity embedding set; in the enhanced entity module, the enhanced entity link submodule receives the text description information of the interactive articles in the historical behaviors to obtain an entity set; the relation extraction submodule receives the text description information and the entity set of the interactive articles in the historical behaviors to obtain a related knowledge graph; the knowledge representation learning submodule receives the related knowledge graph to obtain the enhanced entity embedding set;
in the relation extraction submodule, a sentence feature extractor receives a text to obtain sentence features; the template feature extractor receives the text to obtain template features; the threshold fusion device is used for receiving the sentence characteristics and the template characteristics to obtain text characteristics;
the article feature construction module is used for receiving the output of the knowledge extraction module, constructing a text feature matrix, a basic entity feature matrix and an enhanced entity feature matrix, and further obtaining a feature vector of a target article and a feature vector of an interactive article in user historical behaviors; the user interest construction module is used for receiving the output of the article characteristic construction module to obtain a user interest vector; the function of the multilayer perceptron module is to receive the output of the user interest building module and the feature vector of the target object in the output of the object feature building module to obtain the probability of the user clicking the target object;
the recommendation method for fusion relation extraction comprises the following steps:
step one, the relation extraction submodule predicts the relation between the entities and comprises the following substeps:
step 1.1, a sentence feature extractor acquires sentence features;
the sentence feature extractor is a relation extraction model, which is one of an end-to-end model, a lexical model and a syntactic model;
the end-to-end model does not depend on external knowledge at all, and uses only the sentence and the entity-pair information in the sentence;
the lexical model uses lexical information contained in the sentence, including named entity recognition, part-of-speech tagging and WordNet hypernyms;
the syntactic model uses syntactic information contained in the sentence, including the phrase structure tree, the dependency tree and the shortest dependency path;
the sentence characteristic extractor utilizes semantic structure information contained in the sentence to extract characteristics, the characteristics are highly related to entity information, and finally sentence characteristics are obtained;
step 1.2, the template feature extractor obtains template features through entity replacement, word embedding, multi-head self-attention mechanism, bidirectional LSTM and attention mechanism operation, and specifically comprises the following steps:
step 1.2A, a template feature extractor replaces an entity in a text with an entity hypernym path through entity replacement operation to obtain a sentence with the entity replaced;
the method for obtaining the sentence with the replaced entity comprises the following steps:
step 1.2A1 setting the longest path length s;
step 1.2A2 initializing the path list to null;
step 1.2A3 obtaining a first hypernym of an entity in a WordNet dictionary;
step 1.2A4, determining whether the hypernym is empty, and deciding whether to jump to step 1.2A5, specifically:
if the hypernym is empty, go to step 1.2A5;
if the hypernym is not empty, adding the hypernym to the list, assigning the hypernym to the entity, and returning to step 1.2A3;
step 1.2A5, calculating the path list length;
step 1.2A6, comparing the length of the path list with the longest path length s, completing the sentence with the entity replaced, specifically:
if the length of the path list is greater than or equal to the longest path length s, intercepting the first s items of the list and splicing them to obtain the sentence with the entity replaced;
otherwise, if the length of the path list is smaller than the longest path length s, splicing all list items to obtain the sentence with the entity replaced;
thus, through steps 1.2A1 to 1.2A6, the entity in the sentence is replaced by its hypernym path, giving the sentence with the entity replaced;
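As an illustrative sketch of steps 1.2A1 to 1.2A6, the following uses a toy `first_hypernym` lookup in place of the WordNet dictionary; the function name and the sample hypernym chain are assumptions, not part of the patent:

```python
def hypernym_path(entity, first_hypernym, s):
    """Replace an entity by its hypernym path, truncated to at most s items."""
    path = []                       # step 1.2A2: path list starts empty
    hyper = first_hypernym(entity)  # step 1.2A3: first hypernym of the entity
    while hyper is not None:        # step 1.2A4: stop once the hypernym is empty
        path.append(hyper)
        hyper = first_hypernym(hyper)
    if len(path) >= s:              # step 1.2A6: keep only the first s items
        path = path[:s]
    return " ".join(path)           # splice the items into the replacement text

# Toy hypernym chain standing in for WordNet (illustrative only)
chain = {"dog": "canine", "canine": "carnivore", "carnivore": "animal"}
replaced = hypernym_path("dog", chain.get, 2)
```

With a real dictionary, `first_hypernym` would return the first WordNet hypernym of a synset rather than a dictionary lookup.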
step 1.2B, converting the sentence with the replaced entity obtained in step 1.2A into a position vector;
wherein the elements in the position vector are defined as follows: if a word is an entity hypernym path, its template location flag is "0"; if the word is other words in the sentence, its template position flag is "1";
step 1.2C, carrying out word embedding operation on the sentences obtained in the step 1.2A and the position vectors obtained in the step 1.2B to obtain a word matrix and a template position mark matrix;
specifically, a word table W ∈ R^{N×m_e} and a template position table T ∈ R^{2×m_t} are randomly initialized; traversing each word x_i of the text, the corresponding row of W gives the word vector e_i ∈ R^{m_e} of x_i, and the row of T corresponding to its position flag gives the template position marker vector t_i ∈ R^{m_t}; splicing the word vectors of all words in the text gives the word matrix of the sentence with the entity replaced, x_e = [e_1, e_2, …, e_n]; splicing the template position marker vectors of all words gives the template position mark matrix x_t = [t_1, t_2, …, t_n]; R denotes the real number field, and the superscripts N×m_e and 2×m_t denote dimensions;
wherein i runs from 1 to n, and n is the length of the sentence with the entity replaced; N is the total number of words in the word table; m_e is the dimension of the word vectors; m_t is the dimension of the template position marker vectors;
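The table lookups of step 1.2C can be sketched with randomly initialized tables; the dimensions, vocabulary and sentence below are toy assumptions:

```python
import random

random.seed(0)
m_e, m_t = 4, 2                     # assumed embedding dimensions
vocab = ["president", "visited", "<hypernym-path>"]
# Randomly initialised word table W and template-position table T
W = {w: [random.uniform(-1, 1) for _ in range(m_e)] for w in vocab}
T = {"0": [random.uniform(-1, 1) for _ in range(m_t)],   # flag "0": entity hypernym path
     "1": [random.uniform(-1, 1) for _ in range(m_t)]}   # flag "1": other words

sentence = ["<hypernym-path>", "visited", "president"]
flags = ["0", "1", "1"]             # step 1.2B position flags
x_e = [W[w] for w in sentence]      # word matrix [e_1, ..., e_n]
x_t = [T[f] for f in flags]         # template position mark matrix [t_1, ..., t_n]
```

In a trained model both tables would be learned parameters rather than fixed random values.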
step 1.2D, firstly, performing multi-head self-attention mechanism operation on the word matrix obtained in the step 1.2C, and then splicing the template position mark matrix to obtain the low-order characteristics of the text;
step 1.2E, performing bidirectional LSTM operation on the low-order features of the text acquired in the step 1.2D to obtain high-order features of the text;
step 1.2F, obtaining template characteristics by subjecting the high-order characteristics of the text obtained in step 1.2E to an attention mechanism, and specifically calculating the characteristics as shown in formulas (1) to (5):
M = tanh(H)    (1)
α = softmax(Mw)    (2)
r = H^T α    (3)
R′ = tanh(r)    (4)
R = dropout(R′)    (5)
wherein H ∈ R^{n×m_h} is the high-order feature matrix of the text; H^T is the transpose of H; m_h is the dimension of the high-order features; w ∈ R^{m_h×1} is a parameter that needs to be trained; α ∈ R^{n×1} is the computed weight column vector; r ∈ R^{m_h} is the weighted feature vector; the output of the attention mechanism is the template feature R ∈ R^{m_h}; softmax(·) and tanh(·) are activation functions; dropout(·) denotes the operation of randomly replacing some dimensions of a vector with 0;
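A minimal pure-Python rendering of formulas (1) to (5), with dropout treated as a no-op as at inference time; the example matrix H and weight w are illustrative:

```python
import math

def attention_pool(H, w):
    """Attention-pool high-order features H (n x m_h) with trained weight w (m_h)."""
    M = [[math.tanh(v) for v in row] for row in H]            # (1) M = tanh(H)
    scores = [sum(m * wj for m, wj in zip(row, w)) for row in M]
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    alpha = [e / total for e in exps]                         # (2) alpha = softmax(Mw)
    m_h = len(H[0])
    r = [sum(alpha[i] * H[i][j] for i in range(len(H)))       # (3) r = H^T alpha
         for j in range(m_h)]
    return [math.tanh(v) for v in r]                          # (4); dropout (5) is a no-op here

H = [[0.5, -0.2], [1.0, 0.3], [-0.4, 0.8]]
w = [0.1, 0.9]
R = attention_pool(H, w)
```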
step 1.3, a threshold fusion device fuses sentence characteristics and template characteristics to obtain text characteristics; the method specifically comprises the following steps:
step 1.3A, mapping the sentence feature and the template feature into the same vector space so that their dimensions agree, i.e. C_g = tanh(W_mapc C + b_mapc), R_g = tanh(W_mapr R + b_mapr);
wherein C ∈ R^{m_c} is the sentence feature; R ∈ R^{m_h} is the template feature; W_mapc ∈ R^{m_g×m_c} and W_mapr ∈ R^{m_g×m_h} are mapping matrices; b_mapc and b_mapr ∈ R^{m_g} are bias vectors; m_c is the dimension of the sentence feature; m_g is the vector dimension after mapping;
step 1.3B, computing the weights g_C and g_R for the mapped features, normalized by the exponential operation; wherein exp(·) denotes the exponential operation;
step 1.3C, weighting C_g and R_g with g_C and g_R respectively to obtain the text feature V;
wherein V = g_C ⊙ C_g + g_R ⊙ R_g, and ⊙ denotes the element-by-element multiplication operation;
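The element-wise gated fusion V = g_C ⊙ C_g + g_R ⊙ R_g can be sketched as follows; the feature and weight values are illustrative, and how the gates are produced is left to the threshold fusion device:

```python
def gated_fusion(C_g, R_g, g_C, g_R):
    """V = g_C (*) C_g + g_R (*) R_g, where (*) is element-by-element multiplication."""
    return [gc * c + gr * r for gc, gr, c, r in zip(g_C, g_R, C_g, R_g)]

# Toy mapped sentence/template features and gate weights
V = gated_fusion([1.0, 2.0], [3.0, 4.0], [0.25, 0.5], [0.75, 0.5])
```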
step 1.4, predicting the relation from the text feature obtained in step 1.3 through a fully connected network and the softmax(·) function, specifically:
first, taking the text feature V as input, the probability distribution over relation classes p(y|S) = softmax(W_S V + b_S) is obtained through the fully connected network and softmax(·); then the relation class corresponding to the maximum of the distribution, ŷ = argmax_y p(y|S), is taken as the prediction result;
wherein S denotes the sentence; W_S ∈ R^{m×m_g} is the mapping matrix between text features and relations; b_S ∈ R^m is the bias vector; m is the number of relation classes; argmax_y(·) denotes taking the y corresponding to the maximum value;
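Step 1.4 can be sketched as a softmax over the fully connected layer's output followed by an argmax; the dimensions and parameter values below are toy assumptions:

```python
import math

def predict_relation(V, W_S, b_S):
    """p(y|S) = softmax(W_S V + b_S); prediction = argmax over relation classes."""
    logits = [sum(w * v for w, v in zip(row, V)) + b
              for row, b in zip(W_S, b_S)]                # fully connected layer
    exps = [math.exp(z - max(logits)) for z in logits]    # numerically stable softmax
    total = sum(exps)
    p = [e / total for e in exps]
    return p, max(range(len(p)), key=p.__getitem__)       # argmax_y p(y|S)

# Toy 2-dim text feature and m = 3 relation classes
p, y_hat = predict_relation([1.0, -0.5],
                            [[0.2, 0.1], [1.5, -0.3], [0.0, 0.4]],
                            [0.0, 0.1, 0.0])
```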
step two, acquiring a word embedding set, a basic entity embedding set and an enhanced entity embedding set through knowledge extraction;
step 2.1, training a word embedding set from a large-scale corpus by using a word2vec word embedding method;
step 2.2, acquiring a basic entity embedding set;
firstly, the text is matched and disambiguated against a knowledge base to obtain the entity set contained in the text; then, because the original knowledge graph is large, a subgraph is extracted from it by removing the nodes that do not appear in the entity set, giving the basic knowledge graph; finally, the knowledge representation learning method TransD is adopted to map the entities and relations in the basic knowledge graph into a low-dimensional vector space, giving the basic entity embedding set;
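The subgraph-filtering part of step 2.2 can be sketched as below; the TransD representation learning itself is omitted, and the triples are illustrative:

```python
def filter_graph(triples, entity_set):
    """Keep only (head, relation, tail) triples whose head and tail both
    appear in the entity set extracted from the text."""
    return [(h, r, t) for (h, r, t) in triples
            if h in entity_set and t in entity_set]

# Toy knowledge-graph triples (illustrative, not from the patent)
triples = [("Paris", "capital_of", "France"),
           ("France", "member_of", "EU"),
           ("Tokyo", "capital_of", "Japan")]
basic_kg = filter_graph(triples, {"Paris", "France", "EU"})
```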
step 2.3, obtaining an enhanced entity embedding set;
firstly, the text is matched and disambiguated against a knowledge base to obtain the entity set contained in the text; then the corresponding entities are marked in the description text, and the relation extraction submodule is adopted for relation identification; after entity linking, one sentence may contain a plurality of entities, so relations are predicted over all entity combinations, and the enhanced knowledge graph is constructed; finally, the knowledge representation learning method TransD is adopted to map the entities and relations in the enhanced knowledge graph into a low-dimensional vector space, giving the enhanced entity embedding set;
step three, constructing article features by adopting a knowledge-aware convolutional neural network (KCNN);
step 3.1, constructing a text feature matrix on the basis of the word embedding set obtained in the step two; the method specifically comprises the following steps:
firstly, searching a vector corresponding to each word in an article description text in a word embedding set, and if not, randomly initializing the vector; then all vectors are spliced to obtain a text feature matrix;
step 3.2, constructing a basic entity feature matrix on the basis of the basic entity embedding set obtained in step two; specifically:
firstly, for each word in the article description text, the corresponding vector is looked up in the basic entity embedding set, and replaced by a zero vector if not found; then each vector is mapped into the same vector space as the text features through the mapping function f_m(X) = ReLU(W_m X + b_m); finally, all vectors are spliced to obtain the basic entity feature matrix;
wherein W_m ∈ R^{d_w×d_e} is a transformation matrix; b_m ∈ R^{d_w} is a bias vector; d_w is the dimension of the word embeddings; d_e is the dimension of the basic entity embeddings;
step 3.3, constructing an enhanced entity feature matrix on the basis of the enhanced entity embedding set obtained in step two; specifically:
firstly, for each word in the article description text, the corresponding vector is looked up in the enhanced entity embedding set, and replaced by a zero vector if not found; then each vector is mapped into the same vector space as the text features through the mapping function f_m(X) = ReLU(W_m X + b_m); finally, all vectors are spliced to obtain the enhanced entity feature matrix;
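The mapping function f_m(X) = ReLU(W_m X + b_m) used to project entity vectors into the text-feature space can be sketched with toy dimensions (d_e = 2, d_w = 3); all parameter values are illustrative:

```python
def f_m(x, W_m, b_m):
    """f_m(X) = ReLU(W_m X + b_m): map a d_e-dim entity vector into the d_w-dim space."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W_m, b_m)]

# A looked-up entity vector (a zero vector would be used if the lookup fails)
mapped = f_m([1.0, -1.0],
             [[0.5, 0.5], [1.0, 0.0], [0.0, 1.0]],  # W_m, shape d_w x d_e
             [0.0, 0.0, 0.5])                       # b_m, length d_w
```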
step 3.4, stacking the text feature matrix, the basic entity feature matrix and the enhanced entity feature matrix as the three-channel input of the KCNN model;
step 3.5, using a plurality of convolution kernels to construct the article feature vector, obtaining the feature vector of the article;
step four, constructing user interests by using an attention mechanism; the method comprises the following specific steps:
first, an attention network is adopted to calculate the influence degrees; the inputs of the attention network are the feature vector q_v of the target item and the feature vectors q_i of the interactive items in the user's historical behavior; after splicing, weight values are output through a fully connected network and then normalized to obtain the influence degrees s_i; finally, the interest feature vector u of user u is constructed;
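The attention-based interest construction of step four can be sketched as below; a dot-product score stands in for the fully connected network, which is a simplifying assumption:

```python
import math

def user_interest(q_v, history):
    """Weight each historical item q_i by a normalised attention score against the
    target item q_v, then sum to form the user interest vector u."""
    scores = [sum(a * b for a, b in zip(q_v, q_i)) for q_i in history]
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    s = [e / total for e in exps]                  # normalised influence degrees s_i
    dim = len(q_v)
    return [sum(s[i] * history[i][j] for i in range(len(history)))
            for j in range(dim)]

# Toy target item and two historical interactive items
u = user_interest([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```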
Step five, splicing the interest feature vector u of user u with the feature vector q_v of the target item v, and predicting the probability that user u clicks the target item v through a multilayer perceptron: p(x) = MLP([u; q_v]);
wherein MLP(·) denotes the multilayer perceptron, using the ReLU nonlinear activation function; x denotes the input of the model; [·;·] denotes the vector splicing operation;
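Step five can be sketched as a one-hidden-layer MLP with ReLU; a sigmoid output layer is assumed so that the result is a probability, and all parameter values are illustrative:

```python
import math

def mlp_click_prob(u, q_v, W1, b1, w2, b2):
    """p(x) = MLP([u; q_v]): splice, one ReLU hidden layer, sigmoid output."""
    x = u + q_v                                          # vector splicing [u; q_v]
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]                      # ReLU hidden layer
    z = sum(w * hi for w, hi in zip(w2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))                    # click probability in (0, 1)

p = mlp_click_prob([0.5, 0.1], [0.2, 0.9],
                   [[0.3, -0.1, 0.2, 0.4], [0.1, 0.2, -0.3, 0.1]],
                   [0.0, 0.1], [0.8, -0.5], 0.0)
```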
advantageous effects
Compared with the prior art, the recommendation system and method for extracting the fusion relationship have the following beneficial effects:
1. the entity hypernym path is used to replace the entity, making fuller use of the WordNet dictionary information;
2. in the process of obtaining the template features, the influence of the entity on the template is effectively reduced by using the hypernym path of the entity and the template position marks;
3. the enhanced entity characteristics are obtained through relationship extraction, and knowledge is more pertinent;
4. compared with the prior art, the accuracy of the recommendation system is improved.
Drawings
FIG. 1 is a schematic diagram of the module components of a recommendation system for fusion relationship extraction;
FIG. 2 is a schematic diagram of predicting relationships between entities using a relationship extraction sub-module;
FIG. 3 is a flow diagram of extracting knowledge;
FIG. 4 is a diagram of a build article feature structure;
FIG. 5 is a diagram of building a user interest structure;
FIG. 6 is a schematic diagram of a recommendation system for fused relationship extraction.
Detailed Description
The following describes a recommendation system and method for fusion relation extraction according to the present invention in detail with reference to the accompanying drawings and embodiments.
Example 1
In this embodiment, with reference to fig. 1 to fig. 6, the recommendation system and method for extracting a fusion relationship according to the present invention are described in a news scene.
The recommendation system for fusion relation extraction is used for recommending news. News websites contain a huge amount of news, and it is important to provide users with the news they are interested in. The news website collects the news a user clicks and browses to form the user's historical interactive behaviors, analyzes the news headline texts, supplements knowledge, extracts the user's interests, and predicts which news the user will be interested in.
FIG. 1 is a schematic diagram of the module components of the recommendation system for fusion relation extraction. As can be seen from FIG. 1, the knowledge extraction module is connected to the article feature construction module; the article feature construction module is connected to the user interest construction module; the article feature construction module, the user interest construction module and the multilayer perceptron module are connected; in the knowledge extraction module, the word embedding module, the basic entity module and the enhanced entity module are in parallel; in the basic entity module, the basic entity link submodule is connected to the filtering map submodule, and the filtering map submodule is connected to the knowledge representation learning submodule; in the enhanced entity module, the enhanced entity link submodule is connected to the relation extraction submodule, and the relation extraction submodule is connected to the knowledge representation learning submodule; in the relation extraction submodule, the sentence feature extractor and the template feature extractor are each connected to the threshold fusion device;
FIG. 2 is a schematic diagram of predicting relations between entities with the relation extraction submodule. As can be seen from FIG. 2, the relation extraction submodule consists of the sentence feature extractor, the template feature extractor and the threshold fusion device; the sentence feature extractor extracts sentence features; the template feature extractor obtains template features through entity replacement, word embedding, a multi-head self-attention mechanism, a bidirectional LSTM and an attention mechanism; the threshold fusion device fuses the sentence features and template features to obtain text features; finally, the relation is predicted from the text features;
FIG. 3 is a flow diagram of knowledge extraction. A word embedding set S_w is trained from a large-scale corpus with the word2vec word embedding method. To acquire the basic entity embedding set, the text is first matched and disambiguated against a knowledge base to obtain the entity set contained in the text; a subgraph is then extracted from the existing knowledge graph by removing the nodes that are not in the entity set, giving the basic knowledge graph; finally, a knowledge representation learning method maps the entities and relations in the basic knowledge graph into a low-dimensional vector space, giving the basic entity embedding set S_b. To acquire the enhanced entity embedding set, the corresponding entities are marked in the description text according to the entity set; the relation extraction submodule is then adopted for relation identification, and relations are predicted over all entity combinations to construct the enhanced knowledge graph; finally, a knowledge representation learning method maps the entities and relations in the enhanced knowledge graph into a low-dimensional vector space, giving the enhanced entity embedding set S_e;
FIG. 4 is a diagram of building the article features. As can be seen from FIG. 4, the text feature matrix w, the basic entity feature matrix e_base and the enhanced entity feature matrix e_enhance are stacked to form the three-channel input I = stack(w, e_base, e_enhance) of the KCNN model. A convolution kernel is first applied to the input I to obtain a feature vector; a max pooling operation over this feature vector then yields the corresponding feature value; finally, the feature values obtained with multiple convolution kernels are spliced to obtain the feature vector of the article;
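The KCNN construction described above (stack three channels, convolve, max-pool, concatenate) can be sketched in numpy as follows; the window sizes, filter count and random inputs are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_w = 10, 16                        # words per description, embedding dim (illustrative)
w      = rng.normal(size=(n, d_w))     # text feature matrix
e_base = rng.normal(size=(n, d_w))     # basic entity features (already mapped to d_w)
e_enh  = rng.normal(size=(n, d_w))     # enhanced entity features

I = np.stack([w, e_base, e_enh])       # three-channel input, shape (3, n, d_w)

def kcnn_features(I, window_sizes=(2, 3, 4), n_filters=5):
    feats = []
    for l in window_sizes:
        for _ in range(n_filters):
            K = rng.normal(size=(3, l, I.shape[2]))  # one 3-channel convolution kernel
            # Valid convolution over the word axis, then max-over-time pooling.
            vals = [np.sum(K * I[:, i:i + l, :]) for i in range(I.shape[1] - l + 1)]
            feats.append(max(vals))
    return np.array(feats)             # spliced feature values = item feature vector

q = kcnn_features(I)
print(q.shape)  # (15,)
```

Each kernel contributes one pooled feature value, so the item vector length equals the number of kernels (here 3 window sizes × 5 filters).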
wherein stack(·) denotes the matrix stacking operation; l denotes the convolution window size; d_w is the dimension of the word embeddings;
FIG. 5 is a block diagram of building the user interest. As can be seen from FIG. 5, the present invention uses an attention network to compute influence degrees: the inputs are the feature vector q_v of the target item and the feature vectors q_i of the items the user interacted with in the historical behavior; after splicing, a fully connected network outputs a raw weight value, which is normalized to obtain the influence degree s_i; finally, the interest feature vector u of user u is constructed;
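One plausible reading of this attention network, assuming the interest vector is the influence-weighted sum of the historical item vectors, is the following numpy sketch; the dimensions and random weights are illustrative, not from the patent:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(2)
d = 8
q_v  = rng.normal(size=d)               # target item feature vector
hist = rng.normal(size=(5, d))          # feature vectors of 5 interacted items

# One-layer "attention network": score each history item against the target
# by splicing the two vectors and passing them through a linear layer.
W = rng.normal(size=2 * d)
raw = np.array([W @ np.concatenate([q_v, q_i]) for q_i in hist])
s = softmax(raw)                        # normalized influence degrees s_i
u = (s[:, None] * hist).sum(axis=0)     # user interest vector

print(u.shape)  # (8,)
```

In practice the linear layer would be trained jointly with the rest of the model; the sketch only shows the forward pass.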
FIG. 6 is a schematic diagram of the recommendation system for fusion relation extraction. As can be seen from FIG. 6, the recommendation system for fusion relation extraction comprises four modules, namely knowledge extraction, article feature construction, user interest construction and a multilayer perceptron, and finally obtains the probability p(x) that user u clicks the target item v;
the knowledge extraction module obtains a word embedding set, a basic entity embedding set and an enhanced entity embedding set; the article feature construction module is used for receiving the output of the knowledge extraction module to obtain a feature vector of a target article and a feature vector of an interactive article in user historical behaviors; the user interest construction module is used for receiving the output of the article characteristic construction module to obtain a user interest vector; the function of the multilayer perceptron module is to receive the output of the user interest building module and the feature vector of the target object in the output of the object feature building module to obtain the probability of the user clicking the target object;
example 2
In step 1.2A, the entity in the text is replaced with the entity's hypernym path to obtain an entity-replaced sentence. Taking the sentence "The bottom photo is from the New York public library" as an example, the process of obtaining the entity-replaced sentence is described below:
Step 1.2A1 sets the longest path length s to 8;
Step 1.2A2 initializes the path list L to empty and initializes the entity to "photo";
Step 1.2A3 obtains the first hypernym of the entity in the WordNet dictionary, here "creation";
Step 1.2A4 judges that "creation" is not empty, adds the hypernym "creation" to the list L, i.e., L = {"creation"}, and assigns "creation" to the entity;
Step 1.2A5 loops over steps 1.2A3 to 1.2A4 until the entity is "entity", whose first hypernym in the WordNet dictionary is empty; at this point L = {"creation", "artifact", "whole", "object", "physical_entity", "entity"};
Step 1.2A6 computes the length of the path list L, which is 6;
Step 1.2A7 compares the length of the path list with the longest path length s: since the path list length 6 is smaller than the longest path length 8, all list items are spliced with "." and the entity "photo" is replaced by the entity hypernym path "entity.physical_entity.object.whole.artifact.creation".
Similarly, through steps 1.2A1 to 1.2A7, the entity "library" is replaced with the entity hypernym path "entity.physical_entity.object.whole.artifact.structure";
Thus, each entity in the sentence is replaced by its entity hypernym path, yielding the entity-replaced sentence "The bottom entity.physical_entity.object.whole.artifact.creation is from the New York public entity.physical_entity.object.whole.artifact.structure";
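The replacement procedure of steps 1.2A1 to 1.2A7 can be sketched as follows; the hand-coded hypernym table stands in for the WordNet dictionary, and the top-down splicing order is inferred from the example path above:

```python
# Toy first-hypernym lookup standing in for WordNet.
HYPERNYM = {
    "photo": "creation", "creation": "artifact", "artifact": "whole",
    "whole": "object", "object": "physical_entity", "physical_entity": "entity",
}

def hypernym_path(entity, s=8):
    L = []                              # step 1.2A2: path list starts empty
    while entity in HYPERNYM:           # steps 1.2A3-1.2A5: follow first hypernyms
        entity = HYPERNYM[entity]
        L.append(entity)
    if len(L) >= s:                     # step 1.2A7: truncate when too long
        L = L[:s]
    return ".".join(reversed(L))        # splice with "." from the top concept down

print(hypernym_path("photo"))
# entity.physical_entity.object.whole.artifact.creation
```

With s = 8 the full six-item path is kept, matching the worked example; a smaller s would keep only the first s hypernyms found.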
example 3
In the specific implementation of step 1.2B, the entity-replaced sentence is converted into a position vector. Taking the sentence "The bottom entity.physical_entity.object.whole.artifact.creation is from the New York public entity.physical_entity.object.whole.artifact.structure" as an example, the process of obtaining the position vector is described below:
Each word in the sentence is traversed: "entity.physical_entity.object.whole.artifact.creation" and "entity.physical_entity.object.whole.artifact.structure" are entity hypernym paths, so their template position flag is "0"; the template position flag of every other word is "1"; this yields the position vector "1101111110";
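The flagging rule of this example can be sketched as:

```python
def position_vector(tokens, paths):
    # "0" for an entity hypernym path, "1" for any other word (step 1.2B).
    return "".join("0" if tok in paths else "1" for tok in tokens)

paths = {
    "entity.physical_entity.object.whole.artifact.creation",
    "entity.physical_entity.object.whole.artifact.structure",
}
sent = ("The bottom entity.physical_entity.object.whole.artifact.creation "
        "is from The New York public "
        "entity.physical_entity.object.whole.artifact.structure").split()

print(position_vector(sent, paths))  # 1101111110
```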
example 4
In the specific implementation of step 1.2D, the input of the multi-head self-attention mechanism is the word matrix x_e ∈ R^(n×m_e) (used as Q, K and V), and its output is computed as shown in formulas (6) to (8):

MultiHead(Q, K, V) = [head_1 : head_2 : ... : head_r] W^M    (6)

head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)    (7)

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V    (8)

wherein r is the number of mutually different subspaces, i.e., the number of heads; head_i is the result of the i-th head; [:] denotes the splicing operation; W^M is the linear transformation matrix of the multi-head self-attention mechanism; W_i^Q, W_i^K and W_i^V are the corresponding linear transformation matrices of the i-th head, initialized randomly; Attention(·) denotes scaled dot-product attention; the superscript T denotes transposition; softmax(·) denotes the normalization function; d_k is the subspace dimension; n is the sentence length;
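Formulas (6) to (8) can be sketched in numpy as follows; the head count r, subspace dimension d_k and random projection matrices are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention, formula (8): softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def multi_head(X, r=4, d_k=8):
    rng = np.random.default_rng(3)
    n, m_e = X.shape
    heads = []
    for _ in range(r):                       # formula (7): one projection triple per head
        WQ, WK, WV = (rng.normal(size=(m_e, d_k)) for _ in range(3))
        heads.append(attention(X @ WQ, X @ WK, X @ WV))
    WM = rng.normal(size=(r * d_k, m_e))     # formula (6): splice heads, project back
    return np.concatenate(heads, axis=1) @ WM

X = np.random.default_rng(4).normal(size=(10, 32))  # n words, m_e-dim word vectors
print(multi_head(X).shape)  # (10, 32)
```

The output keeps the input shape (n, m_e), so it can be spliced with the template position marker matrix as step 1.2D requires.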
example 5
The experimental comparison between the recommendation system for fusion relation extraction and various baseline methods is shown in the following table; the proposed method achieves the best result on the AUC metric.
Table: comparison results of the recommendation system for fusion relation extraction
While the foregoing describes the preferred embodiment of the present invention, the invention is not limited to the embodiment and the drawings disclosed herein. Equivalents and modifications that do not depart from the spirit of the disclosure are considered to fall within the scope of the invention.
Claims (7)
1. A recommendation system for fusion relationship extraction, characterized by: the system comprises a knowledge extraction module, an article feature construction module, a user interest construction module and a multilayer perceptron module;
the knowledge extraction module comprises a word embedding module, a basic entity module and an enhanced entity module;
the basic entity module comprises a basic entity link sub-module, a filtering map sub-module and a knowledge representation learning sub-module; the enhanced entity module comprises an enhanced entity link sub-module, a relation extraction sub-module and a knowledge representation learning sub-module;
the relation extraction submodule comprises a sentence feature extractor, a template feature extractor and a threshold fusion device;
the connection relation of each module in the recommendation system extracted by the fusion relation is as follows:
the knowledge extraction module is connected with the article feature construction module; the article feature construction module is connected with the user interest construction module; and the article feature construction module and the user interest construction module are both connected with the multilayer perceptron module;
in the knowledge extraction module, a word embedding module, a basic entity module and an enhanced entity module are in parallel relation;
a basic entity link submodule in the basic entity module is connected with a filtering map submodule, and the filtering map submodule is connected with a knowledge representation learning submodule; an enhanced entity link sub-module in the enhanced entity module is connected with a relationship extraction sub-module, and the relationship extraction sub-module is connected with a knowledge representation learning sub-module;
in the relation extraction submodule, a sentence characteristic extractor and a template characteristic extractor are respectively connected with a threshold fusion device.
2. A recommendation method for fusion relation extraction is characterized in that: the method comprises the following steps:
step one, the relation extraction submodule predicts the relation between the entities and comprises the following substeps:
step 1.1, the sentence feature extractor obtains sentence features, specifically:
the sentence feature extractor extracts features that are highly related to the entity information by exploiting the semantic and structural information contained in the sentence, finally obtaining the sentence features;
step 1.2, the template feature extractor obtains template features through entity replacement, word embedding, multi-head self-attention mechanism, bidirectional LSTM and attention mechanism operation, and specifically comprises the following steps:
step 1.2A, a template feature extractor replaces an entity in a text with an entity hypernym path through entity replacement operation to obtain a sentence with the entity replaced;
the method for obtaining the sentence with the replaced entity comprises the following steps:
step 1.2A1 setting the longest path length s;
step 1.2A2 initializing the path list to null;
step 1.2A3 obtaining a first hypernym of an entity in a WordNet dictionary;
step 1.2a4, determining whether the hypernym is empty, and deciding whether to jump to step 1.2a5, specifically:
if the hypernym is empty, then go to step 1.2A 5;
if the hypernym is not empty, adding the hypernym into the list, assigning the hypernym to the entity, and returning to the step 1.2A3 for execution;
step 1.2A5 calculating path list length;
step 1.2a6, comparing the length of the path list with the length s of the longest path, and completing the method for obtaining the sentence with the entity replaced, specifically:
if the length of the path list is greater than or equal to the longest path length s, the first s items of the list are taken and spliced with "." to obtain the entity-replaced sentence;
otherwise, if the length of the path list is smaller than the longest path length s, all list items are spliced with "." to obtain the entity-replaced sentence;
so far, through the steps from step 1.2a1 to step 1.2a6, the entity in the sentence is replaced by the entity hypernym path, and a sentence with the entity replaced is obtained;
step 1.2B, converting the sentence with the replaced entity obtained in step 1.2A into a position vector;
wherein the elements in the position vector are defined as follows: if a word is an entity hypernym path, its template location flag is "0"; if the word is other words in the sentence, its template position flag is "1";
step 1.2C, carrying out word embedding operation on the sentences obtained in the step 1.2A and the position vectors obtained in the step 1.2B to obtain a word matrix and a template position mark matrix;
specifically, a word vocabulary W ∈ R^(N×m_e) and a template position vocabulary T ∈ R^(2×m_t) are randomly initialized; each word x_i of the text is traversed, and the corresponding rows of W and T are looked up to obtain the word vector e_i of x_i and its template position marker vector t_i; the word vectors of all words in the text are spliced to obtain the word matrix x_e = [e_1, e_2, ..., e_n] of the entity-replaced sentence; the template position marker vectors of all words in the text are spliced to obtain the template position marker matrix x_t = [t_1, t_2, ..., t_n] of the entity-replaced sentence; R denotes the real number field, and N×m_e and 2×m_t are the dimensions of W and T respectively;
wherein i runs from 1 to n, n is the length of the entity-replaced sentence, and N is the total number of words in the vocabulary; e_i ∈ R^(m_e) is the word vector of x_i, and m_e is the dimension of the word vector; t_i ∈ R^(m_t) is the template position marker vector of x_i, and m_t is the dimension of the template position marker vector;
step 1.2D, firstly, performing multi-head self-attention mechanism operation on the word matrix obtained in the step 1.2C, and then splicing the template position mark matrix to obtain the low-order characteristics of the text;
step 1.2E, performing bidirectional LSTM operation on the low-order features of the text acquired in the step 1.2D to obtain high-order features of the text;
step 1.2F, obtaining the template features by applying an attention mechanism to the high-order features of the text obtained in step 1.2E; the calculation is shown in formulas (1) to (5):

M = tanh(H)    (1)

α = softmax(Mw)    (2)

r = H^T α    (3)

R′ = tanh(r)    (4)

R = dropout(R′)    (5)

wherein H ∈ R^(n×m_h) is the high-order feature of the text; H^T is the transpose of H; m_h is the dimension of the high-order features; w ∈ R^(m_h×1) is a parameter to be trained; α ∈ R^(n×1) is the computed weight column vector; r ∈ R^(m_h×1) is the weighted feature vector; the output of the attention mechanism is the template feature R; softmax(·) and tanh(·) are activation functions; dropout(·) denotes the operation of randomly setting some dimensions of a vector to 0;
step 1.3, a threshold fusion device fuses sentence characteristics and template characteristics to obtain text characteristics; the method specifically comprises the following steps:
step 1.3A, mapping the sentence features and the template features into the same vector space so that they have the same dimensionality, i.e., C_g = tanh(W_mapc C + b_mapc) and R_g = tanh(W_mapr R + b_mapr);
wherein C ∈ R^(m_c) is the sentence feature and R ∈ R^(m_h) is the template feature; W_mapc ∈ R^(m_g×m_c) and W_mapr ∈ R^(m_g×m_h) are mapping matrices; b_mapc, b_mapr ∈ R^(m_g) are bias vectors; m_c is the dimension of the sentence feature; m_g is the vector dimension after mapping;
step 1.3B, computing the gate weights g_C and g_R for the mapped features C_g and R_g through an exp-based normalization;
wherein exp(·) denotes the exponential operation;
step 1.3C, weighting C_g and R_g by g_C and g_R respectively to obtain the text feature V;
wherein V = g_C ⊙ C_g + g_R ⊙ R_g, and ⊙ denotes the element-wise multiplication operation;
step 1.4, predicting the relation from the text feature V obtained in step 1.3 through a fully connected network and the softmax(·) function, specifically:
first, with the text feature V as input, the probability distribution over the relation classes is obtained through the fully connected network and the softmax function, i.e., p(y|S) = softmax(W_S V + b_S); then the relation class corresponding to the maximum of the probability distribution is taken as the prediction result, i.e., ŷ = argmax_y p(y|S);
wherein S denotes the sentence; W_S ∈ R^(m×m_g) is the mapping matrix between text features and relations; b_S ∈ R^m is the bias vector; m is the number of relation classes; argmax_y denotes taking the y corresponding to the maximum result;
step two, acquiring a word embedding set, a basic entity embedding set and an enhanced entity embedding set through knowledge extraction;
step 2.1, training a word embedding set from a large-scale corpus by using a word2vec word embedding method;
step 2.2, obtaining a basic entity embedding set, specifically:
first, the text is matched and disambiguated against a knowledge base to obtain the set of entities contained in the text; because the original knowledge graph is large, a subgraph is then extracted from the original knowledge graph and nodes that do not appear in the entity set are removed, yielding the basic knowledge graph; finally, the knowledge representation learning method TransD maps the entities and relations of the basic knowledge graph into a low-dimensional vector space, yielding the basic entity embedding set;
step 2.3, obtaining an enhanced entity embedding set, specifically:
first, the text is matched and disambiguated against a knowledge base to obtain the set of entities contained in the text; the corresponding entities are then marked in the description text, and the relation extraction sub-module performs relation recognition; since one sentence may contain multiple entities after entity linking, all entities are combined pairwise and predicted, and the enhanced knowledge graph is constructed; finally, the knowledge representation learning method TransD maps the entities and relations of the enhanced knowledge graph into a low-dimensional vector space, yielding the enhanced entity embedding set;
step three, constructing the article features with a knowledge-aware convolutional neural network (KCNN);
step 3.1, constructing a text feature matrix on the basis of the word embedding set obtained in the step two; the method specifically comprises the following steps:
first, for each word in the article description text, the corresponding vector is looked up in the word embedding set; if it is not found, the vector is randomly initialized; then all vectors are spliced to obtain the text feature matrix;
step 3.2, constructing the basic entity feature matrix on the basis of the basic entity embedding set obtained in step two; specifically:
first, for each word in the article description text, the corresponding vector is looked up in the basic entity embedding set; if it is not found, a zero vector is used instead; each vector is then mapped into the same vector space as the text features through the mapping function f_m(X) = ReLU(W_m X + b_m); finally, all vectors are spliced to obtain the basic entity feature matrix;
wherein W_m ∈ R^(d_w×d_e) is the transformation matrix and b_m ∈ R^(d_w) is the bias vector; d_w is the dimension of the word embeddings; d_e is the dimension of the basic entity embeddings;
step 3.3, constructing the enhanced entity feature matrix on the basis of the enhanced entity embedding set obtained in step two; specifically:
first, for each word in the article description text, the corresponding vector is looked up in the enhanced entity embedding set; if it is not found, a zero vector is used instead; each vector is then mapped into the same vector space as the text features through the mapping function f_m(X) = ReLU(W_m X + b_m); finally, all vectors are spliced to obtain the enhanced entity feature matrix;
step 3.4, stacking the text feature matrix, the basic entity feature matrix and the enhanced entity feature matrix as the three-channel input of the KCNN model;
step 3.5, applying multiple convolution kernels to obtain the feature vector of the article;
step four, constructing the user interest with an attention mechanism; the specific steps are as follows:
first, an attention network is used to compute influence degrees; its inputs are the feature vector q_v of the target item and the feature vectors q_i of the items the user interacted with in the historical behavior; after splicing, a fully connected network outputs a raw weight value, which is then normalized to obtain the influence degree s_i; finally, the interest feature vector u of user u is constructed from the influence degrees and the feature vectors of the interacted items;
step five, splicing the interest feature vector u of user u with the feature vector q_v of the target item v, and predicting the probability that user u clicks the target item v through a multilayer perceptron: p(x) = MLP([u : q_v]);
wherein MLP(·) denotes a multilayer perceptron using the ReLU nonlinear activation function; x denotes the input of the model; [:] denotes the vector splicing operation.
3. The system for recommending fused relation extraction as claimed in claim 1, wherein: the knowledge extraction module has the function of extracting knowledge required by the system;
the word embedding module receives a large-scale corpus to obtain the word embedding set; the basic entity module receives the text description information of the interactive articles in the historical behavior to obtain the basic entity embedding set; the enhanced entity module receives the text description information of the interactive articles in the historical behavior to obtain the enhanced entity embedding set;
in the basic entity module, the basic entity link sub-module receives the text description information of the interactive articles in the historical behavior to obtain an entity set; the filtering map sub-module receives an external knowledge graph and removes nodes that do not appear in the entity set; the knowledge representation learning sub-module receives the filtered knowledge graph to obtain the basic entity embedding set; in the enhanced entity module, the enhanced entity link sub-module receives the text description information of the interactive articles in the historical behavior to obtain an entity set; the relation extraction sub-module receives the text description information of the interactive articles in the historical behavior together with the entity set to obtain the related knowledge graph; the knowledge representation learning sub-module receives the related knowledge graph to obtain the enhanced entity embedding set;
in the relation extraction submodule, a sentence feature extractor receives a text to obtain sentence features; the template feature extractor receives the text to obtain template features; the threshold fusion device is used for receiving the sentence characteristics and the template characteristics to obtain the text characteristics.
4. The system for recommending fused relation extraction as claimed in claim 1, wherein: and the article characteristic construction module receives the output of the knowledge extraction module, constructs a text characteristic matrix, a basic entity characteristic matrix and an enhanced entity characteristic matrix, and further obtains a characteristic vector of the target article and a characteristic vector of the interactive article in the user historical behavior.
5. The system for recommending fused relation extraction as claimed in claim 1, wherein: the user interest building module is used for receiving the output of the article feature building module to obtain a user interest vector.
6. The system for recommending fused relation extraction as claimed in claim 1, wherein: the function of the multilayer perceptron module is to receive the output of the user interest building module and the feature vector of the target object in the output of the object feature building module, and obtain the probability of the user clicking the target object.
7. The recommendation method for fusion relationship extraction as claimed in claim 2, wherein:
in step 1.1, the sentence feature extractor is a relation extraction model, namely one of an end-to-end model, a lexicon-based model and a syntax-based model;
the end-to-end model does not depend on external knowledge at all, using only the sentence and the entity-pair information in the sentence;
the lexicon-based model uses the lexical information contained in the sentence, including named entity recognition, part-of-speech tagging and WordNet hypernyms;
the syntax-based model uses the syntactic information contained in the sentence, including the phrase structure tree, the dependency tree and the shortest dependency path.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2020105511776 | 2020-06-15 | | |
| CN202010551177 | 2020-06-15 | | |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112069408A | 2020-12-11 |
| CN112069408B | 2023-06-09 |
Family ID: 73664114

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010931994.4A (granted as CN112069408B, active) | Recommendation system and method for fusion relation extraction | 2020-06-15 | 2020-09-08 |