CN111858955A - Knowledge graph representation learning enhancement method and device based on encrypted federated learning - Google Patents


Publication number
CN111858955A
Authority
CN
China
Prior art keywords
learning
word vector
knowledge graph
entity
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010629643.8A
Other languages
Chinese (zh)
Other versions
CN111858955B (en)
Inventor
刘明生
马伯元
张诣
温洪念
许爱雪
滕琦
杜林峰
赵尉钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shijiazhuang Institute of Railway Technology
Original Assignee
Shijiazhuang Institute of Railway Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shijiazhuang Institute of Railway Technology filed Critical Shijiazhuang Institute of Railway Technology
Priority to CN202010629643.8A priority Critical patent/CN111858955B/en
Publication of CN111858955A publication Critical patent/CN111858955A/en
Application granted granted Critical
Publication of CN111858955B publication Critical patent/CN111858955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

The invention belongs to the technical field of knowledge graph representation learning, and provides a knowledge graph representation learning enhancement method based on encrypted federated learning. The invention also provides an asynchronous training device, a generative adversarial learning device and a federated learning device for implementing the method. The technical scheme provided by the invention realizes federated learning across multiple knowledge graphs among mutually untrusted data providers under homomorphic encryption, and enhances the representation learning capability of each data provider's own knowledge graph.

Description

Knowledge graph representation learning enhancement method and device based on encrypted federated learning
Technical Field
The embodiment of the invention relates to an artificial intelligence data processing technology, in particular to a knowledge graph representation learning enhancement method and device based on encrypted federated learning.
Background
Federated learning is a distributed machine learning technique that uses information held in separate, non-shared databases for cross-database training of a global machine learning model. Federated learning enables multiple databases to build a model jointly — that is, to train a machine learning model together and improve its effectiveness — while guaranteeing data privacy, security and legal compliance.
A knowledge graph is a knowledge base represented as a semantic network. Natural entities and the relations among them are extracted to obtain a machine learning semantic model stored in the form of a multi-relational graph. In a knowledge graph, entities and relations are represented by the semantic information carried in word vectors. Based on the knowledge graph's expression of entities and relations in the objective world, a computer system can better organize, manage, learn from and understand big data on the Internet.
In the field of natural language processing (NLP), a word vector is a vector used to represent a word, giving natural language a mathematical form that computers can process. More specifically, in this patent the term "word vector" refers to the knowledge word vector of an entity node in the knowledge graph.
Homomorphic encryption is a technique whereby, after certain operations are performed on the ciphertext obtained by encrypting data, decrypting the result yields the corresponding result of operating on the underlying data. Homomorphic encryption achieves mutual confidentiality between a data provider and the party executing the computation: the data provider never exposes plaintext data to the executor. Homomorphic encryption thus enables data exchange and computation between mutually untrusted parties, and is the foundation of trustless cloud computing and distributed computing.
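As a concrete illustration of this property, the following is a minimal Python sketch of the additively homomorphic Paillier cryptosystem. The scheme choice and the toy parameters are our own illustration — the patent does not fix a particular homomorphic algorithm, and real deployments use moduli of 2048 bits or more:

```python
import math
import random

p, q = 251, 263                      # toy primes - far too small for real security
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lambda = lcm(p-1, q-1)
mu = pow(lam, -1, n)                 # valid because the generator g = n + 1

def encrypt(m: int) -> int:
    """Paillier encryption: c = (1+n)^m * r^n mod n^2 for a random unit r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, with L(u) = (u-1)//n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so an untrusted executor can aggregate encrypted values it never sees.
a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))                # -> 42
```

It is exactly this ciphertext-side arithmetic that lets the word-vector fusion described later operate without exposing plaintext vectors.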
In the prior art, when a data provider performs knowledge graph representation learning on its own data set, it is not trusted by the other data providers in the network, and therefore cannot use the other providers' knowledge graph representation learning results to enhance the effect of its own local learning.
Disclosure of Invention
The invention provides a knowledge graph representation learning enhancement method and device based on encrypted federated learning, which carry out federated learning across multiple knowledge graphs among mutually untrusted data providers under homomorphic encryption, and enhance the representation learning capability of each data provider's own knowledge graph.
An embodiment of the first aspect of the present invention provides a knowledge graph representation learning enhancement method based on encrypted federated learning, including:
at a first data processing end, performing representation learning on a first knowledge graph to obtain a first word vector of a first entity in that knowledge graph;
at a second data processing end, performing representation learning on a second knowledge graph to obtain a second word vector of a second entity, aligned with the first entity, in that knowledge graph;
at the second data processing end, receiving the homomorphically encrypted first word vector, and performing federated learning in the form of a generative adversarial network using the first word vector and the second word vector, to obtain a fused third word vector of the second entity;
and at the second data processing end, replacing the second word vector of the second entity in the second knowledge graph with the third word vector, and then continuing representation learning on the second knowledge graph, so as to obtain enhanced fourth word vectors of all entities in that knowledge graph.
In order to improve the effect of federated learning and extend the information exchange between data processing ends to the one-hop nodes of the aligned entities, an improved embodiment of the knowledge graph representation learning enhancement method comprises the following steps:
at a first data processing end, performing representation learning on the first knowledge graph to obtain a first word vector of a first entity in that knowledge graph and first word vectors of the one-hop nodes of the first entity;
at a second data processing end, performing representation learning on a second knowledge graph to obtain a second word vector of a second entity aligned with the first entity, and second word vectors of the one-hop nodes of that second entity, in that knowledge graph;
at the second data processing end, receiving the homomorphically encrypted first word vectors, and performing federated learning in the form of a generative adversarial network using the first word vectors and the second word vectors, to obtain fused third word vectors of the second entity and its one-hop nodes;
and at the second data processing end, replacing the second word vectors of the second entity and its one-hop nodes in the second knowledge graph with the third word vectors, and then continuing representation learning on the second knowledge graph, so as to obtain enhanced fourth word vectors of all entities in that knowledge graph.
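Collecting the one-hop nodes of an aligned entity from a knowledge graph's triple list is a simple neighborhood query. A minimal sketch (the entity names and the helper `one_hop` are invented for illustration):

```python
def one_hop(entity, triples):
    """Return the one-hop neighbour entities of `entity` in a list of
    (head, relation, tail) triples, in first-seen order, without duplicates."""
    seen, out = set(), []
    for h, r, t in triples:
        if h == entity and t not in seen:
            seen.add(t); out.append(t)
        elif t == entity and h not in seen:
            seen.add(h); out.append(h)
    return out

triples = [("China", "capital", "Beijing"),
           ("Beijing", "has_airport", "PEK"),
           ("France", "capital", "Paris")]
print(one_hop("Beijing", triples))   # -> ['China', 'PEK']
```

In the improved embodiment, the word vectors of these neighbour entities are shipped (encrypted) alongside the aligned entity's own vector.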
In a preferred embodiment of the knowledge graph representation learning enhancement method, the representation learning performs embedding by the TransE method.
In a preferred embodiment of the knowledge graph representation learning enhancement method, the step of performing federated learning in the form of a generative adversarial network includes:
denoting the set of first word vectors received by the second data provider as X = {x1, …, xn}, and denoting the set of second word vectors in the second knowledge graph corresponding to each first word vector in X as Y = {y1, …, yn};
learning a discriminator in the generative adversarial network, according to the generator W in the generative adversarial network, to distinguish elements randomly sampled from WX = {Wx1, …, Wxn} and from Y = {y1, …, yn};
learning the generator W in the generative adversarial network to map the elements of X as accurately as possible onto the word vectors of the corresponding nodes in Y, so that the discriminator finds it difficult to judge whether an element belongs to WX or to Y;
training the generative adversarial network, and,
taking the elements of WX obtained after training as the third word vectors,
or,
taking the element-wise average of each element of WX obtained after training and its corresponding element of Y as the third word vector,
or,
taking the element-wise sum of each element of WX obtained after training and its corresponding element of Y as the third word vector.
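The three fusion options above can be sketched in a few lines of NumPy. The trained generator W is stubbed with a random matrix here, since learning it is the subject of the adversarial procedure, and the sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4                          # number of aligned entities, embedding dim
X = rng.normal(size=(n, d))          # first word vectors (remote aligned entities)
Y = rng.normal(size=(n, d))          # second word vectors (local aligned entities)
W = rng.normal(size=(d, d))          # generator; assumed already trained

WX = X @ W.T                         # mapped remote vectors {W x_1, ..., W x_n}

third_mapped = WX                    # option 1: use the mapped vectors directly
third_avg = (WX + Y) / 2             # option 2: element-wise average with Y
third_sum = WX + Y                   # option 3: element-wise sum with Y

assert third_mapped.shape == third_avg.shape == third_sum.shape == (n, d)
```

Whichever option is chosen, the result replaces the second word vectors of the aligned entities before the next round of representation learning.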
In a further improvement of the above technical solutions, the second data processing end shares the generative adversarial network with the first data processing end.
An embodiment of the second aspect of the present invention provides an asynchronous training device, deployed at a data processing end, for implementing the knowledge graph representation learning enhancement method based on encrypted federated learning, where the asynchronous training device includes:
a reading module, for reading a local original knowledge base and preprocessing it into a local knowledge graph containing word vectors;
a representation learning module, for receiving a first request and, in response, starting one round of representation learning on the local knowledge graph so as to update the word vectors of all entities in the local knowledge graph;
a communication module, for communicating with the federated learning server and with other data processing ends so as to receive shared word vector information from the knowledge graphs of the other data processing ends, the shared word vector information being homomorphically encrypted;
a federated learning module, for feeding the shared word vector information and the locally corresponding word vectors into a generative adversarial network for learning and obtaining fused word vectors, the fused word vectors being used to replace the word vectors of all or some of the entities of the local knowledge graph.
In an embodiment of the asynchronous training device, the communication module sends the shared word vector information of the local knowledge graph to the other data processing ends; the shared word vector information is homomorphically encrypted.
One embodiment of the asynchronous training device includes an operation monitoring module, for supervising the asynchronous training device in executing the knowledge graph representation learning enhancement method of any one of claims 1 to 5; and/or for arbitrating the operational status of the asynchronous training device; and/or for adjusting the learning effectiveness of the federated learning module.
An embodiment of the third aspect of the present invention provides a generative adversarial learning apparatus, deployed at a data processing end or at a federated learning server, the apparatus including:
a memory, for storing computer-executable code and a generative adversarial network;
a communication interface, for communicating with the federated learning module of the asynchronous training device through the communication module of the asynchronous training device provided by the second aspect, so as to receive the set of first word vectors and the set of second word vectors from the federated learning module, and to send a set of third word vectors to the federated learning module;
a processor, for reading and executing the computer-executable code so as to configure and train the generative adversarial network; the instructions of the computer-executable code, when executed by the processor, cause the processor to perform:
denoting the set of first word vectors as X = {x1, …, xn}, and denoting the set of second word vectors corresponding to each first word vector in X as Y = {y1, …, yn};
learning a discriminator in the generative adversarial network, according to the generator W in the generative adversarial network, to distinguish elements randomly sampled from WX = {Wx1, …, Wxn} and from Y = {y1, …, yn};
learning the generator W in the generative adversarial network to map the elements of X as accurately as possible onto the word vectors of the corresponding nodes in Y, so that the discriminator finds it difficult to judge whether an element belongs to WX or to Y;
training the generative adversarial network, and,
outputting the elements of WX obtained after training as the third word vectors,
or,
outputting the element-wise average of each element of WX obtained after training and its corresponding element of Y as the third word vector,
or,
outputting the element-wise sum of each element of WX obtained after training and its corresponding element of Y as the third word vector.
An embodiment of the fourth aspect of the present invention provides a federated learning device, which includes the asynchronous training device of the second aspect and the generative adversarial learning device of the third aspect. The asynchronous training devices are deployed in a distributed manner on different data processing ends in a network, achieving distributed knowledge graph representation learning enhancement.
In the technology provided by the above aspects of the invention, the federated learning is based on generative adversarial network (GAN) technology: the word vectors of aligned entities in different knowledge graphs are mapped into the same representation space, semantic information about an entity contained in the other knowledge graphs — e.g. its word embedding — is introduced, and the expressive capability of the aligned entities' word vectors is improved. The federated learning strategy includes: acquiring the aligned-entity information and original word vectors of the different knowledge graphs; using the aligned-entity information to extract the entities' word vectors from the different knowledge graphs as input to the generative adversarial network; training to obtain fused word vectors; replacing the aligned entities' word vectors in the original representation learning result with the fused word vectors; and performing the next round of representation learning training with the fused word vectors as initial values. On top of this strategy, the one-hop nodes of the aligned entities in the other knowledge graphs are introduced as further input to the generative adversarial network, further enriching the semantics of the fused word vectors. In encrypted federated learning, a homomorphic encryption algorithm is used during transmission of the aligned entities' word vectors: the ciphertexts of the word vectors from the different knowledge graphs are used as training input to the generative adversarial network, and the resulting fused word vectors are decrypted. Because of the homomorphism of the encryption algorithm, no semantic information is lost in fusing the word vectors in ciphertext and then decrypting.
The asynchronous training device is a framework by which multiple knowledge graphs carry out federated learning: in the invention, after the word vector model of one knowledge graph has been enhanced, a fusion request and aligned-node information are sent to the other knowledge graphs to guide them in their own federated learning training. This has the advantages that no specific information of the participating knowledge graphs is revealed, and that joint federated learning across multiple knowledge graph nodes is achieved, improving the overall knowledge representation capability.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow diagram of an embodiment of a knowledge graph representation learning enhancement method based on encrypted federated learning in accordance with the present invention;
FIG. 2 is a schematic structural diagram of an asynchronous training device in an embodiment of a knowledge graph representation learning enhancement device based on encrypted federated learning according to the present invention.
Detailed Description
First, in the prior art related to the present invention, a data owner may be a company, an organization or another entity with computing requirements, or may be an edge computing device, a data processing end or another device that performs computing tasks. Each data owner has its own knowledge base, which may be a structured or unstructured data set. The knowledge bases of the data owners are correlated, and representation learning of a knowledge base is needed so that the data owner can perform classification, analysis and other subsequent machine learning. Because the data owners distrust one another, they cannot share their knowledge bases, and thus cannot directly obtain more semantic information to optimize their representation learning results; each can only perform representation learning on its knowledge base within its own trusted local scope, yielding the original knowledge graph of that knowledge base. A knowledge graph comprises at least entities, relations and facts, where an entity can be an instance, a concept or a literal. Because the knowledge bases are correlated, their original knowledge graphs necessarily contain identical or equivalent entities; when an entity e1 of one knowledge graph and an entity e2 of another knowledge graph are given an identical-or-equivalent relation, this invention says that e1 and e2 are each other's aligned entities in the knowledge graphs. The identical-or-equivalent relation includes ontology matching or entity alignment in knowledge fusion methods, and provides the data alignment basis for federated learning. In the present invention, the aligned entity nodes of the respective knowledge graphs' structures can be considered to map into the same representation space.
With the knowledge graph representation learning enhancement method based on encrypted federated learning, in some embodiments a data owner can locally obtain a better representation embedding effect, and in other embodiments a representation embedding effect based on the union of the data owners' knowledge bases can be obtained.
It should be noted that representation learning is a method that uses machine learning to obtain a vectorized expression of each entity or relation, so that useful information can easily be extracted when building classifiers or other predictors. In machine learning, representation learning encompasses feature learning techniques: raw data is converted into a form that machine learning can exploit, which avoids the complexity of manually extracting features and allows the learner to use the features while also mastering how they are extracted.
The technical idea of the invention is: first, perform representation learning training on the knowledge graph locally to obtain word vectors of the aligned entities; then, in a federated learning manner, receive the aligned nodes' word vectors sent from outside and update the local aligned nodes' word vectors through the federated-learning GAN; and finally, continue the next round of representation learning training on the knowledge graph locally, cyclically obtaining an enhanced representation learning effect. To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without inventive effort fall within the scope of the present invention.
The first embodiment of the first aspect of the invention provides a knowledge graph representation learning enhancement method based on encrypted federated learning. In this embodiment, data owner F1 has a knowledge base D1, and a first data processing end for knowledge graph representation learning on D1 runs in F1's local secure network; data owner F2 has a knowledge base D2, and a second data processing end for knowledge graph representation learning on D2 runs in F2's local secure network. D1 and D2 are correlated — for example, both contain the geographic concept "Beijing" and the vehicle "airplane". Data owners F1 and F2 cannot disclose their knowledge bases to each other, but the first and second data processing ends have a communication connection across the gateway. The present embodiment improves the second data processing end's representation learning of D2 through the following steps. As shown in the flow chart of FIG. 1, the method of this embodiment includes steps 101 to 106.
Step 101, acquiring the knowledge graph of an original data set, and performing knowledge representation learning on that knowledge graph independently.
The original data set is the knowledge base of each data owner; each relevant knowledge base is considered essentially static and not updated during the period in which the method is carried out, so the number of entities of the knowledge graph obtained in this step does not change in the subsequent process. In this embodiment, knowledge information is extracted from the original data set by information extraction techniques, the corresponding original knowledge graph is then obtained through semantic parsing, and each entity in the original knowledge graph is converted into a dense vector — the word vector describing that entity. In some other embodiments, the word vector corresponding to an entity may also be obtained through other word embedding or word2vec techniques. Representation learning is performed on the original knowledge graph to obtain the set of word vectors of each aligned entity of the original knowledge graph.
Specifically, in this embodiment, at the second data processing end, the second knowledge graph extracted from D2 undergoes representation learning and all word vectors in the knowledge graph are updated; that is, the set of the knowledge graph's word vectors after multi-relational embedding by representation learning is obtained, including all second word vectors of the second entities in the second knowledge graph.
One round of representation learning of a knowledge graph in the present invention can be implemented by methods such as TransE, TransH, TransR and TransD. As an example, the specific flow of knowledge representation learning on a knowledge graph implemented with TransE in this embodiment is as follows:
Step 201, reading all entities of the knowledge graph and the relation information among them, the relation information comprising the relations between different entities — for example, "capital" in "China - capital - Beijing" is relation information, and a triple is formed as (entity 1, relation, entity 2). Each entity and each relation is represented by a dense vector: the dense vector representing an entity is the entity's word vector, and the dense vector representing a relation is the relation's word vector. Based on these distributed vectors of entities and relations, all entity word vectors and relation word vectors of the knowledge graph are expressed as a number of triple instances of the form (h, l, t), where the relation word vector l is regarded as a translation from entity word vector h to entity word vector t; the set of all triple instances of the knowledge graph is S.
Step 202, h, l and t are continuously adjusted during knowledge representation learning so that h + l equals t as nearly as possible, i.e. h + l ≈ t.
Let the loss function of knowledge representation learning be:

L = Σ_{(h,l,t)∈S} Σ_{(h',l,t')∈S'} [γ + d(h + l, t) − d(h' + l, t')]+

where h and t are word vectors of entities in the knowledge graph, l is the word vector of a relation in the knowledge graph, S is the set of all triples of the knowledge graph to be trained, S' is the set of negative-sample triples of the knowledge graph to be trained, d(·, ·) is the dissimilarity (distance) between two vectors, [·]+ denotes the positive part (hinge), and γ is a preset margin hyperparameter.
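The loss can be written directly as a vectorized function. This is a NumPy sketch assuming d(·, ·) is the L2 distance, which is one common choice — the patent leaves the dissimilarity open:

```python
import numpy as np

def transe_loss(h, l, t, h_neg, t_neg, gamma=1.0):
    """Margin loss sum over pairs ((h,l,t), (h',l,t')) of
    [gamma + d(h+l, t) - d(h'+l, t')]_+ , with d the L2 distance.
    All inputs have shape (batch, dim)."""
    d_pos = np.linalg.norm(h + l - t, axis=-1)          # d(h + l, t)
    d_neg = np.linalg.norm(h_neg + l - t_neg, axis=-1)  # d(h' + l, t')
    return np.maximum(0.0, gamma + d_pos - d_neg).sum()

# A triple that already satisfies h + l = t, paired with a distant corruption,
# contributes nothing once the margin is cleared.
h = np.array([[1.0, 0.0]]); l = np.array([[0.0, 1.0]]); t = np.array([[1.0, 1.0]])
t_far = np.array([[9.0, 9.0]])
print(transe_loss(h, l, t, h, t_far, gamma=1.0))        # -> 0.0
```

Minimizing this loss pushes true triples below the margin while pushing corrupted triples away, which is exactly the adjustment described in step 202.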
Specifically, following the TransE method, the algorithm flow of one round of knowledge representation learning in this embodiment is shown as Algorithm 1:

[Algorithm 1: the TransE training procedure — rendered as an image in the original document]
where k is the dimension of the generated embedding vectors, E is the set of all entities in the knowledge graph to be trained, and L is the set of all relations in the knowledge graph to be trained.
In Algorithm 1, lines 1-3 represent that, at initialization, a word vector is randomly assigned to each element of every triple (h, l, t) of the input untrained knowledge graph, and the norms of the word vectors are normalized to 1.
In Algorithm 1, lines 4-12 represent the training of the knowledge graph's word vectors, as follows: first, a minibatch S_batch containing b triples is sampled from S as the sample set for the current iteration; then T_batch is generated from S_batch by negative sampling. Each element of T_batch is a pair of triples ((h, l, t), (h', l, t')) consisting of a triple from S_batch and its randomly generated negative sample (h', l, t'). Here, (h', l, t') is a negative sample of (h, l, t) if and only if (h, l, t) belongs to S_batch and (h', l, t') does not belong to S_batch.
Then, for each pair of triples in T_batch, the embeddings are updated by gradient descent.
Through the TransE algorithm, knowledge representation learning can be performed independently on a knowledge graph. After the updating of the knowledge graph's word vectors in the above process is completed, the set of the knowledge graph's word vectors after multi-relational embedding by TransE is obtained, and the knowledge graph is considered to have improved its knowledge representation capability.
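The whole of Algorithm 1 can be condensed into a toy end-to-end run. The graph, dimension and learning rate below are invented for illustration; the inner update is the standard gradient step on the hinge loss with tail corruption as the negative sampling:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8                                        # embedding dimension
n_entities = 4                               # e.g. China, Beijing, France, Paris
S = [(0, 0, 1)]                              # one triple: (China, capital, Beijing)

# lines 1-3 of Algorithm 1: random initialization, norms normalized to 1
E = rng.normal(size=(n_entities, k))
E /= np.linalg.norm(E, axis=1, keepdims=True)
R = rng.normal(size=(1, k))
R /= np.linalg.norm(R, axis=1, keepdims=True)

gamma, lr = 1.0, 0.01
d0 = np.linalg.norm(E[0] + R[0] - E[1])      # distance before training

# lines 4-12: minibatch (here the whole of S), negative sampling, SGD
for epoch in range(300):
    for (h, l, t) in S:
        t_neg = int(rng.integers(n_entities))     # corrupt the tail
        if (h, l, t_neg) in S:
            continue                              # corruption must leave S
        d_pos = E[h] + R[l] - E[t]
        d_neg = E[h] + R[l] - E[t_neg]
        if gamma + np.linalg.norm(d_pos) - np.linalg.norm(d_neg) > 0:
            g_pos = d_pos / (np.linalg.norm(d_pos) + 1e-9)   # grad of ||d_pos||
            g_neg = d_neg / (np.linalg.norm(d_neg) + 1e-9)   # grad of ||d_neg||
            E[h] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg
            R[l] -= lr * (g_pos - g_neg)

d1 = np.linalg.norm(E[0] + R[0] - E[1])      # distance after training
assert d1 <= d0                              # h + l has moved closer to t
```

Per-iteration renormalization of entity vectors (as in the original TransE) is omitted here for brevity; the effect of the hinge update — shrinking d(h + l, t) whenever the margin is violated — is the same.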
Step 102, obtaining a message that a knowledge graph has achieved an improvement in knowledge representation capability.
Specifically, at the first data processing end, representation learning is performed on the first knowledge graph extracted from D1, obtaining first word vectors of all first entities in that knowledge graph. Via network-wide broadcast, point-to-point messaging or third-party scheduling, the second data processing end receives a message that the first data processing end has completed an improvement of its knowledge representation capability. In some embodiments, the message may be forwarded by a federated learning coordinator trusted by every data processing end, so that the messages can include unencrypted information of each first entity for entity alignment — used, when the second data processing end processes the second knowledge graph, to filter for second entities that have corresponding first entities so as to allocate corresponding indexes and storage, or to obtain the second word vector of the second entity aligned with the first entity in that knowledge graph.
Step 103, sending the aligned nodes' word vectors to the other knowledge graph nodes. Aligned nodes are the intersection of the knowledge graphs' entity-node sets: for example, if the first knowledge graph has an entity "Beijing" and the second knowledge graph also has an entity "Beijing", then "Beijing" is one of the two knowledge graphs' aligned entity nodes. Homomorphic encryption is used to protect the data while it is shared between different knowledge graphs; the word vectors are encrypted before being sent and federated learning is carried out on them, ensuring that the knowledge graph node sending the word vectors leaks no word vector information or entity information to the other knowledge graph nodes.
Specifically, the second data processing end asynchronously obtains from the first data processing end the information of the first word vectors of the first entities aligned with a number of second entities of the second knowledge graph, this information being homomorphically encrypted for federated learning.
Step 105: after receiving the word vectors, the knowledge graph node performs federated learning using a generative adversarial network (GAN).
Specifically, the second data processing end receives the homomorphically encrypted first word vectors and performs federated learning in the form of a generative adversarial network using the first and second word vectors, obtaining a fused third word vector for each second entity. Likewise, any other data processing end that receives word vectors of aligned entities feeds them, together with the word vectors of the corresponding aligned nodes of its local knowledge graph, into a generative adversarial network for training, and uses the trained fused word vectors to replace the original word vectors in its local knowledge graph.
For example, in this embodiment, performing federated learning in the form of a generative adversarial network to obtain the fused word vectors comprises the following steps:
Step 301: record the set of first word vectors received by the second data provider as X = {x1, …, xn}; the first data provider acts as the remote data owner, and X is the set of word vectors of the aligned nodes it provides. Record the set of second word vectors in the second knowledge graph corresponding to each first word vector in X as Y = {y1, …, yn}, i.e. the set of word vectors of the corresponding aligned nodes in the local knowledge graph.
Step 302: learn the discriminator of the generative adversarial network against the generator W, so that it distinguishes random samples of WX = {Wx1, …, Wxn} from elements of Y. WX is the set of vectors corresponding to X that are generated by applying the generator W to the elements of X; each such vector carries part of the information of the corresponding first word vector.
Step 303: learn the generator W of the generative adversarial network so that it maps the elements of X as accurately as possible onto the word vectors of the corresponding nodes in Y, making it difficult for the discriminator to decide whether an element belongs to WX or to Y.
Step 304: train the generative adversarial network and take the elements of WX obtained after training as the third word vectors, or take the average of each trained element of WX and the corresponding element of Y as the third word vector, or take the sum of each trained element of WX and the corresponding element of Y as the third word vector. That is, there are three fusion modes, all of which operate on the aligned nodes' embeddings:
(1) replace the aligned node's embedding directly with the GAN result;
(2) average the GAN result with the aligned node's embedding, then replace it;
(3) sum the GAN result with the aligned node's embedding, then replace it.
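The three fusion modes above are simple element-wise operations; in this sketch the toy vectors `gan_out` and `local` stand in for a trained element of WX and the corresponding local embedding in Y.

```python
import numpy as np

gan_out = np.array([0.2, 0.6])   # trained element of WX (hypothetical values)
local   = np.array([0.4, 0.2])   # aligned node's local embedding in Y

fused_replace = gan_out                     # (1) replace directly
fused_average = (gan_out + local) / 2.0     # (2) average, then replace
fused_sum     = gan_out + local             # (3) sum, then replace
```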
The generator and the discriminator are learned with the standard GAN training procedure: given the two sample sets X and Y, the discriminator and the generator are updated in turn by stochastic gradient descent, so as to minimize the objective function of the discriminator and the objective function of the generator, respectively.
The objective function of the discriminator can be written as:

L_D(θ_D | W) = −(1/n) Σ_{i=1..n} log P_{θ_D}(Wx_i ∈ WX) − (1/n) Σ_{i=1..n} log P_{θ_D}(y_i ∈ Y)

The objective function of the generator can be written as:

L_W(W | θ_D) = −(1/n) Σ_{i=1..n} log P_{θ_D}(Wx_i ∈ Y) − (1/n) Σ_{i=1..n} log P_{θ_D}(y_i ∈ WX)

where θ_D denotes the parameters of the discriminator, P_{θ_D}(z ∈ Y) is the probability the discriminator assigns to a word vector z belonging to Y, and P_{θ_D}(z ∈ WX) is the probability it assigns to z belonging to WX.
After training the generator and the discriminator, the final result of the GAN is the word vector in WX.
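The alternating updates above can be sketched with a linear generator W and, purely for illustration, a logistic-regression discriminator with manual gradients (the patent leaves the discriminator architecture open; the toy data, dimensions and learning rate are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 64
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # hidden "true" alignment map
X = rng.normal(size=(n, d))                      # remote aligned word vectors
Y = X @ Q + 0.01 * rng.normal(size=(n, d))       # local aligned word vectors

W = np.eye(d)                                    # generator: linear map
theta, b = np.zeros(d), 0.0                      # discriminator parameters

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

lr = 0.05
for step in range(500):
    WX = X @ W
    # Discriminator step (minimize L_D): label y_i as Y and Wx_i as WX.
    p_real = sigmoid(Y @ theta + b)              # P(z in Y) for real samples
    p_fake = sigmoid(WX @ theta + b)             # P(z in Y) for mapped samples
    theta -= lr * ((Y.T @ (p_real - 1)) + (WX.T @ p_fake)) / n
    b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))
    # Generator step (minimize L_W): make the discriminator call Wx_i "Y".
    p_fake = sigmoid(X @ W @ theta + b)
    W -= lr * np.outer(X.T @ (p_fake - 1), theta) / n

fused = X @ W                                    # candidate third word vectors
```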
Step 106: replace the original word vectors of the knowledge graph with the fused word vectors obtained above, and continue knowledge representation learning.
Specifically, at the second data processing end, after the second word vector of the second entity in the second knowledge graph is replaced with the third word vector, representation learning on the second knowledge graph continues, so as to obtain the enhanced fourth word vector of each entity in the knowledge graph.
In steps 103, 105 and 106, each data owner sends the word vectors of its aligned nodes to the data owners of the other knowledge graphs; each owner then feeds the received aligned-node word vectors, together with the word vectors of the corresponding aligned nodes of its own knowledge graph, into a generative adversarial network for training, obtains the trained fused word vectors, and replaces the original word vectors of those nodes.
A second embodiment of the first aspect of the invention provides a knowledge graph representation learning enhancement method based on encrypted federated learning. The difference from the first embodiment is that step 103 is replaced with step 104.
Step 104: send the word vectors of the one-hop neighbours of the aligned nodes, together with the word vectors of the aligned nodes themselves, to the other knowledge graph nodes, so that federated learning is performed in the generative-adversarial-network form of step 105 and the original word vectors of the aligned nodes are replaced by the trained fused word vectors.
In this embodiment, steps 104, 105 and 106 send the word vectors of all one-hop neighbours of the aligned nodes, together with the aligned nodes' own word vectors, to the knowledge graph nodes of the other data owners for federated learning in the generative-adversarial-network form, and replace the original word vectors of the aligned nodes with the trained fused word vectors.
Correspondingly, in this embodiment: at the first data processing end, representation learning is performed on the first knowledge graph to obtain a first word vector of each first entity and first word vectors of the one-hop neighbours of each first entity; at the second data processing end, representation learning is performed on the second knowledge graph to obtain a second word vector of each second entity aligned with a first entity and second word vectors of the one-hop neighbours of those second entities; the second data processing end receives the homomorphically encrypted first word vectors and performs federated learning in the form of a generative adversarial network using the first and second word vectors, obtaining fused third word vectors for the second entities and their one-hop neighbours; and at the second data processing end, after the second word vectors of the second entities and of their one-hop neighbours are replaced by the third word vectors, representation learning on the second knowledge graph continues, so as to obtain the enhanced fourth word vectors of all entities in the graph.
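Collecting the one-hop neighbourhood of the aligned nodes from a triple set, as this second embodiment requires, can be sketched as follows (the triples and identifiers are illustrative only):

```python
# One-hop neighbours of aligned nodes: any entity that shares a triple
# with an aligned entity.
triples = [("beijing", "capital_of", "china"),
           ("beijing", "located_in", "north_china"),
           ("shanghai", "city_of", "china")]
aligned = {"beijing"}

one_hop = set()
for h, _, t in triples:
    if h in aligned:
        one_hop.add(t)
    if t in aligned:
        one_hop.add(h)

to_send = aligned | one_hop   # nodes whose word vectors are encrypted and sent
```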
In a third embodiment of the first aspect of the present invention, the second data processing end shares the generative adversarial network with the first data processing end.
In a fourth embodiment of the first aspect of the present invention, a third data provider exists in the network and, by the method of the foregoing embodiments, asynchronously improves its own knowledge representation learning capability using the word vector information of the entity nodes aligned with the respective knowledge graphs of the first and second data providers.
The invention also provides an embodiment of a knowledge graph representation learning enhancement device based on encrypted federated learning. The device of this embodiment is a federated learning device comprising asynchronous training devices deployed at the local first data processing end of the first data owner and the local second data processing end of the second data owner, and a generative adversarial learning device deployed at a federated learning server.
Each asynchronous training device, the structure of which is shown in fig. 2, includes:
and the reading module 11 is configured to read a local original knowledge base, and preprocess the local original knowledge base into a local knowledge graph including word vectors. The module reads an original knowledge base of a data owner into the federal learning device, preprocesses the knowledge base, prepares operating conditions for federal learning, and constructs a triple set by acquiring word vectors and relations in step 101 if the operating conditions are completed.
A representation learning module 12, configured to receive a first request and, in response, start one round of representation learning on the local knowledge graph so as to update the word vectors of the entities in it. This module starts knowledge representation learning on each knowledge graph in a distributed manner and records the learning state of the graph, e.g. completing the representation learning of steps 101 and 106. The first request may be issued by the asynchronous training device itself or by an external generative adversarial learning device.
A communication module 13, configured to communicate with the federated learning server and the other data processing ends so as to receive the shared word vector information of the other ends' knowledge graphs; the shared word vector information is homomorphically encrypted. This module connects the knowledge graph node with the distributed federated learning device as a whole and with the other knowledge graph nodes: when the node's own knowledge representation capability has been improved, it sends the shared word vector information, i.e. the word vectors of all aligned nodes, to the other knowledge graphs, and it receives the shared word vector information sent by other knowledge graph nodes. Specifically, the communication module 13 is configured to receive the message of step 102. In some other embodiments, the communication module sends the homomorphically encrypted shared word vector information of the local knowledge graph to the other data processing ends.
A federated learning module 14, configured to feed the shared word vector information and the locally corresponding word vectors into a generative adversarial network for learning and obtain the fused word vectors; the fused word vectors are used to replace the word vectors of all or some entities of the local knowledge graph. That is, the module feeds the received shared word vectors and the corresponding local word vectors into generative adversarial learning, and replaces the word vectors of the graph's own aligned nodes with the resulting fused word vectors.
An operation monitoring module 15, configured to supervise the asynchronous training device in executing the knowledge graph representation learning enhancement method provided in the first aspect; and/or to arbitrate the operational status of the asynchronous training device; and/or to adjust the learning effectiveness of the federated learning module.
The generative adversarial learning device of this embodiment comprises: a memory for storing computer-executable code and a generative adversarial network; a communication interface for communicating with the federated learning module of the asynchronous training device through its communication module, so as to receive the first word vector set and the second word vector set from the federated learning module and send the third word vector set back to it; and a processor for reading and executing the computer-executable code to configure and train the generative adversarial network. When executed by the processor, the instructions of the computer-executable code cause the processor to: record the set of first word vectors as X = {x1, …, xn} and the set of second word vectors corresponding to each first word vector in X as Y = {y1, …, yn}; learn the discriminator of the generative adversarial network against the generator W, so as to distinguish random samples of WX = {Wx1, …, Wxn} from elements of Y; learn the generator W so that it maps the elements of X as accurately as possible onto the word vectors of the corresponding nodes in Y, making it difficult for the discriminator to decide whether an element belongs to WX or to Y; and train the generative adversarial network, outputting as the third word vectors either the elements of WX obtained after training, or the averages of the trained elements of WX and the corresponding elements of Y, or the sums of the trained elements of WX and the corresponding elements of Y.
In this embodiment, the data processing ends jointly train the same generative adversarial network, so a representation embedding effect based on the union of the data owners' knowledge bases can be obtained.
The apparatus of this embodiment may be correspondingly used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
The embodiments of the invention relate to a knowledge graph representation learning enhancement method and an asynchronous training device based on encrypted federated learning. A knowledge graph is built from the original data set and knowledge representation learning is performed on it independently. Once a knowledge graph's representation capability has been improved, the word vectors of its aligned nodes are encrypted and sent to the other knowledge graph nodes, which perform federated learning upon receipt: the received word vectors are fed, together with the word vectors of the nodes aligned in the local knowledge graph, into a generative adversarial network for training; after training, the fused word vectors are obtained and replace the original word vectors of those nodes. Building on this, the word vectors of the one-hop neighbours of the aligned nodes can be fed into the generative adversarial network together with those of the aligned nodes, federated learning performed, and the trained fused word vectors substituted for the nodes' original word vectors.
Those of ordinary skill in the art will understand that all or part of the steps implementing the above method embodiments may be performed by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments above. The aforementioned storage media include ROM, RAM, magnetic disks, optical disks, and other media capable of storing program code.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A knowledge graph representation learning enhancement method based on encrypted federated learning comprises the following steps:
at a first data processing end, performing representation learning on a first knowledge graph to obtain a first word vector of a first entity in the knowledge graph;
at a second data processing end, performing representation learning on a second knowledge graph to obtain a second word vector of a second entity aligned with the first entity in the knowledge graph;
the second data processing end receiving the homomorphically encrypted first word vector, and performing federated learning in the form of a generative adversarial network using the first word vector and the second word vector, to obtain a fused third word vector of the second entity;
and at the second data processing end, replacing the second word vector of the second entity in the second knowledge graph with the third word vector, then continuing representation learning on the second knowledge graph so as to obtain enhanced fourth word vectors of all entities in the knowledge graph.
2. The knowledge graph representation learning enhancement method of claim 1, comprising:
performing representation learning on a first knowledge graph at a first data processing end to obtain a first word vector of a first entity in the knowledge graph and first word vectors of the one-hop nodes of the first entity;
at a second data processing end, performing representation learning on a second knowledge graph to obtain a second word vector of a second entity aligned with the first entity and second word vectors of the one-hop nodes of that second entity;
the second data processing end receiving the homomorphically encrypted first word vectors, and performing federated learning in the form of a generative adversarial network using the first word vectors and the second word vectors, to obtain fused third word vectors of the second entity and its one-hop nodes;
and at the second data processing end, replacing the second word vectors of the second entity and of its one-hop nodes in the second knowledge graph with the third word vectors, then continuing representation learning on the second knowledge graph so as to obtain enhanced fourth word vectors of all entities in the knowledge graph.
3. The knowledge graph representation learning enhancement method of claim 1, wherein the representation learning performs representation embedding by the TransE method.
4. The knowledge graph representation learning enhancement method of claim 1, wherein the step of performing federated learning in the form of a generative adversarial network comprises:
recording the set of first word vectors received by the second data provider as X = {x1, …, xn}, and recording the set of second word vectors in the second knowledge graph corresponding to each first word vector in X as Y = {y1, …, yn};
learning the discriminator of the generative adversarial network against the generator W, so as to distinguish random samples of WX = {Wx1, …, Wxn} from elements of Y;
learning the generator W so that it maps the elements of X as accurately as possible onto the word vectors of the corresponding nodes in Y, making it difficult for the discriminator to decide whether an element belongs to WX or to Y;
and training the generative adversarial network, and taking the elements of WX obtained after training as the third word vectors, or taking the average of each trained element of WX and the corresponding element of Y as the third word vector, or taking the sum of each trained element of WX and the corresponding element of Y as the third word vector.
5. The knowledge graph representation learning enhancement method of any one of claims 1 to 4, characterized in that the second data processing end shares the generative adversarial network with the first data processing end.
6. An asynchronous training device deployed on a data processing end, comprising:
the reading module is used for reading a local original knowledge base and preprocessing the local original knowledge base into a local knowledge map containing word vectors;
the characteristic learning module is used for receiving a first request, starting characteristic learning of the local knowledge graph once as a response so as to update word vectors of all entities in the local knowledge graph;
the communication module is used for communicating with the federal learning server and other data processing terminals so as to receive the shared word vector information of the knowledge graph of the other data processing terminals; the shared word vector information is homomorphic encrypted;
the federal learning module is used for sending the shared word vector information and local corresponding word vectors into a generating type countermeasure network for learning and obtaining fused word vectors; and the fused word vector is used for replacing the word vector of all or part of the entity of the local knowledge graph.
7. The asynchronous training device of claim 6, wherein the communication module sends the shared word vector information of the local knowledge graph to the other data processing ends, the shared word vector information being homomorphically encrypted.
8. The asynchronous training device of claim 6, comprising an operation monitoring module, the operation monitoring module being configured to supervise the asynchronous training device in executing the knowledge graph representation learning enhancement method of any one of claims 1 to 5; and/or to arbitrate the operational status of the asynchronous training device; and/or to adjust the learning effectiveness of the federated learning module.
9. A generative adversarial learning device deployed on a data processing end or a federated learning server, comprising:
a memory for storing computer-executable code and a generative adversarial network;
a communication interface for communicating, through the communication module of any one of claims 6 to 8, with the federated learning module of the asynchronous training device, so as to receive a first word vector set and a second word vector set from the federated learning module and send a third word vector set to the federated learning module; and
a processor for reading and executing the computer-executable code to configure and train the generative adversarial network, wherein the instructions of the computer-executable code, when executed by the processor, cause the processor to:
record the set of first word vectors as X = {x1, …, xn}, and record the set of second word vectors corresponding to each first word vector in X as Y = {y1, …, yn};
learn the discriminator of the generative adversarial network against the generator W, so as to distinguish random samples of WX = {Wx1, …, Wxn} from elements of Y;
learn the generator W so that it maps the elements of X as accurately as possible onto the word vectors of the corresponding nodes in Y, making it difficult for the discriminator to decide whether an element belongs to WX or to Y;
and train the generative adversarial network, and output as the third word vectors either the elements of WX obtained after training, or the averages of the trained elements of WX and the corresponding elements of Y, or the sums of the trained elements of WX and the corresponding elements of Y.
10. A federated learning device, characterized in that it comprises:
the asynchronous training device of any one of claims 6 to 8; and
the generative adversarial learning device of claim 9.
CN202010629643.8A 2020-07-01 2020-07-01 Knowledge graph representation learning enhancement method and device based on encryption federal learning Active CN111858955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010629643.8A CN111858955B (en) 2020-07-01 2020-07-01 Knowledge graph representation learning enhancement method and device based on encryption federal learning


Publications (2)

Publication Number Publication Date
CN111858955A true CN111858955A (en) 2020-10-30
CN111858955B CN111858955B (en) 2023-08-18

Family

ID=73152608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010629643.8A Active CN111858955B (en) 2020-07-01 2020-07-01 Knowledge graph representation learning enhancement method and device based on encryption federal learning

Country Status (1)

Country Link
CN (1) CN111858955B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365429A (en) * 2020-12-21 2021-02-12 神思电子技术股份有限公司 Knowledge-driven image fuzzy region definition enhancement method
CN113157938A (en) * 2021-03-25 2021-07-23 支付宝(杭州)信息技术有限公司 Method and device for jointly processing multiple knowledge graphs for protecting privacy data
CN113434626A (en) * 2021-08-27 2021-09-24 之江实验室 Multi-center medical diagnosis knowledge map representation learning method and system
CN113886598A (en) * 2021-09-27 2022-01-04 浙江大学 Knowledge graph representation method based on federal learning
CN113973125A (en) * 2021-10-26 2022-01-25 杭州博盾习言科技有限公司 Communication method and device in federal learning, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190227980A1 (en) * 2018-01-22 2019-07-25 Google Llc Training User-Level Differentially Private Machine-Learned Models
KR20190103088A (en) * 2019-08-15 2019-09-04 엘지전자 주식회사 Method and apparatus for recognizing a business card using federated learning
CN110266771A (en) * 2019-05-30 2019-09-20 天津神兔未来科技有限公司 Distributed intelligence node and distributed swarm intelligence system dispositions method
CN110428058A (en) * 2019-08-08 2019-11-08 深圳前海微众银行股份有限公司 Federal learning model training method, device, terminal device and storage medium
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN110633805A (en) * 2019-09-26 2019-12-31 深圳前海微众银行股份有限公司 Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN110874648A (en) * 2020-01-16 2020-03-10 支付宝(杭州)信息技术有限公司 Federal model training method and system and electronic equipment
CN110955907A (en) * 2019-12-13 2020-04-03 支付宝(杭州)信息技术有限公司 Model training method based on federal learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIAN WANG et al.: "Privacy-Preserving Collaborative Model Learning: The Case of Word Vector Training", IEEE Transactions on Knowledge and Data Engineering, pages 2381-2393
XU Aixue (许爱雪) et al.: "Research on a Cloud Data Security Scheme Based on Fully Homomorphic Encryption", Journal of Shijiazhuang Institute of Railway Technology, pages 63-67


Also Published As

Publication number Publication date
CN111858955B (en) 2023-08-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant