CN117370568A - Power grid main equipment knowledge graph completion method based on pre-training language model - Google Patents

Power grid main equipment knowledge graph completion method based on pre-training language model

Info

Publication number
CN117370568A
CN117370568A (application number CN202311272913.4A)
Authority
CN
China
Prior art keywords
model
defect
training
knowledge graph
power grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311272913.4A
Other languages
Chinese (zh)
Inventor
吴丽进
廖飞龙
郭俊
林晨翔
黄建业
郑州
赵志超
谢新志
杨彦
张志宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Original Assignee
Xiamen University
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University, Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd filed Critical Xiamen University
Priority to CN202311272913.4A priority Critical patent/CN117370568A/en
Publication of CN117370568A publication Critical patent/CN117370568A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3447Performance evaluation by modeling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a power grid main equipment knowledge graph completion method based on a pre-training language model, which comprises the following steps: constructing a power grid equipment defect knowledge graph; embedding the power grid equipment defect knowledge graph into a low-dimensional vector space; constructing a hybrid knowledge graph embedding model PLMSM by combining the pre-trained language model with a structure-based model; in the PLMSM model, input entities and their supplementary information are first fed into the pre-trained language model to obtain their embeddings, which are combined with the embeddings generated by the structure-based model to improve entity completion performance; training the PLMSM model on a training set, validating the training results on a validation set, and selecting the optimal model; during training, a negative sampling method is adopted to optimize the PLMSM model; testing the obtained PLMSM model on a test set; the PLMSM model that passes the test can be used for the entity completion task. The method helps improve entity completion performance and thereby the safety and stability of the power system.

Description

Power grid main equipment knowledge graph completion method based on pre-training language model
Technical Field
The invention relates to the technical field of electric power cognition intelligence, in particular to a power grid main equipment knowledge graph completion method based on a pre-training language model.
Background
With the continuous development and expansion of power systems, the number and complexity of power grid equipment are also increasing, and faults and defects in this equipment cause system instability and safety problems. Therefore, timely diagnosis of power grid equipment defects has become key to power system operation. Knowledge graphs, as a structured knowledge representation, can be used to describe devices and their relationships in power systems, but efficiently exploiting this knowledge to enable timely diagnosis and maintenance of grid equipment defects remains a challenge. Meanwhile, knowledge graph embedding methods based on pre-trained language models achieve good results in learning vector representations of entities in a knowledge graph, but still fall short in representing domain-specific entities and knowledge.
Disclosure of Invention
The invention aims to provide a power grid main equipment knowledge graph completion method based on a pre-training language model, which is beneficial to improving entity completion performance, thereby improving the safety and stability of a power system.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a power grid main equipment knowledge graph completion method based on a pre-training language model comprises the following steps:
(1) Constructing a power grid equipment defect knowledge graph to model knowledge related to power grid equipment defects;
(2) Embedding the power grid equipment defect knowledge graph into a low-dimensional vector space suitable for a deep learning model by using a knowledge graph embedding method;
(3) Constructing a hybrid knowledge graph embedding model PLMSM by combining a pre-training language model and a structure-based model to obtain richer entity representations; the pre-trained language model is used for learning the context information of entities, and the structure-based model is used for learning the relation information between entities; in the PLMSM model, input entities and their supplementary information are first fed into the pre-trained language model to obtain their embeddings, which are combined with the embeddings generated by the structure-based model to improve entity completion performance;
(4) Training the PLMSM model through a training set, verifying the training results of the model through a validation set, and selecting an optimal model; in the training process, the PLMSM model is optimized by a negative sampling method, which reduces training complexity, improves training speed and efficiency, and alleviates the problem of inaccurate prediction caused by long-tail entities in the power grid equipment defect knowledge graph;
(5) Testing the obtained PLMSM model through a test set; the PLMSM model which passes the test can be used for the entity completion task.
Further, the specific method for constructing the power grid equipment defect knowledge graph comprises the following steps:
Ontology design: taking defect equipment and defect content as the core, key information including the position and manifestation of defects is expressed concretely, and a power grid equipment defect knowledge graph ontology centered on equipment faults is constructed; the power grid equipment defect knowledge graph ontology comprises 13 entity types, which are respectively: voltage class, station/line type, defect equipment, equipment type, component, defect site, defect description, defect content, defect property and classification basis; the power grid equipment defect knowledge graph ontology comprises 13 relations, each linking a head entity type to a tail entity type, namely:
(1) Power station/line → voltage class
(2) Power station/line → station/line type
(3) Power station/line → defect equipment
(4) Defect equipment → component category
(5) Defect equipment → equipment type
(6) Equipment type
(7) Component category → component
(8) Component → defect site
(9) Defect site → defect description
(10) Defect content → defect description
(11) Defect content → classification basis
(12) Defect content → defect property
(13) Classification basis → defect property
Knowledge graph construction: guided by the designed ontology, the corresponding entities and relations are extracted from unstructured data to generate structured triples, and the knowledge graph database contents are constructed on the basis of the power grid equipment defect knowledge graph ontology structure.
Further, data containing the power grid main equipment and its defect information are acquired, the semi-structured data are organized to generate a structured document, which is further converted into records with a determined format to form a data set; the data set is then divided into a training set, a validation set and a test set according to a set ratio.
Further, the training set, validation set and test set are preprocessed, specifically: an inverse relation is added to each triple in the data set, the entities and relations in the triples are converted into IDs, scores are calculated using PageRank, and the preprocessed results are stored for model training and testing.
Further, the PLMSM model is trained on the training set: the head node and relation of each triple in the preprocessed training set are taken, a cognitive subgraph is initialized with the head node, the subgraph is expanded with the head node as the root node, the 50 highest-scoring nodes are selected at each expansion step according to the PageRank results, and the cognitive subgraph is obtained after multi-step expansion; during each expansion step, the entity representations, relation representations and hidden representations of the nodes in the cognitive subgraph are calculated and the attention flow is updated; after subgraph expansion is completed, the scores of all nodes in the cognitive subgraph are updated, a prediction result is obtained from the final node scores, and the loss function is calculated to complete one round of training.
Further, the performance of the model is tested on the test set, and the completion result, the reasoning path and the logical expression supporting the result are output; the test task is graph completion, that is, the tail entity of each triple in the test set is removed and predicted by the trained model, and four common metrics, Hits@1, Hits@3, Hits@10 and MRR, are used to evaluate the model.
Further, the method is applied to power grid equipment defect diagnosis and maintenance, so that equipment defects are found in time and maintenance measures are taken promptly, thereby ensuring the safe and stable operation of the power grid.
Compared with the prior art, the invention has the following beneficial effects: the method focuses on power knowledge graph completion, embeds the knowledge graph into a low-dimensional vector space, and constructs a hybrid knowledge graph embedding model PLMSM by combining a pre-trained language model with a structure-based model to represent domain-specific knowledge and entity supplementary information, thereby solving the problem of inaccurate prediction caused by the inability of pre-trained-language-model-based knowledge graph embedding methods to represent supplementary information and long-tail entities. The PLMSM model proposed by the invention is optimized through efficient negative sampling, and experiments show that it performs well in the entity completion task on the power grid equipment defect knowledge graph.
Drawings
FIG. 1 is a schematic diagram of a method implementation of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a body design according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a PLMSM model in an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
As shown in fig. 1, the embodiment provides a power grid main equipment knowledge graph completion method based on a pre-training language model, which includes the following steps:
(1) Constructing a power grid equipment defect Knowledge Graph (KG) to model knowledge related to power grid equipment defects.
In modeling the power grid equipment defect Knowledge Graph (KG), a dataset containing power equipment and its defect information is used. Entities in the dataset include power equipment and various equipment defects, such as transformers, switches and short circuits. In the modeling process, various knowledge representation methods are adopted, including ontologies, entity relationships, attribute relationships, and the like.
(2) Embedding the power grid equipment defect knowledge graph into a low-dimensional vector space suitable for the deep learning model by using a knowledge graph embedding method.
In order to apply the modeled KG to practical problems, it needs to be embedded into a low-dimensional vector space. The method uses a Knowledge Graph Embedding (KGE) method to map the entities and relations in the KG into a low-dimensional vector space, so that a deep learning model can be used for subsequent tasks such as entity completion.
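The patent does not name the structure-based embedding model used. As a minimal sketch only, assuming a TransE-style scorer (an illustrative choice, not the patent's stated model), the following shows how entities and relations can be mapped into a shared low-dimensional vector space and a triple scored by a distance:

```python
# Illustrative sketch: a TransE-style structure-based embedding,
# standing in for the unspecified structure-based model.
import torch
import torch.nn as nn

class TransEScorer(nn.Module):
    """Score(h, r, t) = ||h + r - t||_1 in a shared low-dimensional space."""

    def __init__(self, n_entities: int, n_relations: int, dim: int = 200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)   # entity vectors
        self.rel = nn.Embedding(n_relations, dim)  # relation vectors
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # Lower score (distance) means a more plausible triple (h, r, t).
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)
```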
(3) Constructing a hybrid knowledge graph embedding model PLMSM by combining the pre-trained language model and the structure-based model to obtain richer entity representations. The pre-trained language model learns the contextual information of entities, while the structure-based model learns the relational information between entities. In the PLMSM model, input entities and their supplementary information are first fed into the pre-trained language model to obtain their embeddings, which are combined with the embeddings generated by the structure-based model to improve entity completion performance.
When an entity and its supplementary information are input into the pre-trained language model, several methods can be used to extract entity features. For example, the name, description and attributes of the entity may be used as input. Through these inputs, the pre-trained language model learns the contextual information of the entity, thereby obtaining a more accurate entity embedding.
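A hedged sketch of this step follows: the entity name and its supplementary description are encoded by a pre-trained language model and the pooled text vector is concatenated with the structure-based vector. The choice of `bert-base-chinese`, mean pooling, and simple concatenation are assumptions for illustration; the patent does not specify them.

```python
# Sketch under assumptions: encode entity text with a pre-trained language
# model and concatenate the result with a structure-based embedding.
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed PLM; the patent does not name the pre-trained language model used.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
plm = AutoModel.from_pretrained("bert-base-chinese")

def text_embedding(name: str, description: str) -> torch.Tensor:
    # Entity name plus supplementary description as one input pair.
    inputs = tokenizer(name, description, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        hidden = plm(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0)           # mean-pooled text vector

def fused_embedding(text_vec: torch.Tensor, struct_vec: torch.Tensor) -> torch.Tensor:
    # One simple fusion choice: concatenate the textual and structural views.
    return torch.cat([text_vec, struct_vec], dim=-1)
```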
(4) Training the PLMSM model through a training set, verifying the training results through a validation set, and selecting the optimal model; in the training process, the PLMSM model is optimized by a negative sampling method, which reduces training complexity, improves training speed and efficiency, and alleviates the problem of inaccurate prediction caused by long-tail entities in the power grid equipment defect knowledge graph.
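A minimal sketch of the negative sampling idea, assuming the scorer interface sketched earlier: each observed triple is contrasted against triples whose tails are replaced by randomly drawn entities, using a margin ranking loss. The margin value and the number of negatives per positive are assumptions, not figures from the patent.

```python
# Sketch: negative sampling with a margin ranking loss for a distance-based scorer.
import torch

def negative_sampling_loss(scorer, h, r, t, n_entities: int,
                           num_neg: int = 16, margin: float = 1.0):
    pos = scorer.score(h, r, t)                                 # (batch,)
    neg_t = torch.randint(0, n_entities, (h.size(0), num_neg))  # corrupted tails
    neg = scorer.score(h.unsqueeze(1), r.unsqueeze(1), neg_t)   # (batch, num_neg)
    # Push the true triple to score lower (closer) than corrupted ones by a margin.
    return torch.clamp(margin + pos.unsqueeze(1) - neg, min=0).mean()
```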
(5) Testing the obtained PLMSM model on a test set. The PLMSM model that passes the test can be used for the entity completion task.
In this embodiment, a real data set containing power equipment defect information is used for testing. Experimental results show that PLMSM performs well in entity completion tasks and can effectively diagnose defects of power equipment.
The method is applied to power grid equipment defect diagnosis and maintenance and can help power companies discover equipment defects in time and take maintenance measures promptly, thereby ensuring the safe and stable operation of the power grid. The method can also be applied to tasks such as knowledge graph modeling and entity completion in other fields.
The relevant matters related to the method are further described below.
1. Constructing a power grid equipment defect knowledge graph
1) Ontology design: as shown in Fig. 2, the ontology design is based on the State Grid design of the power equipment knowledge graph, takes 'defect equipment' and 'defect content' as the core, expresses key information such as the position and manifestation of defects concretely, and constructs a power grid equipment defect knowledge graph ontology centered on equipment faults, which facilitates observing frequently failing lines and equipment.
The power grid equipment defect knowledge graph ontology comprises 13 entity types, which are respectively: voltage class, station/line type, defect equipment, equipment type, component, defect site, defect description, defect content, defect property and classification basis; the power grid equipment defect knowledge graph ontology comprises 13 relations, each linking a head entity type to a tail entity type, namely:
(1) Power station/line → voltage class
(2) Power station/line → station/line type
(3) Power station/line → defect equipment
(4) Defect equipment → component category
(5) Defect equipment → equipment type
(6) Equipment type
(7) Component category → component
(8) Component → defect site
(9) Defect site → defect description
(10) Defect content → defect description
(11) Defect content → classification basis
(12) Defect content → defect property
(13) Classification basis → defect property
2) Knowledge graph construction: guided by the designed ontology, the corresponding entities and relations are extracted from unstructured data to generate structured triples, and the knowledge graph database contents are constructed on the basis of the power grid equipment defect knowledge graph ontology structure.
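To make the construction step concrete, the sketch below maps one structured defect record to triples following the ontology above. The dictionary keys and relation labels are hypothetical placeholders rather than the patent's actual schema, and real use would pair this with an entity and relation extraction step over the unstructured text.

```python
def record_to_triples(rec: dict) -> list:
    """Hypothetical mapping from one structured defect record to ontology triples."""
    return [
        (rec["station_line"], "voltage_class", rec["voltage"]),
        (rec["station_line"], "has_defect_equipment", rec["equipment"]),
        (rec["equipment"], "equipment_type", rec["equipment_type"]),
        (rec["component"], "defect_site", rec["site"]),
        (rec["defect_content"], "defect_property", rec["property"]),
    ]
```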
2. Acquisition and processing of data sets
1) Data collection: data containing the power grid main equipment and its defect information are acquired, the semi-structured data are organized to generate a structured document, which is further converted into records with a determined format to form a dataset.
2) Dataset partitioning: the dataset is divided into a training set, a validation set and a test set at a ratio of 8:1:1. The training set is used for model training, and the validation set is used to validate the training results of the model and to select the optimal model, measured using the MRR (mean reciprocal rank) metric. A sketch of this split is included with the preprocessing sketch after step 3).
3) Data preprocessing: the training set, validation set and test set are preprocessed, specifically: an inverse relation is added to each triple in the dataset, the entities and relations in the triples are converted into IDs, scores are calculated using PageRank, and the preprocessed results are stored for model training and testing.
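The dataset split and preprocessing described in steps 2) and 3) can be sketched as follows. Shuffling with a fixed seed, the "_inv" suffix for inverse relations, and the use of networkx for PageRank are implementation assumptions rather than details stated in this embodiment.

```python
# Sketch: 8:1:1 split, inverse relations, ID mapping and PageRank scores.
import random
import networkx as nx

def split_dataset(triples, seed: int = 42):
    """8:1:1 split into training, validation and test sets (shuffle assumed)."""
    triples = list(triples)
    random.Random(seed).shuffle(triples)
    n = len(triples)
    n_train, n_valid = int(0.8 * n), int(0.1 * n)
    return (triples[:n_train],
            triples[n_train:n_train + n_valid],
            triples[n_train + n_valid:])

def preprocess(triples):
    """Add inverse relations, map entities/relations to IDs, compute PageRank."""
    full = list(triples) + [(t, r + "_inv", h) for h, r, t in triples]
    ent2id = {e: i for i, e in enumerate(sorted({x for h, _, t in full for x in (h, t)}))}
    rel2id = {r: i for i, r in enumerate(sorted({r for _, r, _ in full}))}
    id_triples = [(ent2id[h], rel2id[r], ent2id[t]) for h, r, t in full]
    graph = nx.DiGraph()
    graph.add_edges_from((h, t) for h, _, t in full)
    scores = nx.pagerank(graph)   # entity -> score, reused during subgraph expansion
    return id_triples, ent2id, rel2id, scores
```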
3. Model training
In this embodiment, the model architecture is shown in Fig. 3. The PLMSM model is trained on the training set: for each triple in the preprocessed training set, the head node and relation are taken, a cognitive subgraph is initialized with the head node, and the subgraph is expanded in System I with the head node as the root node; at each expansion step the 50 highest-scoring nodes are selected according to the PageRank results, and the cognitive subgraph is obtained after 4 expansion steps. During each expansion step, the entity representations, relation representations and hidden representations of the nodes in the cognitive subgraph are calculated and the attention flow is updated. After subgraph expansion is completed, System II is entered, the scores of all nodes in the cognitive subgraph are updated, a prediction result is obtained from the final node scores, and the loss function is calculated to complete one round of training. The maximum number of training steps is 100000, the upper limit on the number of entities in the candidate set is 100, and the learning rate is 10⁻⁴.
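A simplified sketch of the expansion loop described above, keeping the 50 highest-PageRank candidates at each of 4 steps. The hidden-representation updates, attention flow, and System II scoring are omitted, and the adjacency-dict interface is an assumption made for illustration.

```python
def expand_cognitive_subgraph(head, neighbours, pagerank, steps: int = 4, top_k: int = 50):
    """neighbours: dict mapping a node to the nodes reachable by one relation hop."""
    subgraph = {head}
    frontier = {head}
    for _ in range(steps):
        candidates = {n for node in frontier for n in neighbours.get(node, ())} - subgraph
        # Keep the top-k candidates by PageRank score at this expansion step.
        frontier = set(sorted(candidates, key=lambda n: pagerank.get(n, 0.0),
                              reverse=True)[:top_k])
        subgraph |= frontier
    return subgraph
```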
4. Model testing
The performance of the model is tested on the test set, and the completion result, the reasoning path and the logical expression supporting the result are output. The test task is graph completion, that is, the tail entity of each triple in the test set is removed and predicted by the trained model; four common metrics, Hits@1, Hits@3, Hits@10 and MRR, are used to evaluate the model, and the results are compared with those on FB15k and WN18RR, showing that the method has better interpretability and yields more reliable results. The experimental results are shown in Table 1.
Table 1 experimental results
The experimental results on the power system equipment defect KG show that the PLMSM model provided by the invention performs well in the entity completion task. The model has potential applications in the diagnosis and maintenance of power system equipment defects, where efficient and accurate entity completion can greatly facilitate the detection and resolution of equipment defects, thereby improving the safety and stability of the power system.
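For reference, a minimal sketch of how the reported metrics can be computed from the rank of the true tail entity for each test triple: Hits@k is the fraction of ranks not exceeding k, and MRR is the mean reciprocal rank. The filtered-ranking protocol and tie-breaking details are omitted and are not taken from the patent.

```python
def evaluate(ranks):
    """ranks: rank of the true tail entity for each test triple (1 = best)."""
    n = len(ranks)
    hits = {k: sum(r <= k for r in ranks) / n for k in (1, 3, 10)}
    mrr = sum(1.0 / r for r in ranks) / n
    return {"Hits@1": hits[1], "Hits@3": hits[3], "Hits@10": hits[10], "MRR": mrr}
```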
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention in any way; any person skilled in the art may modify or alter the disclosed technical content to obtain equivalent embodiments. However, any simple modification, equivalent variation or alteration of the above embodiments made according to the technical substance of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (7)

1. The power grid main equipment knowledge graph completion method based on the pre-training language model is characterized by comprising the following steps of:
(1) Constructing a power grid equipment defect knowledge graph to model knowledge related to power grid equipment defects;
(2) Embedding the power grid equipment defect knowledge graph into a low-dimensional vector space suitable for a deep learning model by using a knowledge graph embedding method;
(3) Constructing a hybrid knowledge graph embedding model PLMSM by combining a pre-training language model and a structure-based model to obtain richer entity representations; the pre-training language model is used for learning the textual context information of entities, and the structure-based model is used for learning the relation information between entities based on the graph structure; in the PLMSM model, input entities and their supplementary information are first fed into the pre-training language model to obtain their embeddings, which are combined with the embeddings generated by the structure-based model to improve entity completion performance;
(4) Training the PLMSM model through a training set, verifying the training results of the model through a validation set, and selecting an optimal model; in the training process, the PLMSM model is optimized by a negative sampling method, which reduces training complexity, improves training speed and efficiency, and alleviates the problem of inaccurate prediction caused by long-tail entities in the power grid equipment defect knowledge graph;
(5) Testing the obtained PLMSM model through a test set; the PLMSM model which passes the test can be used for the entity completion task.
2. The power grid main equipment knowledge graph completion method based on the pre-training language model according to claim 1, wherein the specific method for constructing the power grid equipment defect knowledge graph is as follows:
Ontology design: taking defect equipment and defect content as the core, key information including the position and manifestation of defects is expressed concretely, and a power grid equipment defect knowledge graph ontology centered on equipment faults is constructed; the power grid equipment defect knowledge graph ontology comprises 13 entity types, which are respectively: voltage class, station/line type, defect equipment, equipment type, component, defect site, defect description, defect content, defect property and classification basis; the power grid equipment defect knowledge graph ontology comprises 13 relations, each linking a head entity type to a tail entity type, namely:
(1) Power station/line → voltage class
(2) Power station/line → station/line type
(3) Power station/line → defect equipment
(4) Defect equipment → component category
(5) Defect equipment → equipment type
(6) Equipment type
(7) Component category → component
(8) Component → defect site
(9) Defect site → defect description
(10) Defect content → defect description
(11) Defect content → classification basis
(12) Defect content → defect property
(13) Classification basis → defect property
Knowledge graph construction: guided by the designed ontology, the corresponding entities and relations are extracted from unstructured data to generate structured triples, and the knowledge graph database contents are constructed on the basis of the power grid equipment defect knowledge graph ontology structure.
3. The power grid main equipment knowledge graph completion method based on the pre-training language model according to claim 2, wherein data containing the power grid main equipment and its defect information are acquired, the semi-structured data are organized to generate a structured document, which is further converted into records with a determined format to form a data set; the data set is then divided into a training set, a validation set and a test set according to a set ratio.
4. The power grid main equipment knowledge graph completion method based on the pre-training language model according to claim 3, wherein the training set, the validation set and the test set are preprocessed, specifically: an inverse relation is added to each triple in the data set, the entities and relations in the triples are converted into IDs, scores are calculated using PageRank, and the preprocessed results are stored for model training and testing.
5. The power grid main equipment knowledge graph completion method based on the pre-training language model according to claim 4, wherein the PLMSM model is trained on the training set: the head node and relation of each triple in the preprocessed training set are taken, a cognitive subgraph is initialized with the head node, the subgraph is expanded with the head node as the root node, the 50 highest-scoring nodes are selected at each expansion step according to the PageRank results, and the cognitive subgraph is obtained after multi-step expansion; during each expansion step, the entity representations, relation representations and hidden representations of the nodes in the cognitive subgraph are calculated and the attention flow is updated; after subgraph expansion is completed, the scores of all nodes in the cognitive subgraph are updated, a prediction result is obtained from the final node scores, and the loss function is calculated to complete one round of training.
6. The power grid main equipment knowledge graph completion method based on the pre-training language model according to claim 5, wherein the performance of the model is tested on the test set, and the completion result, the reasoning path and the logical expression supporting the result are output; the test task is graph completion, that is, the tail entity of each triple in the test set is removed and predicted by the trained model, and four common metrics, Hits@1, Hits@3, Hits@10 and MRR, are used to evaluate the model.
7. The power grid main equipment knowledge graph completion method based on the pre-training language model according to claim 1, wherein the method is applied to power grid equipment defect diagnosis and maintenance, so that equipment defects are found in time and maintenance measures are taken promptly, thereby ensuring the safe and stable operation of the power grid.
CN202311272913.4A 2023-09-28 2023-09-28 Power grid main equipment knowledge graph completion method based on pre-training language model Pending CN117370568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311272913.4A CN117370568A (en) 2023-09-28 2023-09-28 Power grid main equipment knowledge graph completion method based on pre-training language model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311272913.4A CN117370568A (en) 2023-09-28 2023-09-28 Power grid main equipment knowledge graph completion method based on pre-training language model

Publications (1)

Publication Number Publication Date
CN117370568A true CN117370568A (en) 2024-01-09

Family

ID=89397463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311272913.4A Pending CN117370568A (en) 2023-09-28 2023-09-28 Power grid main equipment knowledge graph completion method based on pre-training language model

Country Status (1)

Country Link
CN (1) CN117370568A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911811A (en) * 2024-03-19 2024-04-19 南京认知物联网研究院有限公司 Industrial vision model training method and device based on business knowledge fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination