CN117118856A - Knowledge graph completion-based network fault reasoning method and related equipment - Google Patents
- Publication number
- CN117118856A (application CN202311068574.8A)
- Authority
- CN
- China
- Prior art keywords
- graph
- knowledge
- model
- representation
- entity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/04—Arrangements for maintaining operational condition
Abstract
The disclosure provides a network fault reasoning method based on knowledge graph completion, comprising the following steps: taking a network fault knowledge graph in triple form as the input of a model, where the model is an end-to-end knowledge graph representation learning model comprising an encoder and a decoder; processing the triples with a graph convolutional neural network model based on the first-order Chebyshev polynomial approximation to obtain entity embedded representations and relation embedded representations; obtaining a plurality of candidate entities and corresponding scores from the entity embedded representations and the relation embedded representations with a neural-network-based knowledge graph representation learning model; and sorting the candidate entities by score to obtain a prediction result. The method can expand the scale and coverage of the knowledge graph and improve its overall data quality.
Description
Technical Field
The disclosure relates to the technical field of communication, in particular to a network fault reasoning method and device based on knowledge graph completion, a computer readable storage medium and electronic equipment.
Background
Traditional network fault reasoning methods still have some limitations: (1) the data quality problem: modern 5G+ networks produce large volumes of data of many types, so high-quality data are required to accurately infer fault causes; (2) the fault type diversity problem: 5G+ network fault types are numerous and new fault types continuously appear, so existing reasoning methods can hardly cover all fault types completely.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a network fault reasoning method and apparatus based on knowledge graph completion, a computer-readable storage medium and an electronic device, so as to at least solve the technical problems in the related art that data quality is poor and that existing reasoning methods can hardly cover all fault types completely.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
The technical scheme of the present disclosure is as follows:
according to one aspect of the present disclosure, there is provided a network fault reasoning method based on knowledge graph completion, including: taking a network fault knowledge graph in triple form as the input of a model, where the model is an end-to-end knowledge graph representation learning model comprising an encoder and a decoder; processing the triples with a graph convolutional neural network model based on the first-order Chebyshev polynomial approximation to obtain entity embedded representations and relation embedded representations; obtaining a plurality of candidate entities and corresponding scores from the entity embedded representations and the relation embedded representations with a neural-network-based knowledge graph representation learning model; and sorting the candidate entities by score to obtain a prediction result.
In some embodiments of the present disclosure, the method comprises: representing the network fault knowledge graph in the form G = (E, R, T), where E, R and T denote the entity set, the relation set and the fact triple set, respectively.
In some embodiments of the present disclosure, the graph convolutional neural network model based on the first-order Chebyshev polynomial approximation may be represented as equations (1), (2) and (3).
In some embodiments of the present disclosure, the step of processing the triples with the first-order Chebyshev-approximation graph convolutional neural network model to obtain the entity embedded representations and the relation embedded representations includes: defining information in the network fault graph to flow in two directions; embedding the relations into the graph convolutional neural network model; combining the entities and the relations; and defining separate parameters for the original edges, the inverse edges and the self edges to obtain the entity embedded representations and the relation embedded representations.
In some embodiments of the present disclosure, the step of obtaining a plurality of candidate entities and corresponding scores from the entity embedded representation and the relation embedded representation with the neural-network-based knowledge graph representation learning model includes: predicting the score with a convolutional neural network, with the scoring function defined as in equation (7).
In some embodiments of the present disclosure, the method further comprises: link prediction is employed as a downstream task to measure the effects of the model.
In some embodiments of the present disclosure, the step of employing link prediction as a downstream task for measuring the model effect includes: evaluating entity prediction with MRR; the higher the MRR value, the higher the correct entity is ranked and the more accurate the prediction; MRR is defined as in equation (8).
In some embodiments of the present disclosure, the set of entities in the network failure knowledge-graph includes: fault problem, fault cause, fault resolution method.
According to still another aspect of the present disclosure, there is provided a network fault inference apparatus based on knowledge graph completion, the apparatus comprising: an input module for taking a network fault knowledge graph in triple form as the input of a model, where the model is an end-to-end knowledge graph representation learning model comprising an encoder and a decoder; the encoder processes the triples with a graph convolutional neural network model based on the first-order Chebyshev polynomial approximation to obtain entity embedded representations and relation embedded representations; the decoder obtains a plurality of candidate entities and corresponding scores from the entity embedded representations and the relation embedded representations with a neural-network-based knowledge graph representation learning model; and an output module for sorting the candidate entities by score to obtain a prediction result.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the knowledge-graph completion based network fault reasoning method described above via execution of the executable instructions.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described knowledge-graph completion-based network failure reasoning method.
According to the method, network fault data are given a unified semantic representation in the form of knowledge graph triples, so that knowledge from different data sources can be seamlessly integrated and reasoned over, making the analysis and evaluation of network faults more accurate and reliable.
Further, an encoder for knowledge representation is built with a graph neural network and decoding is performed by a neural network, which improves the reasoning accuracy of the model and gives good performance on fault link prediction.
Further, the obtained prediction scores are ranked to generate a score sequence, enabling more effective prediction.
In addition, the knowledge-graph-based network fault reasoning provided by the disclosure can automatically infer new fault knowledge and add it to the knowledge graph, expanding the scale and coverage of the graph and greatly reducing the cost and workload of manual intervention.
Moreover, the method proposed by the present disclosure can support complex reasoning, including logical reasoning, relational reasoning, causal reasoning, etc. Such reasoning can help users discover potential relations and rules hidden in the knowledge graph and improve its overall data quality, thereby increasing the utilization value of the knowledge.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a flowchart of a network fault reasoning method based on knowledge-graph completion in an embodiment of the disclosure.
Fig. 2 is a flow diagram of a method for obtaining an entity embedded representation and a relationship embedded representation by an encoder based on a knowledge-graph completion network failure inference method in an embodiment of the disclosure.
Fig. 3 shows a schematic diagram of a network failure knowledge graph based on a knowledge graph completion network failure reasoning method in an embodiment of the disclosure.
Fig. 4 illustrates an end-to-end knowledge graph representation learning model of a network fault reasoning method based on knowledge graph completion in an embodiment of the disclosure.
Fig. 5 is a flowchart illustrating a detailed description of a message passing mechanism in an encoder based on a knowledge-graph completion network failure inference method in an embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of a decoder model of a knowledge-graph completion-based network failure reasoning method in an embodiment of the disclosure.
Fig. 7 is a schematic structural diagram of a network fault inference device based on knowledge-graph completion in an embodiment of the disclosure.
Fig. 8 shows a schematic block diagram of an electronic device based on a knowledge-graph completion network failure reasoning method in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
Aiming at the technical problems in the related art, the embodiment of the disclosure provides a network fault reasoning method, a device, electronic equipment and a computer readable storage medium based on knowledge graph completion, which are used for at least solving one or all of the technical problems.
It should be noted that, the terms or terms related to the embodiments of the present application may be referred to each other, and are not repeated.
The steps of a network fault reasoning method based on knowledge-graph completion in this exemplary embodiment will be described in more detail below with reference to the accompanying drawings and examples.
Fig. 1 shows a flowchart of a knowledge-graph completion-based network fault reasoning method in an embodiment of the disclosure, and as shown in fig. 1, the method 100 may include the following steps:
in step S110, the network failure knowledge graph is used as an input of a model in the form of a triplet, the model adopts an end-to-end knowledge graph to represent a learning model, and the model comprises an encoder and a decoder.
In step S120, the triplets are processed using a chebyshev polynomial first order approximate convolution graph neural network model to obtain an entity embedded representation and a relationship embedded representation.
In step S130, a knowledge-graph representation learning model of the neural network is used to obtain a plurality of candidate entities and corresponding scores through the entity-embedded representation and the relationship-embedded representation.
In step S140, the candidate entities are ranked based on the scores to obtain a prediction result.
According to the method, network fault data are given a unified semantic representation in the form of knowledge graph triples, so that knowledge from different data sources can be seamlessly integrated and reasoned over, making the analysis and evaluation of network faults more accurate and reliable.
Further, an encoder for knowledge representation is built with a graph neural network and decoding is performed by a neural network, which improves the reasoning accuracy of the model and gives good performance on fault link prediction.
Further, the obtained prediction scores are ranked to generate a score sequence, enabling more effective prediction.
In addition, the knowledge-graph-based network fault reasoning provided by the disclosure can automatically infer new fault knowledge and add it to the knowledge graph, expanding the scale and coverage of the graph and greatly reducing the cost and workload of manual intervention.
Moreover, the method proposed by the present disclosure can support complex reasoning, including logical reasoning, relational reasoning, causal reasoning, etc. Such reasoning can help users discover potential relations and rules hidden in the knowledge graph and improve its overall data quality, thereby increasing the utilization value of the knowledge.
In some embodiments of the present disclosure, the triple form of the network fault knowledge graph may be represented as G = (E, R, T), where E, R and T denote the entity set, the relation set and the fact triple set, respectively. A triple (h, r, t) ∈ T represents the relation r between the head entity h and the tail entity t.
The entity set in the network fault knowledge graph may include: fault problem, fault cause, fault resolution method. The set of relationships in the network failure knowledge-graph may include: fault problem-fault cause, fault problem-fault solution, fault cause-fault solution.
For example, fig. 3 illustrates a network fault knowledge graph 300. As shown in fig. 3, the entity set in the network fault knowledge graph 300 is composed of nodes of types including the fault problem description 310, the fault cause 320 and the fault resolution method 330, and the relationship set includes the problem_to_reason 340, problem_to_method 350 and reason_to_method 360 edge relationships; a fact triple consists of two entity nodes and the edge relationship connecting them.
When the fault problem description 310 is an air conditioner fault, two fault causes 320 can be obtained via the problem_to_reason 340 relationship. One is: "1. The secondary control loop was checked, and the control feedback signal of the air valve was found to be abnormal. 2. The air valve action was checked and found normal. 3. The fire control loop of the air valve was checked, and the control loop was found to be abnormal. The air valve control and the fire control are interlocked, and a fire control loop fault affects the terminal start-up of the air conditioner, causing a complete-stop fault. A terminal complete-stop fault of the air conditioner is often related to fire-control interlocking control, which should be inspected first during fault handling." The other is: "Change the low fan gear to the high fan gear (or automatic gear) and set the temperature to 24 degrees, after which no condensation appears."
When the fault cause 320 is the first cause above (the abnormal fire control loop), the fault resolution method 330 can be obtained via the reason_to_method 360 relationship: "Check that mains power can be delivered to the input switch of the distribution panel, but the secondary control loop at each air conditioner terminal does not act; the air conditioner terminals cannot be powered on and cannot be started."
When the fault cause 320 is "Change the low fan gear to the high fan gear (or automatic gear) and set the temperature to 24 degrees, after which no condensation appears", the fault resolution method 330 can be obtained via the reason_to_method 360 relationship: "During inspection, water mist was blown from the air outlet; it was found that with the low fan gear water mist was blown out and water droplets dripped, with the temperature set at 20 degrees, so that after the temperature dropped the evaporation exchange decreased, forming excessive condensation that was blown out."
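The fault knowledge graph structure above can be sketched as plain fact triples. The following Python snippet is an illustrative construction of G = (E, R, T); the entity strings are condensed hypothetical labels, not the patent's exact node contents:

```python
# Illustrative G = (E, R, T) for a fault knowledge graph.
# Entity strings below are hypothetical, condensed from the example above.
fault_problems = {"air conditioner complete-stop fault"}
fault_causes = {"fire control loop abnormal", "low fan gear condensation"}
fault_methods = {"inspect fire control interlocking", "switch to high fan gear"}

entities = fault_problems | fault_causes | fault_methods          # E
relations = {"problem_to_reason", "problem_to_method", "reason_to_method"}  # R
triples = {                                                        # T
    ("air conditioner complete-stop fault", "problem_to_reason", "fire control loop abnormal"),
    ("air conditioner complete-stop fault", "problem_to_reason", "low fan gear condensation"),
    ("fire control loop abnormal", "reason_to_method", "inspect fire control interlocking"),
    ("low fan gear condensation", "reason_to_method", "switch to high fan gear"),
}

# every triple's head/tail must be in E and its relation in R
assert all(h in entities and r in relations and t in entities for h, r, t in triples)
print(len(entities), len(relations), len(triples))
```

The assertion mirrors the well-formedness condition on fact triples: a triple (h, r, t) only belongs to T when h, t ∈ E and r ∈ R.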
In some embodiments of the present disclosure, the end-to-end knowledge graph representation learning model described in step S110 may be, for example, as shown in the schematic diagram of fig. 4. As shown in fig. 4, the network fault knowledge graph in the above triple form is used as the input of the encoder 410, initial embeddings are created according to the different types of entities and relations, and gradient updates are then performed through the decoder 420 to obtain the prediction result.
Specifically, in some embodiments of the present disclosure, the encoder 410 may employ a graph convolutional neural network model based on the first-order Chebyshev polynomial approximation, which may be further expressed as shown in equations (1), (2) and (3):
Ã = A + I    (1)
D̃_ii = Σ_j Ã_ij    (2)
H^(l+1) = σ(D̃^(−1/2) Ã D̃^(−1/2) H^(l) W^(l))    (3)
where H^(l) represents the input features of the nodes, A represents the adjacency matrix, D̃ represents the degree matrix of the self-looped graph, I represents the identity matrix, σ represents the activation function, and W^(l) represents the weight matrix.
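As a rough illustration, the first-order Chebyshev propagation rule can be implemented in a few lines of numpy. This is a minimal sketch with toy dimensions; the activation and weight matrix are arbitrary stand-ins:

```python
import numpy as np

def gcn_layer(H, A, W, activation=np.tanh):
    """One first-order Chebyshev (GCN) propagation step:
    H' = act(D~^{-1/2} (A + I) D~^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops, eq. (1)
    d = A_tilde.sum(axis=1)                    # node degrees, eq. (2)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return activation(A_hat @ H @ W)           # eq. (3)

# toy graph: 3 nodes, one edge between nodes 0 and 1, node 2 isolated
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
H = np.eye(3)                  # one-hot input features
W = np.ones((3, 2)) * 0.5      # toy weight matrix
H_next = gcn_layer(H, A, W)
print(H_next.shape)            # (3, 2)
```

The isolated node only receives its own self-loop message, so its output row is simply the activated projection of its own features.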
In some embodiments of the present disclosure, a specific method of constructing the graph convolutional neural network model based on the first-order Chebyshev polynomial approximation may include the method 200 shown in fig. 2, and the method 200 may include the following steps:
in step S210, information in the network failure map is defined to flow in both directions.
Specifically, for each edge (h, r, t) ∈ T, an inverse edge (t, r^(-1), h) is constructed, and the self-loop relation (u, r_self, u) is added to T, which gives: T′ = T ∪ {(t, r^(-1), h) | (h, r, t) ∈ T} ∪ {(u, r_self, u) | u ∈ E}, R′ = R ∪ {r^(-1) | r ∈ R} ∪ {r_self}.
In step S220, the relations are embedded into the graph convolutional neural network model.
Specifically, the entity-relation composition operation used in knowledge graph embedding methods is utilized: e_t = φ(e_h, e_r), where φ is the composition operation; (h, r, t) represent the head entity, relation and tail entity, and e_(·) represents their corresponding embeddings. φ uses non-parameterized operations, namely: subtraction (TransE), multiplication (DistMult) and circular correlation (HolE).
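The three non-parameterized composition operations φ can be sketched as follows; circular correlation is computed via the standard FFT identity, and the vectors are toy examples:

```python
import numpy as np

def comp_sub(e_h, e_r):
    """Subtraction composition, as in TransE."""
    return e_h - e_r

def comp_mult(e_h, e_r):
    """Element-wise multiplication composition, as in DistMult."""
    return e_h * e_r

def comp_corr(e_h, e_r):
    """Circular correlation composition, computed via the FFT identity
    corr(a, b) = ifft(conj(fft(a)) * fft(b))."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(e_h)) * np.fft.fft(e_r)))

e_h = np.array([1.0, 0.0, 0.0])   # a one-hot head embedding
e_r = np.array([0.2, 0.3, 0.5])   # a toy relation embedding
print(comp_sub(e_h, e_r))         # head minus relation
print(comp_corr(e_h, e_r))        # correlating with a one-hot returns e_r
```

A useful sanity check on the FFT form: circular correlation with a one-hot vector at index 0 acts as the identity, returning the other operand unchanged.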
In step S230, the entities and relationships are combined.
In step S240, parameters of the original edge, the reverse edge, and the self edge are defined, respectively, to obtain an entity embedded representation and a relationship embedded representation.
Specifically, separate parameters may be defined for the original edge r, the inverse edge r^(-1) and the self edge r_self, for example as in equation (4), where W_(λ(r)) is the parameter for a particular relation type, i.e., λ(r) = dir(r):
W_(dir(r)) = W_O if r ∈ R (original edge); W_I if r ∈ {r^(-1) | r ∈ R} (inverse edge); W_S if r = r_self (self edge)    (4)
the resulting entity embedded representation may be as in equation (5):
h_t = f( Σ_((h,r)∈N(t)) W_(λ(r)) φ(e_h, e_r) )    (5)
the resulting relation embedded representation may be as in equation (6):
h_r = W_rel e_r    (6)
where W_rel is a parameter matrix that projects all relations into the same embedding space as the nodes so that they can be used in the next graph convolutional layer.
By processing the triples with the first-order Chebyshev-approximation graph convolutional neural network model constructed above, better entity and relation embedded representations can be obtained, improving the performance of the knowledge graph completion task.
For example, the disclosure also provides a detailed flow diagram of the message passing mechanism in the encoder of the knowledge-graph-completion-based network fault reasoning method. As shown in fig. 5, the encoder model 500 uses the message passing concept of graph convolutional neural networks, employing a 2-layer GCN model. First, for each edge (h, r, t) ∈ T, the corresponding inverse edge (t, r^(-1), h) is constructed, and the self-loop relation (u, r_self, u) is added to T. Then, the r and t corresponding to the same h (e.g., r_1 and t_1, r_2 and t_2, etc.) are combined by dimension-wise composition (φ) of their embedding vectors according to the correspondences in the T′ set. A first-order Chebyshev polynomial is then used as the convolution kernel: each composed embedding is multiplied by the weight matrix W_(λ(r)) defined for its edge type, the results are accumulated (Σ) and aggregated to obtain the next-layer representation of h, and r is multiplied by the weight matrix W_rel and updated to obtain its next-layer representation.
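A minimal sketch of one such message-passing layer follows, assuming subtraction as the composition φ and identity weight matrices for illustration; the triple list, dimensions and direction labels are hypothetical:

```python
import numpy as np

def message_passing_layer(triples, ent_emb, rel_emb, W_dir, W_rel, f=np.tanh):
    """One layer: aggregate phi(e_h, e_r) messages into each tail entity with a
    direction-specific weight, then project the relation embeddings with W_rel.
    Sketch only: phi is subtraction (TransE-style composition)."""
    out = np.zeros_like(ent_emb)
    for h, r, t, d in triples:                    # d in {"orig", "inv", "self"}
        msg = (ent_emb[h] - rel_emb[r]) @ W_dir[d]  # phi, then W_{lambda(r)}
        out[t] += msg                              # accumulate (sigma)
    return f(out), rel_emb @ W_rel                 # next-layer entity/relation reps

dim = 4
ent = np.random.default_rng(0).normal(size=(3, dim))   # 3 toy entities
rel = np.random.default_rng(1).normal(size=(2, dim))   # 2 toy relations
W_dir = {k: np.eye(dim) for k in ("orig", "inv", "self")}
triples = [(0, 0, 1, "orig"), (1, 0, 0, "inv"), (0, 1, 0, "self")]
ent_next, rel_next = message_passing_layer(triples, ent, rel, W_dir, np.eye(dim))
print(ent_next.shape, rel_next.shape)
```

With identity weights, entities that receive no messages (here, entity 2) end up at tanh(0) = 0, and the relation embeddings pass through unchanged, which makes the data flow easy to inspect.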
In some embodiments of the present disclosure, step S130, the step of obtaining a plurality of candidate entities and corresponding scores from the entity embedded representation and the relation embedded representation with the neural-network-based knowledge graph representation learning model, may further include: predicting the score with a convolutional neural network, with the scoring function defined as in equation (7):
ψ(h, t) = σ( vec( σ( [ē_h ; ē_r] ∗ ω ) ) W ) · e_t    (7)
where ·̄ and [· ; ·] represent the 2D reshaping and concatenation operations, respectively; ∗ represents the 2D convolution operation and ω is the convolution kernel; σ denotes the activation function, and W is a learnable matrix in a linear transformation.
For example, the present disclosure provides a decoder model schematic of the knowledge-graph-completion-based network fault reasoning method. As shown in fig. 6, the decoder model 600 may be a neural-network-based knowledge graph embedding model composed of basic units such as a convolution layer, a fully connected layer and an activation layer; it takes the entity and relation embeddings produced by the foregoing encoder, computes a score measuring the likelihood that a triple exists, and finally obtains a plurality of candidate entities and corresponding scores (0.9, 0.2, 0.1, 0.6, 0.2, 0.3, 0.0, 0.7) through formula (7).
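The decoder's scoring computation can be sketched as follows. This is an illustrative ConvE-style score with toy dimensions and a hand-rolled "valid" 2D convolution, not the patent's exact architecture:

```python
import numpy as np

def conv_score(e_h, e_r, ent_emb, omega, W, f=np.tanh):
    """ConvE-style score sketch: stack (head, relation) embeddings into a 2D
    "image", convolve with kernel omega, flatten, project with W, then dot
    with every candidate tail embedding to get one score per candidate."""
    x = np.stack([e_h, e_r])                         # 2 x dim "image"
    kh, kw = omega.shape
    H, Wd = x.shape
    conv = np.array([[np.sum(x[i:i + kh, j:j + kw] * omega)
                      for j in range(Wd - kw + 1)]
                     for i in range(H - kh + 1)])    # "valid" 2D convolution
    feat = f(conv).reshape(-1)                       # vec(f([e_h; e_r] * omega))
    hidden = f(feat @ W)                             # linear transform + activation
    return ent_emb @ hidden                          # scores over candidates

dim = 6
rng = np.random.default_rng(42)
e_h, e_r = rng.normal(size=dim), rng.normal(size=dim)
ent_emb = rng.normal(size=(5, dim))                  # 5 candidate tail entities
omega = rng.normal(size=(2, 3))                      # 2x3 convolution kernel
W = rng.normal(size=(4, dim))                        # flattened feature len = 4
scores = conv_score(e_h, e_r, ent_emb, omega, W)
print(scores.shape)                                  # one score per candidate
```

Ranking the candidate entities by these scores, as in step S140, then yields the prediction result.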
In some embodiments of the present disclosure, the method may further comprise: employing link prediction as a downstream task to measure the effect of the model. Recommending entities through the link prediction model enables deep mining of the network fault knowledge graph and improves the accuracy of the inference algorithm.
In particular, in some embodiments of the present disclosure, the step of employing link prediction as a downstream task for measuring the model effect may include: evaluating entity prediction using the MRR (mean reciprocal rank). The higher the MRR value, the higher the ranking of the correct entity and the more accurate the prediction. The MRR can be defined as equation (8):

MRR = (1/N) Σ_{n=1}^{N} 1/rank_n    (8)

where N is the size of the query set in the test set, and rank_n is the position of the correct candidate entity in the ranking of the predicted candidate entities for the n-th test query (e.g., the score ranking in fig. 6).
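The MRR computation can be checked with a short sketch; the candidate scores below are the illustrative scores from the fig. 6 example, and the function names are our own.

```python
def mean_reciprocal_rank(ranks):
    """MRR over a query set: ranks[n] is the 1-based position of the
    correct entity in the n-th query's score-sorted candidate list."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def rank_of(scores, correct_idx):
    """1-based rank of the correct candidate when candidates are sorted
    by descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order.index(correct_idx) + 1

# the candidate scores given in the fig. 6 example
scores = [0.9, 0.2, 0.1, 0.6, 0.2, 0.3, 0.0, 0.7]
```

For instance, if the correct entity is the one scored 0.9, its rank is 1 and it contributes 1/1 to the MRR sum; a query whose correct entity ranks 4th contributes 1/4.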
Fig. 7 is a schematic structural diagram of a network fault inference apparatus based on knowledge-graph completion in an embodiment of the disclosure. As shown in fig. 7, the apparatus 700 may include: an input module 710, configured to take a network fault knowledge graph in triple form as the input of a model, where the model adopts an end-to-end knowledge graph representation learning model 720 comprising an encoder 722 and a decoder 724; the encoder 722, which processes the triples using a first-order Chebyshev-polynomial approximate convolutional graph neural network model to obtain an entity embedded representation and a relation embedded representation; the decoder 724, which uses a neural-network knowledge graph representation learning model to obtain a plurality of candidate entities and corresponding scores from the entity embedded representation and the relation embedded representation; and an output module 730, configured to rank the candidate entities based on the scores to obtain a prediction result.
In some embodiments of the present disclosure, the apparatus may further include: a representation module for representing the network fault knowledge graph in the form 𝒢 = {ℰ, ℛ, 𝒯}, where ℰ, ℛ and 𝒯 represent the entity set, the relation set, and the fact triple set, respectively.
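The entity-set/relation-set/triple-set representation can be illustrated with a toy fault graph; the triples and fault names below are hypothetical examples, not data from the disclosure.

```python
# Illustrative network-fault triples (head, relation, tail); names are hypothetical.
triples = {
    ("port_down",    "caused_by", "fiber_cut"),
    ("port_down",    "fixed_by",  "replace_fiber"),
    ("high_latency", "caused_by", "link_congestion"),
}
# Derive the entity and relation sets from the fact triples.
entities  = {h for h, _, _ in triples} | {t for _, _, t in triples}
relations = {r for _, r, _ in triples}
# The pair (entities, relations) together with triples mirrors the
# (entity set, relation set, fact triple set) form described above.
```

In the network fault setting, the entity set thus mixes fault problems, fault causes, and fault resolution methods, as stated later in the disclosure.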
In some embodiments of the present disclosure, the knowledge graph representation learning model 720 may also employ a first-order Chebyshev-polynomial approximate convolutional graph neural network model, as shown in equations (1), (2) and (3).
In some embodiments of the present disclosure, the encoder 722 may further be configured for: defining information in the network fault graph to flow in two directions; embedding the relations into the graph convolutional neural network model; combining the entities and the relations; and defining parameters for the original edges, inverse edges, and self edges respectively to obtain the entity embedded representation and the relation embedded representation.
In some embodiments of the present disclosure, the decoder 724 may also be configured to predict and evaluate the model score using a convolutional neural network, with the scoring function defined as in equation (7) above.
In some embodiments of the present disclosure, the apparatus 700 may further include: and the evaluation module is used for adopting the link prediction as a downstream task for measuring the model effect.
In some embodiments of the present disclosure, the evaluation module may be further configured to evaluate entity prediction using the MRR, where the higher the MRR value, the higher the ranking of the correct entity and the more accurate the prediction; N is the size of the query set in the test set, and rank_n is the position of the correct candidate entity in the ranking of the predicted candidate entities for the n-th test query.
In some embodiments of the present disclosure, the set of entities in the network failure knowledge-graph comprises: fault problem, fault cause, fault resolution method.
With respect to the knowledge-graph-completion-based network fault inference apparatus 700 in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the method embodiments and will not be repeated here.
Those skilled in the art will appreciate that various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system."
An electronic device 800 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of the electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one storage unit 820, and a bus 830 connecting the various system components (including the storage unit 820 and the processing unit 810).
The storage unit stores program code that is executable by the processing unit 810, such that the processing unit 810 performs the steps according to the various exemplary embodiments of the present disclosure described above in this specification. For example, the processing unit 810 may perform step S110 as shown in fig. 1, taking the network fault knowledge graph in triple form as the input of a model, where the model adopts an end-to-end knowledge graph representation learning model comprising an encoder and a decoder; step S120, processing the triples with a first-order Chebyshev-polynomial approximate convolutional graph neural network model to obtain an entity embedded representation and a relation embedded representation; step S130, obtaining a plurality of candidate entities and corresponding scores through the entity embedded representation and the relation embedded representation using a neural-network knowledge graph representation learning model; and step S140, ranking the candidate entities based on the scores to obtain a prediction result.
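The four-step flow S110-S140 can be sketched end to end; here `score_fn` is a stand-in for the encoder/decoder pipeline (S120/S130), and the random-projection scorer and the name `infer` are illustrative assumptions only.

```python
import numpy as np

def infer(query, ent_emb, score_fn, top_k=3):
    """S110: the query (h, r, ?) comes from the fault knowledge graph;
    S120/S130: score every candidate tail entity; S140: rank by score."""
    h, r = query
    scores = np.array([score_fn(h, r, t) for t in range(len(ent_emb))])
    ranked = np.argsort(-scores)  # descending score order (step S140)
    return [(int(t), float(scores[t])) for t in ranked[:top_k]]

# Toy run with a dot-product stand-in scorer (illustrative only).
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 4))
score = lambda h, r, t: float(emb[h] @ emb[t])
top = infer((0, 0), emb, score)
```

The top-ranked entity in `top` plays the role of the prediction result produced by the output module.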
Storage unit 820 may include readable media in the form of volatile storage units such as Random Access Memory (RAM) 821 and/or cache memory unit 822, and may further include Read Only Memory (ROM) 823.
The storage unit 820 may also include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., routing device, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, server, terminal, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, server, terminal, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, server, terminal, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
According to one aspect of the present disclosure, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations of the above-described embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
Claims (10)
1. A network fault reasoning method based on knowledge graph completion, characterized by comprising the following steps:
taking a network fault knowledge graph in triple form as the input of a model, wherein the model adopts an end-to-end knowledge graph representation learning model comprising an encoder and a decoder;
processing the triplets by using a Chebyshev polynomial first-order approximate convolution graph neural network model to obtain entity embedded representation and relation embedded representation;
obtaining a plurality of candidate entities and corresponding scores through the entity embedding representation and the relation embedding representation by adopting a knowledge graph representation learning model of the neural network; and
and ranking the candidate entities based on the scores to obtain a prediction result.
2. The knowledge-graph completion-based network fault reasoning method of claim 1, wherein the method comprises:
representing the network fault knowledge graph in the form 𝒢 = {ℰ, ℛ, 𝒯}, wherein ℰ, ℛ and 𝒯 represent the entity set, the relation set, and the fact triple set, respectively.
3. The knowledge-graph completion-based network fault reasoning method of claim 2, wherein the step of processing the triples to obtain the entity embedded representation and the relationship embedded representation using a first order approximate convolution graph neural network model of chebyshev polynomials comprises:
defining information in the network fault graph to flow in two directions;
embedding the relations into a graph convolutional neural network model;
combining the entities and the relationships; and
and respectively defining parameters of the original edge, the reverse edge and the self edge to obtain an entity embedded representation and a relation embedded representation.
4. The knowledge-graph-completion-based network fault reasoning method of claim 3, wherein the step of obtaining a plurality of candidate entities and corresponding scores through the entity embedded representation and the relation embedded representation using a knowledge graph representation learning model of a neural network comprises:
predicting and evaluating the model score using a convolutional neural network, wherein the scoring function is defined as:

ψ(h, r, t) = σ( vec( σ( [h̄ ; r̄] ∗ ω ) ) W ) · t

wherein ‾ and [· ; ·] respectively represent the 2D reshaping and concatenation operations, ∗ represents a 2D convolution operation, and ω is the convolution kernel; σ denotes the activation function, and W is a learnable matrix in a linear transformation.
5. The knowledge-graph completion-based network fault reasoning method of claim 4, further comprising:
link prediction is employed as a downstream task to measure the effects of the model.
6. The knowledge-graph completion-based network fault reasoning method as claimed in claim 5, wherein the step of using link prediction as a downstream task for measuring the effect of the model comprises:
evaluating entity prediction using the MRR, wherein the higher the MRR value, the higher the ranking of the correct entity and the more accurate the prediction, and the MRR is defined as:

MRR = (1/N) Σ_{n=1}^{N} 1/rank_n

where N is the size of the query set in the test set, and rank_n is the position of the correct candidate entity in the ranking of the predicted candidate entities for the n-th test query.
7. The knowledge-graph completion-based network fault reasoning method of claim 6, wherein the set of entities in the network fault knowledge graph comprises: fault problem, fault cause, fault resolution method.
8. A knowledge-graph completion-based network fault reasoning device, the device comprising:
the input module is used for taking a network fault knowledge graph in a triplet form as the input of a model, wherein the model adopts an end-to-end knowledge graph to represent a learning model, and the knowledge graph represents the learning model and comprises an encoder and a decoder;
the encoder adopts a Chebyshev polynomial first-order approximate convolution graph neural network model to process the triplets to obtain entity embedded representation and relation embedded representation;
the decoder adopts a knowledge graph representation learning model of the neural network to obtain a plurality of candidate entities and corresponding scores through the entity embedding representation and the relation embedding representation; and
and the output module is used for sorting the candidate entities based on the scores to obtain a prediction result.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the knowledge-graph-completion-based network fault reasoning method of any of claims 1 to 7 via execution of the executable instructions.
10. A computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the knowledge-graph-completion-based network fault reasoning method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311068574.8A CN117118856A (en) | 2023-08-23 | 2023-08-23 | Knowledge graph completion-based network fault reasoning method and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117118856A true CN117118856A (en) | 2023-11-24 |
Family
ID=88812232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311068574.8A Pending CN117118856A (en) | 2023-08-23 | 2023-08-23 | Knowledge graph completion-based network fault reasoning method and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117118856A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117725981A (en) * | 2024-02-08 | 2024-03-19 | 昆明学院 | Power distribution network fault prediction method based on optimal time window mechanism |
CN117725981B (en) * | 2024-02-08 | 2024-04-30 | 昆明学院 | Power distribution network fault prediction method based on optimal time window mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9576031B1 (en) | Automated outlier detection | |
JP7392668B2 (en) | Data processing methods and electronic equipment | |
CN110929047B (en) | Knowledge graph reasoning method and device for focusing on neighbor entity | |
US20130151536A1 (en) | Vertex-Proximity Query Processing | |
CN117118856A (en) | Knowledge graph completion-based network fault reasoning method and related equipment | |
US20240127795A1 (en) | Model training method, speech recognition method, device, medium, and apparatus | |
CN111145076B (en) | Data parallelization processing method, system, equipment and storage medium | |
US20200167660A1 (en) | Automated heuristic deep learning-based modelling | |
CN113837596A (en) | Fault determination method and device, electronic equipment and storage medium | |
WO2023275764A1 (en) | Systems and methods for generation of action strategies by an autonomous system | |
CN114491037A (en) | Fault diagnosis method, device, equipment and medium based on knowledge graph | |
US20220221836A1 (en) | Performance determination through extrapolation of learning curves | |
CN110717116B (en) | Link prediction method and system of relational network, equipment and storage medium | |
US20220171985A1 (en) | Item recommendation with application to automated artificial intelligence | |
CN109272165A (en) | Register probability predictor method, device, storage medium and electronic equipment | |
US11537910B2 (en) | Method, system, and computer program product for determining causality | |
CN115271207A (en) | Sequence relation prediction method and device based on gated graph neural network | |
US20230168642A1 (en) | Systems and methods for generation of action strategies by an autonomous system | |
US8438129B1 (en) | Probabilistic implementation of system health prognosis | |
Xu et al. | Improved Bayesian network-based for fault diagnosis of air conditioner system | |
CN115996169A (en) | Network fault analysis method and device, electronic equipment and storage medium | |
US20230342664A1 (en) | Method and system for detection and mitigation of concept drift | |
CN114281691A (en) | Test case sequencing method and device, computing equipment and storage medium | |
CN115186738A (en) | Model training method, device and storage medium | |
CN114595787A (en) | Recommendation model training method, recommendation device, medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |