CN117950906B - Method for deducing fault cause of server based on neural network of table graph - Google Patents


Publication number
CN117950906B
CN117950906B (application CN202410358660.0A)
Authority
CN
China
Prior art keywords
data
feature set
graph
data feature
relevant parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410358660.0A
Other languages
Chinese (zh)
Other versions
CN117950906A (en)
Inventor
李平
李翊
夏皓凡
李雅杰
周静
沈雅文
朱鑫鹏
王泓淏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuxihe Irrigation District Transportation Management Center In Sichuan Province
Southwest Petroleum University
Original Assignee
Yuxihe Irrigation District Transportation Management Center In Sichuan Province
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuxihe Irrigation District Transportation Management Center In Sichuan Province, Southwest Petroleum University filed Critical Yuxihe Irrigation District Transportation Management Center In Sichuan Province
Priority to CN202410358660.0A priority Critical patent/CN117950906B/en
Publication of CN117950906A publication Critical patent/CN117950906A/en
Application granted granted Critical
Publication of CN117950906B publication Critical patent/CN117950906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and system for inferring the cause of a server fault based on a table-graph neural network, relating to the technical field of server fault diagnosis, comprising the following steps: collecting relevant parameters of server equipment and the corresponding fault results to obtain table data; performing feature extraction on the table data to obtain a first data representation of each row; computing the similarity between each pair of row representations to obtain an adjacency matrix; splicing each fault result onto the corresponding row of the table data and extracting features to obtain a second data representation; constructing a graph network whose nodes are the second data representations and whose adjacency matrix is the one obtained from the first data representations; and inferring on new table data with the trained graph neural network to obtain a result. The invention converts structured table data into unstructured graph data for prediction and, by means of the GNN model, can mine the relationships among the table rows, thereby predicting the result better.

Description

Method for deducing fault cause of server based on neural network of table graph
Technical Field
The invention relates to the technical field of server fault diagnosis, and in particular to a method for inferring the cause of a server fault based on a table-graph neural network.
Background
Server fault diagnosis involves inspecting and testing hardware, environment, software, and other aspects. Server fault data are typically recorded in tables. Current server fault cause inference methods are generally based on an existing server fault lookup table in which each row corresponds to a fault cause together with its description and explanation, including recorded log data, monitoring data, and the like. Lookup is performed by manually querying key attributes or, even when automated, by directly matching certain attributes or by a simple similarity computation. In practice, table-based server fault diagnosis data can span hundreds of rows, making manual lookup laborious and inaccurate. Moreover, although the specific attributes differ between rows, correlations exist among them, and for the same fault the combined effect of several attributes may not be captured by simple matching. A server fault cause inference method based on a table-graph neural network is therefore needed to infer the cause of server faults automatically.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a server fault cause deducing method and system based on a table graph neural network.
Firstly, the invention provides a server fault cause inference method based on a table-graph neural network, comprising the following steps:
S1, collecting relevant parameters of server equipment and the corresponding fault results, and organizing them into table form to obtain table data; dividing the data into a training set and a test set;
S2, performing embedding and feature extraction on the collected server-equipment parameters to obtain a first data representation of each row, forming a first data feature set;
S3, computing the similarity between each pair of representations in the first data feature set to obtain an adjacency matrix;
S4, splicing the fault result corresponding to each row of server-equipment parameters onto the corresponding row of the table data, then performing embedding and feature extraction on each row to obtain a second data representation of each row;
S5, constructing a graph network whose nodes are the second data representations and whose adjacency matrix is the one obtained from the first data representations;
S6, training the graph neural network on the training set to obtain node prediction results, and verifying on the test set;
S7, inferring on new table data with the trained graph neural network to obtain the inference result.
Specifically, in step S2 the feature extraction within the embedding and feature extraction operations adopts BERT processing; in step S4 the feature extraction on each row of data likewise adopts BERT processing.
Alternatively, in step S2 the feature extraction adopts LLM processing; in step S4 the feature extraction on each row of data likewise adopts LLM processing.
Preferably, the embedding and feature extraction of the collected server-equipment parameters in S2 comprises the following steps:
performing embedding on the numeric part of the collected server-equipment parameters to obtain data feature set a1;
performing BERT processing on the non-numeric (text) part of the collected server-equipment parameters to obtain data feature set a2, and splicing data feature set a1 with data feature set a2 to obtain first data feature set A;
performing LLM processing on the non-numeric (text) part of the collected server-equipment parameters to obtain data feature set b, and splicing data feature set a1 with data feature set b to obtain first data feature set B;
performing the same embedding and feature extraction on each row of data in step S4 to obtain second data feature set A and second data feature set B;
the embedding is as follows:
the numeric part of the collected server-equipment parameters comprises continuous features and discrete features; continuous features are embedded by interval one-hot encoding, and discrete features are embedded by one-hot encoding.
Step S3 is specifically:
computing the similarity between each pair of representations in first data feature set A and in first data feature set B respectively, to obtain adjacency matrix A and adjacency matrix B;
Step S5 is specifically:
constructing two graph networks: the nodes of one are the second data representations A and its adjacency matrix is the adjacency matrix A obtained in S3; the nodes of the other are the second data representations B and its adjacency matrix is the adjacency matrix B obtained in S3.
Further, step S6 is specifically:
training the graph neural network on each of the two graph networks with the training set to obtain node prediction results A and B, fusing prediction result A with prediction result B, and verifying on the test set;
the fusion of prediction result A and prediction result B is:
P = σ(W₁P_A + W₂P_B + b)
where P_A is prediction result A, P_B is prediction result B, W₁ and W₂ are weight parameters, b is a bias term, and σ is the activation function.
Further, step S6 may alternatively be:
training the graph neural network on each of the two graph networks with the training set to obtain intermediate feature A and intermediate feature B, and applying a Concat operation to the intermediate results to obtain intermediate feature C;
passing intermediate feature C sequentially through a first fully connected layer, an activation function, a second fully connected layer, and an activation function to obtain the overall prediction result; the first fully connected layer has R/4 neurons and the second fully connected layer has R/2 neurons, where R is the length of intermediate feature C;
verifying on the test set.
Specifically, in S3, computing the similarity between rows from the first data representation to obtain the adjacency matrix comprises the following steps:
assume any two rows of the first data representation are characterized as h_i and h_j (1 ≤ i, j ≤ M), where M is the number of rows of the table data;
taking the cos 45° angle as the dividing line, compute the pairwise similarity:
W_ij = (h_i · h_j) / (‖h_i‖ ‖h_j‖)
convert the weight values into the space {-1, 0, +1} to obtain the correlation matrix A with entries:
W̃_ij = RoundClip(W_ij / (γ + ε), -1, 1)
RoundClip is specifically calculated as follows:
RoundClip(x, a, b) = max(a, min(b, round(x))), γ = (1/(n·m)) Σ_{i,j} |W_ij|
where n and m are the numbers of rows and columns of the first data representation, i denotes the i-th row, j the j-th column, W_ij the specific value in row i and column j, round(x) rounds x to the nearest integer, and ε is a constant preventing the denominator from being 0.
Further, step S2 also comprises clustering the first data representations:
setting the number of clusters equal to the number of fault types;
clustering all first data representations with an unsupervised clustering algorithm to obtain several clusters;
selecting a certain proportion of the data in each cluster as the final first data feature set.
In another aspect, a server fault cause inference system based on a table-graph neural network is provided, wherein the method used in the system is the above server fault cause inference method based on the table-graph neural network.
After adopting the above scheme, the beneficial effects of the invention are as follows: the method uses a table-graph neural network to infer the cause of server faults, which is not known in the prior art. The invention applies specific and distinctive processing to the table data, converting structured table data into unstructured graph data for prediction, and by means of the GNN model can mine the relationships among the table rows, thereby predicting the result better. Meanwhile, the feature characterization fuses BERT processing with LLM processing and integrates their results; compressing and exciting the feature vectors during integration lets the model attend to the more important features and improves accuracy. These improvements make server fault cause inference more efficient, flexible, and accurate, helping to improve the efficiency of server maintenance and management.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a topology of a model structure in an embodiment of the invention;
Fig. 3 is a schematic representation of continuous and discrete features in an embodiment of the present invention.
Detailed Description
The principles and features of the present invention are described below in conjunction with examples, which are provided for illustration only and are not intended to limit the scope of the invention.
Embodiment one: a server fault cause inference method based on a table-graph neural network, as shown in Figs. 1 and 2, comprises the following steps:
S1, collecting the relevant parameters of the server equipment (comprising attributes and attribute values) and the corresponding fault results, and organizing them into table form to obtain table data T, i.e. each set of server-equipment parameters (relevant attribute values) together with its corresponding fault result forms one row of the table. The relevant parameters comprise attributes and specific values; for example, if the attribute is temperature, the specific value may be 80° or above 80°. The table data as a whole is equivalent to a relational database, and an attribute value may also be a sentence. The table data includes server-equipment parameters such as, but not limited to, power parameters, motherboard performance parameters, temperature parameters, memory-related parameters, graphics-card-related parameters, processor, storage, hard disk parameters, network, and operating system. Table 1 shows a specific example; in an actual table, the descriptions corresponding to some attributes are plain text rather than numeric values:
The data are divided into a training set and a test set; a validation set may also be split off. The preferred ratio of training, validation, and test sets is 8:1:1, and the balance of each class of data should be considered when dividing the data set.
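As a hedged illustration of the split just described, the following sketch (the function name and details are illustrative assumptions, not part of the patent) divides row indices per fault class so that each subset stays class-balanced at roughly 8:1:1:

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split row indices into train/validation/test index lists while
    keeping each fault class balanced across the three subsets."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    train, val, test = [], [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_tr = int(len(idxs) * ratios[0])
        n_va = int(len(idxs) * ratios[1])
        train += idxs[:n_tr]
        val += idxs[n_tr:n_tr + n_va]
        test += idxs[n_tr + n_va:]
    return train, val, test
```

Splitting per class before concatenating guarantees that a rare fault type is represented in every subset.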
S2, performing embedding and feature extraction on the collected server-equipment parameters to obtain a first data representation of each row, forming a first data feature set. In practice the relevant data are not collected in directly usable form but must be extracted from text and numeric data; since the text is natural-language description, a typical language model is used for text feature extraction. Specifically, feature extraction on the collected parameters yields first data feature set A: the BERT processing treats the specific table contents as text and cleans, tokenizes, and encodes them, which is prior art and not expanded here. LLM processing of the collected parameters likewise yields first data feature set B. The invention runs the two processing modes in parallel because different embedding operations may extract different results. Specifically, step S2 comprises the following steps:
S21, performing embedding on the numeric part of the collected server-equipment parameters to obtain data feature set a1;
S22, performing BERT processing on the non-numeric (text) part of the collected server-equipment parameters to obtain data feature set a2, and splicing data feature set a1 with data feature set a2 to obtain first data feature set A; note that data feature sets a1 and a2 contain representations of multiple table rows, and the splicing joins the representations of corresponding rows;
S23, performing LLM processing on the non-numeric (text) part of the collected server-equipment parameters to obtain data feature set b, and splicing data feature set a1 with data feature set b to obtain first data feature set B; likewise, the splicing joins the representations of corresponding rows in a1 and b.
The embedding is as follows:
the numeric part of the collected server-equipment parameters comprises continuous features and discrete features, as shown in Fig. 3: feature 1, temperature above 100 °C, and feature 2, CPU utilization above 90%, are continuous features; feature 3, the server status code, and feature 4, the server network number, are discrete features. Continuous features are embedded by interval one-hot encoding: for feature 1, temperature above 100 °C, suppose the minimum is 90 °C, the maximum is 150 °C, and the interval unit is 10; dividing into intervals, a temperature above 100 °C can then be expressed as (0, 1, 0, 0, 0, 0) — the minimum and maximum values here are only examples. Discrete features are embedded by one-hot encoding.
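The two embedding modes above can be sketched as follows (a minimal illustration; the bucket boundaries and vocabularies are example assumptions, as in the text):

```python
def interval_one_hot(value, lo, hi, step):
    """Interval one-hot encoding for a continuous feature: bucket
    [lo, hi) into intervals of width `step` and mark the matching
    bucket. With lo=90, hi=150, step=10, a temperature of 105 falls
    in the second interval [100, 110)."""
    n_bins = (hi - lo) // step
    vec = [0] * n_bins
    idx = min(max(int((value - lo) // step), 0), n_bins - 1)
    vec[idx] = 1
    return vec

def one_hot(value, vocabulary):
    """Plain one-hot encoding for a discrete feature such as a
    server status code."""
    return [1 if v == value else 0 for v in vocabulary]
```

For instance, interval_one_hot(105, 90, 150, 10) reproduces the (0, 1, 0, 0, 0, 0) pattern of the temperature example.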
In practical applications the original table may be large — tens of thousands or even hundreds of thousands of rows — and may contain similar or repeated server faults; merging all of them as nodes into the same graph would add unnecessary computation. Therefore in S2 a clustering-and-selection operation is applied to first data feature set A and first data feature set B to reduce the computation, comprising the following steps:
S201, setting the number of clusters equal to the number of fault types;
S202, clustering all first data representations with an unsupervised clustering algorithm to obtain several clusters;
S203, selecting a certain proportion of the data in each cluster as the final first data feature set; the specific proportion is determined by the number and size of the data sets, weighing computation against accuracy.
Because this selection involves some randomness, the invention may train multiple times, with a different selected data feature set for each training run.
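The selection in S203 can be sketched as follows, assuming the cluster assignment of each first data representation has already been computed in S202 (the keep_ratio value is illustrative; the text leaves the proportion to be chosen):

```python
import numpy as np

def subsample_by_cluster(assignments, keep_ratio=0.5, seed=0):
    """Given each row's cluster label (from the unsupervised clustering
    in S202), randomly keep a fixed fraction of every cluster so that
    similar or repeated faults do not all become graph nodes."""
    rng = np.random.default_rng(seed)
    assignments = np.asarray(assignments)
    kept = []
    for c in np.unique(assignments):
        members = np.where(assignments == c)[0]
        k = max(1, int(len(members) * keep_ratio))  # at least one per cluster
        kept.extend(rng.choice(members, size=k, replace=False).tolist())
    return sorted(kept)
```

Re-running with different seeds yields the different per-training-run feature sets mentioned above.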
S3, computing the similarity between each pair of representations in first data feature set A and in first data feature set B respectively, to obtain adjacency matrix A and adjacency matrix B. Specifically, the similarity calculation comprises the following steps:
S31, assume any two rows of the first data representation are characterized as h_i and h_j (1 ≤ i, j ≤ M), where M is the number of rows of the table data;
S32, taking the cos 45° angle as the dividing line, compute the pairwise similarity:
W_ij = (h_i · h_j) / (‖h_i‖ ‖h_j‖)
S33, convert the weight values into the space {-1, 0, +1} to obtain the correlation matrix A with entries:
W̃_ij = RoundClip(W_ij / (γ + ε), -1, 1)
RoundClip is specifically calculated as follows:
RoundClip(x, a, b) = max(a, min(b, round(x))), γ = (1/(n·m)) Σ_{i,j} |W_ij|
where n and m are the numbers of rows and columns of the first data representation, i denotes the i-th row, j the j-th column, W_ij the specific value in row i and column j, round(x) rounds x to the nearest integer, ε is a constant preventing the denominator from being 0, W̃ is the weight value after conversion, W the weight value before conversion, γ the mean absolute value of all weights in the first data representation, a and b are the clipping bounds, min takes the minimum, and max takes the maximum.
In this step the weights are constrained to -1, 0, or +1: the weight matrix is scaled by its average absolute value, and each value is then rounded to the nearest integer in {-1, 0, +1}.
S4, splicing the fault result corresponding to each row of server-equipment parameters onto the corresponding row of the table data to obtain a new table, then performing embedding and feature extraction on each row to obtain the second data representation of each row. The same embedding and feature extraction operations as in S2 are applied to each row in S4 to obtain second data feature set A and second data feature set B. This step comprises:
S41, performing embedding on the numeric part of the collected server-equipment parameters to obtain data feature set b1;
S42, performing BERT processing on the non-numeric (text) part of the collected server-equipment parameters to obtain data feature set b2, and splicing data feature set b1 with data feature set b2 to obtain second data feature set A; note that data feature sets b1 and b2 contain representations of multiple table rows, and the splicing joins the representations of corresponding rows;
S43, performing LLM processing on the non-numeric (text) part of the collected server-equipment parameters to obtain data feature set d, and splicing data feature set b1 with data feature set d to obtain second data feature set B; likewise, the splicing joins the representations of corresponding rows in b1 and d.
Specifically, since everything before the spliced fault result repeats S2, in actual operation one may extract features only from the fault data and splice them onto the S2 results to obtain the first and second data characterization sets. Note that the fault results spliced in step S4 must correspond to the table rows randomly selected after the clustering in S2.
S5, constructing a graph network whose nodes are the second data representations and whose adjacency matrix is obtained from the first data representations. Specifically, since feature extraction was performed twice above, two graph networks are constructed: the nodes of one are the second data representations A and its adjacency matrix is the adjacency matrix A obtained in S3; the nodes of the other are the second data representations B and its adjacency matrix is the adjacency matrix B obtained in S3.
BERT and the LLM are different models, and the features they extract differ; combining both enhances the robustness of the model.
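For concreteness, one propagation step of such a graph network might look like the following sketch (the symmetric normalization, self-loops, ReLU, and the treatment of -1 adjacency entries as absent edges are assumed details not fixed by the text):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: each node (a table row, holding its
    second data representation as the feature vector X[i]) aggregates
    the neighbours selected by adjacency matrix A."""
    A = np.maximum(A, 0.0)          # assumed: treat -1 entries as no edge
    A_hat = A + np.eye(A.shape[0])  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU
```

Stacking two or three such layers over the two graphs gives the node features that S6 predicts from or fuses.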
S6, training the graph neural network on each of the two graph networks with the training set; the results obtained may either be fused directly, or features may be further extracted to obtain attention over the intermediate features. If fused directly, node prediction results A and B are obtained and fused, and verification is done on the test set. The fusion of prediction result A and prediction result B is:
P = σ(W₁P_A + W₂P_B + b)
where P_A is prediction result A, P_B is prediction result B, W₁ and W₂ are weight parameters, b is a bias term, and σ is the activation function. In the prediction result, the feature dimension equals the number of server fault types; each specific value is 0 or 1, where 0 indicates no fault of the corresponding type and 1 indicates a fault of that type. The loss function is the cross-entropy loss.
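The fusion step can be sketched as follows (taking σ to be the sigmoid and 0.5 as the 0/1 threshold — both assumptions, since the text only names an activation function):

```python
import numpy as np

def fuse_predictions(p_a, p_b, w1, w2, b):
    """P = sigma(W1·P_A + W2·P_B + b), thresholded to the 0/1
    fault-type indicator vector described in the text."""
    p = 1.0 / (1.0 + np.exp(-(w1 @ p_a + w2 @ p_b + b)))  # sigmoid
    return (p >= 0.5).astype(int)
```

The weights W₁, W₂ and bias b would be learned jointly with the two graph networks under the cross-entropy loss.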
If the results are not fused directly, the following may be adopted:
S61, training the graph neural network on each of the two graph networks with the training set to obtain intermediate feature A and intermediate feature B, and applying a Concat operation to the intermediate results to obtain intermediate feature C;
S62, passing intermediate feature C sequentially through a first fully connected layer, an activation function, a second fully connected layer, and an activation function to obtain the overall prediction result; the first fully connected layer has R/4 neurons and the second fully connected layer has R/2 neurons, where R is the length of intermediate feature C; the loss function here is still the cross-entropy loss.
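Steps S61–S62 can be sketched as follows (random weights stand in for trained parameters, and the final projection to class logits is an assumption, since the text does not specify the output layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def fused_head(feat_a, feat_b, n_classes):
    """Concat the two intermediate features into C (length R), then
    FC with R/4 neurons -> activation -> FC with R/2 neurons ->
    activation -> assumed projection to the fault classes."""
    c = np.concatenate([feat_a, feat_b])             # intermediate feature C
    R = c.shape[0]
    W1 = 0.1 * rng.normal(size=(R // 4, R))          # first FC: R/4 neurons
    W2 = 0.1 * rng.normal(size=(R // 2, R // 4))     # second FC: R/2 neurons
    W3 = 0.1 * rng.normal(size=(n_classes, R // 2))  # assumed output projection
    return W3 @ relu(W2 @ relu(W1 @ c))              # class logits
```

Narrowing to R/4 before widening to R/2 follows the neuron counts given in S62; the compression acts as the squeeze mentioned in the beneficial effects.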
S63, verifying through the test set.
The training process of the model can be performed end-to-end.
S7, inferring on new table data with the trained graph neural network to obtain the inference result. It should be appreciated that when inferring on new table data, the same embedding and feature extraction processing must also be applied to the table data to be predicted.
The table-graph neural network technique can directly convert different types of data (such as log data, performance indicators, and configuration information) into a graph structure, converting structured table data into unstructured graph data that a GNN model processes and predicts on, thereby dispensing with manual inference and making the inference results more accurate and rapid.
Embodiment two: a server fault cause inference system based on a table graph neural network is used for inferring a server fault cause, and the method used in the system is the server fault cause inference method based on the table graph neural network according to the first embodiment.
Embodiment III: to solve the above-mentioned problems, the present embodiment provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the server failure cause inference method based on the table graph neural network according to the first embodiment when executing the computer program.
Embodiment four: to solve the above-mentioned problems, the present embodiment provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements the server failure cause inference method based on the table graph neural network as described in the first embodiment.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (4)

1. A method for inferring the cause of a server fault based on a table-graph neural network, characterized by comprising the following steps:
S1, collecting relevant parameters of server equipment and the corresponding fault results, and organizing them into table form to obtain table data; dividing the data into a training set and a test set;
S2, performing embedding and feature extraction on the collected server-equipment parameters to obtain a first data representation of each row, forming a first data feature set;
S3, computing the similarity between each pair of representations in the first data feature set to obtain an adjacency matrix;
S4, splicing the fault result corresponding to each row of server-equipment parameters onto the corresponding row of the table data, then performing embedding and feature extraction on each row to obtain a second data representation of each row;
S5, constructing a graph network whose nodes are the second data representations and whose adjacency matrix is the one obtained from the first data representations;
S6, training the graph neural network on the training set to obtain node prediction results, and verifying on the test set;
S7, inferring on new table data with the trained graph neural network to obtain the inference result;
The embedding and feature extraction of the collected relevant parameters of the server equipment in step S2 comprise the following steps:
S21, performing embedding on the numeric part of the collected relevant parameters of the server equipment to obtain a data feature set a1;
S22, performing BERT processing on the non-numeric part of the collected relevant parameters of the server equipment to obtain a data feature set a2, and splicing the data feature set a1 with the data feature set a2 to obtain a first data feature set A;
S23, performing LLM processing on the non-numeric part of the collected relevant parameters of the server equipment to obtain a data feature set b, and splicing the data feature set a1 with the data feature set b to obtain a first data feature set B;
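The splicing in S22/S23 is plain vector concatenation. A minimal sketch, in which the vector dimensions are illustrative assumptions: `numeric_emb` stands in for the one-hot embedding a1, `bert_emb` for the BERT features a2, and `llm_emb` for the LLM features b.

```python
import numpy as np

# Sketch of S21-S23 under stated assumptions: dimensions are illustrative,
# and the random vectors stand in for real BERT/LLM feature extractors.
numeric_emb = np.array([0.0, 1.0, 0.0])  # a1: 3-dim one-hot of the numeric part
bert_emb = np.random.rand(8)             # a2: 8-dim BERT feature of the text part
llm_emb = np.random.rand(8)              # b : 8-dim LLM feature of the text part

# splicing = concatenation along the feature axis
first_A = np.concatenate([numeric_emb, bert_emb])  # one row of feature set A
first_B = np.concatenate([numeric_emb, llm_emb])   # one row of feature set B
```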
The embedding process is as follows:
the numeric part of the collected relevant parameters of the server equipment comprises continuous features and discrete features; the continuous features are embedded by interval one-hot encoding, and the discrete features are embedded by ordinary one-hot encoding.
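The two encodings can be sketched as follows. The bucket edges and category lists are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Interval one-hot for continuous features: the value is bucketed into
# one of len(edges)+1 intervals, and that interval's slot is set to 1.
def interval_one_hot(value, edges):
    idx = int(np.digitize(value, edges))  # which interval the value falls in
    vec = np.zeros(len(edges) + 1)
    vec[idx] = 1.0
    return vec

# Ordinary one-hot for discrete features.
def one_hot(category, categories):
    vec = np.zeros(len(categories))
    vec[categories.index(category)] = 1.0
    return vec

# Illustrative server parameters: a temperature reading and a status flag.
temp_vec = interval_one_hot(72.5, edges=[40, 60, 80])         # continuous
status_vec = one_hot("degraded", ["ok", "degraded", "down"])  # discrete
```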
Step S3 is specifically as follows:
performing similarity calculation between each pair of data representations separately on the first data feature set A and the first data feature set B to obtain an adjacency matrix A and an adjacency matrix B.
Specifically, the similarity calculation comprises the following steps:
S31, denoting any two rows of the first data representation as $h_i$ and $h_j$, $i,j \in \{1,\dots,M\}$, where M is the number of rows of the table data, and computing the cosine similarity
$$W_{ij}=\frac{h_i\cdot h_j}{\lVert h_i\rVert\,\lVert h_j\rVert+\varepsilon};$$
S32, taking the cosine of a 45-degree angle as the dividing line:
$$W_{ij}=\begin{cases}W_{ij}, & W_{ij}\ge\cos 45^\circ\\[2pt] 0, & W_{ij}<\cos 45^\circ;\end{cases}$$
S33, converting the weight value space into $\{-1,0,1\}$ to obtain the adjacency matrix A, as follows:
$$\widetilde{W}_{ij}=\mathrm{RoundClip}\!\left(\frac{W_{ij}}{\gamma+\varepsilon},\,-1,\,1\right),\qquad \gamma=\frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\lvert W_{ij}\rvert$$
RoundClip is specifically calculated as follows:
$$\mathrm{RoundClip}(x,a,b)=\max\bigl(a,\min\bigl(b,\mathrm{Round}(x)\bigr)\bigr)$$
where n and m are the numbers of rows and columns of the weight matrix, i denotes the i-th row, j denotes the j-th column, $W_{ij}$ is the weight value in row i and column j before conversion, $\widetilde{W}_{ij}$ is the weight value after conversion, Round(x) rounds x to the nearest integer, $\varepsilon$ is a small constant that prevents the denominator from being 0, $\gamma$ is the mean absolute value of all weights in the first data representation, and a and b are the clipping bounds.
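Steps S31 through S33 can be sketched as follows, under the reconstruction above: cosine similarity between row representations, thresholded at cos 45°, then ternarized with RoundClip. The input matrix here is random illustrative data.

```python
import numpy as np

# RoundClip as defined in S33: round, then clip into [a, b].
def round_clip(x, a=-1, b=1):
    return np.maximum(a, np.minimum(b, np.round(x)))

# Sketch of S31-S33: H is the (M, d) matrix of first data representations,
# one row per table row.
def build_adjacency(H, eps=1e-8):
    unit = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)
    W = unit @ unit.T                  # S31: pairwise cosine similarities
    W[W < np.cos(np.pi / 4)] = 0.0     # S32: cos 45-degree dividing line
    gamma = np.abs(W).mean() + eps     # mean absolute weight
    return round_clip(W / gamma)       # S33: ternary weights in {-1, 0, 1}

A = build_adjacency(np.random.rand(5, 16))
```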
2. The method for deducing the fault cause of a server based on a tabular graph neural network according to claim 1, wherein
the embedding and feature extraction of each row of data in step S4 yield a second data feature set A and a second data feature set B, specifically:
S41, performing embedding on the numeric part of the collected relevant parameters of the server equipment to obtain a data feature set b1;
S42, performing BERT processing on the non-numeric part of the collected relevant parameters of the server equipment to obtain a data feature set b2, and splicing the data feature set b1 with the data feature set b2 to obtain a second data feature set A;
S43, performing LLM processing on the non-numeric part of the collected relevant parameters of the server equipment to obtain a data feature set d, and splicing the data feature set b1 with the data feature set d to obtain a second data feature set B;
Step S5 specifically comprises:
constructing two graph networks: the nodes of one graph network are the second data feature set A and its adjacency matrix is the adjacency matrix A obtained in step S3; the nodes of the other graph network are the second data feature set B and its adjacency matrix is the adjacency matrix B obtained in step S3.
3. The method for deducing the fault cause of a server based on a tabular graph neural network according to claim 2, wherein step S6 is specifically:
performing graph neural network training on the two graph networks through the training set to obtain a node prediction result A and a node prediction result B, fusing the prediction result A with the prediction result B, and verifying through the test set;
the operation of fusing the prediction result A and the prediction result B is:
$$P=\sigma\left(w_1 P_A + w_2 P_B + c\right)$$
where $P_A$ is the prediction result A, $P_B$ is the prediction result B, $w_1$ and $w_2$ are weight parameters, $c$ is a bias term, and $\sigma$ is the activation function.
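A minimal sketch of this fusion, assuming a sigmoid activation and illustrative weight values (the patent does not fix either):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fusion P = sigma(w1 * P_A + w2 * P_B + c); w1, w2, c are illustrative
# stand-ins for trained parameters.
def fuse(p_a, p_b, w1=0.6, w2=0.4, c=0.0):
    return sigmoid(w1 * np.asarray(p_a) + w2 * np.asarray(p_b) + c)

# Two per-node logits from the two graph networks.
fused = fuse([2.0, -2.0], [1.0, -1.0])
```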
4. The method for deducing the fault cause of a server based on a tabular graph neural network according to claim 2, wherein step S6 is specifically:
performing graph neural network training on the two graph networks through the training set to obtain an intermediate feature A and an intermediate feature B, and performing a Concat operation on the two intermediate features to obtain an intermediate feature C;
feeding the intermediate feature C sequentially into a first fully connected layer, an activation function, a second fully connected layer, and an activation function to obtain the overall prediction result, wherein the number of neurons of the first fully connected layer is R/4, the number of neurons of the second fully connected layer is R/2, and R is the length of the intermediate feature C;
Verification is performed by the test set.
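The prediction head of claim 4 can be sketched as follows, with a ReLU assumed as the activation and random matrices standing in for trained weights:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Sketch of claim 4's head: intermediate feature C (length R) passes through
# an FC layer with R/4 neurons, an activation, an FC layer with R/2 neurons,
# and another activation. Weights are random stand-ins for trained parameters.
def prediction_head(c, rng=np.random.default_rng(0)):
    R = c.shape[0]
    W1 = rng.standard_normal((R // 4, R))       # first FC layer: R/4 neurons
    W2 = rng.standard_normal((R // 2, R // 4))  # second FC layer: R/2 neurons
    return relu(W2 @ relu(W1 @ c))

# Concat of intermediate features A and B gives C; R = 16 here.
out = prediction_head(np.random.rand(16))
```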
CN202410358660.0A 2024-03-27 2024-03-27 Method for deducing fault cause of server based on neural network of table graph Active CN117950906B (en)

Publications (2)

Publication Number Publication Date
CN117950906A CN117950906A (en) 2024-04-30
CN117950906B true CN117950906B (en) 2024-06-04

Family

ID=90805548


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511082A (en) * 2022-09-06 2022-12-23 东南大学 Fact verification method based on graph neural network and reinforcement learning
CN115733730A (en) * 2022-11-15 2023-03-03 国网新疆电力有限公司电力科学研究院 Power grid fault detection method and device based on graph neural network
CN116502175A (en) * 2023-03-15 2023-07-28 华南理工大学 Method, device and storage medium for diagnosing fault of graph neural network
CN116562114A (en) * 2023-04-25 2023-08-08 国网浙江省电力有限公司金华供电公司 Power transformer fault diagnosis method based on graph convolution neural network
CN116662275A (en) * 2023-03-22 2023-08-29 浙江远图技术股份有限公司 Hospital self-service terminal log abnormality detection system based on directed graph convolution neural network
CN117373487A (en) * 2023-12-04 2024-01-09 浙江恒逸石化有限公司 Audio-based equipment fault detection method and device and related equipment
WO2024045246A1 (en) * 2022-08-30 2024-03-07 大连理工大学 Spike echo state network model for aero engine fault prediction
CN117725220A (en) * 2023-10-23 2024-03-19 杭州阿里云飞天信息技术有限公司 Method, server and storage medium for document characterization and document retrieval

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multivariate sequence causal inference based on multi-component graph neural networks; Zhang Youxing, Li Ping, et al.; Journal of China West Normal University (Natural Science Edition); 2024-07-10; pp. 1-10 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant