CN115291091A - Analog circuit fault diagnosis method based on graph neural network - Google Patents


Info

Publication number
CN115291091A
CN115291091A
Authority
CN
China
Prior art keywords: graph, node, sample, fault, nodes
Prior art date
Legal status
Pending
Application number
CN202210987528.7A
Other languages
Chinese (zh)
Inventor
杨京礼 (Yang Jingli)
李晔 (Li Ye)
高天宇 (Gao Tianyu)
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202210987528.7A
Publication of CN115291091A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; arrangements for locating electric faults; arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R31/316 - Testing of analog circuits
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods


Abstract

The invention discloses an analog circuit fault diagnosis method based on a graph neural network. When the fault sample graph is constructed, the structural features among samples form structural constraints, namely the edges among sample nodes; when the graph neural network processes the fault sample graph, it extracts both the structural features and the data features of the samples to classify their fault states. The structural constraints in the fault sample graph help the model extract more effective features for fault classification, improving its fault diagnosis accuracy; they also have a sample-clustering effect, so the model trains well even in few-shot settings where training samples make up a low proportion of the data set.

Description

Analog circuit fault diagnosis method based on graph neural network
Technical Field
The invention relates to the technical field of analog circuit fault diagnosis, in particular to an analog circuit fault diagnosis method based on a graph neural network.
Background
In typical electronic equipment, analog circuits are both harder to diagnose and more failure-prone than digital circuits: analog circuitry typically accounts for only about 20% of a system's scale yet more than 80% of its faults, so analog fault diagnosis technology constrains the development of fault diagnosis for the whole electronic system. Analog circuit faults can be classified by degree into soft faults and hard faults. A soft fault arises when a circuit element's parameter drifts outside its specified tolerance range; the circuit still functions, but the equipment's performance may become abnormal. A hard fault arises from a sudden, large change in an element's parameter and can cause the equipment to break down abruptly, with severe losses. The fault signatures of soft faults are far less obvious than those of hard faults, so soft faults are harder to diagnose; nevertheless, to avoid serious losses, faults should still be caught while at the soft-fault stage. The fault diagnosis accuracy of analog circuits, and especially their soft-fault diagnosis accuracy, is therefore important for improving the overall reliability of electronic equipment.
Analog circuit fault diagnosis has gone through three stages of development: manual observation, traditional diagnosis based on complex models, and data-driven intelligent diagnosis. As the analog circuits in equipment grow in scale and structural complexity, diagnosis becomes correspondingly harder, and the methods of the first two stages no longer suit most circuits. Data-driven intelligent diagnosis combined with machine learning, neural networks, and other artificial intelligence techniques can distinguish different fault states without building a complex diagnostic model from the circuit topology and known parameters, which gives it great advantages in the field of analog circuit fault diagnosis. In recent years, the continuing development of data processing techniques such as wavelet-transform preprocessing, one-dimensional convolutional neural networks, deep belief networks, and adaptive neuro-fuzzy systems has provided ongoing technical support for data-driven intelligent diagnosis. However, although data-driven intelligent diagnosis remains the mainstream of fault diagnosis research, general artificial intelligence techniques can only process fault sample data with a regular structure, and therefore ignore the structural features among samples. To retain those structural features, the fault sample set can instead be constructed as graph-structured data, in which the arrangement and ordering of samples are irregular.
A graph neural network is a neural network defined directly on graph-structured data. The graph convolutional network is one of its variant models: one graph convolution updates a central node's features by aggregating the features of the node itself and its neighbour nodes. Stacking several graph convolution layers extracts both the data features of the samples in the fault sample graph and the structural features among them, effectively improving the fault diagnosis accuracy of the analog circuit fault diagnosis model.
Therefore, how to provide a high-accuracy analog circuit fault diagnosis method based on a graph neural network is an urgent problem for those skilled in the art.
Disclosure of Invention
In view of this, the present invention provides a method for diagnosing a fault of an analog circuit based on a graph neural network.
In order to achieve the purpose, the invention adopts the following technical scheme:
a simulation circuit fault diagnosis method based on a graph neural network comprises the following steps:
s1, constructing a fault sample data set by sampling the impulse response signal at the circuit output, wherein each fault state comprises at least one group of sample data;
s2, dividing the fault sample data set into a training set and a test set;
s3, taking a sampling sample as a node V and taking the multidimensional data characteristics of the sample as node characteristics to form a node characteristic matrix X, and calculating the cosine similarity between every two nodes; constructing a fault sample graph by taking the connection relation E between each node and a fixed number of most similar neighbor nodes as edges, wherein the fixed number is determined by the connection accuracy between the training set sample nodes of the same category in the fault sample graph;
s4, initializing the parameters of the graph neural network; in the training stage, all nodes participate in the forward propagation of node features, but the loss function is computed and the corresponding parameters optimized and updated only on the training set nodes; repeating this process until the model converges, and establishing a fault diagnosis classification model based on the graph neural network;
and S5, predicting the class labels of the nodes of the test set by using the fault diagnosis classification model based on the graph neural network, and outputting a diagnosis result.
Preferably, the specific content of S3 includes:
s31, selecting a fault sample as a node, wherein the sample data is node characteristics, and the characteristic dimension is N; constructing a connection relation among nodes according to the node characteristics:
for any two samples x and y in the fault sample set, with components x_i and y_i, i = 1, 2, …, N, the cosine similarity r between the two samples is computed as:

$$r = \frac{\sum_{i=1}^{N} x_i y_i}{\sqrt{\sum_{i=1}^{N} x_i^{2}}\,\sqrt{\sum_{i=1}^{N} y_i^{2}}}$$

wherein the cosine similarity is defined as the cosine of the angle between the sample vectors; the closer its absolute value is to 1, the closer the angle between the sample vectors is to 0, indicating that the two samples are more similar;
s32, circularly calculating the cosine similarity of each node and the rest nodes, and considering that each node and a fixed number of most similar nodes belong to the same fault state, wherein the connection relation between the nodes in the same fault state is used as an edge;
and S33, after determining the connection relation between the nodes, constructing to obtain a fault sample graph.
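The graph-construction procedure of S31 to S33 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the function name `build_fault_sample_graph`, the toy feature matrix, and the symmetrisation of the edge set are all assumptions made for the example.

```python
import numpy as np

def build_fault_sample_graph(X, k):
    """Build a k-nearest-neighbour adjacency matrix from pairwise cosine similarity.

    X : (num_samples, num_features) node feature matrix.
    k : fixed number of most-similar neighbour nodes per node.
    Returns a symmetric 0/1 adjacency matrix with a zero diagonal.
    """
    # Cosine similarity r = (x . y) / (|x| |y|) for every pair of samples.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    S = (X @ X.T) / (norms @ norms.T)
    np.fill_diagonal(S, -np.inf)          # a node is not its own neighbour
    A = np.zeros_like(S)
    for i in range(X.shape[0]):
        nbrs = np.argsort(S[i])[-k:]      # indices of the k most similar nodes
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)             # symmetrise the edge set (assumption)

# Two pairs of similar samples: nodes 0/1 and nodes 2/3 should link up.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
A = build_fault_sample_graph(X, k=1)
```

With `k = 1`, each node connects only to its single most similar neighbour, so the two clusters form two disjoint edges.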
Preferably, the specific content of forward transmission of the node characteristics in S4 includes:
s411, inputting the fault sample graph into the graph neural network, wherein the connection relation E of the nodes of the fault sample graph is represented by an adjacency matrix A:
$$A \in \mathbb{R}^{N \times N}, \qquad A_{ij} = \begin{cases} 1, & (v_i, v_j) \in E \\ 0, & \text{otherwise} \end{cases}$$
wherein N represents the number of samples;
s412, node characteristics are updated based on the fault diagnosis classification model of the graph neural network, and the specific content includes:
(1) Complete the node feature update; the node feature matrix of layer l+1 is:

$$H^{(l+1)} = \sigma\!\left(A H^{(l)} W^{(l)}\right)$$

wherein H^{(l)} is the node feature matrix of layer l, W^{(l)} is the trainable weight matrix of layer l, the final output matrix is Z, and σ(·) denotes a nonlinear activation function;
(2) Obtain the matrix Ã, which also aggregates each node's own features, by adding the identity matrix I_N to the adjacency matrix A:

$$\tilde{A} = A + I_N$$
(3) Update the node features with the self-aggregating matrix Ã, as follows:

Obtain the degree matrix D, a diagonal matrix whose main-diagonal elements are the degrees of the corresponding nodes; adding one to each main-diagonal element (to account for the added self-loop) gives D̃:

$$\tilde{D} = D + I_N$$

Use D̃^{-1/2} to symmetrically normalize Ã:

$$\hat{A} = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}$$

The graph convolution network layer improved according to Â is:

$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)} W^{(l)}\right)$$
and stacking graph convolution network layers to obtain implicit expression of node characteristics, and sending the implicit expression to a downstream classification layer.
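The improved graph convolution layer above can be sketched numerically. This is a minimal NumPy illustration under stated assumptions: the function name `gcn_layer` and the tanh activation are illustrative choices, not specified by the patent.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One graph convolution layer with the renormalisation trick:
    H' = sigma( D~^{-1/2} (A + I) D~^{-1/2} H W ).
    """
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # add self-loops so each node keeps its own features
    d = A_tilde.sum(axis=1)                    # degrees of A_tilde (D~ diagonal)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D~^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalisation
    return activation(A_hat @ H @ W)

# Two connected nodes, identity features and weights, for a hand-checkable result.
A = np.array([[0., 1.], [1., 0.]])
H = np.eye(2)
W = np.eye(2)
out = gcn_layer(A, H, W)
```

Here A_hat has every entry 0.5, so every output entry is tanh(0.5).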
Preferably, in S411, before the fault sample graph is input into the graph neural network, a hyper-parameter of the graph neural network is initialized, where the hyper-parameter includes the number of layers of the graph convolution neural network, a node feature dimension of a hidden layer, a learning rate, and an iteration number.
Preferably, the specific contents of performing the calculation of the loss function and the optimization updating of the corresponding parameter on the training set node in S4 include:
s421, selecting the activation functions: the relu function between graph convolution layers, and a log(softmax) function between the last graph neural network layer and the classification layer, wherein:

$$\mathrm{relu}(x) = \max(0, x)$$

$$\log\mathrm{softmax}(x_i) = \log\frac{e^{x_i}}{\sum_j e^{x_j}}$$

the log(softmax) function maps the final implicit node features into the (−∞, 0] interval of log-probabilities, and the class with the highest probability is selected as the classification result of the corresponding sample node;
s422, selecting the cross-entropy function as the loss function; if the true class of sample i is c, then y_{ic} is 1, otherwise y_{ic} is 0; p_{ic} is the predicted probability that sample i belongs to class c:

$$\mathrm{Loss} = -\sum_{c=1}^{N} y_{ic} \log p_{ic}$$

where N is the number of classes and y_{ic} is the indicator function;
s423, selecting the Adam algorithm as the optimizer; the updates of m_i and v_i are:

$$m_i = \beta_1 m_{i-1} + (1 - \beta_1)\, g_i$$

$$v_i = \beta_2 v_{i-1} + (1 - \beta_2)\, g_i^{2}$$

wherein m_i, the exponential moving average of the gradient, is obtained from the gradient's first moment; v_i, the moving average of the squared gradient, is obtained from the gradient's second moment; i denotes the iteration number; β_1 and β_2 are constants controlling the exponential decay; and g_i is the gradient at step i;
s424, obtaining m̂_i and v̂_i, the bias-corrected versions of m_i and v_i:

$$\hat{m}_i = \frac{m_i}{1 - \beta_1^{\,i}}, \qquad \hat{v}_i = \frac{v_i}{1 - \beta_2^{\,i}}$$
s425, acquiring the weight matrix parameters w_i, where α is the learning rate and ε is a constant maintaining numerical stability:

$$w_i = w_{i-1} - \alpha\,\frac{\hat{m}_i}{\sqrt{\hat{v}_i} + \epsilon}$$
According to the above technical scheme, and compared with the prior art, the invention discloses an analog circuit fault diagnosis method based on a graph neural network in which, when the fault sample graph is constructed, the structural features among samples form structural constraints, namely the edges among sample nodes; when the graph neural network processes the fault sample graph, it extracts both the structural features and the data features of the samples to classify their fault states. The structural constraints in the fault sample graph help the model extract more effective features for fault classification, improving its fault diagnosis accuracy; they also have a sample-clustering effect, so the model trains well even when training samples make up a low proportion of the data set.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic flow diagram provided in a graph neural network-based analog circuit fault diagnosis method according to the present invention;
fig. 2 is a schematic diagram of a quad-operational-amplifier bi-quad filter circuit provided in an embodiment of the analog circuit fault diagnosis method based on the graph neural network.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a method for diagnosing faults of an analog circuit based on a graph neural network, which comprises the following steps of:
s1, constructing a fault sample data set by using an impulse response signal at the output end of a sampling circuit, wherein each fault state comprises at least one group of sample data;
s2, dividing a fault sample data set into a training set and a testing set;
s3, taking a sampling sample as a node V and taking the multidimensional data characteristics of the sample as node characteristics to form a node characteristic matrix X, and calculating the cosine similarity between every two nodes; establishing a fault sample graph by taking the connection relation E between each node and a fixed number of most similar neighbor nodes as edges, wherein the fixed number is determined by the connection accuracy between the training set sample nodes of the same category in the fault sample graph;
s4, initializing parameters of the neural network of the graph, in a training stage, enabling all nodes to participate in forward transmission of node characteristics, performing calculation of a loss function and optimization updating of corresponding parameters only on nodes of a training set, training until a model is converged, and establishing a fault diagnosis classification model based on the neural network of the graph;
and S5, predicting the class labels of the nodes of the test set by using the fault diagnosis classification model based on the graph neural network, and outputting a diagnosis result.
It should be noted that:
in S3, the closer the cosine similarity between two nodes is to 1, the more similar their features are and the more likely the two nodes belong to the same class. For each node, a fixed number of most-similar neighbour nodes are screened out and connected by edges E; through the cosine similarity formula, the fault sample graph is identified as interconnections among nodes of the same class. If a sample node in each fault state is connected to sample nodes of the same state, the connection is correct; the fixed number is determined by the average connection accuracy among the training set sample nodes of all classes in the fault sample graph.
For each fault state, the percentage of correct connections is that state's connection accuracy; the average over all states is the connection accuracy of the constructed fault sample graph.
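The per-state and average connection accuracy described above can be sketched as follows; this assumes `labels` holds the fault-state class of each node, and the function name `connection_accuracy` is illustrative, not from the patent.

```python
import numpy as np

def connection_accuracy(A, labels):
    """Fraction of each class's edges that stay inside the class,
    averaged over all fault states."""
    accs = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        edges = A[idx].sum()                   # all edges leaving class-c nodes
        same = A[np.ix_(idx, idx)].sum()       # edges staying inside class c
        accs.append(same / edges if edges else 0.0)
    return float(np.mean(accs))

# Two fault states, each connected only internally: accuracy should be 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
acc = connection_accuracy(A, np.array([0, 0, 1, 1]))
```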
In order to further implement the above technical solution, the specific content of S3 includes:
s31, selecting a fault sample as a node, wherein the sample data is node characteristics, and the characteristic dimension is N; constructing a connection relation among nodes according to the node characteristics:
for any two samples x and y in the fault sample set, with components x_i and y_i, i = 1, 2, …, N, the cosine similarity r between the two samples is computed as:

$$r = \frac{\sum_{i=1}^{N} x_i y_i}{\sqrt{\sum_{i=1}^{N} x_i^{2}}\,\sqrt{\sum_{i=1}^{N} y_i^{2}}}$$

wherein the cosine similarity is defined as the cosine of the angle between the sample vectors; the closer its absolute value is to 1, the closer the angle between the sample vectors is to 0, indicating that the two samples are more similar;
s32, circularly calculating the cosine similarity of each node and the rest nodes, and considering that each node and a fixed number of most similar nodes belong to the same fault state, wherein the connection relation between the nodes in the same fault state is used as an edge;
and S33, after determining the connection relation between the nodes, constructing to obtain a fault sample graph.
In order to further implement the above technical solution, the specific content of the forward transmission of the node characteristics in S4 includes:
s411, inputting the fault sample graph into a graph neural network, wherein the connection relation E of the nodes of the fault sample graph is represented by an adjacency matrix A:
$$A \in \mathbb{R}^{N \times N}, \qquad A_{ij} = \begin{cases} 1, & (v_i, v_j) \in E \\ 0, & \text{otherwise} \end{cases}$$
wherein N represents the number of samples;
s412, node features in the graph convolutional network are updated with an aggregate-and-update message passing mechanism: each central node updates its own features by aggregating the features of its neighbour nodes together with its own. The node features are updated in the graph neural network-based fault diagnosis classification model as follows:
(1) Complete the node feature update; the node feature matrix of layer l+1 is:

$$H^{(l+1)} = \sigma\!\left(A H^{(l)} W^{(l)}\right)$$

wherein H^{(l)} is the node feature matrix of layer l, W^{(l)} is the trainable weight matrix of layer l, the final output matrix is Z, and σ(·) denotes a nonlinear activation function;
(2) Since the diagonal elements of the adjacency matrix are 0, the formula above does not aggregate the central node's own features during graph convolution; therefore, the identity matrix I_N is added to the adjacency matrix A to obtain Ã:

$$\tilde{A} = A + I_N$$
(3) Update the node features with the self-aggregating matrix Ã, as follows:

Obtain the degree matrix D, a diagonal matrix whose main-diagonal elements are the degrees of the corresponding nodes; adding one to each main-diagonal element (to account for the added self-loop) gives D̃:

$$\tilde{D} = D + I_N$$

Use D̃^{-1/2} to symmetrically normalize Ã:

$$\hat{A} = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}$$

The graph convolution network layer improved according to Â is:

$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)} W^{(l)}\right)$$
and stacking graph convolution network layers to obtain implicit expression of node characteristics, and sending the implicit expression to a downstream classification layer.
Stacking multiple layers of graph convolutions can allow the center node to learn deeper node features.
In order to further implement the above technical solution, in S411, before the fault sample graph is input into the graph neural network, a hyper-parameter of the graph neural network is initialized, where the hyper-parameter includes the number of layers of the graph convolution neural network, the node feature dimension of the hidden layer, the learning rate, and the number of iterations.
In order to further implement the above technical solution, the specific contents of performing the calculation of the loss function and the optimization updating of the corresponding parameter on the training set node in S4 include:
s421, select an appropriate activation function so that the model can learn more complex functions and the network computes efficiently. The activation function between graph convolution layers is the relu function; between the last graph neural network layer and the classification layer a log(softmax) function is used, which effectively avoids the overflow problem of the softmax function, wherein:

$$\mathrm{relu}(x) = \max(0, x)$$

$$\log\mathrm{softmax}(x_i) = \log\frac{e^{x_i}}{\sum_j e^{x_j}}$$

the log(softmax) function maps the final implicit node features into the (−∞, 0] interval of log-probabilities, and the class with the highest probability is selected as the classification result of the corresponding sample node;
s422, selecting the cross-entropy function as the loss function; if the true class of sample i is c, then y_{ic} is 1, otherwise y_{ic} is 0; p_{ic} is the predicted probability that sample i belongs to class c:

$$\mathrm{Loss} = -\sum_{c=1}^{N} y_{ic} \log p_{ic}$$

where N is the number of classes and y_{ic} is the indicator function;
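The log(softmax) output of S421 combined with the cross-entropy loss of S422 can be sketched as follows. Function names are illustrative, and averaging the loss over samples is an assumption for the example (the patent's formula sums over classes for one sample).

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log(softmax): shift by the row maximum before exponentiating,
    which avoids the overflow problem of a bare softmax."""
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def cross_entropy(logits, labels):
    """Mean negative log-likelihood: -sum_c y_ic * log p_ic with one-hot y,
    which reduces to picking out log p of the true class."""
    logp = log_softmax(logits)
    return -np.mean(logp[np.arange(len(labels)), labels])

logits = np.array([[2.0, 0.0], [0.0, 2.0]])   # two samples, two classes
loss = cross_entropy(logits, np.array([0, 1]))
```

Each row of `log_softmax` lies in (−∞, 0], matching the probability interval described above.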
s423, selecting the Adam algorithm as the optimizer; the updates of m_i and v_i are:

$$m_i = \beta_1 m_{i-1} + (1 - \beta_1)\, g_i$$

$$v_i = \beta_2 v_{i-1} + (1 - \beta_2)\, g_i^{2}$$

wherein m_i, the exponential moving average of the gradient, is obtained from the gradient's first moment; v_i, the moving average of the squared gradient, is obtained from the gradient's second moment; i denotes the iteration number; β_1 and β_2 are constants controlling the exponential decay; and g_i is the gradient at step i;
s424, obtaining m̂_i and v̂_i, the bias-corrected versions of m_i and v_i:

$$\hat{m}_i = \frac{m_i}{1 - \beta_1^{\,i}}, \qquad \hat{v}_i = \frac{v_i}{1 - \beta_2^{\,i}}$$
s425, obtaining the weight matrix parameters w_i, where α is the learning rate and ε is a constant maintaining numerical stability:

$$w_i = w_{i-1} - \alpha\,\frac{\hat{m}_i}{\sqrt{\hat{v}_i} + \epsilon}$$

The default values of the above parameters are set as: α = 0.001, β_1 = 0.9, β_2 = 0.999, ε = 10⁻⁸.
The smaller the value of $\hat{m}_i / (\sqrt{\hat{v}_i} + \epsilon)$, the more uncertain the current gradient direction, and hence the smaller the step taken.
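With the default values above, one Adam update (steps S423 to S425) for a single scalar weight can be sketched as follows; the function name `adam_step` is an illustrative assumption.

```python
import math

def adam_step(w, g, m, v, i, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar weight.

    m, v : exponential moving averages of the gradient and squared gradient.
    i    : 1-based iteration count, used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * g           # first-moment estimate m_i
    v = beta2 * v + (1 - beta2) * g * g       # second-moment estimate v_i
    m_hat = m / (1 - beta1 ** i)              # bias-corrected m^_i
    v_hat = v / (1 - beta2 ** i)              # bias-corrected v^_i
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# First iteration from a cold start (m = v = 0) with gradient 0.5.
w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, g=0.5, m=m, v=v, i=1)
```

On the first step the bias corrections exactly undo the decay factors, so the update is close to lr times the sign-scaled gradient.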
The above will be illustrated by specific examples:
in this embodiment, a quad-operational amplifier bi-quad filter circuit in an international standard circuit is selected as an experimental circuit.
(1-1) simulating the quad-operational-amplifier biquad filter circuit in PSPICE, using a 10 μs pulse signal with an amplitude of 5 V and a frequency of 2 kHz as the excitation source, and selecting the circuit output as the test point;
(1-2) the experimental circuit schematic is shown in fig. 2; the tolerance ranges of the circuit's resistors and capacitors are set to 5% and 10%, respectively. Through sensitivity analysis of the circuit elements, C_1, C_2, R_1, R_2 and R_3 are selected as the elements under test. The circuit is deemed to have a soft fault when an element's parameter value deviates from its nominal value by approximately 30%. The fault classes, nominal values and fault values of the elements in the experimental circuit are shown in the following table:
[Table: fault classes, nominal values and fault values of the tested elements; the original table was an image and its values are not reproduced here.]
(1-3) generating 300 groups of impulse response sample signals for each fault class by Monte Carlo analysis; each group of signals is sampled at 1000 points, that is, each sample has 1000-dimensional feature data;
(2) randomly assigning a training-set or test-set mask to the samples in each fault state in the ratios 7:3, 5:5, 3:7 and 1:9;
(3-1) selecting the fault samples as nodes, with the sampled data as node features; for any two samples x and y in the fault sample set, with components x_i and y_i (i = 1, 2, …, N), N = 1000, computing the cosine similarities;
(3-2) cyclically computing the cosine similarity between each node and all other nodes, and selecting the five most similar nodes for each node to construct its edges. Too many neighbour nodes introduce erroneous connections; too few give insufficient structural constraint and reduce the model's generalization ability and operation speed. Experiments show that setting the number of most-similar nodes per central node to 5 works well for this circuit, so every node has at least 5 neighbour nodes;
(3-3) determining the connection relation between the nodes to complete the construction of the fault sample graph;
(4-1-1) initializing the graph neural network parameters: two graph convolution layers; hidden-layer node feature dimension 100; learning rate lr = 0.001; 1000 iterations;
(4-1-2) inputting a fault sample graph (comprising sample nodes V, connection relations E among the nodes and a node characteristic matrix X) into a graph convolution neural network, wherein the connection relations E of the nodes in the graph are usually represented by an adjacency matrix A;
(4-2-1) the node characteristics are updated in the graph convolution neural network by adopting an aggregation and update message transmission mechanism, and the central node updates the central node characteristics by aggregating the node characteristics of the neighbor nodes and the node characteristics of the central node.
(4-2-2) stacking two layers of graph convolution networks to enable the nodes to learn deeper node characteristics during training;
(4-3-1) the activation function selects a relu function between the graph volume network layers and selects a log (softmax) function between the graph neural network layer and the classification layer;
let the input node features be X and the adjacency matrix be A, with layer weight matrices W^{(l)} and final output matrix Z; the forward update formula of the two-layer graph convolutional neural network is:

$$Z = \log\mathrm{softmax}\!\left(\hat{A}\,\mathrm{relu}\!\left(\hat{A}\,X\,W^{(0)}\right) W^{(1)}\right), \qquad \hat{A} = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}$$
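The two-layer forward pass can be sketched end to end in NumPy; the weight shapes, random seed, and helper names below are arbitrary assumptions for illustration, not values from the embodiment.

```python
import numpy as np

def normalize_adj(A):
    """Renormalised adjacency A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def relu(x):
    return np.maximum(0.0, x)

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def two_layer_gcn(A, X, W0, W1):
    """Forward pass Z = log_softmax( A_hat . relu(A_hat . X . W0) . W1 )."""
    A_hat = normalize_adj(A)
    return log_softmax(A_hat @ relu(A_hat @ X @ W0) @ W1)

rng = np.random.default_rng(0)
A = np.array([[0., 1.], [1., 0.]])            # two connected sample nodes
X = rng.normal(size=(2, 3))                   # 3-dimensional node features
Z = two_layer_gcn(A, X, rng.normal(size=(3, 4)), rng.normal(size=(4, 2)))
```

Each row of Z is a vector of log-probabilities over the two classes, so exponentiating a row sums to 1.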
(4-3-2) selecting a cross entropy function by the loss function;
(4-3-4) selecting Adam algorithm by an optimization function;
(4-3-5) circularly training the neural network of the graph until the model converges;
(5) And predicting the class labels of the sample nodes of the test set by using the trained fault diagnosis classification model, and outputting a diagnosis result and the fault diagnosis accuracy of the model.
Table 1 is the test set diagnostic accuracy of the analog circuit fault diagnostic model at different training set and test set ratios.
[Table 1: test-set diagnostic accuracy under each training/test split ratio; the original table was an image and its values are not reproduced here.]
According to the experimental results of the embodiments, the present invention has the following advantages:
(1) Constructing a fault sample graph from the fault sample set preserves the structural constraints among samples, giving the method the characteristics and advantages of semi-supervised learning;
(2) Under four different training-set/test-set sample ratios, using the graph convolutional neural network to extract both the data features and the structural features of the fault sample graph effectively improves the fault diagnosis accuracy compared with extracting only the data features or only the structural features of the samples;
(3) Applying the graph neural network to the field of analog circuit fault diagnosis provides a new approach to the analog circuit fault diagnosis problem, at a time when the theoretical system of graph neural networks and their variants is increasingly mature.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. An analog circuit fault diagnosis method based on a graph neural network, characterized by comprising the following steps:
s1, constructing a fault sample data set by sampling the impulse response signal at the output end of the circuit, wherein each fault state comprises at least one group of sample data;
s2, dividing the fault sample data set into a training set and a testing set;
s3, taking each sampled sample as a node V and the multi-dimensional data features of the sample as node features to form a node feature matrix X, and calculating the cosine similarity between every pair of nodes; constructing a fault sample graph by taking the connection relation E between each node and a fixed number of its most similar neighbor nodes as edges, wherein the fixed number is determined by the connection accuracy among training-set sample nodes of the same category in the fault sample graph;
s4, initializing the parameters of the graph neural network; in the training stage, all nodes participate in the forward propagation of node features, while the loss function calculation and the corresponding parameter optimization and updating are performed only on the training-set nodes; training until the model converges, and establishing a fault diagnosis classification model based on the graph neural network;
and S5, predicting the class labels of the nodes of the test set by using the fault diagnosis classification model based on the graph neural network, and outputting a diagnosis result.
2. The analog circuit fault diagnosis method based on a graph neural network according to claim 1, wherein the specific content of S3 comprises:
s31, selecting a fault sample as a node, wherein the sample data is node characteristics, and the characteristic dimension is N; constructing a connection relation between nodes according to the node characteristics:
for any two samples x and y in the fault sample set, with components x_i and y_i, where i = 1, 2, …, N, the cosine similarity r of the two samples is calculated as:

r = ( Σ_(i=1)^N x_i · y_i ) / ( sqrt( Σ_(i=1)^N x_i² ) · sqrt( Σ_(i=1)^N y_i² ) )

wherein the cosine similarity is defined as the cosine of the angle between the sample vectors; the closer its absolute value is to 1, the closer the angle between the sample vectors is to 0, indicating that the two samples are more similar;
s32, cyclically calculating the cosine similarity between each node and the remaining nodes, regarding each node and a fixed number of its most similar nodes as belonging to the same fault state, and taking the connection relations between nodes in the same fault state as edges;
and S33, after determining the connection relation between the nodes, constructing to obtain a fault sample graph.
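The graph construction of S31 to S33 can be sketched as follows; this is an illustrative sketch in plain Python (the function names are hypothetical, and the tie-breaking among equally similar neighbors is an arbitrary choice not specified by the patent):

```python
import math

def cosine_similarity(x, y):
    # r = (x . y) / (|x| * |y|): cosine of the angle between two sample vectors.
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def knn_edges(samples, k):
    # Connect each node to its k most similar neighbors (by |cosine similarity|),
    # yielding the undirected edge set E of the fault sample graph.
    edges = set()
    for i, xi in enumerate(samples):
        sims = [(abs(cosine_similarity(xi, xj)), j)
                for j, xj in enumerate(samples) if j != i]
        sims.sort(reverse=True)
        for _, j in sims[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges
```

In the patent, k (the "fixed number") is chosen by checking the connection accuracy among same-category training-set nodes.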
3. The analog circuit fault diagnosis method based on a graph neural network according to claim 1, wherein the forward propagation of node features in S4 comprises:
s411, inputting the fault sample graph into the graph neural network, wherein the connection relation E of the nodes of the fault sample graph is represented by an adjacency matrix A ∈ R^(N×N):

A_ij = 1, if (v_i, v_j) ∈ E;  A_ij = 0, otherwise,

wherein N represents the number of samples;
s412, updating the node characteristics based on the fault diagnosis classification model of the graph neural network, wherein the specific contents comprise:
(1) Completing the node feature update, wherein the node feature matrix of layer l+1 is:

H^(l+1) = σ( A H^(l) W^(l) )

wherein H^(l) is the node feature matrix of layer l, W^(l) is the trainable weight matrix of layer l, and σ(·) denotes a nonlinear activation function;
(2) Obtaining the matrix Ã that aggregates each node's own features from the adjacency matrix A:

Ã = A + I_N

(3) Updating the node features through the matrix Ã, wherein the specific contents comprise:

obtaining the degree matrix D, wherein the degree matrix D is a diagonal matrix whose main-diagonal elements are the degrees of the corresponding nodes; on the basis of the degree matrix D, adding one to each main-diagonal element gives the matrix D̃:

D̃ = D + I_N

using D̃^(-1/2) to normalize Ã symmetrically, obtaining:

Â = D̃^(-1/2) Ã D̃^(-1/2)

according to Â, improving the graph convolution network layer, wherein the improved graph convolution network layer is:

H^(l+1) = σ( D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l) )
and stacking graph convolution network layers to obtain implicit expression of node characteristics, and sending the implicit expression to a downstream classification layer.
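A minimal sketch of the symmetric normalization in steps (2) and (3), assuming an unweighted adjacency matrix given as a list of lists (the function name is hypothetical):

```python
import math

def normalized_adjacency(A):
    # A_tilde = A + I_N: aggregate each node's own features as well.
    n = len(A)
    A_t = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    # Row sums of A_tilde give the diagonal of D_tilde = D + I_N.
    deg = [sum(row) for row in A_t]
    # A_hat = D_tilde^(-1/2) * A_tilde * D_tilde^(-1/2) (symmetric normalization).
    return [[A_t[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
```

The normalization keeps the aggregated feature scale comparable across nodes of different degree, which is why Â rather than A appears in the improved layer.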
4. The method as claimed in claim 3, wherein before the fault sample graph is input into the graph neural network in step S411, the hyper-parameters of the graph neural network are initialized, the hyper-parameters comprising the number of layers of the graph convolutional network, the node feature dimensions of the hidden layers, the learning rate, and the number of iterations.
5. The method according to claim 1, wherein the specific contents of performing the calculation of the loss function and the optimization and update of the corresponding parameter on the training set node in S4 include:
s421, selecting the activation functions; the activation function is the relu function between the graph convolution network layers and the log-softmax function between the graph neural network layer and the classification layer, wherein:

relu(x) = max(0, x)

log_softmax(x_i) = log( exp(x_i) / Σ_j exp(x_j) )

mapping the final implicit representation of the node features to the interval (−∞, 0) by the log-softmax function, and selecting the category with the maximum probability as the classification result of the corresponding sample node;
s422, selecting the cross-entropy function as the loss function; if the true category of sample i is c, then y_ic takes 1, otherwise y_ic takes 0; p_ic is the predicted probability that sample i belongs to category c:

L = − Σ_i Σ_(c=1)^N y_ic · log(p_ic)

wherein N is the number of categories and y_ic is an indicator (sign) function;
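The loss above can be sketched as follows (an illustrative plain-Python helper with a hypothetical name, exploiting the fact that y_ic is 1 only for the true class):

```python
import math

def cross_entropy(y_true, p_pred):
    # L = -sum_i sum_c y_ic * log(p_ic); only the true-class term survives per sample.
    loss = 0.0
    for y_row, p_row in zip(y_true, p_pred):
        for y, p in zip(y_row, p_row):
            if y:
                loss -= math.log(p)
    return loss
```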
s423, selecting the Adam algorithm as the optimization function, wherein m_i and v_i are updated as:

m_i = β1 · m_(i−1) + (1 − β1) · g_i

v_i = β2 · v_(i−1) + (1 − β2) · g_i²

wherein m_i is the exponential moving average of the gradient, obtained from the first moment of the gradient; v_i is the exponential moving average of the squared gradient, obtained from the second moment of the gradient; i denotes the iteration number; β1 and β2 are constants controlling the exponential decay; g_i is the gradient (first derivative);
s424, obtaining the bias-corrected estimates m̂_i and v̂_i, wherein m̂_i is the correction of m_i and v̂_i is the correction of v_i:

m̂_i = m_i / (1 − β1^i)

v̂_i = v_i / (1 − β2^i)
s425, obtaining the update of the weight matrix parameters w_i, wherein α is the learning rate and ε is a constant maintaining numerical stability:

w_i = w_(i−1) − α · m̂_i / ( sqrt(v̂_i) + ε )
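A single Adam step combining S423 to S425 can be sketched for one scalar parameter as follows (hypothetical function; the default hyperparameters β1=0.9, β2=0.999, α=0.001, ε=1e-8 are the common Adam choices, not values specified by the patent):

```python
import math

def adam_step(w, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # First and second moment estimates (exponential moving averages).
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    # Bias correction for the zero-initialized moments (t is the iteration count).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update: w_i = w_(i-1) - alpha * m_hat / (sqrt(v_hat) + eps).
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

At the first step the bias correction makes m̂ equal the raw gradient, so the very first update moves by approximately α in the descent direction.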
CN202210987528.7A 2022-08-17 2022-08-17 Analog circuit fault diagnosis method based on graph neural network Pending CN115291091A (en)


Publications (1)

Publication Number Publication Date
CN115291091A true CN115291091A (en) 2022-11-04




Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117235465A (en) * 2023-11-15 2023-12-15 国网江西省电力有限公司电力科学研究院 Transformer fault type diagnosis method based on graph neural network wave recording analysis

CN117235465B (en) * 2023-11-15 2024-03-12 国网江西省电力有限公司电力科学研究院 Transformer fault type diagnosis method based on graph neural network wave recording analysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination