CN111368888B - Service function chain fault diagnosis method based on deep dynamic Bayesian network


Info

Publication number: CN111368888B
Authority: CN (China)
Prior art keywords: fault, network, node, model, data
Legal status: Active
Application number: CN202010116968.6A
Other languages: Chinese (zh)
Other versions: CN111368888A
Inventors: 唐伦, 廖皓, 贺兰钦, 曹睿, 胡彦娟
Current Assignee: Shenzhen Wanzhida Technology Transfer Center Co ltd; Xixian New Area Digital Technology Co.,Ltd.
Original Assignee: Chongqing University of Post and Telecommunications
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202010116968.6A
Publication of CN111368888A
Application granted
Publication of CN111368888B

Classifications

    • G06F18/29: Pattern recognition; Analysing; Graphical models, e.g. Bayesian networks
    • G06F18/214: Pattern recognition; Design or setup of recognition systems; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/084: Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • H04W24/04: Supervisory, monitoring or testing arrangements; Arrangements for maintaining operational condition
    • H04W24/06: Supervisory, monitoring or testing arrangements; Testing, supervising or monitoring using simulated traffic

Abstract

The invention relates to a service function chain fault diagnosis method based on a deep dynamic Bayesian network, belonging to the technical field of communication. Combining the characteristics of the service function chain scenario, a fault diagnosis model is constructed from the fault propagation relations in the hierarchical network architecture of the service function chain, and high-dimensional symptom data are collected by monitoring the performance data of the multiple virtual network functions on each physical node. Considering the diversity of network symptom observations under the SDN/NFV architecture and the spatial correlation between physical nodes and virtual network functions, a deep belief network is used to extract features from the observation data. Finally, a dynamic Bayesian network is introduced to diagnose the fault source in real time by exploiting the temporal correlation of fault propagation. This service function chain fault diagnosis method for 5G end-to-end network slicing scenarios can effectively process high-dimensional network data and meet the system's requirements on fault diagnosis accuracy.

Description

Service function chain fault diagnosis method based on deep dynamic Bayesian network
Technical Field
The invention belongs to the technical field of communication, and relates to a service function chain fault diagnosis method based on a deep dynamic Bayesian network.
Background
In recent years, as user demands have become increasingly diverse, the conventional network architecture has struggled to adapt to changing user requirements. A 5G network therefore needs high flexibility to cope with diversified user service demands. Network slicing based on Software Defined Networking (SDN) and Network Function Virtualization (NFV) has become a key technology for 5G network operators to provide various customized services on demand in a sustainable way. NFV decouples Network Functions (NFs) from physical hardware, replaces dedicated hardware with general-purpose hardware, allows network functions to be deployed conveniently and quickly at any position in the network, and enables demand-based allocation and dynamic scaling of general-purpose hardware resources. In a sliced network, each traffic request consists of a set of Virtual Network Functions (VNFs) that provide different network services and are interconnected in a certain order to form a Service Function Chain (SFC). Most current research focuses on the acceptance rate of virtual network requests, the allocation of underlying network resources, the deployment of SFCs, and so on, while neglecting the reliability of the virtual network. Because service function chains can be created, migrated, and destroyed dynamically, network failures are more likely to occur. To guarantee the quality of service (QoS) of service function chains in a network function virtualization environment, the network must be able to recover quickly from failures.
However, fault recovery in a virtualized network is a serious problem currently faced. With the exponential growth of user traffic and the increasing complexity of network structures, the current manual network operation and maintenance mode is not only inefficient but also expensive. To reduce operation and maintenance expenditure and improve efficiency, 5G networks introduce the concept of the self-organizing network (SON), which uses self-configuration, self-optimization and self-healing to achieve self-management of the network [7,8]. Fault diagnosis, as the key to locating the fault source in the network, is the prerequisite for network self-healing.
The existing diagnostic methods have several problems: (1) in existing explicit diagnosis methods based on the network topology and a fault propagation model, a static network is usually obtained from expert knowledge; however, in a virtualized network the topology and fault dependency relations may change at any time, and the sharing of underlying resources may cause mutual interference between faults or even unknown faults, leading to wrong diagnoses; probe-based methods occupy network resources and therefore tend to cause congestion and degrade the quality of service; model-based methods must rebuild the network topology and fault dependency graph for every diagnosis, so the timeliness of diagnosis is hard to guarantee in a large-scale network. (2) Data in a virtualized network is massive, high-dimensional and multi-source, and traditional diagnosis methods cannot handle such weakly correlated characteristics well. (3) Learning-based methods can classify faults intelligently, but the training data lacks timeliness, so the actual diagnosis accuracy is low.
Disclosure of Invention
In view of the above, the present invention provides an SFC fault diagnosis method based on a deep dynamic bayesian network to study the problem of VNF node fault location. The method can effectively diagnose the fault position of the bottom node according to the change of the performance data of the VNF node in the network virtualization environment, and meets the requirement of network reliability.
In order to achieve the purpose, the invention provides the following technical scheme:
a service function chain fault diagnosis method based on a deep dynamic Bayesian network comprises the following steps:
s1: constructing a fault diagnosis model according to a fault propagation relation in a hierarchical network architecture of a service function chain;
s2: monitoring, at a physical node, performance data of a plurality of Virtual Network Function (VNF) thereon, and collecting high-dimensional data of a symptom;
s3: aiming at the diversity of Network observation data and the spatial correlation between a physical node and a VNF under an SDN/NFV framework, a correlated Deep Belief Network (DBN) model is established to perform feature extraction and dimension reduction on the observation data, a k-step contrast Divergence algorithm (CD-k) is used for approximately sampling a historical observation data set, and a self-adaptive BP algorithm added with a momentum term is used for fine adjustment of the model;
s4: a Dynamic Bayesian Network (DBN) model is established to diagnose fault sources in real time by utilizing the time correlation existing between faults, and a 1.5 time slice joint tree reasoning algorithm is used for positioning the fault sources.
Further, in step S1, in the service function chain scenario, the NFV MANO in the virtualization layer determines, according to the user service request, the VNF required for the service and the logical link thereof, and ensures the resource of the general server occupied by the operation of the VNF and the specific bandwidth on the path, where the resource of the general server includes computation, network, and storage, and then the SDN controller connects the VNFs to form an SFC and control transmission connection; the application layer comprises a plurality of SFCs for serving various service flows, and each SFC is formed by different network functions in a chain mode according to a certain sequence and provides end-to-end service for the service flows.
Further, in step S1, according to the fault propagation relationship in the hierarchical network architecture of the service function chain, the fault diagnosis model that is established needs to first locate a VNF node that may have a fault at the application layer, and then locate the root of the fault according to the mapping relationship between the VNF node that has a fault at the application layer and the infrastructure layer; for a layered network architecture, a Dynamic Bayesian Network (DBN) capable of causal association over time and adapting to environmental dynamics is employed for fault diagnosis.
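As an illustration of this two-stage localization, the following minimal sketch (in Python, with names that are illustrative assumptions rather than definitions from the patent) maps the VNF nodes suspected at the application layer to the candidate infrastructure-layer fault roots through an assumed VNF-to-physical-node mapping:

```python
from typing import Dict, Iterable, Set

def candidate_fault_roots(suspect_vnfs: Iterable[str],
                          vnf_to_physical: Dict[str, str]) -> Set[str]:
    """Stage 1 flags possibly faulty VNF nodes at the application layer;
    stage 2 follows the VNF-to-infrastructure mapping to the physical
    nodes that are the candidate roots of the fault."""
    return {vnf_to_physical[v] for v in suspect_vnfs if v in vnf_to_physical}
```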
Further, in step S2, the method further includes monitoring the performance data of the plurality of VNFs on the physical node, collecting high-dimensional symptom data, and performing normalization preprocessing on the data to eliminate the influence of the different scales of the symptom information; the data are preprocessed with the linear max-min method, whose conversion function is:
y′ = (y − y_min) / (y_max − y_min)
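A minimal sketch of this max-min normalization step, assuming the collected symptoms are arranged as a samples-by-metrics array (the array layout and function name are illustrative, not taken from the patent):

```python
import numpy as np

def normalize_symptoms(raw: np.ndarray) -> np.ndarray:
    """Linear max-min normalization of VNF symptom data.

    raw: array of shape (num_samples, num_metrics), e.g. CPU utilization,
    processing delay, waiting delay, bandwidth occupancy per VNF.
    Each metric (column) is scaled to [0, 1] so that differently scaled
    symptom information contributes comparably to the model.
    """
    col_min = raw.min(axis=0)
    col_max = raw.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)  # avoid /0
    return (raw - col_min) / span
```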
further, in step S3, a relevant Deep Belief Network (DBN) model is established for the diversity of network observation data based on the SDN/NFV architecture and the spatial correlation between the physical nodes and the VNF:
s31: carrying out greedy layer-by-layer training on the network in an unsupervised learning mode by using a multi-hidden-layer neural network consisting of three-layer stacked Restricted Boltzmann Machines (RBMs), and learning the high-level fault characteristics of the physical nodes only by using an SFC virtual node historical observation data set;
s32: adding a softmax layer on the three layers of RBM models to form a Deep Belief Network (DBN) to classify the node faults, and performing reverse supervised fine adjustment by combining label data to obtain a classification model of an initial time slice;
s33: the parameters are further optimized using real-time symptom data.
Further, step S3 specifically includes the following steps:
the parameter to be learned is θ; in the SFC scenario, θ captures the probabilistic dependence between the fault symptoms and the actual faults, namely
θ = {w_ij, a_i, b_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}
where w_ij is the weight between visible-layer node i and hidden-layer node j, a_i is the bias of visible-layer node i, b_j is the bias of hidden-layer node j, n is the number of fault factors X of the physical node, and m is the number of virtual node observations Y;
for parameters of the SFC fault diagnosis model in a single time slice, learning is carried out by adopting a deep belief network, and the parameters are trained in an off-line and on-line learning mode:
firstly, a historical observation data set of the fault node is collected as training samples and divided into marked and unmarked samples; S and U denote the sets of marked and unmarked samples, respectively, and Y and X denote the symptom information of a failed VNF node and the label output of the failure type, respectively; the set of historical observation data of the VNF nodes on the same physical node is denoted Q = {…, Q_i, …}, where Q_i = [Y_t, Y_{t-1}, …, Y_{t-d+1}] and d is the dimension of the model input sample; finally, the unmarked samples for unsupervised learning are denoted U(Y) and the marked samples for supervised learning are denoted S(Y, X), and all data samples are divided into mini-batch data sets so that batch training can speed up the training of the DBN model;
then, the sample set is divided proportionally into a training set, a validation set and a test set, and the model is trained with the unmarked and marked mini-batch data sets; the parameters of the RBM are learned in an unsupervised way from the unlabeled data set U(Y), so that the network probability distribution of the RBM fits the training samples better; the k-step Contrastive Divergence (CD-k) algorithm is used to approximately sample the data, and the parameter θ is updated from the gradient of the log-likelihood function;
after the RBM1 model is subjected to iterative adjustment by a CD-K fast learning algorithm, obtaining preliminary model parameters; then, the activation state of the hidden layer neuron nodes obtained by RBM1 training is used as the network input of the RBM2, and the subsequent RBM3 models are sequentially trained in this way until all RBMs in the DBN model are trained; outputting the hidden layer of the last RBM as the input of a softmax classifier;
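The following sketch illustrates, under simplifying assumptions (Bernoulli units, plain CD-k without momentum or weight decay), how three RBMs could be greedily stacked and pre-trained so that the last hidden layer feeds a softmax classifier; it is an illustration of the general technique, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with k-step contrastive divergence (CD-k)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.a = np.zeros(n_visible)   # visible biases
        self.b = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.a)

    def cd_k(self, v0, k=1):
        """One CD-k update on a mini-batch v0 of shape (batch, n_visible)."""
        h0 = self.hidden_probs(v0)
        vk, hk = v0, h0
        for _ in range(k):                       # Gibbs chain of length k
            hk_sample = (rng.random(hk.shape) < hk).astype(float)
            vk = self.visible_probs(hk_sample)
            hk = self.hidden_probs(vk)
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - vk.T @ hk) / batch
        self.a += self.lr * (v0 - vk).mean(axis=0)
        self.b += self.lr * (h0 - hk).mean(axis=0)

def pretrain_stack(data, layer_sizes=(64, 32, 16), k=1, epochs=10):
    """Greedy layer-by-layer pre-training of stacked RBMs (RBM1..RBM3)."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd_k(x, k)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # hidden activations feed the next RBM
    return rbms, x                # x would go to the softmax classifier
```

A softmax layer would then be trained on the final activations together with the labelled samples S(Y, X), followed by the supervised fine-tuning described next.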
after the optimal model parameters of the unsupervised pre-training stage are obtained, supervised reverse fine-tuning is carried out in combination with the label data S(Y, X) to establish the complex nonlinear relation between the fault features and the node state labels, where the label values represent the real state of each VNF fault of the SFC; a BP algorithm with an adaptive learning rate and an added momentum term is used to reverse fine-tune the overall parameters of the deep belief network, taking the unsupervised-stage parameters as the initialization, with the expression:
Δθ_t = b·Δθ_{t-1} + a·(∂ln L/∂θ)
where Δθ_t and Δθ_{t-1} denote the parameter corrections in the t-th and (t-1)-th iterations, b is the momentum coefficient, a is the learning rate, and ∂ln L/∂θ is the gradient of the log-likelihood of the current sample.
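A small sketch of this fine-tuning update; the adaptive-learning-rate rule shown is only one plausible interpretation and, like every name in the snippet, is an assumption:

```python
def momentum_update(theta, delta_prev, grad_loglik, lr, momentum=0.9):
    """One reverse fine-tuning step of the momentum BP rule:
    delta_t = b * delta_{t-1} + a * d(ln L)/d(theta)."""
    delta = momentum * delta_prev + lr * grad_loglik
    return theta + delta, delta

def adapt_lr(lr, loglik_now, loglik_prev, up=1.05, down=0.7):
    """Assumed adaptive-learning-rate rule: grow the rate while the
    log-likelihood improves, shrink it otherwise."""
    return lr * up if loglik_now >= loglik_prev else lr * down
```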
After a DBN model has been obtained from the historical observation data of VNF nodes of the same type, the model is optimized in real time with the real-time observation data of the faulty VNF nodes within the slicing period; the sample R_t = [Y_t, Y_{t-1}, …, Y_{t-d+1}] is updated in real time by a sliding-window mechanism, where d is the window length, i.e., each time a time slice t elapses, the observation Y_t is added and Y_{t-d} is deleted, keeping the input sample size unchanged; single-sample training is used to optimize the model parameters, and the predicted infrastructure-layer node state given the fault symptoms Y_{t-d+1:t} at time t is output as p(X_t | Y_{t-d+1:t}), where X_t = {x_1, x_2, …, x_n}.
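A minimal sketch of the sliding-window sample update R_t = [Y_t, Y_{t-1}, …, Y_{t-d+1}] described above (class and method names are illustrative assumptions):

```python
from collections import deque
from typing import Optional
import numpy as np

class SlidingWindowSampler:
    """Keeps the d most recent slice observations: each new time slice
    adds Y_t and drops Y_{t-d}, so the model input size stays fixed."""
    def __init__(self, d: int):
        self.d = d
        self.window = deque(maxlen=d)   # maxlen silently drops Y_{t-d}

    def push(self, y_t: np.ndarray) -> Optional[np.ndarray]:
        self.window.appendleft(y_t)     # newest first: [Y_t, ..., Y_{t-d+1}]
        if len(self.window) < self.d:
            return None                 # window not yet full
        return np.concatenate(list(self.window))   # current sample R_t
```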
Further, in step S4, the method specifically includes:
the dynamic Bayesian network (DBN) model is defined as (B_0, B_→), where B_0 is the prior network of the initial time slice of the deep belief network's online learning phase, i.e., the physical node state at the initial time, and B_→ is the hidden state transition model of the BN formed by two or more time slices;
the dynamic Bayesian network infers, from the observation data Y_t = {y_1, y_2, …, y_m}, the most probable value of the hidden variable X_t = {x_1, x_2, …, x_n}, where Y is the symptom information of the SFC virtual nodes, with m possible values, and X is the infrastructure-layer physical node state, with n possible actual outcomes;
if the prior distribution matrix of the initial hidden variable is π, then
π = (π_i)_{1×n}, i = 1, 2, …, n
where π_i = P(X_1 = i) is the prior probability of the working state of the infrastructure-layer nodes at the initial time; the posterior probability estimated by the initial time slice of the deep belief network's online learning phase is then used as the prior probability of the node state;
the state transition matrix between fault nodes is A:
A = (a_ik)_{n×n}, a_ik = P(X_t = i | X_{t-1} = k), i, k = 1, 2, …, n
where a_ik describes the influence of the state of a fault factor of the fault node at time t-1 on its state at time t;
the transition matrix between the fault node and the symptom information (the observation, or emission, matrix) is B:
B = (b_ij)_{n×m}, b_ij = P(Y_t = j | X_t = i), i = 1, 2, …, n, j = 1, 2, …, m
where b_ij describes the influence of fault state i, when it occurs, on the working performance data of the virtual node;
under the classical assumptions of the dynamic Bayesian network model, the joint probability of observations and states is given by:
P(X_{1:T}, Y_{1:T}) = P(X_1) ∏_{t=2}^{T} P(X_t | X_{t-1}) ∏_{t=1}^{T} P(Y_t | X_t)
where P(Y_t | X_t) is the observed emission probability required for DDBN inference; the emission probability is then modeled by the deep belief network, which extracts high-dimensional data features well;
finally, SFC fault inference is performed: the probability distribution of the fault root is computed given the fault symptoms, and the 1.5-time-slice junction tree inference algorithm is used to find the most probable fault value, maximizing P(x_t = i | y_{1:T});
In the SFC fault diagnosis model, the main idea of SFC fault inference with the 1.5-time-slice junction tree algorithm is as follows: by the Markov property of the dynamic Bayesian network, the set of fault nodes has child nodes in the next time slice, and given the values of these child nodes the states of past nodes are independent of the states of future nodes; such child nodes are called interface nodes. Let JT_t be the junction tree in time slice t, C_t the clique in JT_t that contains I_t, and D_t the clique in JT_t that contains I_{t-1}; the interface node I_t of a time slice receives the interface I_{t-1} of the previous time slice and passes messages to the interface I_{t+1} of the next time slice. Inference is carried out between the interfaces through message propagation, and the inference process is as follows:
step 1: construct the 1.5-time-slice junction tree JT_t by performing moralization, triangulation and related steps on the DBN-based SFC fault inference model; build the clique tree by triangulating the transition probability matrix A between fault nodes, finding the maximal cliques of the triangulated graph, and connecting the separator nodes formed by the intersection of two cliques between the maximal cliques to form the junction tree; each clique has a potential function ψ, which is the product of the conditional probability tables (CPTs) of the nodes in that clique;
step 2: information forward propagation; the junction tree JT_t of the current time slice obtains new evidence from the junction tree JT_{t-1} of the previous time slice;
step 3: information back propagation; the junction tree JT_t of the current time slice absorbs evidence from the junction tree JT_{t+1} of the next time slice and updates the probability distribution of the current-slice junction tree JT_t.
Further, in the process of reasoning through message propagation between the interfaces in step S4, step 2 specifically includes the following steps:
step 21) initialize the potential functions ψ of the junction tree JT_t;
step 22) first assign values to the symptom nodes, then take the symptoms before time slice t as prior information P(I_{t-1} | Y_{1:t-1}); collect the symptom information into C_{t-1} and marginalize to obtain the probability distribution of I_{t-1}, which is passed from C_{t-1} to D_t through the interface I_{t-1};
P(I_{t-1} | Y_{1:t-1}) = Σ_{C_{t-1} \ I_{t-1}} ψ(C_{t-1})
where I_{t-1} ∈ C_{t-1} ∩ D_t indicates propagation of the fault node between time slices;
step 23) collect the symptom data Y_t of the faulty SFC node of the current time slice as evidence input to the junction tree;
step 24) collect and add evidence toward the root C_t;
step 25) return the updated JT_t; the process ends when t = T, otherwise set t = t + 1 and go to step 22).
Further, in the process of reasoning through message propagation between the interfaces in step S4, step 3 specifically includes the following steps:
step 31) if t = T, distribute the evidence;
step 32) marginalize D_{t+1} to obtain the probability distribution of I_t:
P(I_t) = Σ_{D_{t+1} \ I_t} ψ(D_{t+1})
step 33) update C_t in JT_t by absorbing the probability distribution of D_{t+1}:
ψ*(C_t) = ψ(C_t) · P(I_t) / φ(I_t)
where ψ(C_t) and φ(I_t) denote the original potentials before the update;
step 34) distribute the evidence from the root C_t;
step 35) return all cliques of JT_t, including the separator cliques; the process ends when t = 1, otherwise set t = t - 1 and go to step 31).
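The 1.5-time-slice junction tree algorithm above operates on cliques and interface nodes; for the chain-structured model of this section (prior π, transition matrix A, and per-slice emission probabilities P(Y_t | X_t) produced by the deep belief network), the same smoothed posteriors P(X_t = i | Y_{1:T}) can be sketched with a plain matrix forward-backward pass. The sketch below is an illustration of that equivalent computation under these assumptions, not the patented junction tree procedure:

```python
import numpy as np

def forward_backward(pi, A, emit):
    """Smoothed fault-state posteriors P(X_t = i | Y_{1:T}).

    pi:   (n,)   prior over physical-node states at t = 1
    A:    (n, n) transition matrix, A[i, k] = P(X_t = i | X_{t-1} = k)
    emit: (T, n) emission likelihoods, emit[t, i] = P(Y_t | X_t = i),
          e.g. produced by the deep belief network for each time slice.
    """
    T, n = emit.shape
    alpha = np.zeros((T, n))            # forward messages (filtering)
    alpha[0] = pi * emit[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = emit[t] * (A @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()

    beta = np.ones((T, n))              # backward messages
    for t in range(T - 2, -1, -1):
        beta[t] = A.T @ (emit[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()

    post = alpha * beta                 # smoothed posteriors
    return post / post.sum(axis=1, keepdims=True)

# the diagnosed fault source per slice is argmax_i P(X_t = i | Y_{1:T})
```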
The invention has the beneficial effects that: the fault diagnosis method provided by the invention can effectively extract massive, high-dimensional and multi-source data characteristics in a complex network on the basis of meeting the requirement of the system on the diagnosis precision, ensures the real-time performance of fault diagnosis and has high application value in a wireless communication system.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a scenario in which the present invention may be applied;
FIG. 2 is a schematic diagram illustrating the dependence of a network slice on a fault in the present invention;
FIG. 3 is a schematic diagram of a fault diagnosis model according to the present invention;
FIG. 4 is a flow chart of the SFC fault diagnosis algorithm of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided only to illustrate the invention and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
FIG. 1 is a schematic diagram of a scenario in which the present invention may be implemented. Referring to FIG. 1, a service function forwarding graph consisting of two service function chains is divided into an application layer, a virtualization layer, and an infrastructure layer. The application layer includes the service functions and the virtual links connecting them. The virtualization layer comprises the NFV MANO controller and the SDN controller and realizes functions such as resource management, network service orchestration, and fault management. The infrastructure layer comprises the infrastructure physical nodes and their connection relations; the access network and core network equipment use various general-purpose servers and realize network function virtualization through virtual machines. Different slices can flexibly deploy the VNFs of their SFCs according to users' network service requests to meet the users' quality of service requirements. In FIG. 1, the VNFs of the two SFCs in a slice are deployed in sequence in the DU pool, the CU pool, and the core network, and the two SFCs may share common VNFs and virtual links. A general-purpose server node of the infrastructure layer may fail under random environmental factors, and through the mapping relationship this manifests as a failure of the corresponding VNF nodes of the SFC. To ensure the stability and quality of service of the network, the node states of the VNFs need to be monitored, and a self-healing technique is then used to quickly recover the virtual network from the failure.
Fig. 2 is a schematic diagram illustrating the dependence of a network slice on a failure in the present invention. The fault diagnosis system needs to abstract supervised system resources, network resources related to the 5G end-to-end slicing network can be divided into physical layer resources and logic layer resources, and the corresponding resource types comprise the physical layer resources such as a CPU (Central processing Unit), a memory, a network, storage, bandwidth, a port and a link and the logic layer resources such as a Web agent, a firewall, network address conversion, an intrusion prevention system, a TCP (Transmission control protocol) optimizer, a load balancer and the like. The fault dependency graph reflects the hierarchical structure and the bearing relation of the network, and represents how faults propagate in the network and cause the breakdown of services, and the root cause analysis strategy finds the root cause of the faults according to the fault dependency graph.
Fig. 3 is a schematic diagram of a fault diagnosis model in the present invention. Referring to fig. 1, the SFC fault diagnosis model according to the present invention relies on a dynamic bayesian network that is causally related in time and adapts to environmental dynamic changes and a deep belief network that processes high-dimensional input data well, formalizes the fault inference relationship of the SFC based on a DDBN model, models a high-level temporal relationship using the dynamic bayesian network model, and extracts features from observation data using the deep belief network model in each time slice.
For a detected SFC failure node, all VNF node data of the underlying physical node is collected once per time slice; the node data collected over T time slices is denoted Y = {Y_1, …, Y_t, …, Y_T}, and the node data collected in a single time slice is denoted
Y_t = {y_t^{H_1}, y_t^{H_2}, …, y_t^{H_m}}
where y_t^{H_m} represents the performance data, such as CPU utilization, processing delay, waiting delay and bandwidth occupancy, of the VNF node H_m mapped to the physical node. When a VNF node of the SFC fails, the data observed at that node is characterized as a fault symptom, and abnormal data such as increased processing delay, increased waiting delay, increased bandwidth occupancy and increased CPU load may appear. The hidden variable X = {X_1, …, X_t, …, X_T} represents the true state of the physical node in each time slice and corresponds to the various failure sources of the physical and logical layers, including software and hardware components that may fail, such as the CPU, ports, cache and DNS. The data Y_t collected by the fault symptom node in time slice t is an expression of the true physical node state X_t, and the true state X_t of the physical node in time slice t depends on the true state X_{t-1} of time slice t-1.
The invention discloses a service function chain fault diagnosis method based on a deep dynamic Bayesian network, which comprises the following steps of:
s1: constructing a fault diagnosis model according to a fault propagation relation in a hierarchical network architecture of a service function chain;
under a service function chain scene, the NFV MANO of a virtualization layer determines VNFs and logic links thereof required by services according to user service requests, and ensures resources of a general server occupied by the operation of the VNFs and specific bandwidth on a path, wherein the resources of the general server comprise calculation, network and storage, and then an SDN controller connects the VNFs together to form an SFC and control transmission connection; the application layer comprises a plurality of SFCs for serving various service flows, each SFC is formed by different network functions in a chain mode according to a certain sequence and provides end-to-end service for the service flows.
According to a fault propagation relation in a layered network architecture of a service function chain, a fault diagnosis model is established, a VNF node which is possibly faulted needs to be positioned at an application layer, and then the root of the fault is positioned according to a mapping relation between the VNF node which is faulted at the application layer and an infrastructure layer; for a layered network architecture, a Dynamic Bayesian Network (DBN) capable of causal association over time and adapting to environmental dynamics is employed for fault diagnosis.
S2: monitoring, at a physical node, performance data of a plurality of Virtual Network Function (VNF) thereon, and collecting high-dimensional data of a symptom;
monitoring a plurality of VNF performance data at a physical node thereon and collecting high dimensional data of a symptom; the data is subjected to normalization preprocessing to eliminate the influence caused by different symptom information dimensions, and the data is preprocessed by adopting a linear maximum and minimum method, wherein the conversion function is as follows:
y′ = (y − y_min) / (y_max − y_min)
s3: aiming at the diversity of Network observation data and the spatial correlation between a physical node and a VNF under an SDN/NFV framework, a correlated Deep Belief Network (DBN) model is established to perform feature extraction and dimension reduction on the observation data, a k-step contrast Divergence algorithm (CD-k) is used for approximately sampling a historical observation data set, and a self-adaptive BP algorithm added with a momentum term is used for fine adjustment of the model;
aiming at the diversity of network observation data and the spatial correlation between a physical node and a VNF under the SDN/NFV architecture, a related Deep Belief Network (DBN) model is established:
s31: carrying out greedy layer-by-layer training on the network in an unsupervised learning mode by using a multi-hidden-layer neural network consisting of three-layer stacked Restricted Boltzmann Machines (RBMs), and learning the high-level fault characteristics of the physical nodes only by using an SFC virtual node historical observation data set;
s32: adding a softmax layer on the three layers of RBM models to form a Deep Belief Network (DBN) to classify the node faults, and performing reverse supervised fine adjustment by combining label data to obtain a classification model of an initial time slice;
s33: the parameters are further optimized using real-time symptom data.
The parameter to be learned is θ; in the SFC scenario, θ captures the probabilistic dependence between the fault symptoms and the actual faults, namely
θ = {w_ij, a_i, b_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}
where w_ij is the weight between visible-layer node i and hidden-layer node j, a_i is the bias of visible-layer node i, b_j is the bias of hidden-layer node j, n is the number of fault factors X of the physical node, and m is the number of virtual node observations Y;
for parameters of the SFC fault diagnosis model in a single time slice, learning is carried out by adopting a deep belief network, and the parameters are trained in an off-line and on-line learning mode:
firstly, a historical observation data set of the fault node is collected as training samples and divided into marked and unmarked samples; S and U denote the sets of marked and unmarked samples, respectively, and Y and X denote the symptom information of a failed VNF node and the label output of the failure type, respectively; the set of historical observation data of the VNF nodes on the same physical node is denoted Q = {…, Q_i, …}, where Q_i = [Y_t, Y_{t-1}, …, Y_{t-d+1}] and d is the dimension of the model input sample; finally, the unmarked samples for unsupervised learning are denoted U(Y) and the marked samples for supervised learning are denoted S(Y, X), and all data samples are divided into mini-batch data sets so that batch training can speed up the training of the DBN model;
then, the sample set is divided proportionally into a training set, a validation set and a test set, and the model is trained with the unmarked and marked mini-batch data sets; the parameters of the RBM are learned in an unsupervised way from the unlabeled data set U(Y), so that the network probability distribution of the RBM fits the training samples better; next, the k-step Contrastive Divergence (CD-k) algorithm is used to approximately sample the data, and the parameter θ is updated from the gradient of the log-likelihood function;
after the RBM1 model is subjected to iterative adjustment by a CD-K fast learning algorithm, obtaining preliminary model parameters; then, the activation state of the hidden layer neuron nodes obtained by RBM1 training is used as the network input of the RBM2, and the subsequent RBM3 models are sequentially trained in this way until all RBMs in the DBN model are trained; outputting the hidden layer of the last RBM as the input of a softmax classifier;
after the optimal model parameters of the unsupervised pre-training stage are obtained, supervised reverse fine-tuning is carried out in combination with the label data S(Y, X) to establish the complex nonlinear relation between the fault features and the node state labels, where the label values represent the real state of each VNF fault of the SFC; the BP algorithm with an adaptive learning rate and an added momentum term is used to reverse fine-tune the overall parameters of the deep belief network, taking the unsupervised-stage parameters as the initialization, with the expression:
Δθ_t = b·Δθ_{t-1} + a·(∂ln L/∂θ)
where Δθ_t and Δθ_{t-1} denote the parameter corrections in the t-th and (t-1)-th iterations, b is the momentum coefficient, a is the learning rate, and ∂ln L/∂θ is the gradient of the log-likelihood of the current sample;
after a DBN model has been obtained from the historical observation data of VNF nodes of the same type, the model is optimized in real time with the real-time observation data of the faulty VNF nodes within the slicing period; the sample R_t = [Y_t, Y_{t-1}, …, Y_{t-d+1}] is updated in real time by a sliding-window mechanism, where d is the window length, i.e., each time a time slice t elapses, the observation Y_t is added and Y_{t-d} is deleted, keeping the input sample size unchanged; single-sample training is then used to optimize the model parameters, and the predicted infrastructure-layer node state given the fault symptoms Y_{t-d+1:t} at time t is output as p(X_t | Y_{t-d+1:t}), where X_t = {x_1, x_2, …, x_n}.
S4: a Dynamic Bayesian Network (DBN) model is established to diagnose fault sources in real time by utilizing the time correlation existing between faults, and a 1.5 time slice joint tree reasoning algorithm is used for positioning the fault sources.
The dynamic Bayesian network (DBN) model is defined as (B_0, B_→), where B_0 is the prior network of the initial time slice of the deep belief network's online learning phase, i.e., the initial physical node state, and B_→ is the hidden state transition model of the BN formed by two or more time slices;
the dynamic Bayesian network infers, from the observation data Y_t = {y_1, y_2, …, y_m}, the most probable value of the hidden variable X_t = {x_1, x_2, …, x_n}, where Y is the symptom information of the SFC virtual nodes, with m possible values, and X is the infrastructure-layer physical node state, with n possible actual outcomes;
setting the prior distribution matrix of the initial hidden variable to π, then
π = (π_i)_{1×n}, i = 1, 2, …, n
where π_i = P(X_1 = i) is the prior probability of the working state of the infrastructure-layer nodes at the initial time; the posterior probability estimated by the initial time slice of the deep belief network's online learning phase is then used as the prior probability of the node state;
the state transition matrix between fault nodes is A:
A = (a_ik)_{n×n}, a_ik = P(X_t = i | X_{t-1} = k), i, k = 1, 2, …, n
where a_ik describes the influence of the state of a fault factor of the fault node at time t-1 on its state at time t;
the transition matrix between the fault node and the symptom information (the observation, or emission, matrix) is B:
B = (b_ij)_{n×m}, b_ij = P(Y_t = j | X_t = i), i = 1, 2, …, n, j = 1, 2, …, m
where b_ij describes the influence of fault state i, when it occurs, on the working performance data of the virtual node;
under the classical assumptions of the dynamic Bayesian network model, the joint probability of observations and states is given by:
P(X_{1:T}, Y_{1:T}) = P(X_1) ∏_{t=2}^{T} P(X_t | X_{t-1}) ∏_{t=1}^{T} P(Y_t | X_t)
where P(Y_t | X_t) is the observed emission probability required for DDBN inference; the emission probability is then modeled by the deep belief network, which extracts high-dimensional data features well;
finally, SFC fault reasoning is carried out, probability distribution of fault root is calculated under the condition of given fault symptoms, and a 1.5 time slice joint tree reasoning algorithm is adopted to maximize a possible fault value P (x)t=i|y1:T);
In the SFC fault diagnosis model, the main idea of SFC fault inference with the 1.5-time-slice junction tree algorithm is as follows: by the Markov property of the dynamic Bayesian network, the set of fault nodes has child nodes in the next time slice, and given the values of these child nodes the states of past nodes are independent of the states of future nodes; such child nodes are called interface nodes. Let JT_t be the junction tree in time slice t, C_t the clique in JT_t that contains I_t, and D_t the clique in JT_t that contains I_{t-1}; the interface node I_t of a time slice receives the interface I_{t-1} of the previous time slice and passes messages to the interface I_{t+1} of the next time slice. Inference is carried out between the interfaces through message propagation, and the inference process is as follows:
step 1: construct the 1.5-time-slice junction tree JT_t by performing moralization, triangulation and related steps on the DBN-based SFC fault inference model; build the clique tree by triangulating the transition probability matrix A between fault nodes, finding the maximal cliques of the triangulated graph, and connecting the separator nodes formed by the intersection of two cliques between the maximal cliques to form the junction tree; each clique has a potential function ψ, which is the product of the conditional probability tables (CPTs) of the nodes in that clique;
step 2: information forward propagation; the junction tree JT_t of the current time slice obtains new evidence from the junction tree JT_{t-1} of the previous time slice;
step 21) initialize the potential functions ψ of the junction tree JT_t;
step 22) first assign values to the symptom nodes, then take the symptoms before time slice t as prior information P(I_{t-1} | Y_{1:t-1}); collect the symptom information into C_{t-1} and marginalize to obtain the probability distribution of I_{t-1}, which is passed from C_{t-1} to D_t through the interface I_{t-1};
P(I_{t-1} | Y_{1:t-1}) = Σ_{C_{t-1} \ I_{t-1}} ψ(C_{t-1})
where I_{t-1} ∈ C_{t-1} ∩ D_t indicates propagation of the fault node between time slices;
step 23) collect the symptom data Y_t of the faulty SFC node of the current time slice as evidence input to the junction tree;
step 24) collect and add evidence toward the root C_t;
step 25) return the updated JT_t; the process ends when t = T, otherwise set t = t + 1 and go to step 22).
step 3: information back propagation; the junction tree JT_t of the current time slice absorbs evidence from the junction tree JT_{t+1} of the next time slice and updates the probability distribution of the current-slice junction tree JT_t.
step 31) if t = T, distribute the evidence;
step 32) marginalize D_{t+1} to obtain the probability distribution of I_t:
P(I_t) = Σ_{D_{t+1} \ I_t} ψ(D_{t+1})
step 33) update C_t in JT_t by absorbing the probability distribution of D_{t+1}:
ψ*(C_t) = ψ(C_t) · P(I_t) / φ(I_t)
where ψ(C_t) and φ(I_t) denote the original potentials before the update;
step 34) distribute the evidence from the root C_t;
step 35) return JT_t; the process ends when t = 1, otherwise set t = t - 1 and go to step 31).
FIG. 4 is a flow chart of the SFC fault diagnosis algorithm of the present invention. The process is as follows:
step 401: collecting related monitoring data of a plurality of VNF nodes on a physical node corresponding to the SFC symptom node, and extracting data characteristics;
step 402: carrying out normalization pretreatment on the characteristic data;
step 403: the observation dataset is divided into a historical observation dataset and a real-time symptom dataset; multilayer RBM pre-training is performed on the historical observation dataset, a softmax layer is added, the deep belief network model is reverse fine-tuned with the adaptive BP algorithm that introduces a momentum term, and the model parameters θ are extracted; finally, the model parameters are further optimized with the real-time symptom dataset and the predicted physical node state is output;
step 404: obtaining a state transition probability matrix of the dynamic Bayesian network model according to the predicted physical node state, obtaining an observed emission probability matrix according to the real-time symptom data set and the predicted physical node state, and constructing the dynamic Bayesian network model based on the observed emission probability matrix;
step 405: SFC fault inference is carried out according to the 1.5-time-slice junction tree inference algorithm to obtain the fault source.
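A hedged end-to-end sketch of steps 401-405, treating the deep belief network trainer as a black box passed in by the caller; the forward filtering at the end is a simplified stand-in for the 1.5-time-slice junction tree inference, and every name here is illustrative:

```python
import numpy as np
from typing import Callable

def diagnose_sfc(history: np.ndarray,
                 realtime: np.ndarray,
                 train_dbn: Callable[[np.ndarray], Callable[[np.ndarray], np.ndarray]],
                 pi: np.ndarray,
                 A: np.ndarray) -> np.ndarray:
    """Wiring of steps 401-405: normalize the monitoring data, train/refine
    the deep belief network on historical data, turn its per-slice outputs
    into emission likelihoods, and run temporal inference to pick the
    fault source.

    train_dbn is assumed to return a model mapping a slice observation Y_t
    to P(Y_t | X_t = i) for each candidate fault source i."""
    def normalize(x):                                    # step 402
        lo, hi = x.min(axis=0), x.max(axis=0)
        return (x - lo) / np.where(hi > lo, hi - lo, 1.0)

    emit_model = train_dbn(normalize(history))           # step 403
    emit = np.vstack([emit_model(y) for y in normalize(realtime)])  # step 404

    # step 405: simple forward filtering as a stand-in for the
    # 1.5-time-slice junction tree inference
    alpha = pi * emit[0]
    alpha /= alpha.sum()
    states = [int(alpha.argmax())]
    for t in range(1, emit.shape[0]):
        alpha = emit[t] * (A @ alpha)
        alpha /= alpha.sum()
        states.append(int(alpha.argmax()))
    return np.asarray(states)         # most probable fault source per slice
```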
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (8)

1. A service function chain fault diagnosis method based on a deep dynamic Bayesian network is characterized in that: the method comprises the following steps:
s1: constructing a fault diagnosis model according to a fault propagation relation in a hierarchical network architecture of a service function chain;
s2: monitoring performance data of a plurality of Virtual Network Functions (VNFs) on a physical node, and collecting high-dimensional data of symptoms;
s3: aiming at the diversity of network observation data and the spatial correlation between a physical node and a VNF under an SDN/NFV framework, a correlated Deep Belief Network (DBN) model is established to carry out feature extraction and dimension reduction on the observation data, a historical observation data set is approximately sampled through a k-step contrastive divergence algorithm CD-k, and the model is finely adjusted by using a self-adaptive BP algorithm added with a momentum term;
s4: establishing a Dynamic Bayesian Network (DBN) model to diagnose a fault source in real time by utilizing the time correlation existing among faults, and positioning the fault source by using a 1.5 time slice joint tree inference algorithm; the method specifically comprises the following steps:
the dynamic Bayesian network DBN model is defined as (B_0, B_→), where B_0 is the prior network of the initial time slice of the deep belief network's online learning phase, i.e., the initial physical node state, and B_→ is the hidden state transition model of the BN formed by two or more time slices;
the dynamic Bayesian network DBN infers, from the observation data Y_t = {y_1, y_2, …, y_m}, the most probable value of the hidden variable X_t = {x_1, x_2, …, x_n}, where Y represents the symptom information of the SFC virtual nodes, with m possible values, and X represents the infrastructure-layer physical node state, with n possible actual outcomes;
setting the prior distribution matrix of the initial hidden variable to π, then
π = (π_i)_{1×n}, i = 1, 2, …, n
where π_i = P(X_1 = i) is the prior probability of the working state of the infrastructure-layer nodes at the initial time; the posterior probability estimated by the initial time slice of the deep belief network's online learning phase is used as the prior probability of the node state;
the state transition matrix between the fault nodes is A, then
A = (a_ik)_{n×n}, a_ik = P(X_t = i | X_{t-1} = k), i, k = 1, 2, …, n
where a_ik denotes the influence of the state of a fault factor of the fault node at time t-1 on its state at time t;
the transition matrix between the fault node and the symptom information (the observation, or emission, matrix) is B, then
B = (b_ij)_{n×m}, b_ij = P(Y_t = j | X_t = i), i = 1, 2, …, n, j = 1, 2, …, m
where b_ij denotes the influence of fault state i, when it occurs, on the working performance data of the virtual node;
under the classical assumptions of the dynamic Bayesian network model, the joint probability of observations and states is given by:
P(X_{1:T}, Y_{1:T}) = P(X_1) ∏_{t=2}^{T} P(X_t | X_{t-1}) ∏_{t=1}^{T} P(Y_t | X_t)
where P(Y_t | X_t) represents the observed emission probability required for DDBN inference; the deep belief network, which extracts features from high-dimensional data well, is used to model the observed emission probability;
finally, SFC fault inference is performed: the probability distribution of the fault root is computed given the fault symptoms, and the 1.5-time-slice junction tree inference algorithm is used to find the most probable fault value, maximizing P(x_t = i | y_{1:T});
according to the Markov property of the dynamic Bayesian network, the set of fault nodes has child nodes in the next time slice, and given the values of these child nodes the states of past nodes are independent of the states of future nodes; such child nodes are called interface nodes; let JT_t be the junction tree in time slice t, C_t the clique in JT_t that contains I_t, and D_t the clique in JT_t that contains I_{t-1}; the interface node I_t of a time slice receives the interface I_{t-1} of the previous time slice and passes messages to the interface I_{t+1} of the next time slice; inference is carried out between the interfaces through message propagation, and the inference process is as follows:
step 1: construct the 1.5-time-slice junction tree JT_t by performing moralization and triangulation steps on the DBN-based SFC fault inference model; build the clique tree by triangulating the transition probability matrix A between fault nodes, finding the maximal cliques of the triangulated graph, and connecting the separator nodes formed by the intersection of two cliques between the maximal cliques to form the junction tree; each clique has a potential function ψ, which is the product of the CPT conditional probability tables of the nodes in that clique;
step 2: information forward propagation; the junction tree JT_t of the current time slice obtains new evidence from the junction tree JT_{t-1} of the previous time slice;
step 3: information back propagation; the junction tree JT_t of the current time slice absorbs evidence from the junction tree JT_{t+1} of the next time slice and updates the probability distribution of the current-slice junction tree JT_t.
2. The deep dynamic bayesian network based service function chain fault diagnosis method according to claim 1, wherein: in step S1, in the service function chain scenario, the NFV MANO in the virtualization layer determines, according to a user service request, VNFs and logical links thereof required by a service, and guarantees resources of a general server occupied by the operation of the VNFs and a specific bandwidth on a path, where the resources of the general server include computation, network, and storage, and then the SDN controller connects the VNFs to form an SFC and control a transport connection; the application layer comprises a plurality of SFCs for serving various service flows, each SFC is formed by different network functions in a chain mode according to a certain sequence and provides end-to-end service for the service flows.
3. The deep dynamic bayesian network based service function chain fault diagnosis method according to claim 1, wherein: in step S1, according to the fault propagation relationship in the hierarchical network architecture of the service function chain, the established fault diagnosis model needs to first locate a VNF node that may have a fault at the application layer, and then locate the root of the fault according to the mapping relationship between the VNF node that has a fault at the application layer and the infrastructure layer; for a layered network architecture, a Dynamic Bayesian Network (DBN) capable of causal association over time and adapting to environmental dynamics is employed for fault diagnosis.
4. The deep dynamic bayesian network based service function chain fault diagnosis method according to claim 1, wherein: in step S2, monitoring multiple VNF performance data on the physical node, and collecting high-dimensional data of the symptom; in order to improve the learning efficiency of model parameters and improve the accuracy of the model, data needs to be subjected to normalization preprocessing to eliminate the influence caused by different symptom information dimensions, a linear maximum and minimum value method is adopted to carry out preprocessing on the data, and the conversion function is as follows:
y′ = (y − y_min) / (y_max − y_min)
5. the deep dynamic bayesian network based service function chain fault diagnosis method according to claim 1, wherein: in step S3, a relevant deep belief network DBN model is established for the diversity of network observation data based on the SDN/NFV architecture and the spatial correlation between the physical node and the VNF:
s31: carrying out greedy layer-by-layer training on the network in an unsupervised learning mode by using a multi-hidden-layer neural network consisting of three layers of stacked Restricted Boltzmann Machines (RBMs), and learning the high-level fault characteristics of the physical nodes only by using an SFC virtual node historical observation data set;
s32: adding a softmax layer on the three layers of RBM models to form a Deep Belief Network (DBN) to classify the node faults, and performing reverse supervised fine adjustment by combining label data to obtain a classification model of an initial time slice;
s33: the parameters are further optimized using real-time symptom data.
6. The deep dynamic Bayesian network based service function chain fault diagnosis method as recited in claim 5, wherein: in step S3, the following contents are specifically included:
the parameter to be learned is θ; in the SFC scenario, θ captures the probabilistic dependence between the fault symptoms and the actual faults, namely
θ = {w_ij, a_i, b_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}
where w_ij is the weight between visible-layer node i and hidden-layer node j, a_i is the bias of visible-layer node i, b_j is the bias of hidden-layer node j, n is the number of fault factors X of the physical node, and m is the number of virtual node observations Y;
for parameters of the SFC fault diagnosis model in a single time slice, learning is carried out by adopting a deep belief network, and the parameters are trained in an off-line and on-line learning mode:
firstly, a historical observation data set of the fault node is collected as training samples and divided into marked and unmarked samples; S and U denote the sets of marked and unmarked samples, respectively, and Y and X denote the symptom information of a failed VNF node and the label output of the failure type, respectively; the set of historical observation data of the VNF nodes on the same physical node is denoted Q = {…, Q_i, …}, where Q_i = [Y_t, Y_{t-1}, …, Y_{t-d+1}] and d is the dimension of the model input sample; finally, the unmarked samples for unsupervised learning are denoted U(Y) and the marked samples for supervised learning are denoted S(Y, X), and all data samples are divided into mini-batch data sets so that batch training can speed up the training of the DBN model;
then, the sample set is split proportionally into training, validation, and test sets, and the model is trained with the unlabeled and labeled mini-batches; the RBM parameters are learned in an unsupervised manner from the unlabeled data set U(Y), so that the probability distribution represented by the RBM better fits the training samples; a k-step contrastive divergence (CD) algorithm is used to draw approximate samples, and the parameter θ is updated from the gradient of the log-likelihood function;
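The following is a compact sketch, under illustrative assumptions about shapes and hyperparameters, of a single CD-k update for one Bernoulli RBM with m visible units (symptom data Y) and n hidden units, approximating the log-likelihood gradient by k steps of Gibbs sampling.

```python
# One CD-k parameter update for a Bernoulli RBM (weights W, visible bias a, hidden bias b).
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd_k_update(v0, W, a, b, lr=0.01, k=1):
    """v0 is a binary mini-batch (batch x m); W has shape (m x n)."""
    ph0 = sigmoid(v0 @ W + b)                      # positive phase: p(h=1 | v0)
    vk = v0
    hk = (rng.random(ph0.shape) < ph0).astype(float)
    for _ in range(k):                             # k steps of Gibbs sampling
        pvk = sigmoid(hk @ W.T + a)                # p(v=1 | h)
        vk = (rng.random(pvk.shape) < pvk).astype(float)
        phk = sigmoid(vk @ W + b)                  # p(h=1 | vk)
        hk = (rng.random(phk.shape) < phk).astype(float)
    batch = v0.shape[0]                            # approximate log-likelihood gradient
    W += lr * (v0.T @ ph0 - vk.T @ phk) / batch
    a += lr * (v0 - vk).mean(axis=0)
    b += lr * (ph0 - phk).mean(axis=0)
    return W, a, b

m, n = 24, 16
W = 0.01 * rng.standard_normal((m, n))
a, b = np.zeros(m), np.zeros(n)
batch = (rng.random((32, m)) > 0.5).astype(float)  # binarized symptom mini-batch
W, a, b = cd_k_update(batch, W, a, b, lr=0.01, k=1)
```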
after the RBM1 model has been iteratively adjusted by the CD-k fast learning algorithm, preliminary model parameters are obtained; the activation states of the hidden-layer neurons produced by training RBM1 are then used as the network input of RBM2, and RBM3 is subsequently trained in the same way, until every RBM in the DBN model has been trained; the hidden-layer output of the last RBM serves as the input of the softmax classifier;
after the optimal model parameters of the unsupervised pretraining stage are obtained, supervised backward fine-tuning is carried out with the labeled data S(Y, X) to establish the complex nonlinear relationship between fault features and node state labels, where the label values represent the true state of each VNF fault of the SFC; a BP algorithm with a decaying adaptive learning rate fine-tunes the overall parameters of the deep belief network, taking the parameters of the unsupervised stage as the initialization, with the expression:
$$\Delta\theta_{t} = b\,\Delta\theta_{t-1} + \alpha\,\frac{\partial \ln L}{\partial \theta}$$
wherein Δθ_t and Δθ_{t-1} denote the parameter corrections in the t-th and (t-1)-th iterations, respectively, b is the momentum coefficient, α is the learning rate, and ∂lnL/∂θ is the gradient of the log-likelihood function of the current sample;
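A small sketch of the momentum update above combined with a decaying learning rate; the gradient here is a random stand-in for ∂lnL/∂θ, and all hyperparameter values are illustrative assumptions.

```python
# Momentum-based fine-tuning step: Δθ_t = b·Δθ_{t-1} + α·∂lnL/∂θ (gradient ascent on lnL).
import numpy as np

def momentum_step(theta, delta_prev, grad_loglik, alpha, b=0.9):
    """Return updated parameters and the new correction term Δθ_t."""
    delta = b * delta_prev + alpha * grad_loglik   # Δθ_t
    return theta + delta, delta

theta = np.zeros(10)
delta = np.zeros(10)
alpha0, decay = 0.1, 0.95
for t in range(1, 6):
    grad = np.random.default_rng(t).standard_normal(10)  # stand-in for ∂lnL/∂θ
    alpha = alpha0 * decay ** t                           # decaying learning rate
    theta, delta = momentum_step(theta, delta, grad, alpha)
```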
after the DBN model has been obtained from the historical observation data of VNF nodes of the same type, the model is optimized in real time, within a slicing period, with the real-time observation data of the faulty VNF node; the sample R_t = [Y_t, Y_{t-1}, …, Y_{t-d+1}] is updated in real time by a sliding-window sampling mechanism, where d denotes the length of the sliding window, i.e., each time a time slice t elapses, the observation data Y_t of time t is added and Y_{t-d} is deleted, keeping the size of the input sample unchanged; the model parameters are optimized by training on single sample sets, and the output at time t is the predicted infrastructure-layer node state given the fault symptoms Y_{t-d+1:t}, i.e., p(X_t | Y_{t-d+1:t}), where X_t = {x_1, x_2, …, x_n}.
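The sliding-window mechanism can be sketched as follows; the window length, the symptom dimension, and the optional `model` object are assumptions made only for illustration.

```python
# Fixed-length sliding window over symptom observations: append Y_t, drop Y_{t-d}.
from collections import deque
import numpy as np

d = 5                                        # sliding-window length
window = deque(maxlen=d)                     # holds [Y_{t-d+1}, ..., Y_t]

def on_new_time_slice(y_t, model=None):
    """Append the latest observation, drop the oldest, and (optionally) run the
    model on the current window to obtain p(X_t | Y_{t-d+1:t})."""
    window.append(y_t)
    if len(window) == d and model is not None:
        r_t = np.concatenate(window)         # model input sample R_t
        return model.predict_proba(r_t.reshape(1, -1))
    return None

for t in range(8):
    on_new_time_slice(np.random.default_rng(t).random(24))
```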
7. The deep dynamic Bayesian network based service function chain fault diagnosis method according to claim 1, wherein, in the inference by message propagation between the interfaces in step S4, step 2 specifically comprises the following steps:
step 21) initialize the potential function ψ of the junction tree JT_t;
step 22) first assign values to the symptom nodes, then take the symptoms of the time slices before t as the prior information P(I_{t-1} | Y_{1:t-1}); collect the symptom information to C_{t-1} and marginalize it to obtain the probability distribution of I_{t-1}, which is transferred from C_{t-1} to D_t through the interface I_{t-1};
$$\phi(I_{t-1}) = \sum_{C_{t-1}\setminus I_{t-1}} \phi(C_{t-1})$$
wherein I_{t-1} ∈ C_{t-1} ∩ D_t, representing the propagation of the fault nodes between time slices;
step 23) collect the symptom data Y_t of the faulty SFC nodes of the current time slice as evidence input to the junction tree;
step 24) collect evidence toward the root C_t and add it;
step 25) return JT_t; the process ends when t = T, otherwise set t = t + 1 and go to step 22).
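As a deliberately simplified stand-in for the forward (collect) pass of steps 21)–25), the sketch below performs exact forward filtering p(X_t | Y_{1:t}) on a two-state discrete dynamic Bayesian network, which is the quantity the interface algorithm computes via the junction tree; the transition and observation tables are hypothetical.

```python
# Forward filtering for a tiny discrete DBN: predict from the previous slice, then
# condition on the current symptom evidence (analogue of the forward interface message).
import numpy as np

P_X0 = np.array([0.9, 0.1])                  # prior over fault state (ok, faulty)
P_trans = np.array([[0.95, 0.05],            # P(X_t | X_{t-1})
                    [0.20, 0.80]])
P_obs = np.array([[0.85, 0.15],              # P(Y_t | X_t), Y in {normal, abnormal}
                  [0.10, 0.90]])

def forward_filter(observations):
    """Return p(X_t | Y_{1:t}) for each time slice t."""
    belief = P_X0.copy()
    history = []
    for y in observations:
        belief = P_trans.T @ belief          # predict: marginalize the previous slice
        belief = belief * P_obs[:, y]        # condition on the current symptom evidence
        belief /= belief.sum()
        history.append(belief.copy())
    return history

for t, b in enumerate(forward_filter([0, 1, 1, 0]), 1):
    print(f"t={t}  p(X_t | Y_1:t) = {np.round(b, 3)}")
```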
8. The deep dynamic Bayesian network based service function chain fault diagnosis method according to claim 7, wherein, in the inference by message propagation between the interfaces in step S4, step 3 specifically comprises the following steps:
step 31) if t = T, distribute the evidence;
step 32) marginalize D_{t+1} to obtain the probability distribution of I_t;
$$\phi(I_{t}) = \sum_{D_{t+1}\setminus I_{t}} \phi(D_{t+1})$$
step 33) update I_t in C_t of JT_t by absorbing from D_{t+1};
$$\phi^{*}(C_{t}) = \phi(C_{t})\,\frac{\phi^{*}(I_{t})}{\phi(I_{t})}$$
wherein φ(C_t) and φ(I_t) denote the original potentials before absorption, and φ*(I_t) is the interface potential obtained from D_{t+1};
step 34) distribute the evidence from the root C_t;
step 35) return JT_t; the process ends when t = 1, otherwise set t = t − 1 and go to step 31).
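Similarly, the backward (distribute) pass of steps 31)–35) can be pictured as backward smoothing that refines the filtered estimates to p(X_t | Y_{1:T}), propagating evidence from the last time slice back to t = 1; the sketch below uses the same hypothetical tables as the forward sketch above and is not the junction-tree implementation itself.

```python
# Forward-backward smoothing for a tiny discrete DBN: the backward pass distributes
# evidence from t = T down to t = 1 and combines it with the filtered beliefs.
import numpy as np

P_X0 = np.array([0.9, 0.1])
P_trans = np.array([[0.95, 0.05], [0.20, 0.80]])
P_obs = np.array([[0.85, 0.15], [0.10, 0.90]])

def forward_backward(observations):
    """Return smoothed posteriors p(X_t | Y_{1:T}) for t = 1..T."""
    alphas, belief = [], P_X0.copy()
    for y in observations:                                   # forward (filter) pass
        belief = (P_trans.T @ belief) * P_obs[:, y]
        belief /= belief.sum()
        alphas.append(belief.copy())
    betas = [np.ones(2) for _ in observations]               # backward (distribute) pass
    for t in range(len(observations) - 2, -1, -1):
        y_next = observations[t + 1]
        betas[t] = P_trans @ (P_obs[:, y_next] * betas[t + 1])
        betas[t] /= betas[t].sum()
    return [a * b / np.sum(a * b) for a, b in zip(alphas, betas)]

for t, s in enumerate(forward_backward([0, 1, 1, 0]), 1):
    print(f"t={t}  p(X_t | Y_1:T) = {np.round(s, 3)}")
```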
CN202010116968.6A 2020-02-25 2020-02-25 Service function chain fault diagnosis method based on deep dynamic Bayesian network Active CN111368888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010116968.6A CN111368888B (en) 2020-02-25 2020-02-25 Service function chain fault diagnosis method based on deep dynamic Bayesian network

Publications (2)

Publication Number Publication Date
CN111368888A CN111368888A (en) 2020-07-03
CN111368888B true CN111368888B (en) 2022-07-01

Family

ID=71211585

Country Status (1)

Country Link
CN (1) CN111368888B (en)




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231220

Address after: 712000 Xixian New Area, Xianyang City, Shaanxi Province Fengdong New City Fengjing Avenue and Energy Road Northwest Corner 4-A Xixian Group Office Building, 14th Floor

Patentee after: Xixian New Area Digital Technology Co.,Ltd.

Address before: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee before: Shenzhen Wanzhida Technology Transfer Center Co.,Ltd.

Effective date of registration: 20231220

Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee after: Shenzhen Wanzhida Technology Transfer Center Co.,Ltd.

Address before: 400065 Chongqing Nan'an District huangjuezhen pass Chongwen Road No. 2

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS
