CN114070775A - Blockchain network slice security intelligent optimization method for 5G intelligent connected systems - Google Patents


Info

Publication number
CN114070775A
Authority
CN
China
Prior art keywords
network
data
data set
model
blockchain
Prior art date
Legal status
Granted
Application number
CN202111203106.8A
Other languages
Chinese (zh)
Other versions
CN114070775B (en)
Inventor
Wu Jun
Shi Yuan
Li Gaolei
Li Jianhua
Hong Yuan
Current Assignee
Shanghai Jiaotong University
Shanghai Intelligent and Connected Vehicle R&D Center Co Ltd
Original Assignee
Shanghai Jiaotong University
Shanghai Intelligent and Connected Vehicle R&D Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University, Shanghai Intelligent and Connected Vehicle R&D Center Co Ltd filed Critical Shanghai Jiaotong University
Priority to CN202111203106.8A priority Critical patent/CN114070775B/en
Publication of CN114070775A publication Critical patent/CN114070775A/en
Application granted granted Critical
Publication of CN114070775B publication Critical patent/CN114070775B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/12: Shortest path evaluation
    • H04L45/121: Shortest path evaluation by minimising delays
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/28: Routing or path finding of packets in data switching networks using route fault recovery
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/104: Peer-to-peer [P2P] networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a security-oriented intelligent optimization method for blockchain network slices in a 5G intelligent connected system, comprising the following steps. Step 1: establish a mobile blockchain network based on a 5G slicing environment. Step 2: obtain a raw data set from the operation of the mobile blockchain network, including data under normal operating conditions and data under transmission-link failure conditions, and preprocess the data. Step 3: establish and train a federated semi-supervised learning model according to a machine-learning-based link-state inference algorithm. Step 4: after the federated semi-supervised learning model is trained, obtain an optimized global model that quickly predicts the range of failed transmission links in the blockchain network and achieves predictive fast consensus convergence. Compared with the prior art, the method offers fast convergence of mobile blockchain network consensus, markedly improved inference speed, and effective transmission of local sensing data by blockchain nodes located on different network slices in the Internet of Things, among other advantages.

Description

Blockchain network slice security intelligent optimization method for 5G intelligent connected systems
Technical Field
The invention relates to the field of intelligent optimization of network slice security, and in particular to an intelligent security optimization method for blockchain network slices oriented to 5G intelligent connected systems.
Background
With the popularity of Bitcoin, the underlying blockchain technology has gradually become known and has attracted the attention of governments and enterprises worldwide. Blockchain technology is applied not only in finance and cryptocurrency but also in network security, the industrial Internet of Things, and other fields. Blockchain networks are built on IP networks, and the delay in transmitting blocks between blockchain nodes introduces many security threats: numerous attacks on blockchain networks, such as the typical selfish-mining attack and the double-spending attack, exploit transmission delay. In addition, block transmission delay affects the consensus speed of the blockchain network. Since transmission delay cannot be eliminated entirely, only shortened as much as possible, minimizing the block transmission delay between blockchain nodes has become a focus for organizations and enterprises expanding blockchain application scenarios. The Fast Internet Bitcoin Relay Engine (FIBRE), a UDP-based protocol, is one solution to this transmission delay: a more efficient Bitcoin relay network developed on the basis of the Bitcoin relay network proposed in 2015. FIBRE rapidly transmits blocks between blockchain nodes and further reduces the amount of transmitted data and the network delay through compact-block optimization.
Meanwhile, mobile communication technology has undergone rapid iteration and development, and the 5G era has now arrived. 5G not only provides roughly ten times the transmission speed of 4G but can also serve more business scenarios through on-demand configuration and reasonable division of network slices, providing more vertical and diversified services. To adapt to more specialized scenarios, such as disaster management, battlefield reconnaissance, and the mobile Internet of Things, blockchain networks have begun to migrate from the original IP networks to mobile communication networks; a blockchain network built on a mobile communication network is called a mobile blockchain network. As 5G penetrates the industrial Internet of Things, the open interconnection of key industrial information infrastructure is advancing rapidly, and the new concept of "intelligent connectivity" (e.g., intelligent connected vehicles) is gradually taking shape. To simultaneously meet the dual requirements of intelligent connected scenarios for low delay, reliable service support, and privacy and security protection, the fusion of 5G and blockchain technology has become a global hotspot in the field. However, a mobile blockchain network faces problems similar to those of existing blockchain networks. In a new-generation mobile communication network, to support the diversified service scenarios of different industries, customized network functions must be provided for different application scenarios according to their functions and requirements; network slicing was proposed for this purpose. In a mobile blockchain network, nodes distributed across the blockchain network may be located in different slices, and the stability and failure rate of the data transmission links between slices greatly influence the consensus convergence of the mobile blockchain network. Although FIBRE greatly improves block transmission speed, existing methods do not consider the impact that failures of transmission links between different network slices in the mobile blockchain network have on the convergence of consensus in the blockchain network.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a secure, intelligent optimization method for blockchain network slices oriented to 5G intelligent connected systems.
The purpose of the invention can be realized by the following technical scheme:
An intelligent security optimization method for blockchain network slices in a 5G intelligent connected system comprises the following steps:
Step 1: establish a mobile blockchain network based on a 5G slicing environment.
Step 2: obtain a raw data set from the operation of the mobile blockchain network, including data under normal operating conditions and data under transmission-link failure conditions, and preprocess the data.
Step 3: establish and train a federated semi-supervised learning model according to a machine-learning-based link-state inference algorithm.
Step 4: after the federated semi-supervised learning model is trained, obtain an optimized global model to quickly predict the range of failed transmission links in the blockchain network and achieve predictive fast consensus convergence.
In step 1, the core architecture of the mobile blockchain network comprises, from bottom to top, a 5G network access layer, a 5G network slicing layer, and a mobile blockchain network application layer.
The network architecture of the 5G network access layer is a centralized radio access network, which greatly reduces the cost and energy consumption of communication equipment; the 5G network access layer represents the service users of the 5G access network, including individual users, cities, network operators, and enterprises.
The 5G network slicing layer is the technical premise by which 5G provides differentiated services for different users through the orchestration and combination of 5G network functions; it represents the topology formed by the blockchain network slices and the block transmission between network slices.
The mobile blockchain network application layer represents the whole process from transaction creation by blockchain nodes to block broadcasting; thanks to the low delay and high scalability of the 5G network, the scalability and block synchronization speed of the mobile blockchain network application layer are greatly improved.
A blockchain network slice comprises blockchain links and blockchain nodes. The blockchain nodes realize specific network functions through network function virtualization running in a data center and embody the key characteristics of a blockchain: anonymity, security, randomly generated addresses, blockchain data storage, and a proof-of-work mechanism. Links between blockchain nodes are established by an SDN controller based on global network information, which allows an administrator to remotely configure the physical network and reserve resources for network slices with network resource requirements; the network slices communicate with one another and transmit blocks over transmission links.
In step 2, when a transmission-link failure occurs, part of the affected returned data is mixed into the normal data flow. The returned data is collected to obtain returned data packets, which are labeled and used as the labeled data set; the remaining data is used as the unlabeled data set.
In step 3, the link-state inference algorithm builds a federated semi-supervised learning model using a federated semi-supervised learning algorithm, learns the characteristics of the returned data packets, and predicts the range of the failed transmission link, so that the affected data packets between blockchain nodes are rerouted, the connections between blockchain nodes are quickly restored, and the consensus convergence speed of the mobile blockchain network is improved.
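The rerouting outcome of step 3 can be sketched as follows. This is an illustrative Python sketch only: the patent does not fix a routing algorithm, so the choice of a minimum-hop BFS search, and the names `reroute`, `adjacency`, and `predicted_failed`, are assumptions for illustration.

```python
from collections import deque

def reroute(adjacency, src, dst, predicted_failed):
    """Find a minimum-hop path from src to dst that avoids every link in
    the predicted failed-link range, so affected packets between
    blockchain nodes can be redirected over the remaining inter-slice
    links. `adjacency` maps a node to its neighbor list; `predicted_failed`
    is a list of (node, node) links flagged by the inference model."""
    failed = {frozenset(edge) for edge in predicted_failed}
    queue, prev = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:                      # reconstruct the path backwards
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adjacency.get(node, []):
            if nxt not in prev and frozenset((node, nxt)) not in failed:
                prev[nxt] = node
                queue.append(nxt)
    return None  # destination unreachable given the predicted failures
```

For example, in a four-slice ring with the A-B link predicted failed, traffic from A to D is redirected through C.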
The federated semi-supervised learning algorithm applies semi-supervised learning to federated learning. The federated learning setup comprises a global model G, a set of local models L, and a server; each local model corresponds to a client, and each client corresponds to a network slice. The algorithm divides model training into two processes, one for the labeled data set and one for the unlabeled data set: the labeled data set is trained in a supervised manner, with model optimization guided by a cross-entropy loss function, while the unlabeled data set is trained with a consistency regularization method.
In the consistency regularization method, the input data is perturbed by adding adversarial examples so that the model is robust to perturbations of the unlabeled data; the outputs for the original data and for the randomly transformed data should be consistent. The consistency regularization expression is:

min_θ ‖ p_θ(y|u) − p_θ(y|π(u)) ‖₂²

where p_θ(y|u) is the class probability function of a neural network model with weight parameters θ, input unlabeled data u, and output y; π(·) is a random transformation function; and ‖·‖₂ denotes the L2 norm.
The federated semi-supervised learning algorithm specifically comprises the following steps:
step 301: carrying out data collection through a 5G network slice layer, and dividing an original data set into a labeled data set and an unlabeled data set after obtaining the original data set:
D={di},i=1,2,…,N
S={xj,yj},j=1,2,…,L
U={uk},k=1,2,…M
M+L=N
wherein D is a known original data set, S is a tagged data set, U is an untagged data set, D is a known original data set, andiis the ith data, x, of the original data setjFor j with tag data, yjFor tagged data xjCorresponding label, ukIs the kth unlabeled data;
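The split of step 301 can be sketched in Python. This is an illustrative sketch only: the function name `split_dataset`, the labeling oracle `label_fn` (standing in for the manual labeling of returned packets), and the fixed seed are assumptions, not part of the invention.

```python
import random

def split_dataset(raw, label_fn, num_labeled, seed=0):
    """Split a raw data set D (size N) into a labeled set S (size L) and
    an unlabeled set U (size M), with M + L = N. `label_fn` is a
    hypothetical oracle returning the manual label (e.g. normal vs
    returned traffic) for the few samples that can be tagged by hand."""
    rng = random.Random(seed)
    shuffled = raw[:]
    rng.shuffle(shuffled)
    labeled = [(x, label_fn(x)) for x in shuffled[:num_labeled]]  # S = {(x_j, y_j)}
    unlabeled = shuffled[num_labeled:]                            # U = {u_k}
    assert len(labeled) + len(unlabeled) == len(raw)              # M + L = N
    return labeled, unlabeled
```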
step 302: the consistency regularization-based method adopts a consistency loss function between clients to process the unlabeled data, and the consistency loss function is defined as follows:
Figure BDA0003305771640000041
wherein the content of the first and second substances,
Figure BDA0003305771640000042
global parameters selected by the server based on the similarity of untrained local models in the network slice represent consensus models whose output is label, representing that the parameters are fixed, KL is relative entropy representing the difference in the quantized probability distribution, u is all unlabeled data, y is the predicted output,
Figure BDA0003305771640000043
representing the prediction result of the local model for the output of the local model to the input u, wherein the consistency loss function between the clients represents the difference between the prediction result of the local model and the labels provided by the consensus models;
step 303: the server selects H global parameters in each communication and broadcasts, and trains the label-free data by adopting a final consistency regularization loss function guidance model, wherein the expression is as follows:
Figure BDA0003305771640000044
Figure BDA0003305771640000045
where Φ (-) is the final consistency regularization loss function, Θ (-) represents the generation of a single hot label with a given maximum normalized exponential function value, Max (-) represents the label that outputs a single hot form on the class with the largest agreement, to satisfy the computation of the cross-entropy loss function,
Figure BDA0003305771640000046
representing the resulting class probability, pi (·) is a random transformation function,
Figure BDA0003305771640000047
for protocol-based pseudo tags, i.e., tags in the form of a true class of single hot, low confidence prediction data below a confidence threshold τ is discarded and the pseudo tags reused when they are generated
Figure BDA0003305771640000048
Performing standard cross entropy minimization, wherein Cross entropy is a cross entropy loss function and represents cross entropy calculation on the class probability and the label of the true class in the single hot form;
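The agreement-based pseudo-labeling of step 303 can be sketched as follows. This is an illustrative sketch under stated assumptions: the function name and the convention of returning `None` for discarded low-confidence samples are choices made here, not part of the invention.

```python
import numpy as np

def agreement_pseudo_label(helper_probs, tau=0.8):
    """Agreement-based pseudo-labeling for one unlabeled sample.

    `helper_probs` is an (H, C) array of class probabilities from the H
    consensus (helper) models broadcast by the server. Their outputs are
    summed and renormalized; the winning class becomes a one-hot pseudo
    label, and samples whose peak agreement falls below the confidence
    threshold tau are discarded (returned as None)."""
    agg = np.asarray(helper_probs, dtype=float).sum(axis=0)
    agg = agg / agg.sum()              # renormalize the aggregated agreement
    if agg.max() < tau:
        return None                    # low confidence: discard the sample
    one_hot = np.zeros_like(agg)
    one_hot[agg.argmax()] = 1.0        # one-hot pseudo label ŷ
    return one_hot
```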
step 304: modeling neural networks pθThe weight θ of (y | x) is divided into supervised learning parametersThe number σ and the unsupervised learning parameter ψ are used for supervised learning and unsupervised learning, respectively, to reduce the mutual influence between the supervised learning and the unsupervised learning, and in the course of the supervised learning with labeled data set, the unsupervised learning parameter ψ is fixed, that is, no back propagation is performed, to minimize the loss term, and the corresponding training targets are as follows:
minimizeLS(σ)=λsCrossEntropy)y,pσ+ψ*(y|x))
wherein x and y are elements in the tagged data set S, x is all tagged data, y is tags corresponding to all tagged data respectively, and λ S is a hyper-parameter for controlling the learning ratio among the items;
step 305: using a consistency regularization method to perform unsupervised learning and training on the unlabeled data set, and keeping a supervised learning parameter sigma unchanged in the learning process to minimize loss, wherein corresponding training targets are as follows:
Figure BDA0003305771640000051
wherein λ isIIn order to be a hyper-parameter,
Figure BDA0003305771640000052
is an L2 canonical term, representing the L2 paradigm, to retain the knowledge learned from the supervised learning parameter σ, so that the smaller the difference between the supervised learning parameter σ and the unsupervised learning parameter ψ,
Figure BDA0003305771640000053
regularizing terms for L1, so that an unsupervised parameter psi set comprises a plurality of 0 terms to improve the communication efficiency of federal learning, and thinning the communication without influencing the previous learning effect;
step 306: and for each round of model training, training and updating the supervised learning parameters and the unsupervised learning parameters in the local model, after the updating is finished, each client transmits the parameters of the local model to the server, the server uses a model aggregation method to aggregate the local models, selects a specified number of global variables, if the performance of the aggregated global model does not reach the expectation, the server transmits the global variables to each client again, and trains and aggregates the models repeatedly until the performance of the global model reaches the expectation.
In step 304, the relationship between the supervised learning parameters σ and the unsupervised learning parameters ψ is θ = σ + ψ.
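The disjoint decomposition θ = σ + ψ of step 304 can be sketched as follows. This is an illustrative sketch: the class name and the plain gradient-descent update are assumptions made here for clarity; the point shown is only that each phase updates its own half of the parameters while the other half stays frozen.

```python
import numpy as np

class DisjointParams:
    """The effective weight θ is always the sum of a supervised part σ
    and an unsupervised part ψ. Supervised training on labeled data
    updates only σ (ψ is frozen, no back-propagation through it);
    unsupervised training on unlabeled data updates only ψ."""
    def __init__(self, shape):
        self.sigma = np.zeros(shape)   # supervised learning parameters σ
        self.psi = np.zeros(shape)     # unsupervised learning parameters ψ

    @property
    def theta(self):
        return self.sigma + self.psi   # θ = σ + ψ

    def supervised_step(self, grad, lr=0.1):
        self.sigma -= lr * grad        # ψ stays fixed in this phase

    def unsupervised_step(self, grad, lr=0.1):
        self.psi -= lr * grad          # σ stays fixed in this phase
```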
compared with the prior art, the invention has the following advantages:
the invention provides a core architecture of a mobile block chain network based on a 5G slice environment, and focuses on minimizing transmission delay among block chain nodes when a transmission link between 5G slices is in fault, and then introduces a link state inference algorithm based on machine learning in the architecture, wherein the algorithm uses a federal semi-supervised learning method to learn the characteristics of withdrawing data packets and infer a fault transmission link, so that rerouting is carried out on the data packets influenced among the block chain link points, and the connection among the block chain link points is quickly recovered, thereby quickly converging the common knowledge of the mobile block chain network; in the method based on the federal semi-supervised learning, the range of a fault transmission link is predicted only by learning the characteristics of part of withdrawing data packets, but not a specific fault link, and the reasoning speed is obviously improved by sacrificing certain accuracy; the FSSL model obtained by the inference algorithm can be deployed on any 5G network slice to quickly predict the fault link range, and the compatibility is good; under the support of a scheme of predictive rapid consensus convergence, the blockchain nodes located on different network slices in the internet of things can more effectively transmit local perception data.
Drawings
Fig. 1 is a schematic diagram of a mobile blockchain network framework based on a 5G-sliced Internet of Things.
FIG. 2 is a schematic flow diagram of the FSSL.
Fig. 3 is a graph comparing the convergence speed of the loss function.
Fig. 4 is a graph of convergence time for different schemes.
Fig. 5 is a graph of convergence time for different network sizes.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Embodiment
The invention provides a secure, intelligent optimization method for blockchain network slices oriented to 5G intelligent connected systems, divided into two parts: the core architecture of a mobile blockchain network based on a 5G network, and the design of a link-state inference algorithm based on a federated semi-supervised learning algorithm (FSSL).
A first part:
As shown in fig. 1, the core architecture of the mobile blockchain network includes a 5G network access layer, a 5G network slicing layer, and a mobile blockchain network application layer.
5G network access layer:
As the mobile communication network evolved from 4G to 5G, the network architecture of the access network changed greatly, from the original distributed radio access network (D-RAN) to a centralized radio access network (C-RAN); centralization greatly reduces the cost and energy consumption of the communication equipment. The figure shows the service users of the 5G access network, including individual users, cities, network operators, and enterprises.
5G network slicing layer:
Network slicing is a key 5G technology; it is the technical premise by which 5G provides differentiated services for different users through the orchestration and combination of 5G network functions. A network slice can be viewed as a virtual network, a subset of the physical network comprising virtual blockchain links and virtual blockchain nodes. The virtual blockchain nodes implement specific network functions (such as routers and firewalls); in existing networks these functions are implemented by devices in the physical network, whereas in a 5G network they are implemented by network function virtualization (NFV) running in a data center. The virtual blockchain links between the virtual blockchain nodes are realized by multiple physical links, and the physical links on each physical path must meet the bandwidth requirements of the virtual blockchain links. An SDN controller establishes the virtual blockchain links based on global network information and allows an administrator to remotely configure the physical network to reserve resources for slices with network resource requirements.
Mobile blockchain network application layer:
The mobile blockchain network is built on the 5G network; thanks to the low delay and high scalability of 5G, the scalability of the mobile blockchain network and the speed of block synchronization (consensus convergence) are greatly improved. Fig. 1 shows the process from transaction creation to block broadcasting in the mobile blockchain network. The data in a transaction is not limited to transfer and payment information in the traditional sense but can be any data requiring interactive transmission, including data transmitted between sensor nodes in the Internet of Things and industrial control information exchanged in the industrial Internet of Things; the essence of a transaction is information interaction.
As shown in fig. 2, the invention adopts an inference algorithm based on federated semi-supervised learning (FSSL). In existing FSSL, a data set is usually given and then further divided into a labeled data set and an unlabeled data set, the same as the data sets used in ordinary semi-supervised learning. Under the federated learning framework there are a global model G and a set of local models L; the unlabeled data is distributed to K clients, while the labeled data set S can be trained directly with supervised learning. The invention treats each network slice as a client. When a transmission link is normal, the data traffic transmitted between network slices is normal traffic; when the link fails, part of the affected returned data traffic is mixed into the normal traffic. This data is collected at the network slice interface to obtain the returned data packets, which are manually labeled to form the labeled data set S so that the collected data meets the requirements; the remaining data forms the unlabeled data set U. That is, the traffic in the raw data set comprises normal data traffic and returned data traffic, and the traffic is classified into these two classes: normal data traffic and returned data traffic.
After the collected raw data set has been classified, the already-labeled data set needs little further processing; what matters is the processing of the unlabeled data set. In semi-supervised learning, the most common approach to the unlabeled data is consistency regularization, which aims to produce a more effective classifier using less labeled data.
The unlabeled data set is processed with the consistency regularization method: adversarial perturbations are added to the input data so that the model is robust to perturbations of the unlabeled data. The existing consistency regularization expression is:

min_θ ‖ p_θ(y|u) − p_θ(y|π(u)) ‖₂²

where p_θ(y|u) is the class probability function of a neural network model with weight parameters θ, input unlabeled data instance u, and output y; π(·) is a random data-augmentation function; and ‖·‖₂ denotes the L2 norm. Consistency regularization requires the outputs for the original data and for the augmented data to be consistent.
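The consistency term above can be sketched in Python. This is an illustrative sketch: `model(theta, x)` stands in for any hypothetical function returning class probabilities, and `perturb` plays the role of the random transformation π(·); none of these names come from the patent.

```python
def consistency_loss(model, theta, u, perturb):
    """Squared-L2 consistency term  ‖ p_θ(y|u) − p_θ(y|π(u)) ‖₂².

    The loss is zero exactly when the model's predicted class
    probabilities are invariant to the perturbation of the unlabeled
    input u, which is what consistency regularization enforces."""
    p_clean = model(theta, u)            # p_θ(y|u)
    p_perturbed = model(theta, perturb(u))  # p_θ(y|π(u))
    return sum((a - b) ** 2 for a, b in zip(p_clean, p_perturbed))
```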
The algorithm is described as follows:
step 301: carrying out data collection through a 5G network slice layer, and dividing an original data set into a labeled data set and an unlabeled data set after obtaining the original data set:
D={di},i=1,2,…,N
S={xj,yj},j=1,2,…,L
U={uk},k=1,2,…M
M+L=N
wherein D is a known original data set, S is a tagged data set, U is an untagged data set, D is a known original data set, andiis the ith data, x, of the original data setjFor j with tag data, yjFor tagged data xjCorresponding label, ukIs the kth unlabeled data;
step 302: the consistency regularization-based method adopts a consistency loss function between clients to process the unlabeled data, and the consistency loss function is defined as follows:
Figure BDA0003305771640000082
wherein the content of the first and second substances,
Figure BDA0003305771640000083
global parameters selected by the server based on the similarity of untrained local models in the network slice represent consensus models whose output is label, representing that the parameters are fixed, KL is relative entropy representing the difference in the quantized probability distribution, u is all unlabeled data, y is the predicted output,
Figure BDA0003305771640000084
representing the prediction result of the local model for the output of the local model to the input u, wherein the consistency loss function between the clients represents the difference between the prediction result of the local model and the labels provided by the consensus models;
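The inter-client consistency loss of step 302 can be sketched as follows. This is an illustrative sketch: the function names are choices made here, and in an actual training loop only the local model p_θ would receive gradients from this term, since the helper parameters θ*_h are fixed.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy KL(p ‖ q) between two discrete distributions,
    quantifying the difference between them; eps avoids log(0)."""
    return sum((pi + eps) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def inter_client_consistency(helper_probs, local_probs):
    """Sum of KL( p_{θ*_h}(y|u) ‖ p_θ(y|u) ) over the H consensus
    (helper) models broadcast by the server: the gap between the
    labels provided by the consensus models and the local prediction."""
    return sum(kl_divergence(h, local_probs) for h in helper_probs)
```

The loss vanishes when the local model already agrees with every consensus model.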
step 303: the server selects H global parameters in each communication and broadcasts, and trains the label-free data by adopting a final consistency regularization loss function guidance model, wherein the expression is as follows:
Figure BDA0003305771640000091
Figure BDA0003305771640000092
where Φ (-) is the final consistency regularization loss function, Θ (-) represents the generation with a given maximum returnA single hot label, Max (·), that normalizes the exponential function values represents the label that outputs the single hot form on the class with the largest protocol, to satisfy the calculation of the cross-entropy loss function,
Figure BDA0003305771640000093
representing the resulting class probability, pi (·) is a random transformation function,
Figure BDA0003305771640000094
for protocol-based pseudo tags, i.e., tags in the form of a true class of single hot, low confidence prediction data below a confidence threshold τ is discarded and the pseudo tags reused when they are generated
Figure BDA0003305771640000095
Performing standard cross entropy minimization, wherein Cross entropy is a cross entropy loss function and represents cross entropy calculation on the class probability and the label of the true class in the single hot form;
step 304: modeling neural networks pθThe weight θ of (y | x) is divided into a supervised learning parameter σ and an unsupervised learning parameter ψ, which are used for supervised learning and unsupervised learning, respectively, to reduce the mutual influence between the supervised learning and the unsupervised learning, and during the supervised learning with labeled data set, the unsupervised learning parameter ψ is fixed, i.e., no back propagation is performed, to minimize the loss term, and the corresponding training targets are as follows:
Figure BDA0003305771640000099
wherein x and y are elements in the tagged data set S, x is all tagged data, y is tags corresponding to all tagged data respectively, and λ S is a hyper-parameter for controlling the learning ratio among the items;
step 305: unsupervised learning is performed on the unlabeled data set with the consistency regularization method, and the supervised learning parameter σ is kept unchanged during learning so as to minimize the loss; the corresponding training target is:
min_ψ  λ_I · Φ(σ + ψ) + λ_{L2} · ‖σ − ψ‖₂² + λ_{L1} · ‖ψ‖₁
where λ_I, λ_{L2} and λ_{L1} are hyper-parameters; ‖σ − ψ‖₂² is an L2 regularization term (‖·‖₂ denoting the L2 norm) that retains the knowledge learned in the supervised learning parameter σ by keeping the difference between σ and the unsupervised learning parameter ψ small; ‖ψ‖₁ is an L1 regularization term that drives many entries of ψ to 0, sparsifying the communication to improve the communication efficiency of federated learning without affecting what has already been learned;
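The two training targets of steps 304 and 305 can be outlined as follows; this is an illustrative plain-numpy sketch, with hypothetical function names and hyper-parameter values, not the patented implementation:

```python
import numpy as np

def supervised_loss(sigma, psi, x, y, model, lam_s=1.0):
    """Step 304 sketch: theta = sigma + psi; psi is held fixed, so in a
    real framework only sigma would receive gradients.  'model' maps
    (theta, x) to a class probability vector."""
    probs = model(sigma + psi, x)
    return lam_s * -float(np.log(probs[y] + 1e-12))

def unsupervised_penalties(sigma, psi, lam_l2=1e-3, lam_l1=1e-4):
    """Step 305 regularizers: the L2 term keeps psi close to the frozen
    sigma, the L1 term pushes entries of psi toward exact zeros so the
    communicated update is sparse."""
    l2 = lam_l2 * float(np.sum((sigma - psi) ** 2))  # ||sigma - psi||_2^2
    l1 = lam_l1 * float(np.sum(np.abs(psi)))         # ||psi||_1
    return l2 + l1
```

The consistency term Φ of step 303 would be added on top of the two penalties during unsupervised training.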
step 306: in each round of model training, the supervised and unsupervised learning parameters of the local model are trained and updated; after the update, each client transmits its local model parameters to the server, and the server aggregates the local models with a model aggregation method and selects a specified number of global variables; if the performance of the aggregated global model does not meet expectations, the server transmits the global variables back to each client, and the models are trained and aggregated repeatedly until the performance of the global model meets expectations.
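A minimal sketch of the aggregation loop of step 306, assuming simple parameter averaging (FedAvg-style) as a stand-in for the unspecified model aggregation method; the client behaviour, evaluation metric and all names are hypothetical:

```python
import numpy as np

def fedavg(local_params):
    """Server-side aggregation: element-wise mean of the clients'
    parameter vectors (an assumed aggregation rule)."""
    return np.mean(np.stack(local_params), axis=0)

def federated_round(clients, server_params, target, evaluate):
    """One round of step 306: each client updates a copy of the global
    parameters, the server aggregates, and training stops once the
    evaluation metric meets the expectation 'target'."""
    local_params = [c.update(server_params.copy()) for c in clients]
    global_params = fedavg(local_params)
    return global_params, evaluate(global_params) >= target
```

The outer loop would simply repeat `federated_round` until the second return value is true.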
L2 regularization adds the sum of squares of the weight parameters directly to the original loss function to counter overfitting during model training; it adjusts the weights through a multiplicative factor so that they decay continuously, quickly when a weight is large and slowly when it is small. L1 regularization adds the absolute values of the weight parameters directly to the original loss function; its decrement is fixed, so weights decay quickly when they are small and are reduced slowly when they are large. The weights of the final model therefore concentrate on the features of high importance, while the weights of unimportant features are quickly driven close to 0, leaving the final weights sparse.
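The contrast between the two decay behaviours can be illustrated with single gradient-style update steps; the function names and the rate λ below are assumptions for illustration:

```python
import math

def l2_decay_step(w, lam=0.1):
    """L2 penalty gradient is 2*lam*w: the decrement is proportional to
    the weight, so large weights decay fast and small ones slowly."""
    return w - lam * 2.0 * w

def l1_decay_step(w, lam=0.1):
    """L1 penalty gradient is lam*sign(w): a constant decrement, so
    small weights are driven to exactly 0 quickly, yielding sparsity."""
    if w == 0.0:
        return 0.0
    return w - lam * math.copysign(1.0, w)
```

One step takes a weight of 10.0 to 8.0 under L2 but only to 9.9 under L1, while a weight of 0.1 barely moves under L2 but reaches 0 under L1.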
To further improve the rapid prediction capability of the algorithm, the federated semi-supervised learning-based method learns, from the features of part of the withdrawn data packets, only the range of failed links rather than a specific failed link; inference speed is significantly improved at the cost of some accuracy, raising the efficiency and practicality of the algorithm. Because the inference algorithm performs its prediction on the features of abnormal data traffic, the model it produces can be deployed on any 5G network slice to rapidly predict the range of the failed link, without modification of the original routing mechanism between network slices, and therefore has good compatibility. The inference algorithm can also be applied to Internet of Things scenarios based on the mobile blockchain network, so that blockchain nodes on different network slices of the Internet of Things can transmit local sensing data more effectively.
For example, in fig. 1, when slice 1 and slice 8 communicate for block transmission, the original transmission link is {slice 1, slice 2, slice 3, slice 4, slice 6, slice 8}; if for some reason the transmission link {slice 4, slice 6} fails, slice 3 infers the range of the failed transmission link with the model deployed on it and reselects the transmission link, so that data traffic is forwarded through the transmission link {slice 3, slice 5, slice 6, slice 8} and the communication connection from slice 1 to slice 8 is rapidly restored.
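The rerouting in this example can be sketched as a breadth-first search over the slice topology that skips the predicted failed links; the adjacency below is an assumed reading of fig. 1, not the figure itself:

```python
from collections import deque

def reroute(adj, src, dst, failed_links):
    """Breadth-first search for a shortest slice-to-slice path that
    avoids every link in the predicted failed range."""
    bad = {frozenset(link) for link in failed_links}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen and frozenset((path[-1], nxt)) not in bad:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route
```

Under this assumed topology, failing the link (4, 6) shifts the route from slices 1, 2, 3, 4, 6, 8 to slices 1, 2, 3, 5, 6, 8, mirroring the example.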
An experimental environment is built for experimental verification, and the experiment is divided into two parts: first, a P2P blockchain simulation network is established to realize consensus simulation; second, the simulation blockchain network is operated, with a blockchain client in it representing a network slice; a data set is built by collecting the transmission data of each blockchain node during the experiment, and a federated semi-supervised learning (FSSL) model is trained on the data set to infer the state of the transmission links; a converged global model is finally obtained, through which the range of a failed transmission link in the blockchain network can be quickly predicted.
Establishing the blockchain simulation network:
A blockchain simulation network is built through python programming, and the topology of the blockchain network is generated according to its functions. First, a blockchain node is created; the node carries some key characteristics of a blockchain, such as anonymity, security, randomly generated addresses, blockchain data storage and a proof-of-work mechanism. Multiple such nodes form the blockchain network; once the network topology is formed and data is transmitted between the nodes, the operating conditions of a real network can be simulated. To meet the experimental requirements, the Kademlia discovery protocol is chosen as the node discovery protocol of the P2P network; its core idea is to discover nearby nodes by computing the logical distance between nodes, so that the node search converges. To keep the experiments concise and the problems clearly illustrated, the protocol is simplified and only three requests are implemented:
1. Node handshake: a node interacts with other nodes through the handshake operation, requests their states, compares them with its own state, and updates its state when it has expired;
2. Generating blockchain data: the generated blockchain data include link discovery data, transaction data, link interruption information and other data produced while the simulation network runs; communication between different network slices is labeled in subsequent model training;
3. Transaction broadcasting: node handshakes are sent as heartbeats, block synchronization is performed according to the heartbeat information, and all transactions are broadcast to every node in the blockchain network.
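A minimal sketch of the simplified protocol above, assuming Kademlia's XOR metric for the logical distance and a per-node state version counter for the handshake comparison; all names and data shapes are hypothetical:

```python
def xor_distance(a, b):
    """Kademlia's logical distance between two integer node IDs."""
    return a ^ b

def nearest_nodes(target, known, k=3):
    """The lookup step: return the k known IDs closest to the target
    under the XOR metric, letting a search converge on nearby nodes."""
    return sorted(known, key=lambda n: xor_distance(n, target))[:k]

def handshake(local_state, peer_state):
    """Request a peer's state, compare it with our own per-node record
    (a version counter here), and update whatever has expired."""
    merged = dict(local_state)
    for node, version in peer_state.items():
        if version > merged.get(node, -1):
            merged[node] = version
    return merged
```

The XOR metric makes "closeness" cheap to compute, which is why the node search converges quickly even on a large ID space.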
Training the FSSL model:
After the blockchain simulation network is running, transaction data is transmitted between the different blockchain nodes, and a data set is built by collecting the traffic data of the blockchain clients. One transmission link is manually disconnected to simulate a transmission link failure in a real environment, yielding a data set for the case of a normal transmission link and a data set for the case of a failed transmission link; the raw data set thus obtained is unlabeled. To obtain a better training effect, the labeled data in the FSSL model are shared among the blockchain Internet of Things clients.
In the experiment, part of the raw data set is manually labeled, dividing the raw data set into a labeled data set and an unlabeled data set, after which training is carried out to achieve the expected effect.
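The manual labeling split can be sketched as follows, with the labeled fraction chosen arbitrarily for illustration:

```python
import random

def split_dataset(raw, label_fraction=0.1, seed=0):
    """Simulate the manual labeling step: a random fraction of the raw
    traffic records becomes the labeled set, the rest stays unlabeled."""
    rng = random.Random(seed)
    indices = list(range(len(raw)))
    rng.shuffle(indices)
    cut = max(1, int(label_fraction * len(raw)))
    labeled = [raw[i] for i in indices[:cut]]
    unlabeled = [raw[i] for i in indices[cut:]]
    return labeled, unlabeled
```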
As shown in fig. 3, the training speed and convergence of the FSSL training loss function are significantly improved when the labeled data sets are shared among clients.
After training, the trained FSSL model is deployed on the blockchain nodes of the blockchain simulation network and its inference effect is tested in a simulation experiment: a transmission link failure is simulated by manually disconnecting a certain transmission link in the network, and the time taken by the global model from predicting the range of the failed transmission link and recomputing a path to final route convergence is recorded.
As shown in fig. 4, a node adopting the predictive fast consensus convergence scheme (PFCC scheme) with the FSSL model deployed was put through 20 simulation tests of convergence time. To highlight the experimental effect, two simulation experiments were run on the same network topology, using the RIP and OSPF routing protocols respectively. The figure shows that, compared with existing routing mechanisms, the routing convergence time of the PFCC scheme of the present invention is shorter (under 2 seconds); when the network faces a transmission link interruption, connectivity is quickly restored, improving the consensus convergence speed and robustness of the mobile blockchain network.
To further test the effectiveness of the model, as shown in fig. 5, tests were performed at different network scales, from which two trends can be seen:
As the network scale grows, the convergence time lengthens, which is as expected: with a larger network, the connections between nodes become more complex, and after a transmission link failure occurs in the topology, the global model obtained by the inference algorithm must reason over a more complex topology, consuming more computing resources and time;
As the network scale grows, the increase in convergence time is small: although the network is larger, the model obtained by this method predicts the range of the failed transmission link from part of the transmission data, so the enlargement has little influence on the prediction speed of the inference algorithm; the method of the invention therefore remains effective for larger networks, and the predictive fast consensus convergence scheme can be deployed on more network slices to support diversified services.
In addition, the FSSL model is used to predict the transmission link state, improving consensus convergence in the transmission link failure scenario; to verify the effect, the block generation speed with FSSL is compared against that without FSSL, and the experimental records show a difference in block generation speed on the order of seconds.
In summary, the invention provides a predictive fast consensus convergence scheme for the mobile blockchain based on link state inference in a 5G slicing environment. In the constructed simulation environment, a data set for normal operation and a data set for the transmission-link-failure condition are obtained by running the blockchain simulation network; part of the raw data set is then manually labeled, dividing it into a labeled data set and an unlabeled data set; finally, the federated semi-supervised learning inference algorithm is used to train on the traffic data set in the network slices, producing an optimized global model that infers and predicts the range of the failed transmission link, and the experimental results achieve the expected effect.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A safety intelligent optimization method for a block chain network slice of a 5G intelligent networking system is characterized by comprising the following steps:
step 1: establishing a mobile block chain network based on a 5G slice environment;
step 2: obtaining an original data set of the mobile block chain network operation, including data under a normal operation condition and data under a transmission link fault condition, and performing data preprocessing;
step 3: establishing a federated semi-supervised learning model according to the machine learning-based link state inference algorithm and training it;
step 4: after the federated semi-supervised learning model is trained, obtaining an optimized global model so as to quickly predict the range of the failed transmission link in the blockchain network and realize predictive fast consensus convergence.
2. The method according to claim 1, wherein in step 1, the core architecture of the mobile blockchain network includes a 5G network access layer, a 5G network slice layer, and a mobile blockchain network application layer from bottom to top.
3. The method according to claim 2, wherein the network architecture of the 5G network access layer is a centralized wireless access network to substantially reduce the cost and energy consumption of communication equipment, the 5G network access layer is used to display service users of the 5G access network, and the service users include individual users, cities, network operators and enterprises;
the 5G network slice layer is the technical premise by which 5G provides differentiated services for different users and orchestrates combinations of 5G network functions, and is used for showing the topology formed by the blockchain network slices and the block transmission between the network slices;
the mobile blockchain network application layer is used for showing the whole process of a blockchain node from transaction creation to block broadcasting; based on the low-latency and high-scalability characteristics of the 5G network, the scalability and block synchronization speed of the mobile blockchain network application layer are greatly improved.
4. The method as claimed in claim 3, wherein the blockchain network slice comprises blockchain links and blockchain nodes; the blockchain nodes implement specific network functions through network function virtualization running in a data center and comprise the key characteristics of blockchains, including anonymity, security, randomly generated addresses, blockchain data storage and a proof-of-work mechanism; the links between blockchain nodes are established through an SDN controller based on global information of the network, allowing an administrator to remotely configure the physical network and reserve resources for network slices with network resource requirements; the network slices communicate with each other and perform block transmission through the transmission links.
5. The method for intelligently optimizing the slice security of the blockchain network facing the 5G intelligent networking system according to claim 4, wherein in step 2, when a transmission link failure occurs, part of the affected withdrawn data is mixed into the normal data traffic; the withdrawn data is collected to obtain withdrawn data packets, which are labeled and used as the labeled data set, while the remaining data is used as the unlabeled data set.
6. The method as claimed in claim 1, wherein in step 3, the link state inference algorithm builds a federated semi-supervised learning model by using a federated semi-supervised learning algorithm, learns the characteristics of the withdrawn data packets, and predicts the range of the failed transmission link, so as to reroute the affected data packets between the blockchain nodes and quickly restore the connection between the blockchain nodes, thereby improving the consensus convergence speed of the mobile blockchain network.
7. The method for safely and intelligently optimizing the blockchain network slice facing the 5G intelligent networking system according to claim 1, wherein the federated semi-supervised learning algorithm applies a semi-supervised learning method to federated learning; the federated learning includes a global model G, a local model set L and a server, each local model corresponds to a client, and each client corresponds to a network slice; the federated semi-supervised learning algorithm divides model training into two processes for the labeled data set and the unlabeled data set: the labeled data set is trained in a supervised learning manner, with optimization of the model guided by a cross-entropy loss function, while the unlabeled data set is trained by the consistency regularization method.
8. The method for safely and intelligently optimizing the blockchain network slice of the 5G-oriented intelligent networking system according to claim 7, wherein the consistency regularization method perturbs the input data by adding adversarial examples, so that the model is robust to disturbances of the unlabeled data and the output results for the original data and for the randomly transformed data remain consistent; the consistency regularization expression is:
‖ p_θ(y|u) − p_θ(y|π(u)) ‖₂²
wherein p_θ(y|u) is the class probability function of the neural network model with weight parameter θ, whose input is the unlabeled data u and whose output is y; π(·) is a random transformation function, and ‖·‖₂ represents the L2 norm.
9. The method for intelligently optimizing the slicing security of the blockchain network for the 5G intelligent networking system according to claim 8, wherein the process of the federated semi-supervised learning algorithm specifically comprises:
step 301: carrying out data collection through a 5G network slice layer, and dividing an original data set into a labeled data set and an unlabeled data set after obtaining the original data set:
D = {d_i}, i = 1, 2, …, N
S = {x_j, y_j}, j = 1, 2, …, L
U = {u_k}, k = 1, 2, …, M
M + L = N
wherein D is the known raw data set, S is the labeled data set, U is the unlabeled data set, d_i is the i-th data item of the raw data set, x_j is the j-th labeled data item, y_j is the label corresponding to the labeled data item x_j, and u_k is the k-th unlabeled data item;
step 302: the consistency regularization-based method processes the unlabeled data with an inter-client consistency loss function, defined as:
(1/H) Σ_{j=1}^{H} KL( p*_{θ_j}(y|u) ‖ p_θ(y|u) )
wherein p*_{θ_j}(y|u) denotes the consensus models, whose global parameters θ_j are selected by the server based on the similarity of the untrained local models in the network slices and whose outputs serve as labels; the superscript * indicates that the parameters are fixed; KL is the relative entropy, quantifying the difference between probability distributions; u is the unlabeled data and y is the predicted output; p_θ(y|u) denotes the output of the local model for the input u, i.e. the prediction of the local model; the inter-client consistency loss function thus represents the difference between the prediction of the local model and the labels provided by the consensus models;
step 303: the server selects H global parameters in each communication round and broadcasts them, and the final consistency regularization loss function guides the model in training on the unlabeled data; its expression is:
Φ(θ) = (1/H)·Σ_{j=1}^{H} KL( p*_{θ_j}(y|u) ‖ p_θ(y|u) ) + CrossEntropy( ŷ, p_θ(y|π(u)) )
ŷ = Max( Θ( Σ_{j=1}^{H} p*_{θ_j}(y|u) ) )
where Φ(·) is the final consistency regularization loss function; Θ(·) generates a one-hot label at the position of the largest normalized-exponential (softmax) value, and Max(·) outputs a one-hot label on the class with the largest agreement, so that the cross-entropy loss function can be computed; p_θ(y|π(u)) represents the resulting class probability, with π(·) a random transformation function; ŷ is the agreement-based pseudo label, i.e. a one-hot label standing for the true class. Prediction data with confidence below the confidence threshold τ is discarded; once the pseudo label ŷ has been generated, standard cross-entropy minimization CrossEntropy(ŷ, p_θ(y|π(u))) is performed, where CrossEntropy is the cross-entropy loss function, computed between the class probability and the one-hot label of the true class;
step 304: modeling neural networks pθThe weight θ of (y | x) is divided into a supervised learning parameter σ and an unsupervised learning parameter ψ, which are used for supervised learning and unsupervised learning, respectively, to reduce the mutual influence between the supervised learning and the unsupervised learning, and during the supervised learning with labeled data set, the unsupervised learning parameter ψ is fixed, i.e., no back propagation is performed, to minimize the loss term, and the corresponding training targets are as follows:
Figure FDA0003305771630000041
wherein x and y are elements in the tagged data set S, x is all tagged data, y is tags corresponding to all tagged data respectively, and λ S is a hyper-parameter for controlling the learning ratio among the items;
step 305: unsupervised learning is performed on the unlabeled data set with the consistency regularization method, and the supervised learning parameter σ is kept unchanged during learning so as to minimize the loss; the corresponding training target is:
min_ψ  λ_I · Φ(σ + ψ) + λ_{L2} · ‖σ − ψ‖₂² + λ_{L1} · ‖ψ‖₁
where λ_I, λ_{L2} and λ_{L1} are hyper-parameters; ‖σ − ψ‖₂² is an L2 regularization term (‖·‖₂ denoting the L2 norm) that retains the knowledge learned in the supervised learning parameter σ by keeping the difference between σ and the unsupervised learning parameter ψ small; ‖ψ‖₁ is an L1 regularization term that drives many entries of ψ to 0, sparsifying the communication to improve the communication efficiency of federated learning without affecting what has already been learned;
step 306: in each round of model training, the supervised and unsupervised learning parameters of the local model are trained and updated; after the update, each client transmits its local model parameters to the server, and the server aggregates the local models with a model aggregation method and selects a specified number of global variables; if the performance of the aggregated global model does not meet expectations, the server transmits the global variables back to each client, and the models are trained and aggregated repeatedly until the performance of the global model meets expectations.
10. The method for intelligently optimizing the slice security of the blockchain network facing the 5G intelligent networking system according to claim 9, wherein in the step 304, the relationship between the supervised learning parameter σ and the unsupervised learning parameter ψ is as follows:
θ=σ+ψ。
CN202111203106.8A 2021-10-15 2021-10-15 Block chain network slicing security intelligent optimization method for 5G intelligent networking system Active CN114070775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111203106.8A CN114070775B (en) 2021-10-15 2021-10-15 Block chain network slicing security intelligent optimization method for 5G intelligent networking system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111203106.8A CN114070775B (en) 2021-10-15 2021-10-15 Block chain network slicing security intelligent optimization method for 5G intelligent networking system

Publications (2)

Publication Number Publication Date
CN114070775A true CN114070775A (en) 2022-02-18
CN114070775B CN114070775B (en) 2023-07-07

Family

ID=80234682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111203106.8A Active CN114070775B (en) 2021-10-15 2021-10-15 Block chain network slicing security intelligent optimization method for 5G intelligent networking system

Country Status (1)

Country Link
CN (1) CN114070775B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544334A (en) * 2018-10-22 2019-03-29 绿州蔚来(深圳)控股有限公司 A kind of network scalability block chain implementation method
WO2020056487A1 (en) * 2018-09-19 2020-03-26 Interbit Ltd. Method and system for performing hyperconvergence using blockchains
CN112348204A (en) * 2020-11-05 2021-02-09 大连理工大学 Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology
CN112615730A (en) * 2020-11-23 2021-04-06 北京邮电大学 Resource allocation method and device based on block chain network slice proxy
CN112861152A (en) * 2021-02-08 2021-05-28 北京航空航天大学 Federal learning incentive method and system based on permit chain
CN112887145A (en) * 2021-01-27 2021-06-01 重庆邮电大学 Distributed network slice fault detection method
CN112906859A (en) * 2021-01-27 2021-06-04 重庆邮电大学 Federal learning algorithm for bearing fault diagnosis
CN113159333A (en) * 2021-03-27 2021-07-23 北京邮电大学 Federated learning method, system and device based on hierarchical fragment block chain
CN113194126A (en) * 2021-04-21 2021-07-30 泉州华中科技大学智能制造研究院 Block chain-based transverse federated learning model construction method
WO2021155671A1 (en) * 2020-08-24 2021-08-12 平安科技(深圳)有限公司 High-latency network environment robust federated learning training method and apparatus, computer device, and storage medium
CN113379066A (en) * 2021-06-10 2021-09-10 重庆邮电大学 Federal learning method based on fog calculation


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JUN-TAE KIM; JUNGHA JIN; KEECHEON KIM: "A study on an energy-effective and secure consensus algorithm for private blockchain systems (PoM: Proof of Majority)", 《2018 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC)》, 18 November 2018 (2018-11-18) *
HUANG Qingdong et al.: "CDS-based distributed cooperative consensus spectrum sensing method", 《Journal of University of Electronic Science and Technology of China》, no. 05, 30 September 2017 (2017-09-30) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726743A (en) * 2022-03-04 2022-07-08 重庆邮电大学 Service function chain deployment method based on federal reinforcement learning
CN116701939A (en) * 2023-06-09 2023-09-05 浙江大学 Classifier training method and device based on machine learning
CN116701939B (en) * 2023-06-09 2023-12-15 浙江大学 Classifier training method and device based on machine learning

Also Published As

Publication number Publication date
CN114070775B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Jiang Graph-based deep learning for communication networks: A survey
Bangui et al. A hybrid machine learning model for intrusion detection in VANET
CN112203282B (en) 5G Internet of things intrusion detection method and system based on federal transfer learning
Zeng et al. DeepVCM: A deep learning based intrusion detection method in VANET
Zeng et al. Senior2local: A machine learning based intrusion detection method for vanets
CN106603293A (en) Network fault diagnosis method based on deep learning in virtual network environment
CN114070775A (en) Block chain network slice safety intelligent optimization method facing 5G intelligent network connection system
Ortet Lopes et al. Towards effective detection of recent DDoS attacks: A deep learning approach
Olowononi et al. Federated learning with differential privacy for resilient vehicular cyber physical systems
Alnawayseh et al. Smart congestion control in 5g/6g networks using hybrid deep learning techniques
Aouedi et al. Intrusion detection for softwarized networks with semi-supervised federated learning
Dong et al. Secure distributed on-device learning networks with byzantine adversaries
Soleimani et al. Real-time identification of three Tor pluggable transports using machine learning techniques
Sankaranarayanan et al. SVM-based traffic data classification for secured IoT-based road signaling system
ALMahadin et al. VANET network traffic anomaly detection using GRU-based deep learning model
Selamnia et al. Edge computing-enabled intrusion detection for c-v2x networks using federated learning
Zhao et al. A novel traffic classifier with attention mechanism for industrial internet of things
Guo et al. ML-SDNIDS: an attack detection mechanism for SDN based on machine learning
Raja et al. An empirical study for the traffic flow rate prediction-based anomaly detection in software-defined networking: a challenging overview
Das et al. A comparative analysis of deep learning approaches in intrusion detection system
Uddin et al. Federated learning based intrusion detection system for satellite communication
Shi et al. PFCC: Predictive fast consensus convergence for mobile blockchain over 5G slicing-enabled IoT
Sapello et al. Application of learning using privileged information (LUPI): botnet detection
Modi et al. Enhanced routing using recurrent neural networks in software defined‐data center network
Ahuja et al. DDoS attack traffic classification in SDN using deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant