CN116306770B - Software defined network performance prediction method based on memetic neural architecture search - Google Patents

Software defined network performance prediction method based on memetic neural architecture search

Info

Publication number
CN116306770B
CN116306770B (application CN202310127530.1A)
Authority
CN
China
Prior art keywords
individuals
population
individual
network
network performance
Prior art date
Legal status
Active
Application number
CN202310127530.1A
Other languages
Chinese (zh)
Other versions
CN116306770A (en
Inventor
刘静
李艺帆
赵宏
刘春生
杨方
常超
马春来
Current Assignee
Guangzhou Institute of Technology of Xidian University
Original Assignee
Guangzhou Institute of Technology of Xidian University
Priority date
Filing date
Publication date
Application filed by Guangzhou Institute of Technology of Xidian University filed Critical Guangzhou Institute of Technology of Xidian University
Priority to CN202310127530.1A priority Critical patent/CN116306770B/en
Publication of CN116306770A publication Critical patent/CN116306770A/en
Application granted granted Critical
Publication of CN116306770B publication Critical patent/CN116306770B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of software-defined network performance prediction, in particular to a software-defined network performance prediction method based on memetic neural architecture search. The method comprises the steps of collecting training data, determining a coding mode and a search space, initializing a population, calculating the fitness values of the population, selecting parent individuals, executing crossover and mutation operators, executing a local search operator, merging the parent and offspring populations, performing elitist selection of the next-generation population, determining the optimal individual, and executing an inference process to predict SDN network performance. Neural architecture search is used to explore SDN network performance predictors and contributes greatly to improving their generalization capability; by means of this emerging technique, an effective neural network structure can be designed automatically for a communication network under any routing protocol/routing strategy, greatly relieving the heavy parameter-tuning burden in the design of SDN network performance evaluation algorithms.

Description

Software defined network performance prediction method based on memetic neural architecture search
Technical Field
The invention relates to a software defined network performance prediction method based on memetic neural architecture search, and belongs to the technical field of software defined network performance prediction.
Background
A software-defined network (SDN) separates the data plane from the control plane, so that an SDN controller can exercise global control over the device and link information of the data plane. Relating physical link information in a network to the communication performance of routing-policy paths is a very complex problem, and SDN network performance prediction has therefore evolved into one of the necessary tasks for deep analysis of SDN networks. Accurately predicting SDN network performance makes it possible to obtain the operating state of network links and effectively avoid network emergencies.
Conventional SDN network performance prediction methods generally model network link information, routing strategies, and the like based on queuing theory and similar techniques; a number of premise assumptions about the network are made and solved during modeling, but these assumptions often deviate from the data characteristics of real scenarios and are the key bottleneck limiting improvements in prediction accuracy. In recent years, more and more machine learning models (recurrent neural networks, BP neural networks, long short-term memory networks, etc.) have been used to predict SDN network performance. While these models greatly improve prediction accuracy, their composition is complex and involves a large number of hyperparameters, so algorithm designers must spend a great deal of time and effort searching for the optimal model structure and parameter settings. Further, studies have shown that traffic transmissions under different routing configurations, such as the multi-hop carrier sense protocol (CSMA/CA), the fully connected dynamic time division protocol (DTDMA), and the multi-hop ad hoc time division multiple access protocol (ESTDMA), have different latent characteristics, and a single unified machine learning model cannot achieve optimal prediction of performance metrics (throughput, delay, packet loss rate, etc.) under all conditions.
Therefore, how to develop an SDN network performance predictor that has high generalization capability and saves trial-and-error cost is a pressing problem. By means of the emerging neural architecture search technique, the invention heuristically explores deep network structures with the aim of minimizing network performance prediction error, automatically finding the optimal SDN network performance predictor under different routing configurations.
The present invention has been made in view of this.
Disclosure of Invention
The invention aims to provide a software defined network performance prediction method based on memetic neural architecture search. Neural architecture search is applied to the design of an SDN network performance predictor, contributing greatly to improving its generalization capability; by means of this emerging technique, an effective neural network structure can be designed automatically for a communication network under any routing protocol/routing strategy, greatly relieving the heavy parameter-tuning burden in the design of SDN network performance evaluation algorithms.
The invention achieves this aim through the following technical scheme: a software defined network performance prediction method based on memetic neural architecture search, comprising the following steps:
a: collecting training data;
b: determining a coding mode and a search space, and adopting variable length coding as the coding mode to create a more flexible search mode;
c: initializing the population, randomly initializing m individuals X_i = {N_i, A_i}, i = 1, 2, …, m, according to the coding mode of step B as the initial population;
d: calculating the fitness values of the population, converting the m individuals obtained in step C into corresponding BP neural networks, training the networks, and taking the accuracy of SDN network performance prediction on the test set as the fitness value of each individual; the fitness value serves as the basis for judging individual quality in the subsequent memetic algorithm search;
e: selecting parent individuals; after the fitness value of each individual is calculated, excellent individuals are selected from the population according to fitness so that they have a chance to reproduce offspring as parents, binary tournament selection being used to select individuals from the initialized population/previous-generation population as the parent population;
f: executing crossover and mutation operators, and executing crossover and mutation operations on every two individuals in the parent population to generate child individuals and calculating fitness values of the child individuals;
g: executing a local search operator and calculating the fitness value of the newly generated individual;
h: combining the parent and offspring populations, merging the initialized population/previous-generation population and the offspring individuals obtained in steps F and G to obtain a combined population;
i: elitist selection of the next-generation population, sorting the fitness values of the combined population obtained in step H and taking the first m individuals with the largest fitness values as elite individuals entering the next generation;
j: determining the optimal individual, continuously repeating steps E to I 200 times, i.e. after the memetic algorithm has run for 200 generations, taking the individual with the largest fitness value in the last-generation population as the optimal individual under the network configuration; the BP network represented by the optimal individual is the finally searched optimal SDN network performance predictor;
k: executing the inference process to predict SDN network performance, using the optimal individual obtained in step J, i.e. the trained BP network, to perform the feedforward inference process only once on the test set to obtain the prediction accuracy, namely the SDN network performance prediction accuracy under that SDN network configuration.
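Steps A to K form one memetic loop: population-based global search (selection, crossover, mutation) combined with elitist survival. The sketch below is a minimal runnable rendering of that loop, not the patent's implementation: the fitness function is a toy surrogate standing in for the real train-and-evaluate of step D, crossover is folded into mutation for brevity, and the search ranges are assumptions (the patent lists the real ranges in a table).

```python
import random

random.seed(1)

# Assumed search ranges; the patent gives the actual ones in its table.
NEURONS = list(range(2, 9))
ACTS = ["Sigmoid", "Tanh", "ReLU", "ELU"]

def random_individual(max_layers=4):
    # Variable-length encoding X_i = {N_i, A_i} (steps B/C)
    h = random.randint(1, max_layers)
    return {"N": [random.choice(NEURONS) for _ in range(h)],
            "A": [random.choice(ACTS) for _ in range(h)]}

def fitness(ind):
    # Stand-in for step D (train the decoded BP network, return test
    # accuracy); a toy surrogate keeps the sketch runnable.
    return sum(ind["N"]) / (8.0 * len(ind["N"]))

def tournament(pop, fits):
    a, b = random.sample(range(len(pop)), 2)   # binary tournament (step E)
    return pop[a] if fits[a] >= fits[b] else pop[b]

def mutate(ind, p=0.2):
    # Gene-wise mutation (step F); crossover omitted for brevity
    out = {"N": list(ind["N"]), "A": list(ind["A"])}
    for j in range(len(out["N"])):
        if random.random() < p:
            out["N"][j] = random.choice(NEURONS)
        if random.random() < p:
            out["A"][j] = random.choice(ACTS)
    return out

def memetic_search(m=10, generations=30):
    pop = [random_individual() for _ in range(m)]
    fits = [fitness(x) for x in pop]
    for _ in range(generations):                  # 200 generations in the patent
        children = [mutate(tournament(pop, fits)) for _ in range(m)]
        merged = pop + children                   # step H: merge populations
        merged_fits = fits + [fitness(c) for c in children]
        order = sorted(range(len(merged)), key=lambda i: -merged_fits[i])
        pop = [merged[i] for i in order[:m]]      # step I: elitist selection
        fits = [merged_fits[i] for i in order[:m]]
    best = max(range(m), key=lambda i: fits[i])   # step J: optimal individual
    return pop[best], fits[best]

best, best_fit = memetic_search()
print(best_fit)
```

Because survivors are always the m fittest of the merged pool, the best fitness is non-decreasing across generations.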
Further, in step A, the specific operation method is as follows: the state data of the SDN network are taken as input samples (comprising the topology information and traffic matrix of the SDN network), the state data of each link in the SDN network are taken as labels (comprising the delay, jitter and packet loss of each link), forming a series of sample data, and the sample data are divided into training/test/validation sets in the ratio 60%/30%/10%.
Further, in step B, the specific operation method is as follows: an individual of the memetic algorithm is represented as X_i = {N_i, A_i}, where i is the number of the individual in the population, N_i is the number of neurons in each hidden layer of the BP network represented by the individual, and A_i is the activation function type in each hidden layer; the number of hidden layers is not fixed, and is obtained through the memetic algorithm search.
Further, in step D, the training process specifically includes the following steps:
(1) Network initialization: a corresponding BP neural network is built according to the number of hidden layers, the number of neurons in each hidden layer and the activation function type encoded by the individual; the number of training Epochs is set to 300, the learning rate to 0.01, the optimizer to Adam, and the network weights are initialized with Xavier initialization;
(2) The feedforward process is calculated, SDN state data in a training set are input into a built BP neural network, and the output of each hidden layer and the output value of the last output layer are calculated according to the weight of the current BP network;
(3) Error calculation, namely calculating an error between a predicted output of the BP network and an expected output (namely state data of each link in the SDN network), wherein the error is measured by using a mean square error MSE value;
(4) Updating the weight, namely updating the connection weight in the BP network according to the prediction error, namely performing a back propagation process;
(5) Termination check: the above steps are repeated until all training data have undergone the feedforward and back-propagation processes 300 times, i.e. training finishes when the Epoch count reaches 300.
Further, in step E, each time binary tournament selection is performed, 2 individuals are selected from the population, and the one with the higher fitness value becomes a member of the parent population.
Further, in step F, taking X_0 = {N_0, A_0} and X_1 = {N_1, A_1} as an example, where N_0 = {2,3,2}, A_0 = {Sigmoid, Sigmoid, Tanh}, N_1 = {3,4}, A_1 = {ELU, Tanh}: first a single-point crossover operation is performed with probability 0.8, the cross points being randomly generated, yielding the two individuals X_0' = {N_0', A_0'} and X_1' = {N_1', A_1'}, with N_0' = {2,3,4}, A_0' = {Sigmoid, ELU, Tanh}, N_1' = {3,2}, A_1' = {Tanh, ELU, Tanh}; then mutation is applied gene by gene, i.e. for each gene position it is decided with probability 0.2 whether to mutate, yielding the two individuals X_0'' = {N_0'', A_0''} and X_1'' = {N_1'', A_1''}, with N_0'' = {2,6,4}, A_0'' = {Sigmoid, Tanh, Tanh}, N_1'' = {3,2}, A_1'' = {Tanh, ELU, Tanh}. Through the crossover and mutation operations, m offspring individuals are obtained, and step D is repeated for them to calculate their fitness values.
Further, in step G, the fitness values of the m offspring obtained in step F are ranked, and the top 5% of individuals with the largest fitness values are taken as elite individuals for local perturbation. Taking X_1 = {N_1, A_1} as an example, with N_1 = {3,4} and A_1 = {ELU, Tanh}: each position of N_1 is increased and decreased by 1 from its original value, and each position of A_1 is replaced by any other value in the variable's range, yielding offspring individuals X_1-1 = {N_1-1, A_1-1} with N_1-1 = {2,4}, A_1-1 = {Sigmoid, ELU}; X_1-2 = {N_1-2, A_1-2} with N_1-2 = {4,4}, A_1-2 = {ReLU, Sigmoid}; X_1-3 = {N_1-3, A_1-3} with N_1-3 = {3,3}, A_1-3 = {Sigmoid, Sigmoid}; X_1-4 = {N_1-4, A_1-4} with N_1-4 = {3,5}, A_1-4 = {ReLU, ELU}. The offspring individuals obtained through the local search also require repeating step D to calculate their fitness values.
The invention has the following technical effects and advantages: aiming at the problem of poor generalization capability of SDN network performance predictors, the invention designs a memetic neural architecture search algorithm that automatically searches for the optimal BP network structure under each SDN network configuration for predicting SDN network performance with high precision.
The invention designs a variable-length BP network coding mode, which can conveniently and flexibly accommodate the search over BP neural networks with different numbers of hidden layers, numbers of hidden-layer neurons, and hidden-layer activation function types.
The invention designs a local search based on an elite-individual local perturbation operator to further improve the search efficiency of the memetic neural architecture search.
Drawings
FIG. 1 is a general flow diagram of the present invention;
fig. 2 is an example of a BP neural network with 3 hidden layers in the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort shall fall within the protection scope of the invention.
Please refer to fig. 1-2.
A software defined network performance prediction method based on memetic neural architecture search comprises the following steps:
a: training data is collected. The method comprises the steps of taking state data of an SDN network as an input sample (comprising topology information, traffic matrix and the like of the SDN network), taking state data of each link in the SDN network as a label (comprising time delay, jitter, packet loss and the like of each link) and forming a series of sample data. Next, these sample data were divided into training/test/validation sets in a ratio of 60%/30% 10%;
b: and determining a coding mode and a search space. Since the number of hidden layers of the BP neural network is not yet determined, variable length coding is used as a coding mode to create a more flexible searching mode. Specifically, one individual of the cryptographic algorithms may be denoted as X i ={N i ,A i I refers to the number of the individual in the population,is the number of neurons in each hidden layer of the BP network represented by the individual, +.> The activation function type in each hidden layer of the BP network represented by the individual is shown, wherein h refers to the number of the hidden layers, the value of the activation function type is not fixed, and the activation function type is obtained by searching through a cryptographic algorithm. An example BP neural network with 3 hidden layers, denoted X, is shown in FIG. 2 0 The number N of neurons in each hidden layer 0 = {6,8,6}, activation function a for each hidden layer 0 = { ReLU, tanh, ELU }. In order to expand the search space as much as possible, more potential excellent BP network settings are provided for the secret mother algorithm to improve the prediction precision, the range of searchable settings in the BP network, namely the search space of the secret mother algorithm, is listed in Table 2;
c: initializing a population. Randomly initializing m individuals X according to the coding mode of the step B i ={N i ,A i I=1, 2, …, m as an initial population, wherein the number of hidden layers of the BP neural network represented by each individual, the number of neurons of each hidden layer, and the activation function type are randomly generated in a search space shown in the following table;
Range of searchable variables in the BP network
D: and calculating the fitness value of the population. And C, converting the m individuals obtained in the step C into corresponding BP neural networks, training the neural networks, and taking the accuracy of SDN network performance prediction on a test set as an fitness value of the individuals, wherein the fitness value is used for judging the quality of the individuals during the subsequent secret mother algorithm search. Here, the BP neural network is used as a multi-layer feedforward neural network, and the network weight and the threshold are adjusted by the prediction error of each feedforward process, so that the predicted output of the BP network is continuously near the expected output, and the training process comprises the following specific steps:
(1) Network initialization. A corresponding BP neural network is built according to the number of hidden layers, the number of neurons in each hidden layer and the activation function type encoded by the individual; the number of training Epochs is set to 300, the learning rate to 0.01, the optimizer to Adam, and the network weights are initialized with Xavier initialization;
(2) Feedforward computation. The SDN state data in the training set are input into the built BP neural network, and the outputs of the h hidden layers and the output value of the final output layer are calculated according to the current BP network weights;
(3) Error calculation. The error E between the predicted output of the BP network and the expected output (i.e. the state data of each link in the SDN network) is calculated, where E is measured by the mean square error (MSE);
(4) Weight update. The connection weights in the BP network are updated according to the prediction error E, i.e. the back-propagation process is performed;
(5) Termination check. The above steps are repeated until all training data have undergone the feedforward and back-propagation processes 300 times, i.e. training finishes when the Epoch count reaches 300;
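The five training steps above can be condensed into a runnable sketch. To stay self-contained it uses a single hidden sigmoid layer, plain SGD, and a toy regression target in place of the patent's Adam optimizer, Xavier initialization, and real SDN samples; the Epoch = 300 setting is kept, and the MSE of step (3) is the loss.

```python
import math
import random

random.seed(0)

# Toy regression stand-in for the SDN samples: learn y = x^2 on [0, 1].
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(11)]

H = 4                                   # neurons in the single hidden layer
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.05                               # plain SGD here; the patent uses Adam

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Step (2): feedforward through hidden layer and linear output
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    # Step (3): mean square error over the data set
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
for epoch in range(300):                # step (5): Epoch = 300
    for x, y in data:
        h, out = forward(x)
        e = out - y                     # prediction error
        for j in range(H):              # step (4): back-propagate and update
            grad_h = e * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * e * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * e
print(loss_before, mse())
```

The gradients for the hidden weights are computed before the output weights are overwritten, which is the detail most easily broken in a hand-rolled back-propagation loop.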
e: parent individuals are selected. After each individual fitness value is calculated, the good individuals are selected from the population according to the fitness value, so that the individuals can be used as parents to reproduce offspring. The invention adopts a binary tournament selection mode to select individuals from an initialized population/a previous generation population as a parent population. Specifically, each time a binary tournament selection is performed, 2 individuals are selected from the population, and then the individuals with high fitness values in the two individuals are used as one member of the parent population;
f: and executing crossover and mutation operators. For the parent population, every two individuals perform crossover and mutation operations to generate child individuals. Specifically, in X 0 ={N 0 ,A 0 Sum X 1 ={N 1 ,A 1 By way of example, where N 0 ={2,3,2},A 0 ={Sigmoid,Sigmoid,Tanh},N 1 ={3,4},A 1 = { ELU, tanh }, first performing a single-point cross operation with a probability of 0.8, wherein the cross points are randomly generated, resulting in X 0 '={N 0 ',A 0 ' and X 1 '={N 1 ',A 1 ' two individuals, N 0 '={2,3,4},A 0 '={Sigmoid,ELU,Tanh},N 1 '={3,2},A 1 ' = { Tanh, ELU, tanh }; then, the mutation operation is carried out by gene position, namely, whether the mutation operation is carried out or not is judged according to the probability of 0.2 for each gene position, so as to obtain X 0 ”={N 0 ”,A 0 "} and X 1 ”={N 1 ”,A 1 "-two individuals, N 0 ”={2,6,4},A 0 ”={Si gmoi d,Tanh,Tanh},N 1 ”={3,2},A 1 "= { Tanh, ELU, tanh }. Obtaining m generation individuals through the crossover and mutation operation, and then repeating the step D for the m generation individuals to calculate the fitness value of the m generation individuals;
g: a local search operator is performed. The local search operator further explores the neighborhood of the solution based on the global search to enhance the convergence rate of the algorithm. The common local search operators in the secret mother algorithm comprise a mountain climbing method, a simulated annealing method, a tabu search and the like, and the invention designs a local disturbance operator based on elite individuals to perform local search on excellent individuals in consideration of the specificity of individual coding forms in the invention. Specifically, the fitness values of m generation individuals obtained in the step F are ranked, and the individuals with the largest fitness value of the first 5% are used as elite individuals to carry out local disturbance. By X 1 ={N 1 ,A 1 By way of example, N 1 ={3,4},A 1 = { ELU, tanh }, for N 1 Each position of (a) is respectively added and subtracted by 1, A on the basis of the original value 1 Each position of the variable is replaced by any other value in the variable value range, thus obtaining offspring individual X 1-1 ={N 1-1 ,A 1-1 },N 1-1 ={2,4},A 1-1 ={Sigmoid,ELU};X 1-2 ={N 1-2 ,A 1-2 },N 1 - 2 ={4,4},A 1-2 ={ReLU,Sigmoid};X 1-3 ={N 1-3 ,A 1-3 },N 1 - 3 ={3,3},A 1-3 ={Sigmoid,Sigmoid};X 1-4 ={N 1-4 ,A 1-4 },N 1-4 ={3,5},A 1-2 = { ReLU, ELU }. D, repeating the step D to calculate the fitness value of the child individuals obtained through the local search;
h: the parent and offspring populations are combined. Combining the initialized population/the previous generation population and the offspring individuals obtained in the step F, G to obtain a combined population;
i: elite selected the next generation population. The purpose of the selection operation is to eliminate the winner and the bad of the population, continuously evolve and improve the convergence speed and the searching efficiency of the population, so the invention sorts the fitness values of the combined population obtained in the step H and takes the first m individuals with the largest fitness value as elite individuals to enter the next generation;
j: an optimal individual is determined. Continuously repeating the steps E to I until 200 times, namely after the secret mother algorithm is executed for 200 generations, taking the individual with the largest fitness value in the last generation population as the optimal individual under the network configuration, wherein the BP network represented by the optimal individual is the optimal SDN network performance predictor searched finally;
k: and executing an reasoning process to predict SDN network performance. And finally, performing feedforward reasoning process once on the optimal individual obtained in the step J, namely the trained BP network, to obtain prediction accuracy, namely the prediction accuracy of SDN network performance under the SDN network configuration.
Neural architecture search aims to solve the problems of structure design and parameter tuning in deep learning models, and is a cross-disciplinary research field combining optimization algorithms and machine learning. Before deep learning emerged, traditional machine learning model structures were relatively simple, and an algorithm engineer could manually modify the model structure and hyperparameters to arrive at satisfactory results. However, with the rapid development of computer technology, the scale of SDN networks and the structure of neural networks have gradually become complex, and the number of hyperparameters grows exponentially, posing greater challenges to algorithm engineers designing a suitable SDN network performance predictor.
Reinforcement learning and evolutionary computation are currently the two mainstream optimization strategies for neural architecture search. Compared with reinforcement-learning-based search mechanisms, evolutionary algorithms as the optimization strategy offer a more flexible coding mode and rest on a deeper theoretical research foundation, and are popular among experts in the artificial intelligence field. Therefore, the invention designs a neural architecture search technique based on a memetic algorithm to automatically explore the optimal SDN network performance predictor and improve SDN performance prediction precision under each routing configuration; the memetic algorithm combines population-based global search with individual-based local heuristic search, and is a branch of evolutionary algorithms. For ease of understanding and expressing the details of the invention, the term correspondence among evolutionary algorithms, neural architecture search, and SDN networks is listed in the following table.
Term correspondence among evolutionary algorithms, neural architecture search, and SDN networks as used in the invention
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted merely for clarity, and the specification should be taken as a whole, the technical solutions in the embodiments being combinable as appropriate to form other implementations understandable to those skilled in the art.

Claims (3)

1. A software defined network performance prediction method based on memetic neural architecture search, characterized by comprising the following steps:
a: taking state data of an SDN network as an input sample, taking state data of each link in the SDN network as a label to form sample data, dividing the sample data into a training set, a testing set and a verification set according to the proportion of 60%, 30% and 10%, wherein the state data of the SDN network comprises topology information and a flow matrix of the SDN network, and the state data of each link in the SDN network comprises time delay, jitter and packet loss;
b: determining a coding mode and a search space, and adopting variable length coding as the coding mode to create a more flexible search mode: one individual of the cryptographic algorithm is denoted as X i ={N i ,A i "i" means individual X i The number in the population is the number in the population,is individual X i The number of neurons in each hidden layer of the represented BP neural network,is individual X i The activation function type in each hidden layer of the BP neural network is represented, wherein h refers to the number of the hidden layers, the value of the activation function type is not fixed, and the activation function type is obtained by searching a secret mother algorithm;
c: initializing a population, and randomly initializing m individuals to serve as the initial population according to the coding mode of the step B;
d: c, calculating fitness values of the population, converting the individuals obtained in the step C into corresponding BP neural networks, training the BP neural networks through a back propagation algorithm, and taking the accuracy of the BP neural networks for SDN network performance prediction on a test set as the fitness values of the corresponding individuals, wherein the fitness values are used for judging the basis of the individual quality in the subsequent secret mother algorithm search;
E: selecting parent individuals: after the fitness value of each individual has been calculated, high-quality individuals are selected from the population according to fitness so that they have the opportunity to reproduce offspring as parents; individuals are selected from the initialized population or the previous-generation population as the parent population by binary tournament selection;
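Binary tournament selection (step E, detailed in claim 3) is straightforward to sketch; the population contents and fitness values below are illustrative placeholders:

```python
import random

def binary_tournament(population, fitnesses, k):
    """Step E / claim 3: draw 2 distinct individuals at random and keep
    the fitter one; repeat k times to form the parent pool."""
    parents = []
    for _ in range(k):
        i, j = random.sample(range(len(population)), 2)
        parents.append(population[i] if fitnesses[i] >= fitnesses[j] else population[j])
    return parents

random.seed(1)
pop = ["a", "b", "c", "d"]   # placeholder individuals
fit = [0.2, 0.9, 0.5, 0.1]   # illustrative fitness values
parents = binary_tournament(pop, fit, 4)
print(parents)
```

Note that the lowest-fitness individual can never win a tournament against a distinct opponent, so selection pressure favours fitter parents while preserving some diversity.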
F: executing the crossover and mutation operators: for the parent population, crossover and mutation operations are performed on each pair of individuals to generate child individuals, and the fitness values of the child individuals are calculated;
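A possible realization of step F for variable-length individuals is shown below; the claim does not fix the concrete crossover or mutation rule, so the one-point cut within the shorter parent's length and the per-gene resampling are assumptions:

```python
import random

ACTIVATIONS = ["sigmoid", "tanh", "relu"]   # assumed value range

def crossover(p1, p2):
    """One-point crossover adapted to variable-length parents: the cut
    point is drawn within the shorter parent's length."""
    cut = random.randint(0, min(len(p1["N"]), len(p2["N"])))
    return {"N": p1["N"][:cut] + p2["N"][cut:],
            "A": p1["A"][:cut] + p2["A"][cut:]}

def mutate(ind, p=0.1, max_neurons=128):
    """Point mutation: resample each gene with probability p."""
    return {"N": [random.randint(1, max_neurons) if random.random() < p else n
                  for n in ind["N"]],
            "A": [random.choice(ACTIVATIONS) if random.random() < p else a
                  for a in ind["A"]]}

random.seed(2)
p1 = {"N": [16, 8, 4], "A": ["relu", "tanh", "relu"]}
p2 = {"N": [32, 2], "A": ["sigmoid", "sigmoid"]}
child = mutate(crossover(p1, p2))
print(child)
```

The child inherits the second parent's length, so N and A always stay aligned position by position.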
G: executing the local perturbation operator: the fitness values of all child individuals obtained in step F are sorted, and the top 5% of individuals with the largest fitness values are taken as elite individuals for local perturbation, specifically: for an individual X_i = {N_i, A_i}, each position of N_i is both increased and decreased by 1 from its original value, and each position of A_i is replaced with every other value in its value range, yielding newly generated child individuals; step D is repeated to calculate the fitness values of these child individuals;
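The local perturbation of step G enumerates a neighbourhood around an elite individual; the three-element activation value range is an assumption:

```python
ACTIVATIONS = ["sigmoid", "tanh", "relu"]   # assumed value range

def local_neighbours(ind, max_neurons=128):
    """Step G: for an elite X_i = {N_i, A_i}, add and subtract 1 at each
    position of N_i, and swap each position of A_i to every other value."""
    out = []
    for i in range(len(ind["N"])):
        for delta in (1, -1):
            n = ind["N"][i] + delta
            if 1 <= n <= max_neurons:
                out.append({"N": ind["N"][:i] + [n] + ind["N"][i + 1:],
                            "A": list(ind["A"])})
    for i in range(len(ind["A"])):
        for a in ACTIVATIONS:
            if a != ind["A"][i]:
                out.append({"N": list(ind["N"]),
                            "A": ind["A"][:i] + [a] + ind["A"][i + 1:]})
    return out

elite = {"N": [16, 8], "A": ["relu", "tanh"]}
nbrs = local_neighbours(elite)
print(len(nbrs))   # 2x2 neuron moves + 2x2 activation swaps -> 8
```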
H: combining the initial population or the previous-generation population with the individuals newly generated in steps F and G to obtain a combined population;
I: elite selection of the next-generation population: the fitness values of the combined population obtained in step H are sorted, and the first m individuals with the largest fitness values are taken as elite individuals to enter the next generation;
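Elitist survival selection (steps H-I) keeps the m fittest individuals of the merged population:

```python
def elitist_survival(population, fitnesses, m):
    """Steps H-I: rank the merged population by fitness and keep the
    first m individuals as the next generation."""
    ranked = sorted(range(len(population)), key=lambda i: fitnesses[i], reverse=True)
    return [population[i] for i in ranked[:m]]

pop = ["a", "b", "c", "d", "e"]   # merged population (placeholders)
fit = [0.3, 0.9, 0.1, 0.7, 0.5]
survivors = elitist_survival(pop, fit, 3)
print(survivors)   # -> ['b', 'd', 'e']
```

Because the previous generation is part of the merged population, the best individual found so far can never be lost.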
J: determining the optimal individual: steps E to I are repeated 200 times, i.e. the memetic algorithm is executed for 200 generations, and the individual with the largest fitness value in the last-generation population is taken as the optimal individual; the BP neural network represented by the optimal individual is the finally searched optimal SDN network performance predictor;
K: executing the inference process to predict SDN network performance: for the optimal individual obtained in step J, i.e. the trained BP neural network, only one feed-forward inference pass over the test set is needed to obtain the prediction accuracy, namely the SDN network performance prediction accuracy.
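Steps C-J can be combined into one compact memetic loop. The sketch below substitutes a toy fitness (architectures whose neuron counts sum to 40 score best) for the BP-network training of step D, and runs 30 generations instead of the claimed 200, purely for illustration:

```python
import random

random.seed(3)
ACTS = ["sigmoid", "tanh", "relu"]

def rand_ind():
    h = random.randint(1, 4)
    return {"N": [random.randint(1, 32) for _ in range(h)],
            "A": [random.choice(ACTS) for _ in range(h)]}

# Toy stand-in for the step-D fitness (the real method trains the
# decoded BP network and scores its prediction accuracy).
def fit(ind):
    return 1.0 / (1 + abs(sum(ind["N"]) - 40))

def generation(pop, m=10):
    scores = [fit(x) for x in pop]
    # E: binary tournament selection of parents
    parents = []
    for _ in range(m):
        i, j = random.sample(range(m), 2)
        parents.append(pop[i] if scores[i] >= scores[j] else pop[j])
    # F: one-point crossover plus point mutation
    kids = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randint(0, min(len(a["N"]), len(b["N"])))
        kid = {"N": a["N"][:cut] + b["N"][cut:],
               "A": a["A"][:cut] + b["A"][cut:]}
        if random.random() < 0.3:
            k = random.randrange(len(kid["N"]))
            kid["N"][k] = random.randint(1, 32)
        kids.append(kid)
    # G: local perturbation (+/-1 neuron) around the current best
    best = max(pop + kids, key=fit)
    for k in range(len(best["N"])):
        for d in (1, -1):
            cand = {"N": list(best["N"]), "A": list(best["A"])}
            cand["N"][k] += d
            if cand["N"][k] >= 1:
                kids.append(cand)
    # H, I: merge and keep the m fittest (elitism)
    return sorted(pop + kids, key=fit, reverse=True)[:m]

pop = [rand_ind() for _ in range(10)]
for _ in range(30):      # step J uses 200 generations; 30 suffice here
    pop = generation(pop)
print(round(fit(pop[0]), 3))
```

The hill-climbing step G around the incumbent best is what distinguishes this memetic loop from a plain genetic algorithm: each generation can improve the best sum by at least one neuron until the optimum is reached.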
2. The software defined network performance prediction method based on memetic neural architecture search according to claim 1, wherein in step D the training process specifically comprises the following steps:
(1) Network initialization: the corresponding BP neural network is built according to the number of hidden layers, the number of neurons in each hidden layer and the activation function type encoded in individual X_i; the number of training epochs of the BP neural network is set to 300, the learning rate to 0.01 and the optimizer to Adam, and the network weights are initialized with Xavier initialization;
(2) Calculating the feed-forward process: SDN state data in the training set are input into the built BP neural network, and the output of each hidden layer and the output value of the final output layer are calculated from the current weights of the BP neural network;
(3) Calculating the error between the predicted delay value and the real delay value of the BP neural network, the error being measured by the mean squared error (MSE);
(4) Updating the weights: the connection weights in the BP neural network are updated according to the prediction error, i.e. the back-propagation process is performed;
(5) Judging the termination condition: steps (2)-(4) are repeated continuously; once the number of repetitions reaches 300, training ends.
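The training loop of steps (1)-(5) can be illustrated on a single linear unit, for which back-propagation reduces to two closed-form derivatives; the Xavier-style uniform initialization, learning rate 0.01, 300 epochs and MSE loss follow claim 2, while the toy data set is an assumption:

```python
import math
import random

random.seed(4)
# (1) initialization: Xavier-style uniform init, limit sqrt(6/(fan_in+fan_out))
fan_in, fan_out = 1, 1
limit = math.sqrt(6.0 / (fan_in + fan_out))
w, b = random.uniform(-limit, limit), 0.0

# toy regression data standing in for the delay labels: y = 2x + 1
xs = [i / 10.0 for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]

def mse_of(w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

mse0 = mse_of(w, b)
lr, epochs = 0.01, 300            # hyperparameters fixed in step (1)
for _ in range(epochs):           # (5) terminate after 300 repetitions
    # (2) feed-forward pass
    preds = [w * x + b for x in xs]
    # (3)+(4) MSE gradients and weight update (back-propagation reduces
    # to these two derivatives for a single linear unit)
    dw = 2 * sum((p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    db = 2 * sum(p - y for p, y in zip(preds, ys)) / len(xs)
    w, b = w - lr * dw, b - lr * db

print(round(mse0, 4), round(mse_of(w, b), 4))
```

With this learning rate the loss decreases monotonically, though a full multi-layer network with the Adam optimizer would converge differently.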
3. The method of claim 1, wherein in step E, each time binary tournament selection is performed, 2 individuals are randomly selected from the population, and the individual with the higher fitness value of the two is taken as a member of the parent population.
CN202310127530.1A 2023-02-17 2023-02-17 Software defined network performance prediction method based on dense parent neural architecture search Active CN116306770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310127530.1A CN116306770B (en) 2023-02-17 2023-02-17 Software defined network performance prediction method based on dense parent neural architecture search


Publications (2)

Publication Number Publication Date
CN116306770A CN116306770A (en) 2023-06-23
CN116306770B true CN116306770B (en) 2023-11-14

Family

ID=86786037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310127530.1A Active CN116306770B (en) 2023-02-17 2023-02-17 Software defined network performance prediction method based on dense parent neural architecture search

Country Status (1)

Country Link
CN (1) CN116306770B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200002439A (en) * 2018-06-29 2020-01-08 주식회사 케이티 Apparatus and method for routing based on machine learning in software defined network environment
CN113158543A (en) * 2021-02-02 2021-07-23 浙江工商大学 Intelligent prediction method for software defined network performance
CN115456140A * 2022-07-20 2022-12-09 浙江工商大学 Graph neural network model important link determination method and device based on Shapley value interpretation
CN115620046A (en) * 2022-09-22 2023-01-17 广东工业大学 Multi-target neural architecture searching method based on semi-supervised performance predictor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10148594B2 (en) * 2015-12-31 2018-12-04 Fortinet, Inc. Application based conditional forwarding and load balancing in a software defined networking (SDN) architecture
US10405219B2 (en) * 2017-11-21 2019-09-03 At&T Intellectual Property I, L.P. Network reconfiguration using genetic algorithm-based predictive models
TWI746038B (en) * 2020-07-02 2021-11-11 阿證科技股份有限公司 Neural network-like artificial intelligence decision-making core system


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Improving the Energy Efficiency of Software-Defined Networks through the Prediction of Network Configurations;Manuel Jimenez-Lazaro et al.;《Electronics》;第11卷;全文 *
Network Intrusion Detection Based on an Efficient Neural Architecture Search;Renjian Lyu et al.;《Symmetry》;第13卷;全文 *
A network security situation prediction algorithm based on SA-SOA-BP neural network;Zhang Ran, Liu Min, Zhang Qikun, Yin Yifeng;Journal of Chinese Computer Systems(10);full text *
Research on encrypted traffic classification technology based on deep learning;Zhang Surong;China Master's Theses Full-text Database, Information Science and Technology;pp. I139-203 *
Research on traffic prediction with neural network algorithms in an SDN environment;Zhu Lingyun, Zhuang Yujuan;Network Security Technology & Application(03);full text *


Similar Documents

Publication Publication Date Title
CN113158543B (en) Intelligent prediction method for software defined network performance
CN114697229B (en) Construction method and application of distributed routing planning model
CN107346459B (en) Multi-mode pollutant integrated forecasting method based on genetic algorithm improvement
CN113887787B (en) Flood forecast model parameter multi-objective optimization method based on long-short-term memory network and NSGA-II algorithm
CN113129585B (en) Road traffic flow prediction method based on graph aggregation mechanism of reconstructed traffic network
CN106874655A (en) Traditional Chinese medical science disease type classification Forecasting Methodology based on Multi-label learning and Bayesian network
CN113225370B (en) Block chain multi-objective optimization method based on Internet of things
CN111355633A (en) Mobile phone internet traffic prediction method in competition venue based on PSO-DELM algorithm
CN112260733B (en) Multi-agent deep reinforcement learning-based MU-MISO hybrid precoding design method
CN114118567A (en) Power service bandwidth prediction method based on dual-channel fusion network
CN104200096A (en) Lightning arrester grading ring optimization method based on differential evolutionary algorithm and BP neural network
CN112905436B (en) Quality evaluation prediction method for complex software
Li et al. Network topology optimization via deep reinforcement learning
CN116306770B (en) Software defined network performance prediction method based on dense parent neural architecture search
CN116170066B (en) Load prediction method for low-orbit satellite Internet of things
CN117131979A (en) Traffic flow speed prediction method and system based on directed hypergraph and attention mechanism
CN111310974A (en) Short-term water demand prediction method based on GA-ELM
CN115640845A (en) Method for generating few-category samples of neural network of graph based on generation of confrontation network
CN113285832B (en) NSGA-II-based power multi-mode network resource optimization allocation method
WO2023015674A1 (en) Multi-bit-width quantization method for deep convolutional neural network
CN115620046A (en) Multi-target neural architecture searching method based on semi-supervised performance predictor
CN115150335A (en) Optimal flow segmentation method and system based on deep reinforcement learning
CN109871953B (en) Wavelet neural network modeling method for heavy oil cracking process of fpRNA genetic algorithm
CN116132310B (en) Large-scale software defined network performance prediction method
Shi A Method of Optimizing Network Topology Structure Combining Viterbi Algorithm and Bayesian Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant