CN112564986A - Two-stage deployment system in network function virtualization environment - Google Patents


Info

Publication number
CN112564986A
Authority
CN
China
Prior art keywords: module, starting, service, neural network, deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011560134.0A
Other languages
Chinese (zh)
Other versions
CN112564986B (en)
Inventor
李健
殷豪
管海兵
张沪滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202011560134.0A priority Critical patent/CN112564986B/en
Publication of CN112564986A publication Critical patent/CN112564986A/en
Application granted granted Critical
Publication of CN112564986B publication Critical patent/CN112564986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Genetics & Genomics (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A two-stage deployment system in a network function virtualization environment comprises: a revenue prediction module, which contains a deep neural network model and outputs the expected future revenue of the current situation; a pre-start module, which executes a genetic algorithm to generate a pre-start policy; and a service allocation module, which maps service requests onto virtual machine resources to obtain a concrete deployment scheme. The revenue prediction module guides the pre-start module and the service allocation module; the service allocation module feeds data from the allocation process back to the deep neural network revenue prediction model in the revenue prediction module as a training set, and back to the pre-start algorithm in the pre-start module as a selection basis. The invention adopts the first allocation mapping algorithm designed for two-stage deployment of highly latency-sensitive applications, significantly improving the total revenue of the two-stage deployment process; it also provides the first deep neural network revenue prediction model, and the first use of genetic algorithms and iterative training to jointly optimize the pre-start and reallocation processes.

Description

Two-stage deployment system in network function virtualization environment
Technical Field
The present invention relates to the field of Network Function Virtualization (NFV), and in particular, to a service chain deployment system and method.
Background
Network Function Virtualization (NFV) uses virtualization technology to implement traditional network functions in software on inexpensive, general-purpose hardware, rather than on the expensive and inflexible specialized hardware devices used in the past. Network function virtualization implements the same functions as the original device, but changes the form the device takes. It is very similar to traditional server virtualization technology, but is applied mainly to network services. Traditional network functions can be rapidly deployed and realized through network function virtualization, and their extension and upgrade become easier, greatly improving flexibility and powerfully promoting the development of communication networks.
Network function virtualization differs from traditional virtualization: because of its particular network service requirements, it is usually organized in the form of service function chains (SFC) when deployed on a cloud computing platform. In this case, the entire network service is composed of a plurality of Virtual Network Functions (VNFs) that run independently on individual virtual machine instances of the cloud platform; data streams are forwarded along a specified path between these network functions (such as firewalls, routers or network address translators), which ultimately cooperate to provide the service.
The network function placement refers to that a cloud service provider deploys a series of service chain requests of users to a cluster server or a data center of the cloud service provider through a specific deployment strategy. There are many studies on the distribution and placement of service chains on cloud platforms. They have designed corresponding algorithms and frameworks from several goals of minimizing the starting overhead, fully utilizing the network topology structure to reduce the delay, eliminating the co-location interference, etc.
However, in some special scenarios that strongly prioritize user experience, certain network applications, such as the IP Multimedia Subsystem (IMS), cannot start a virtual machine instance each time a request arrives, as conventional deployment would, if they are to provide timely and rapid network services. Temporarily starting a series of virtual machine instances to provide a service takes an extremely long time, which greatly degrades the user experience.
In this case, the network service will be implemented in two steps, and the schematic diagram can be seen in fig. 1: firstly, in a pre-starting stage, starting a certain number of virtual machine instances on a cloud platform to form a resource pool for future use; then, in a redistribution phase, when a specific service request arrives, some corresponding network functions are selected from the started virtual machine instances, and a service chain is organized and formed by a certain path. This avoids the time consumed by the user to start a virtual machine instance each time the user initiates a request.
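The two phases above can be sketched with a toy model. This is a minimal illustration, not the patent's implementation; the VNF names and data structures are assumed for the example:

```python
# Hypothetical two-phase model; VNF names and data structures are
# illustrative, not taken from the patent.
def pre_start(counts):
    """Phase 1: start a pool of VM instances, one list entry per instance."""
    pool = []
    for vnf, n in counts.items():
        pool.extend([vnf] * n)
    return pool

def allocate_chain(pool, chain):
    """Phase 2: pick one pre-started instance per requested function."""
    selected, remaining = [], list(pool)
    for vnf in chain:
        if vnf not in remaining:
            return None  # pool exhausted: the request cannot be served
        remaining.remove(vnf)
        selected.append(vnf)
    return selected, remaining

pool = pre_start({"firewall": 2, "router": 1, "nat": 1})
result = allocate_chain(pool, ["firewall", "router"])
```

Because the pool already exists when the request arrives, phase 2 is only a selection step, which is what avoids the per-request virtual machine startup cost.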
Disclosure of Invention
The existing network function placement strategies consider (1) the resource sensitivity of various virtual network functions, (2) virtual network function dependencies within a service chain, (3) service quality indicators such as delay, cost and service rate, and (4) performance degradation caused by virtual network function co-location interference. However, they do not consider the influence of the service chain on overall service revenue under the two-stage deployment mode when globally allocating resources and deploying network functions.
In a two-phase deployment of a service chain, although the steps are separated, the two phases actually affect each other. Traditional frameworks and algorithms do not address this two-stage resource management and optimization problem: they usually assume that instance startup and deployment are completed together when a request arrives and build their models accordingly, which makes them unsuitable for certain application scenarios.
Owing to factors such as the non-uniform memory access (NUMA) architecture and the network topology, virtual machines deployed on different processors (CPUs) have different resource affinities, and the network communication performance between virtual machines varies widely, which affects the performance of the network service they compose. Moreover, the currently optimal deployment scheme may greatly constrain the deployment schemes of subsequent services: once the current service occupies a given virtual machine, subsequent services are forced to select other data paths. This greatly affects subsequent service revenue and may ultimately reduce overall revenue.
The reduction of the overall service income directly affects the network service quality in the cloud environment, so that whether the virtual network function deployment algorithm can optimize the influence generated in the two-stage deployment process is very important.
To solve the above technical problems, a mathematical model is established for quantitatively calculating the total revenue of the two-stage service chain deployment process, and a two-stage deployment system in a network function virtualization environment is designed on top of it. The system uses a pre-start genetic algorithm (GA) to generate the start-up strategy in the first-stage pre-start process, and a deep neural network (DNN) model to guide and complete the second-stage reallocation, maximizing the total revenue of the whole deployment process; the deployment process is iterated continuously to train the deep neural network model and improve the accuracy of its revenue prediction.
The system comprises a revenue prediction module that includes a deep neural network model, namely a deep neural network revenue prediction model; its input is the current layout vector of a deployment process, and its output is the expected future revenue of the current situation.
Further, the system further comprises: the pre-starting module is used for generating a pre-starting strategy; the pre-start module generates a pre-start strategy by adopting a genetic algorithm.
Further, the system further comprises a service allocation module for mapping service requests onto virtual machine resources to obtain a concrete deployment scheme. The method is as follows: for each request, all alternative positions are listed; the deep neural network revenue prediction model gives the expected total revenue, which is weighted and then summed with the current profit to form the objective value to be maximized; and the maximum over all feasible solutions is selected as the next allocation choice.
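The weighted selection rule just described can be sketched as follows. The weighting factor `alpha` and the helper callables are assumptions for illustration, since the text only states that the expected total revenue is weighted and summed with the current profit:

```python
def choose_position(candidates, predict_gpv, current_profit, alpha=0.5):
    """Select the placement maximizing
    alpha * expected_total_revenue + (1 - alpha) * immediate_profit.
    The weight alpha is an assumption; the text only says the two
    terms are weighted and summed."""
    best, best_score = None, float("-inf")
    for pos in candidates:
        score = alpha * predict_gpv(pos) + (1 - alpha) * current_profit(pos)
        if score > best_score:
            best, best_score = pos, score
    return best

# Toy stand-ins for the DNN revenue predictor and the immediate profit.
pred = {0: 10.0, 1: 8.0, 2: 3.0}.get
prof = {0: 1.0, 1: 6.0, 2: 2.0}.get
pick = choose_position([0, 1, 2], pred, prof)  # position 1 wins: 0.5*8 + 0.5*6 = 7.0
```

In the real system, `predict_gpv` would be the deep neural network revenue prediction model evaluated on the layout vector that results from the candidate placement.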
Further, the pre-start module executes the following pre-start genetic algorithm that generates a pre-start policy:
step A1: several pre-start seed strategies are randomly generated for initial training.
Step A2: and after the deployment is finished by using the service distribution module, calculating the total income of each pre-starting strategy.
Step A3: the pre-starting strategy used as a seed is coded, and then genetic variation is carried out on each code to form a new pre-starting strategy.
Step A4: and continuously iterating and repeating the genetic variation process, and repeating the steps A2-A3 until the maximum total income is not changed any more, so as to obtain the optimal pre-starting strategy.
Further, step A1 includes: inputting the physical resources of the deployment cloud environment, the services and network functions to be deployed, and the historical data used for analysis and training; the proportion of each network function is preliminarily determined from the historical data.
Further, step A2 includes: updating and refining the deep neural network model in the revenue prediction module according to the newly obtained training set.
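Steps A1 through A4 can be sketched as a minimal genetic loop. The count-vector encoding, the mutation scheme, and the stopping check below are assumptions for illustration:

```python
import random

def mutate(strategy, rate, rng=random):
    """Genetic variation (step A3): randomly bump per-VNF instance counts.
    The count-vector encoding is an illustrative assumption."""
    return tuple(max(0, g + rng.choice([-1, 1])) if rng.random() < rate else g
                 for g in strategy)

def genetic_pre_start(evaluate, seeds, generations=20, mutate_rate=0.3):
    """Steps A1-A4 as a minimal genetic loop. `evaluate` stands in for the
    service allocation module: it deploys a strategy and returns total revenue."""
    population = list(seeds)                      # A1: randomly generated seeds
    best, best_rev = None, float("-inf")
    for _ in range(generations):
        scored = sorted(((evaluate(s), s) for s in population), reverse=True)  # A2
        top_rev, top = scored[0]
        if top_rev <= best_rev:                   # A4: maximum revenue stalled
            break
        best, best_rev = top, top_rev
        population = [top] + [mutate(top, mutate_rate)  # A3: vary the seed
                              for _ in range(max(1, len(population) - 1))]
    return best, best_rev

random.seed(0)
best, best_rev = genetic_pre_start(sum, [(1, 1, 1), (2, 0, 0)])
```

Here `sum` is a placeholder fitness; in the real system, evaluating a strategy means running the full deployment via the service allocation module and measuring the resulting total revenue.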
Further, the service allocation module executes the following allocation mapping algorithm:
Step B1: under the pre-start layout generated by the pre-start module, all alternative positions are listed for each request; the revenue prediction module gives the expected total revenue of the current layout, which is weighted and then summed with the current profit to form the objective value to be maximized; the maximum over all feasible solutions is selected as the next allocation choice.
Step B2: continue receiving services newly generated by the service sequence until deployment can no longer proceed.
Further, step B1 also includes: generating a service sequence generating function from the historical data; using a dynamic tool to test and obtain a distance table of the physical machines; and calculating the value of each network function to complete data initialization.
Further, step B2 also includes: feeding back the data of the process as a training set to the deep neural network revenue prediction model in the revenue prediction module for updating and refining the model.
Further, step B2 also includes: accumulating the final profits of all services into a total revenue, which is fed back to the pre-start algorithm in the pre-start module for genetic-variation selection.
Compared with the prior art, the invention has the following beneficial effects:
(1) The first allocation mapping algorithm designed for two-stage deployment of highly latency-sensitive applications, which significantly improves the overall revenue of the two-stage deployment process.
(2) The first deep neural network (DNN) model trained to predict the expected revenue of a deployment.
(3) The first use of a genetic algorithm (GA) and iterative training to jointly optimize the pre-start and reallocation processes.
Drawings
FIG. 1 is a schematic diagram of a two-phase deployment of a high-instantaneity application of the prior art;
FIG. 2 is a schematic diagram of a pre-start Genetic Algorithm (GA) according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of the deep neural network model (DNN)-guided mapping of one embodiment of the present invention;
fig. 4 is a schematic diagram of the system structure and function according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present application will be described below with reference to the accompanying drawings for clarity and understanding of the technical contents thereof. The present application may be embodied in many different forms of embodiments and the scope of the present application is not limited to only the embodiments set forth herein.
The conception, the specific structure and the technical effects of the present invention will be further described below to fully understand the objects, the features and the effects of the present invention, but the present invention is not limited thereto.
By analyzing the two-phase deployment of the network service, the different pre-boot policies of the first phase correspond to different initial placement vectors L = (L1, L2, L3, L4, ...). In the second stage, when different network service requests arrive, different instances need to be selected to compose a service chain S, yielding the current profit CPV of the service; the calculation is shown in Formula 1, where b_s and l_s respectively represent the bandwidth and delay of service S, and θ is a proportioning factor determined by the service type. The performance of all network services that the entire platform can provide can be regarded as the total revenue GPV, i.e. the sum of the current profit of each service chain.
(Formula 1: the current profit CPV of service S, computed from bandwidth b_s, delay l_s and proportioning factor θ; original equation image not reproduced)
The selection made at each step of the second stage determines the performance, i.e. the current profit, of the network service. However, maximizing the current gain does not necessarily yield the highest overall gain in the future. The deployment algorithm aims to obtain the maximum total revenue through step-by-step allocation choices under a given initial situation L; yet immediate profit cannot be completely ignored either, because the estimate of overall revenue is never perfectly accurate, and the current profit should generally not be allowed to become particularly poor.
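Since Formula 1 appears only as an image, the exact form of CPV is not recoverable from this text. The sketch below assumes one plausible reading, in which bandwidth is rewarded, delay is penalized, and θ scales by service type; the GPV sum over service chains follows the text directly:

```python
def cpv(bandwidth, delay, theta):
    """Current profit of one service chain. Formula 1 is only an image in
    the original; this form (reward bandwidth, penalize delay, scale by
    the service-type factor theta) is an assumed reading."""
    return theta * bandwidth / delay

def gpv(services):
    """Total revenue GPV: the sum of the current profit of every chain,
    as stated in the text. Each service is (bandwidth, delay, theta)."""
    return sum(cpv(b, l, th) for (b, l, th) in services)

total = gpv([(100.0, 2.0, 0.5), (50.0, 5.0, 1.0)])  # 25.0 + 10.0 = 35.0
```

Whatever the exact form of CPV, the key structural point is that GPV aggregates per-chain profits, so a locally optimal placement can still lower the total.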
Based on this analysis, a deep neural network revenue prediction model is designed. The model uses existing training data to fit a function. The input accepted by this function is the current layout vector L = (L1, L2, L3, L4, ...) of a deployment process, and the output is the expected future revenue GPV(L) of the current situation. The output is used to guide subsequent strategy selection, and the guided selection process serves as new training data to refine the model. The model adopts mean squared error (MSE) as its loss function; the relation σ between the output O and input I of the neurons in each layer can be expressed as Formula 2, where n indexes the layer, w is the weight coefficient of each layer, t is the layer's offset value, u is the output value of the inner layer, and j and k are iteration variables.
(Formula 2: layer-wise relation σ between neuron outputs O and inputs I, with weights w, offsets t and inner-layer outputs u; original equation image not reproduced)
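A minimal forward pass with the MSE loss named in the text can be sketched in pure Python. The sigmoid activation and the two-layer shape are assumptions, since Formula 2 is only available as an image:

```python
import math

def layer(inputs, weights, biases):
    """One layer: weighted sum of inputs plus an offset, then a sigmoid.
    The sigmoid activation is an assumption; Formula 2 is only an image."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def mse(predicted, target):
    """Mean squared error, the loss function named in the text."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

# Toy two-layer forward pass on a small layout vector L.
L = [0.2, 0.8, 0.5]
hidden = layer(L, weights=[[0.1, -0.2, 0.3], [0.4, 0.1, -0.1]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.5, 0.5]], biases=[0.0])
loss = mse(output, [1.0])  # distance of the predicted GPV from the target
```

In practice, such a model would be trained by gradient descent on (layout vector, observed GPV) pairs collected from completed deployments.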
In one embodiment of the invention, the system comprises three parts: the system comprises a profit forecasting module, a pre-starting module and a service distribution module. The function and operation of the system is shown in fig. 4, wherein,
A revenue prediction module comprising a deep neural network model: the main impact of two-phase deployment on overall revenue is that a network function deployed now may interfere with other services deployed later, and this impact is difficult to predict quantitatively. Because the different layout vectors L = (L1, L2, L3, L4, ...) can be converted into standard mathematical inputs, we build a deep neural network model that outputs the comprehensive revenue expectation GPV(L). The model uses existing training data to fit a function whose input is the current layout vector of a deployment process and whose output is the expected future revenue of the current situation. The model is then trained and refined on a test data set through a large number of experimental runs, so that it accurately predicts the expected deployment revenue under different deployment strategies; this prediction guides the pre-start and service allocation processes, as illustrated in fig. 3. The output of the revenue model guides subsequent strategy selection, and the guided selection process serves as new training data to further refine the model.
A pre-start module to execute a genetic algorithm: the pre-boot of the first phase also affects, to some extent, the deployment schemes available in the second phase, so a pre-boot genetic algorithm (GA, schematic diagram in fig. 2) is used in cooperation with the deep neural network revenue prediction model of the revenue prediction module. First, several pre-start strategies are generated as seeds by random allocation, for use in subsequent chain mapping. A pre-boot allocation strategy is essentially a permutation, and the permutation is encoded to uniquely identify each pre-boot strategy. Deployment training is carried out on the randomly generated seed strategies, and the total revenue expectation of each is calculated once deployment finishes. Genetic variation is then applied to the seed strategies, adjusting parts of their encodings to generate several new pre-start strategies, for which deployment training is again performed. The total revenue expectation of each new strategy is calculated from the deployed experimental data, the strategies with higher total revenue among the original and new strategies are kept as seeds, and the process is repeated. The process stops once the total revenue expectation of all strategies no longer increases; an optimal pre-start strategy is obtained, and the pre-boot genetic algorithm terminates.
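The permutation encoding and genetic variation described above might be sketched as follows. Representing a strategy as a tuple of VNF slots and mutating by swapping two slots are illustrative choices, not the patent's stated encoding:

```python
import random

def encode(placement):
    """Identify a pre-start strategy uniquely by its slot-to-VNF tuple.
    The slot representation is an illustrative assumption."""
    return tuple(placement)

def swap_mutation(code, rng):
    """Genetic variation on the encoding: swap two randomly chosen slots.
    Instance counts per VNF type are preserved by construction."""
    slots = list(code)
    i, j = rng.sample(range(len(slots)), 2)
    slots[i], slots[j] = slots[j], slots[i]
    return tuple(slots)

rng = random.Random(42)
parent = encode(["fw", "fw", "router", "nat"])
child = swap_mutation(parent, rng)
```

Because the encoding is hashable, it can also serve as a dictionary key for caching the evaluated total revenue of each strategy across generations.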
The service allocation module, for service allocation and iterative training: for each pre-start strategy generated by the pre-boot genetic algorithm, a mapping experiment is run with a generated random service sequence, and a concrete deployment scheme is obtained for each service request, serving highly latency-sensitive applications; the revenue of the deployment scheme is computed and used to train the deep neural network model in the revenue prediction module. The choice of the next mapping is guided by the prediction of the existing model: among all feasible solutions, the position maximizing the weighted sum of the predicted total revenue expectation and the current profit is taken as the next allocation choice, completing the current round of service allocation. The process iterates, and the newly generated layouts are used to train the deep neural network model in the revenue prediction module.
In one embodiment, a pre-start Genetic Algorithm (GA), schematically shown in fig. 2, comprises:
the first step is as follows: inputting physical resources of the deployment cloud environment requires deployment services and network functions for analysis and training of historical data.
The second step: preliminarily determine the proportion of each network function according to the historical data, and randomly generate several pre-start strategies for initial training. After the service allocation module completes deployment, the total revenue of each pre-start strategy is calculated, and the deep neural network revenue model in the revenue prediction module is refined with the newly obtained training set.
The third step: the pre-starting strategy used as a seed is coded, and then genetic variation is carried out on each code to form a new pre-starting strategy. And repeating the process by using a new pre-starting strategy, and inputting the training data into the deep neural network model to obtain new total income.
The fourth step: and continuously iterating and repeating the genetic variation process until the maximum total income is not changed any more, and obtaining the optimal pre-starting strategy.
In one embodiment, the service allocation module executes a joint mapping and chain-composition algorithm, whose flow is shown in fig. 4 and includes:
the first step is as follows: generating a service sequence generating function according to the historical data; testing to obtain a distance table of the physical machine by using a dynamic tool; and calculating the value of each network function, and finishing the initialization of data.
The second step is that: and under the pre-starting layout generated by the pre-starting module, starting to deploy the service generated by the sequence generating function, specifically as follows. For each request, all alternative positions are listed, the overall benefit expectation is given by a deep neural network model in a benefit prediction module, the overall benefit expectation is weighted and then summed with the current benefits to serve as a target value to be maximized, and the maximum value in all feasible solutions is selected to serve as next distribution selection.
The third step: and continuously receiving the service newly generated by the service sequence until the deployment can not be continued. Accumulating the final profits of all the services to form a total profit, feeding the total profit back to a pre-starting algorithm in a pre-starting module for genetic variation selection; and feeding back the data in the process as a training set to a deep neural network profit model in a profit forecasting module for updating and perfecting the model.
The foregoing describes the preferred embodiments of the present application in detail. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, technical solutions obtainable by those skilled in the art through logical analysis, reasoning and limited experimentation based on the concepts of the present application should fall within the scope of protection defined by the claims.

Claims (10)

1. A two-stage deployment system in a network function virtualization environment, characterized by comprising: a revenue prediction module comprising a deep neural network model, wherein the input of the deep neural network model is a current layout vector of a deployment process and the output of the deep neural network model is the expected future revenue of the current situation.
2. The system of claim 1, further comprising: the pre-starting module is used for generating a pre-starting strategy; the pre-launch module is configured to execute a genetic algorithm to generate the pre-launch strategy.
3. The system of claim 2, further comprising: the service allocation module is used for mapping the service request to the virtual machine resource to obtain a specific deployment scheme; the service allocation module is configured to execute the following algorithm: for each request, all alternative positions are listed, the overall benefit expectation is given by the benefit prediction module, the weighted benefits are summed with the current benefits to serve as the target value to be maximized, and the maximum value of all feasible solutions is selected to serve as the next distribution selection.
4. The system of claim 3, wherein the pre-boot module is configured to execute the following pre-boot genetic algorithm that generates the pre-boot policy:
step A1: randomly generating a plurality of pre-starting seed strategies for initial training;
step A2: completing deployment by using the service distribution module, and calculating the total income of each pre-starting strategy;
step A3: encoding a pre-starting strategy serving as a seed, and then performing genetic variation on each code to form a new pre-starting strategy;
step A4: and continuously iterating and repeating the genetic variation process, and repeating the steps A2-A3 until the maximum total income is not changed any more, so as to obtain the optimal pre-starting strategy.
5. The system of claim 4, wherein the step A1 further comprises: physical resources of a deployment cloud environment, services and network functions needing to be deployed and historical data used for analysis and training are input, and the proportion of each network function is preliminarily determined according to the historical data.
6. The system of claim 4, wherein the step A2 further comprises: updating and refining the deep neural network model in the profit prediction module according to the newly obtained training set.
7. The system of claim 3, wherein the service allocation module is configured to execute the following allocation mapping algorithm:
step B1: under the pre-start layout generated by the pre-start module, for each request, listing all candidate positions, obtaining the expected overall profit from the deep neural network, summing it, after weighting, with the current profit to form the objective value to be maximized, and selecting the maximum among all feasible solutions as the next allocation choice;
step B2: continuously receiving services newly generated by the service sequence until deployment can no longer continue.
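The step-B1 selection rule for a single request can be sketched as below. The host/request dictionaries, the 0.5 weight, and both callback functions are placeholders standing in for the modules described in the claims, not a definitive implementation:

```python
def place_request(request, hosts, current_profit_fn, predict_fn, weight=0.5):
    """Step B1: enumerate feasible positions for one request and pick the one
    maximizing current profit + weighted predicted future profit.

    `current_profit_fn(host, request)` stands in for the immediate profit
    of a placement; `predict_fn(layout)` stands in for the deep neural
    network of the profit prediction module.
    """
    best_host, best_score = None, float("-inf")
    for host in hosts:
        if host["free"] < request["demand"]:          # infeasible position, skip
            continue
        layout = tuple(h["free"] - (request["demand"] if h is host else 0)
                       for h in hosts)                # layout after tentative placement
        score = current_profit_fn(host, request) + weight * predict_fn(layout)
        if score > best_score:
            best_host, best_score = host, score
    return best_host  # None signals that deployment cannot continue (step B2)

# Toy usage: two physical machines, one request of size 2.
hosts = [{"name": "pm1", "free": 4}, {"name": "pm2", "free": 1}]
chosen = place_request({"demand": 2}, hosts,
                       current_profit_fn=lambda h, r: r["demand"],
                       predict_fn=lambda layout: sum(layout))
```

Step B2 would simply call such a function in a loop over the incoming service sequence, stopping when it returns `None`.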
8. The system according to claim 7, wherein the step B1 further comprises: generating a service-sequence generating function from the historical data; obtaining a distance table of the physical machines by testing with a dynamic tool; and computing the value of each network function, thereby completing data initialization.
9. The system according to claim 7, wherein the step B2 further comprises: feeding the data of this step back, as a training set, to the deep neural network model in the profit prediction module to update and refine the deep neural network model.
10. The system according to claim 7, wherein the step B2 further comprises: accumulating the final profits of all services into a total profit, and feeding the total profit back to the pre-start genetic algorithm in the pre-start module for mutation selection and updating.
CN202011560134.0A 2020-12-25 2020-12-25 Two-stage deployment system in network function virtualization environment Active CN112564986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011560134.0A CN112564986B (en) 2020-12-25 2020-12-25 Two-stage deployment system in network function virtualization environment


Publications (2)

Publication Number Publication Date
CN112564986A 2021-03-26
CN112564986B 2022-06-21

Family

ID=75034257


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389215A * 2015-11-13 2016-03-09 China Standard Software Co., Ltd. Virtual machine pool dynamic configuration method
US20180165167A1 * 2014-10-13 2018-06-14 AT&T Intellectual Property I, L.P. Network Virtualization Policy Management System
CN108322333A * 2017-12-28 2018-07-24 Electric Power Dispatching and Control Center of Guangdong Power Grid Co., Ltd. Virtual network function placement method based on a genetic algorithm
CN110460465A * 2019-07-29 2019-11-15 Tianjin University Service function chain deployment method for mobile edge computing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOYUAN FU et al.: "Dynamic Service Function Chain Embedding for NFV-Enabled IoT: A Deep Reinforcement Learning Approach", IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 31 January 2020 (2020-01-31), pages 510-512 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant