CN115858150A - Network function energy-saving method based on Openstack virtualization - Google Patents

Network function energy-saving method based on Openstack virtualization

Info

Publication number
CN115858150A
Authority
CN
China
Prior art keywords
virtual
tasks
task
scheme
host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211474363.XA
Other languages
Chinese (zh)
Inventor
邱日轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Jiangxi Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Jiangxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Information and Telecommunication Branch of State Grid Jiangxi Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202211474363.XA priority Critical patent/CN115858150A/en
Publication of CN115858150A publication Critical patent/CN115858150A/en
Pending legal-status Critical Current

Landscapes

  • Multi Processors (AREA)

Abstract

The invention discloses a network function energy-saving method based on Openstack virtualization, which comprises the following steps. Step 1: count the resource consumption of the current cloud data center to obtain the CPU and memory utilization of each virtual host. Step 2: when the cloud data center receives a batch of tasks S, determine the allocation scheme of each task with the pre-allocation algorithm, obtaining a pre-allocation scheme T_pre and the optimality G of that scheme. Step 3: allocate each task in the batch S in turn. Step 4: continue monitoring the tasks received by the cloud data center; if a new batch of tasks has been received, execute step 2, and if no new task has been received, wait for new tasks. By analyzing different task types in detail with the migration algorithm and the scheduling algorithm, the invention achieves higher resource utilization, fewer servers and a better energy-saving effect than other NFV energy-saving algorithms.

Description

Network function energy-saving method based on Openstack virtualization
Technical Field
The invention relates to the field of computer edge computing resource scheduling, in particular to a network function energy-saving method based on Openstack virtualization.
Background
Under the NFV approach, network functions such as NAT, firewalls, intrusion detection, DNS and caching are turned into independent software by means of the cloud computing and virtualization technologies now common in the IT industry. They run on virtualized resources provided by lower-layer, industry-standard equipment such as x86 servers, storage and switches, and can be scaled elastically at any time according to actual demand. In typical NFV use cases, the NFV services of multiple enterprises share the resources provided by the same infrastructure layer, and NFV applications can even run on these devices alongside general cloud computing applications, which greatly improves resource utilization and saves cost for the enterprises.
Disclosure of Invention
The invention provides a network function energy-saving method based on Openstack virtualization. Starting from monitoring and analyzing the energy consumption of the data center's resources, a method for collecting the resource usage of the physical servers and virtual machines is designed first, and the collected data are then used to fit and verify energy consumption models of the servers and virtual machines; on this basis, an energy-saving scheduling algorithm for Network Function Virtualization (NFV) is proposed whose goal is to maximize resource utilization and reduce the number of servers in use, thereby saving energy. Compared with other similar NFV energy-saving scheduling algorithms, it achieves higher resource utilization, uses fewer servers and has a better energy-saving effect.
The invention adopts the following technical scheme:
a network function energy-saving method based on Openstack virtualization comprises the following steps:
step 1: count the resource consumption of the current cloud data center to obtain the CPU and memory utilization of each virtual host; initialize the algorithm, monitor the resource utilization of the physical servers and virtual machines in real time, and allocate and schedule subsequent tasks on this basis;
step 2: when the cloud data center receives a batch of tasks S, determine the allocation scheme of each task with the pre-allocation algorithm, obtaining a pre-allocation scheme T_pre and the optimality G of that scheme;
step 3: allocate each of the tasks in the batch S in turn;
step 4: continue monitoring the tasks received by the cloud data center; if a new batch of tasks has been received, execute step 2 again; if no new task has been received, wait for new tasks. (A sketch of this top-level loop is given below.)
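Written as a control loop, steps 1-4 can be sketched as follows. This is a minimal Python illustration; the `cloud` facade and the `preallocate_batch` and `dispatch_task` helpers are assumptions introduced here for readability, not names taken from the patent.

```python
import time

def energy_saving_scheduler(cloud, poll_interval=1.0):
    """Minimal sketch of the top-level loop (steps 1-4).

    `cloud` is a hypothetical facade over the Openstack data center that
    exposes resource monitoring, task intake and host allocation.
    """
    cloud.start_resource_monitoring()            # step 1: CPU/memory utilization
    while True:
        batch = cloud.poll_new_tasks()           # step 4: keep watching for new tasks
        if not batch:
            time.sleep(poll_interval)            # no new tasks: wait
            continue
        T_pre, G = preallocate_batch(batch)      # step 2: pre-allocation scheme + optimality
        for task, plan, optimal in zip(batch, T_pre, G):
            dispatch_task(cloud, task, plan, optimal)   # step 3: allocate each task
```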
Further, the step 3 comprises:
step 3.1: check whether idle virtual hosts currently exist and let their number be L; if so, jump to step 3.2, otherwise jump to step 3.5;
step 3.2: from the pre-allocation scheme T_pre, obtain the number M of virtual hosts it requires; if the pre-allocation scheme is the optimal allocation scheme, jump to step 3.3; otherwise jump to step 3.4;
step 3.3: if L is not less than M, allocate the task onto M idle virtual hosts according to the pre-allocation scheme T_pre; if L is less than M, open up M-L additional virtual hosts and then allocate the task onto all M idle virtual hosts according to T_pre;
step 3.4: if L is not less than M-1, derive an actual allocation scheme T for M-1 virtual hosts with the actual allocation algorithm and allocate the task according to T; if L is less than M-1, additionally open up M-1-L virtual hosts, derive the actual allocation scheme T with the actual allocation algorithm, and allocate the task according to T;
step 3.5: if the current pre-allocation scheme T_pre is the optimal allocation scheme, jump to step 3.6, otherwise jump to step 3.7;
step 3.6: newly open up M virtual hosts and allocate the task according to T_pre;
step 3.7: newly open up M-1 virtual hosts, derive the actual allocation scheme T with the actual allocation algorithm, and allocate the task according to T (this branching is sketched below).
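The per-task branching of steps 3.1-3.7 can be sketched as below; the `cloud` helpers and the `actual_allocate` call are illustrative assumptions rather than APIs defined by the patent.

```python
def dispatch_task(cloud, task, plan, optimal):
    """Sketch of the step-3 branching for one task.

    plan    -- the task's pre-allocation scheme T_pre (plan.num_hosts == M)
    optimal -- the optimality flag G for this task
    """
    L = cloud.count_idle_hosts()                     # step 3.1
    M = plan.num_hosts
    if L > 0:
        if optimal:                                  # steps 3.2 -> 3.3
            if L < M:
                cloud.open_hosts(M - L)              # open the missing hosts
            cloud.assign(plan)                       # allocate per T_pre on M hosts
        else:                                        # steps 3.2 -> 3.4
            if L < M - 1:
                cloud.open_hosts(M - 1 - L)
            cloud.assign(actual_allocate(cloud, task, plan))  # M-1 hosts + placed remainder
    else:
        if optimal:                                  # steps 3.5 -> 3.6
            cloud.open_hosts(M)
            cloud.assign(plan)
        else:                                        # steps 3.5 -> 3.7
            cloud.open_hosts(M - 1)
            cloud.assign(actual_allocate(cloud, task, plan))
```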
Further, in step 2, the pre-allocation algorithm is as follows:
For θ batch tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ}, mutually independent, each task S_i has n_i subtasks; the CPU consumption of each subtask is c_i^cpu and its memory consumption is c_i^memory. After a virtual host is started, its idle CPU load is α_cpu with a load upper limit of β_cpu, and its idle memory load is α_memory with a load upper limit of β_memory.
The following algorithm determines, for the i-th task S_i (writing n for n_i and c^cpu, c^memory for its per-subtask consumption), the number of virtual hosts that actually need to be deployed:
Each virtual host can accommodate at most C = min(⌊(β_cpu - α_cpu)/c^cpu⌋, ⌊(β_memory - α_memory)/c^memory⌋) subtasks at the same time, so with at most N allocatable virtual hosts the task needs at least τ = ⌈n/(N·C)⌉ task times. Running C subtasks simultaneously on each virtual host, the number of deployed virtual hosts is set to m = ⌈n/(τ·C)⌉, which completes the batch task within τ task times; the additional energy consumption is α·τ·(m-1). Here ⌈a/b⌉ means rounding the result of dividing a by b up, and ⌊a/b⌋ means rounding it down.
Compared with executing the batch task on only one virtual host (which would take ⌈n/C⌉ task times), using m virtual hosts incurs an additional energy consumption of W_extra = α·τ·(m-1) and saves a time of T_saved = ⌈n/C⌉ - ⌈n/(m·C)⌉.
An optimization function is set as f(m) = W_extra / T_saved, i.e.
f(m) = α·⌈n/(m·C)⌉·(m-1) / (⌈n/C⌉ - ⌈n/(m·C)⌉).
When there is i e [1, phi ]]When n% iC =0, f (i) is the minimum value, and the number of virtual hosts to be deployed is considered to be
Figure BDA0003959123410000041
The allocation scheme obtained in this case is the optimal allocation scheme, and when τ is 1, additionally satisfied ^ er>
Figure BDA0003959123410000042
The scheme at this time is the optimal distribution scheme; wherein a% b =0 means that a is divisible by b;
Traversing all tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ} yields the number of virtual hosts required by each task, M = {m_1, m_2, m_3, ..., m_i, ..., m_θ}. For a task S_i, if the scheme obtained with m_i hosts is the optimal allocation scheme, G_i = 1 and its pre-allocation scheme T_i^pre deploys the task on m_i virtual hosts with C subtasks in parallel on each host; in this case all m_i virtual hosts are fully loaded. If the scheme obtained with m_i hosts is not the optimal allocation scheme, G_i = 0 and its pre-allocation scheme T_i^pre deploys the task on m_i virtual hosts with C subtasks in parallel on m_i - 1 of them; in this case m_i - 1 virtual hosts are fully loaded and one virtual host remains partially loaded.
In this way, the pre-allocation scheme T_pre = {T_1^pre, T_2^pre, T_3^pre, ..., T_i^pre, ..., T_θ^pre} of all tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ} and the optimality of the pre-allocation scheme G = {G_1, G_2, G_3, ..., G_i, ..., G_θ} are finally obtained.
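As a concrete illustration, the per-task computation of C, τ, m and the optimality flag can be sketched in Python as follows. This is an assumed rendering of the formulas above; in particular, the optimality test is simplified to "every deployed host is exactly fully loaded" (n % (m·C) = 0), which reproduces the host counts and optimality flags of Examples 1 and 2 but is an interpretation, not wording taken from the patent.

```python
import math

def preallocate(n, cpu_per_subtask, mem_per_subtask,
                alpha_cpu=0.20, beta_cpu=0.80,
                alpha_mem=0.15, beta_mem=0.90, N=15):
    """Pre-allocation for one task with n subtasks (a sketch of step 2).

    Returns (m, tau, optimal): virtual hosts to deploy, task times consumed,
    and whether the resulting scheme is treated as optimal.  The default
    alpha/beta/N values are those used in Example 1.
    """
    # C: subtasks one virtual host can run in parallel without exceeding
    # either its CPU or its memory load upper limit (epsilon guards against
    # floating-point rounding when the quotient should be an integer).
    C = min(math.floor((beta_cpu - alpha_cpu) / cpu_per_subtask + 1e-9),
            math.floor((beta_mem - alpha_mem) / mem_per_subtask + 1e-9))
    # tau: task times needed even if the maximum of N hosts were used.
    tau = math.ceil(n / (N * C))
    # m: hosts actually deployed so the batch finishes within tau task times.
    m = math.ceil(n / (tau * C))
    # Optimality (assumed criterion): every deployed host is exactly fully loaded.
    optimal = (n % (m * C) == 0)
    return m, tau, optimal
```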
Further, in step 3, the actual allocation algorithm is:
For a pre-allocation scheme that is not the optimal allocation scheme, the task is first allocated to m-1 virtual hosts according to that pre-allocation scheme. For the one remaining, partially loaded virtual host of the scheme, let its CPU occupancy be ρ_cpu and its memory occupancy be ρ_memory, and write q = {ρ_cpu, ρ_memory}.
Traverse the remaining availability of CPU and memory of all currently non-idle virtual hosts, Q = {Q_1, Q_2, ..., Q_i, ..., Q_t}, where t is the number of non-idle virtual hosts; let the remaining available CPU of the i-th non-idle virtual host be Q_i^cpu and its remaining available memory be Q_i^memory (both measured against the load upper limits β), i.e. Q_i = {Q_i^cpu, Q_i^memory}.
For every non-idle virtual host, compute the remaining availability after it would accept the current remaining tasks, Q_i' = Q_i - q.
Find all i satisfying Q_i'^cpu ≥ 0 and Q_i'^memory ≥ 0 and denote this index set as Γ. Compute the f value of each element of Γ from f(cpu, memory) = 0.5·cpu + 0.5·memory and take the i in Γ for which f(Q_i'^cpu, Q_i'^memory) is smallest; the partially loaded tasks of the pre-allocation scheme are then allocated to this i-th non-idle virtual host. Conversely, if Γ is an empty set, a new virtual host is opened up to hold the partially loaded tasks of the pre-allocation scheme.
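A sketch of this placement step is given below (it is the core of the actual allocation algorithm; the higher-level `actual_allocate` helper used earlier would wrap it). Choosing the host with the smallest f value, i.e. the tightest remaining fit, is an interpretation of the formula above, and the data layout is an assumption.

```python
def place_remainder(busy_hosts, rho_cpu, rho_mem,
                    beta_cpu=0.80, beta_mem=0.90):
    """Place the partially loaded remainder of a non-optimal pre-allocation scheme.

    busy_hosts      -- list of (cpu_used, mem_used) for all non-idle virtual hosts
    rho_cpu/rho_mem -- resources demanded by the remaining, partially loaded tasks (q)
    Returns the index of the chosen host, or None if Γ is empty and a new
    virtual host must be opened up.
    """
    best_i, best_f = None, None
    for i, (cpu_used, mem_used) in enumerate(busy_hosts):
        # Q' = Q - q: headroom left if host i accepted the remaining tasks.
        q_cpu = (beta_cpu - cpu_used) - rho_cpu
        q_mem = (beta_mem - mem_used) - rho_mem
        if q_cpu >= 0 and q_mem >= 0:            # host i belongs to Γ
            f = 0.5 * q_cpu + 0.5 * q_mem        # f(cpu, memory) = 0.5*cpu + 0.5*memory
            if best_f is None or f < best_f:     # smallest f = tightest fit
                best_i, best_f = i, f
    return best_i
```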
Further, the step 3 further includes:
step 3.8: use the migration algorithm to collect the resource consumption of the tasks being executed and to migrate specific virtual machines;
the migration algorithm comprises the following steps:
step 3.8.1: compute a migration probability from the load of every virtual host to obtain the queue of servers that need to be migrated;
step 3.8.2: if the queue is empty, no migration is currently required; otherwise migration is carried out;
step 3.8.3: select a target virtual host for the migration; if, after it receives all the migration tasks of the source virtual host, the CPU and memory consumption of the target virtual host is still within its load upper limits, migrate all tasks on the source virtual host to the target host; otherwise do not migrate this virtual host for the time being;
step 3.8.4: if system monitoring finds that the CPU and memory utilization of some virtual hosts has remained idle for a long time, those virtual hosts can be shut down to save energy.
Further, in step 3.8.1, a migration probability function F = f_cpu(cpu) + f_memory(memory) is designed, where f_cpu and f_memory are piecewise functions of the virtual host's current CPU and memory utilization (their piecewise definitions are given as equation images in the original document).
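A sketch of one migration round built on such a probability function is given below; the probability threshold, the data layout and the first-fit choice of target host are assumptions added here for illustration.

```python
ALPHA_CPU, BETA_CPU = 0.20, 0.80   # idle load / load upper limit (values of Example 1)
ALPHA_MEM, BETA_MEM = 0.15, 0.90

def migration_round(hosts, migration_probability, threshold=0.5):
    """One pass of the migration algorithm (steps 3.8.1-3.8.4), sketched.

    hosts -- list of dicts {"cpu": ..., "mem": ..., "tasks": [...]}, where
             cpu/mem are the task loads above the idle load alpha.
    migration_probability -- the function F = f_cpu(cpu) + f_memory(memory).
    Returns the hosts that ended up empty and can be shut down.
    """
    # 3.8.1: compute migration probabilities and build the queue of source hosts.
    queue = [h for h in hosts
             if h["tasks"] and migration_probability(h["cpu"], h["mem"]) >= threshold]
    # 3.8.2: if the queue is empty, no migration is needed in this round.
    for src in queue:
        for dst in hosts:
            if dst is src or not dst["tasks"]:
                continue
            # 3.8.3: migrate only if the target stays within its load upper limits.
            if (ALPHA_CPU + dst["cpu"] + src["cpu"] <= BETA_CPU and
                    ALPHA_MEM + dst["mem"] + src["mem"] <= BETA_MEM):
                dst["tasks"] += src["tasks"]
                dst["cpu"] += src["cpu"]
                dst["mem"] += src["mem"]
                src["tasks"], src["cpu"], src["mem"] = [], 0.0, 0.0
                break
    # 3.8.4: hosts left without tasks can be shut down to save energy.
    return [h for h in hosts if not h["tasks"]]
```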
Compared with the prior art, the invention has the beneficial effects that:
the invention classifies the virtual host tasks on the server, scientifically migrates the old tasks by using the migration algorithm and the allocation algorithm, allocates the new tasks, maximizes the resource utilization rate and reduces the number of the required servers as much as possible. The invention uses the migration algorithm and the scheduling algorithm to carry out detailed analysis on different task types, and has higher resource utilization rate, less server quantity and better energy-saving effect compared with other NFV energy-saving algorithms.
Drawings
Fig. 1 is a flowchart of a network function energy saving method based on Openstack virtualization.
Detailed Description
The following detailed description of preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand and thus define the scope of protection of the invention more clearly.
Referring to Fig. 1, the network function energy-saving method based on Openstack virtualization statistically analyzes the resource consumption of the cloud data center and, in combination with the proposed migration algorithm and allocation algorithm, counts the resource consumption of the tasks being executed, migrates specific virtual machines and allocates new tasks.
The method comprises the following steps:
step 1: count the resource consumption of the current cloud data center to obtain the CPU and memory utilization of each virtual host; initialize the algorithm, monitor the resource utilization of the physical servers and virtual machines in real time, and allocate and schedule subsequent tasks on this basis;
step 2: when the cloud data center receives a batch of tasks S, determine the allocation scheme of each task with the pre-allocation algorithm, obtaining a pre-allocation scheme T_pre and the optimality G of that scheme;
step 3: allocate each of the tasks in the batch S in turn;
step 3.1: check whether idle virtual hosts currently exist and let their number be L; if so, jump to step 3.2, otherwise jump to step 3.5;
step 3.2: from the pre-allocation scheme T_pre, obtain the number M of virtual hosts it requires; if the pre-allocation scheme is the optimal allocation scheme, jump to step 3.3; otherwise jump to step 3.4;
step 3.3: if L is not less than M, allocate the task onto M idle virtual hosts according to the pre-allocation scheme T_pre; if L is less than M, open up M-L additional virtual hosts and then allocate the task onto all M idle virtual hosts according to T_pre;
step 3.4: if L is not less than M-1, derive an actual allocation scheme T for M-1 virtual hosts with the actual allocation algorithm and allocate the task according to T; if L is less than M-1, additionally open up M-1-L virtual hosts, derive the actual allocation scheme T with the actual allocation algorithm, and allocate the task according to T;
step 3.5: if the current pre-allocation scheme T_pre is the optimal allocation scheme, jump to step 3.6, otherwise jump to step 3.7;
step 3.6: newly open up M virtual hosts and allocate the task according to T_pre;
step 3.7: newly open up M-1 virtual hosts, derive the actual allocation scheme T with the actual allocation algorithm, and allocate the task according to T;
step 4: continue monitoring the tasks received by the cloud data center; if a new batch of tasks has been received, execute step 2 again; if no new task has been received, wait for new tasks.
In addition, during the operation of the whole cloud data center, the resource consumption of every virtual host needs to be monitored in real time. The migration algorithm can migrate the tasks on virtual hosts with low resource utilization and shut those hosts down, so as to increase the overall resource utilization of the data center, reduce energy consumption and achieve the energy-saving effect.
Further, in step 2, the pre-allocation algorithm is as follows:
For θ batch tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ}, mutually independent, each task S_i has n_i subtasks; the CPU consumption of each subtask is c_i^cpu and its memory consumption is c_i^memory. After a virtual host is started, its idle CPU load is α_cpu with a load upper limit of β_cpu, and its idle memory load is α_memory with a load upper limit of β_memory.
The following algorithm determines, for the i-th task S_i (writing n for n_i and c^cpu, c^memory for its per-subtask consumption), the number of virtual hosts that actually need to be deployed:
Each virtual host can accommodate at most C = min(⌊(β_cpu - α_cpu)/c^cpu⌋, ⌊(β_memory - α_memory)/c^memory⌋) subtasks at the same time, so with at most N allocatable virtual hosts the task needs at least τ = ⌈n/(N·C)⌉ task times. Running C subtasks simultaneously on each virtual host, the number of deployed virtual hosts is set to m = ⌈n/(τ·C)⌉, which completes the batch task within τ task times; the additional energy consumption is α·τ·(m-1). Here ⌈a/b⌉ means rounding the result of dividing a by b up, and ⌊a/b⌋ means rounding it down.
Compared with executing the batch task on only one virtual host (which would take ⌈n/C⌉ task times), using m virtual hosts incurs an additional energy consumption of W_extra = α·τ·(m-1) and saves a time of T_saved = ⌈n/C⌉ - ⌈n/(m·C)⌉.
An optimization function is set as f(m) = W_extra / T_saved, i.e.
f(m) = α·⌈n/(m·C)⌉·(m-1) / (⌈n/C⌉ - ⌈n/(m·C)⌉).
In particular: when there exists an i ∈ [1, N] such that n % (i·C) = 0, f(i) attains its minimum and the number of virtual hosts to be deployed is taken as m = ⌈n/(τ·C)⌉; the allocation scheme obtained in this case is the optimal allocation scheme (for τ not equal to 1), and when τ is 1 the condition n % (m·C) = 0 must additionally be satisfied for the scheme to be the optimal allocation scheme. Here a % b = 0 means that a is divisible by b.
Traversing all tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ} yields the number of virtual hosts required by each task, M = {m_1, m_2, m_3, ..., m_i, ..., m_θ}. For a task S_i, if the scheme obtained with m_i hosts is the optimal allocation scheme (G_i = 1), its pre-allocation scheme T_i^pre deploys the task on m_i virtual hosts with C subtasks in parallel on each host; in this case all m_i virtual hosts are fully loaded. If the scheme obtained with m_i hosts is not the optimal allocation scheme (G_i = 0), its pre-allocation scheme T_i^pre deploys the task on m_i virtual hosts with C subtasks in parallel on m_i - 1 of them; in this case m_i - 1 virtual hosts are fully loaded and one virtual host remains partially loaded.
In this way, the pre-allocation scheme T_pre = {T_1^pre, T_2^pre, T_3^pre, ..., T_i^pre, ..., T_θ^pre} of all tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ} and the optimality of the pre-allocation scheme G = {G_1, G_2, G_3, ..., G_i, ..., G_θ} are finally obtained.
Further, in step 3, the actual allocation algorithm is:
For a pre-allocation scheme that is not the optimal allocation scheme, the task is first allocated to m-1 virtual hosts according to that pre-allocation scheme. For the one remaining, partially loaded virtual host of the scheme, let its CPU occupancy be ρ_cpu and its memory occupancy be ρ_memory, and write q = {ρ_cpu, ρ_memory}.
Traverse the remaining availability of CPU and memory of all currently non-idle virtual hosts (assume there are t of them), measured against the load upper limits β: Q = {Q_1, Q_2, ..., Q_i, ..., Q_t}; let the remaining available CPU of the i-th non-idle virtual host be Q_i^cpu and its remaining available memory be Q_i^memory, i.e. Q_i = {Q_i^cpu, Q_i^memory}.
For every non-idle virtual host, compute the remaining availability after it would accept the current remaining tasks, Q_i' = Q_i - q.
Find all i satisfying Q_i'^cpu ≥ 0 and Q_i'^memory ≥ 0 and denote this index set as Γ. Compute the f value of each element of Γ from f(cpu, memory) = 0.5·cpu + 0.5·memory and take the i in Γ for which f(Q_i'^cpu, Q_i'^memory) is smallest; the partially loaded tasks of the pre-allocation scheme are then allocated to this i-th non-idle virtual host. Conversely, if Γ is an empty set, a new virtual host is opened up to hold the partially loaded tasks of the pre-allocation scheme.
Furthermore, the migration algorithm mainly migrates all tasks on a virtual host whose resource utilization is close to its idle load to other virtual hosts, so that the source host becomes idle and can be released to save energy. If the remaining virtual hosts do not have enough resources to accommodate all the tasks on the source host, no migration occurs.
A migration probability function F = f_cpu(cpu) + f_memory(memory) is designed, where f_cpu and f_memory are piecewise functions of the virtual host's current CPU and memory utilization (their piecewise definitions are given as equation images in the original document).
The steps of the migration algorithm are as follows:
1. compute a migration probability from the load of every virtual host to obtain the queue of servers that need to be migrated;
2. if the queue is empty, no migration is currently required; otherwise migration is carried out;
3. select a target virtual host for the migration; if, after it receives all the migration tasks of the source virtual host, the CPU and memory consumption of the target virtual host is still within its load upper limits, migrate all tasks on the source virtual host to the target host; otherwise do not migrate this virtual host for the time being;
4. if system monitoring finds that the CPU and memory utilization of some virtual hosts has remained idle for a long time, those virtual hosts can be shut down to save energy.
Example 1
Batch tasks S1, S2 and S3 are currently received. S1 contains 100 subtasks, each occupying 10% of the CPU and 3% of the memory; S2 contains 50 subtasks, each occupying 15% of the CPU and 5% of the memory; S3 contains 20 subtasks, each occupying 20% of the CPU and 20% of the memory. The idle CPU utilization of a virtual host after start-up is 20% and its idle memory utilization is 15%; at the highest load the CPU utilization is 80% and the memory utilization is 90%; one task can be allocated to at most 15 virtual hosts at a time.
Step 1: count the resource consumption of the current cloud data center to obtain the CPU and memory utilization of the virtual hosts; initialize the algorithm, monitor the resource utilization of the physical servers and virtual machines in real time, and allocate and schedule subsequent tasks on this basis.
Step 2: according to the pre-allocation algorithm, the numbers of virtual hosts allocated to S1, S2 and S3 are calculated as 9, 13 and 7 respectively, with consumed unit task times of 2, 1 and 1; none of the three pre-allocation schemes is the optimal allocation scheme.
Step 3, for task S1:
Step 3.1: no idle virtual host currently exists, so jump to step 3.5;
Step 3.5: the current pre-allocation scheme is not the optimal allocation scheme, so jump to step 3.7;
Step 3.7: open up 8 virtual hosts and execute the actual allocation algorithm; since no non-idle virtual host exists at the moment, 1 more virtual host has to be opened up to hold the remaining tasks.
Step 3, for task S2:
Step 3.1: no idle virtual host currently exists, so jump to step 3.5;
Step 3.5: the current pre-allocation scheme is not the optimal allocation scheme, so jump to step 3.7;
Step 3.7: open up 12 virtual hosts and execute the actual allocation algorithm; since no non-idle virtual host exists at the moment, 1 more virtual host has to be opened up to hold the remaining tasks.
Step 3, for task S3:
Step 3.1: no idle virtual host currently exists, so jump to step 3.5;
Step 3.5: the current pre-allocation scheme is not the optimal allocation scheme, so jump to step 3.7;
Step 3.7: open up 6 virtual hosts and execute the actual allocation algorithm; since no non-idle virtual host exists at the moment, 1 more virtual host has to be opened up to hold the remaining tasks.
Step 4: continue monitoring the tasks received by the cloud data center; if a new batch of tasks has been received, execute step 2; if no new task has been received, wait for new tasks.
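The host counts of this example can be reproduced with the preallocate() sketch given after the pre-allocation algorithm above (that function, and in particular its simplified optimality test, is an assumption as noted there):

```python
# Reproducing the Example 1 figures with the preallocate() sketch above.
examples = {"S1": (100, 0.10, 0.03),   # 100 subtasks, 10% CPU, 3% memory each
            "S2": (50,  0.15, 0.05),
            "S3": (20,  0.20, 0.20)}
for name, (n, cpu, mem) in examples.items():
    m, tau, optimal = preallocate(n, cpu, mem)
    print(name, m, tau, optimal)
# Expected output: S1 -> 9 hosts, 2 task times; S2 -> 13, 1; S3 -> 7, 1;
# none of the three schemes is optimal, matching the text.
```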
Example 2
Building on Example 1, task S1 has computed for one unit task time; assume that tasks S2 and S3 have finished computing and that the virtual hosts allocated to them have not yet been released.
At this moment tasks S4, S5 and S6 are received. S4 contains 60 subtasks, each occupying 10% of the CPU and 3% of the memory; S5 contains 5 subtasks, each occupying 30% of the CPU and 20% of the memory; S6 contains 15 subtasks, each occupying 10% of the CPU and 30% of the memory. The idle CPU utilization of a virtual host after start-up is 20% and its idle memory utilization is 15%; at the highest load the CPU utilization is 80% and the memory utilization is 90%; one task can be allocated to at most 15 virtual hosts at a time.
Step 1: count the resource consumption of the current cloud data center to obtain the CPU and memory utilization of the virtual hosts; initialize the algorithm, monitor the resource utilization of the physical servers and virtual machines in real time, and allocate and schedule subsequent tasks on this basis.
Step 2: according to the pre-allocation algorithm, the numbers of virtual hosts allocated to S4, S5 and S6 are calculated as 10, 3 and 8 respectively, with consumed unit task times of 1, 1 and 1; only the pre-allocation scheme of S4 is the optimal allocation scheme.
Step 3, for task S4:
Step 3.1: there are currently L = 21 idle virtual hosts, so jump to step 3.2;
Step 3.2: the scheme is the optimal allocation scheme and M = 10, so jump to step 3.3;
Step 3.3: L is not less than M, so the virtual hosts are allocated according to the pre-allocation scheme.
Step 3, for task S5:
Step 3.1: there are currently L = 11 idle virtual hosts, so jump to step 3.2;
Step 3.2: the allocation scheme is not the optimal allocation scheme and M = 3, so jump to step 3.4;
Step 3.4: L is not less than M-1, so 2 virtual hosts are allocated according to the pre-allocation scheme; for the actual allocation algorithm there are currently 18 non-idle virtual hosts, and the current remaining tasks need 30% of the CPU and 20% of the memory; the most suitable virtual host is matched, namely the virtual host of task S1 that is not fully loaded.
Step 3, for task S6:
Step 3.1: there are currently L = 9 idle virtual hosts, so jump to step 3.2;
Step 3.2: the allocation scheme is not the optimal allocation scheme and M = 8, so jump to step 3.4;
Step 3.4: L is not less than M-1, so 7 virtual hosts are allocated according to the pre-allocation scheme; for the actual allocation algorithm there are currently 20 non-idle virtual hosts, and the current remaining tasks need 10% of the CPU and 30% of the memory, but no existing virtual host can hold them, so 1 new virtual host has to be opened up to hold the remaining tasks.
Step 4: continue monitoring the tasks received by the cloud data center; if a new batch of tasks has been received, execute step 2; if no new task has been received, wait for new tasks.
The foregoing merely represents preferred embodiments of the invention; although they are described in considerable detail, this should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes, modifications and substitutions without departing from the spirit of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A network function energy-saving method based on Openstack virtualization is characterized by comprising the following steps:
step 1: counting the resource consumption of the current cloud data center to obtain the CPU and memory utilization of each virtual host; initializing the algorithm, monitoring the resource utilization of the physical servers and virtual machines in real time, and allocating and scheduling subsequent tasks on this basis;
step 2: when the cloud data center receives a batch of tasks S, determining the allocation scheme of each task with the pre-allocation algorithm to obtain a pre-allocation scheme T_pre and the optimality G of that scheme;
step 3: allocating each of the tasks in the batch S in turn;
step 4: continuing to monitor the tasks received by the cloud data center, executing step 2 again if a new batch of tasks has been received, and waiting for tasks if no new task has been received.
2. The Openstack virtualization-based network function power saving method as claimed in claim 1, wherein the step 3 comprises:
step 3.1: judging whether idle virtual hosts currently exist and letting their number be L; if so, jumping to step 3.2, otherwise jumping to step 3.5;
step 3.2: obtaining, from the pre-allocation scheme T_pre, the number M of virtual hosts it requires; if the pre-allocation scheme is the optimal allocation scheme, jumping to step 3.3; otherwise jumping to step 3.4;
step 3.3: if L is not less than M, allocating the task onto M idle virtual hosts according to the pre-allocation scheme T_pre; if L is less than M, opening up M-L additional virtual hosts and then allocating the task onto all M idle virtual hosts according to T_pre;
step 3.4: if L is not less than M-1, deriving an actual allocation scheme T for M-1 virtual hosts with the actual allocation algorithm and allocating the task according to T; if L is less than M-1, additionally opening up M-1-L virtual hosts, deriving the actual allocation scheme T with the actual allocation algorithm, and allocating the task according to T;
step 3.5: if the current pre-allocation scheme T_pre is the optimal allocation scheme, jumping to step 3.6, otherwise jumping to step 3.7;
step 3.6: newly opening up M virtual hosts and allocating the task according to T_pre;
step 3.7: newly opening up M-1 virtual hosts, deriving the actual allocation scheme T with the actual allocation algorithm, and allocating the task according to T.
3. The Openstack virtualization-based network function power saving method according to claim 1 or 2, wherein in step 2, the pre-allocation algorithm is:
for θ batch tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ}, mutually independent, each task S_i has n_i subtasks; the CPU consumption of each subtask is c_i^cpu and its memory consumption is c_i^memory; after a virtual host is started, its idle CPU load is α_cpu with a load upper limit of β_cpu, and its idle memory load is α_memory with a load upper limit of β_memory;
the following algorithm determines, for the i-th task S_i (writing n for n_i and c^cpu, c^memory for its per-subtask consumption), the number of virtual hosts that actually need to be deployed:
each virtual host can accommodate at most C = min(⌊(β_cpu - α_cpu)/c^cpu⌋, ⌊(β_memory - α_memory)/c^memory⌋) subtasks at the same time, so with at most N allocatable virtual hosts the task needs at least τ = ⌈n/(N·C)⌉ task times; running C subtasks simultaneously on each virtual host, the number of deployed virtual hosts is set to m = ⌈n/(τ·C)⌉, which completes the batch task within τ task times with an additional energy consumption of α·τ·(m-1); here ⌈a/b⌉ means rounding the result of dividing a by b up, and ⌊a/b⌋ means rounding it down;
compared with executing the batch task on only one virtual host (which would take ⌈n/C⌉ task times), using m virtual hosts incurs an additional energy consumption of W_extra = α·τ·(m-1) and saves a time of T_saved = ⌈n/C⌉ - ⌈n/(m·C)⌉;
an optimization function is set as f(m) = W_extra / T_saved, i.e. f(m) = α·⌈n/(m·C)⌉·(m-1) / (⌈n/C⌉ - ⌈n/(m·C)⌉);
when there exists an i ∈ [1, N] such that n % (i·C) = 0, f(i) attains its minimum and the number of virtual hosts to be deployed is taken as m = ⌈n/(τ·C)⌉; the allocation scheme obtained in this case is the optimal allocation scheme, and when τ is 1 the condition n % (m·C) = 0 must additionally be satisfied for the scheme to be the optimal allocation scheme; here a % b = 0 means that a is divisible by b;
traversing all tasks S = {S_1, S_2, S_3, ..., S_i, ..., S_θ} yields the number of virtual hosts required by each task, M = {m_1, m_2, m_3, ..., m_i, ..., m_θ}; for a task S_i, if the scheme obtained with m_i hosts is the optimal allocation scheme, G_i = 1 and its pre-allocation scheme T_i^pre deploys the task on m_i virtual hosts with C subtasks in parallel on each host, in which case all m_i virtual hosts are fully loaded; if the scheme obtained with m_i hosts is not the optimal allocation scheme, G_i = 0 and its pre-allocation scheme T_i^pre deploys the task on m_i virtual hosts with C subtasks in parallel on m_i - 1 of them, in which case m_i - 1 virtual hosts are fully loaded and one virtual host remains partially loaded;
in this way, the pre-allocation scheme T_pre = {T_1^pre, T_2^pre, T_3^pre, ..., T_i^pre, ..., T_θ^pre} of all tasks S and the optimality of the pre-allocation scheme G = {G_1, G_2, G_3, ..., G_i, ..., G_θ} are finally obtained.
4. The Openstack virtualization-based network function energy-saving method as claimed in claim 3, wherein in step 3, the actual allocation algorithm is:
for a pre-allocation scheme that is not the optimal allocation scheme, the task is first allocated to m-1 virtual hosts according to that pre-allocation scheme; for the one remaining, partially loaded virtual host of the scheme, let its CPU occupancy be ρ_cpu and its memory occupancy be ρ_memory, and write q = {ρ_cpu, ρ_memory};
traverse the remaining availability of CPU and memory of all currently non-idle virtual hosts, Q = {Q_1, Q_2, ..., Q_i, ..., Q_t}, where t is the number of non-idle virtual hosts; let the remaining available CPU of the i-th non-idle virtual host be Q_i^cpu and its remaining available memory be Q_i^memory, i.e. Q_i = {Q_i^cpu, Q_i^memory};
for every non-idle virtual host, compute the remaining availability after it would accept the current remaining tasks, Q_i' = Q_i - q;
find all i satisfying Q_i'^cpu ≥ 0 and Q_i'^memory ≥ 0 and denote this index set as Γ; compute the f value of each element of Γ from f(cpu, memory) = 0.5·cpu + 0.5·memory and take the i in Γ for which f(Q_i'^cpu, Q_i'^memory) is smallest; the partially loaded tasks of the pre-allocation scheme are then allocated to this i-th non-idle virtual host; conversely, if Γ is an empty set, a new virtual host is opened up to hold the partially loaded tasks of the pre-allocation scheme.
5. The Openstack virtualization-based network function power saving method according to claim 4, wherein the step 3 further comprises:
step 3.8: using the migration algorithm to collect the resource consumption of the tasks being executed and to migrate specific virtual machines;
the migration comprises the following steps:
step 3.8.1: computing a migration probability from the load of every virtual host to obtain the queue of servers that need to be migrated;
step 3.8.2: if the queue is empty, no migration is currently required; otherwise migration is carried out;
step 3.8.3: selecting a target virtual host for the migration; if, after it receives all the migration tasks of the source virtual host, the CPU and memory consumption of the target virtual host is still within its load upper limits, migrating all tasks on the source virtual host to the target host; otherwise not migrating this virtual host for the time being;
step 3.8.4: if system monitoring finds that the CPU and memory utilization of some virtual hosts has remained idle for a long time, shutting those virtual hosts down to save energy.
6. The Openstack virtualization-based network function power saving method as claimed in claim 5, wherein in step 3.8.1, a migration probability function F = f_cpu(cpu) + f_memory(memory) is designed, where f_cpu and f_memory are piecewise functions of the virtual host's current CPU and memory utilization (their piecewise definitions are given as equation images in the original document).
CN202211474363.XA 2022-11-23 2022-11-23 Network function energy-saving method based on Openstack virtualization Pending CN115858150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211474363.XA CN115858150A (en) 2022-11-23 2022-11-23 Network function energy-saving method based on Openstack virtualization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211474363.XA CN115858150A (en) 2022-11-23 2022-11-23 Network function energy-saving method based on Openstack virtualization

Publications (1)

Publication Number Publication Date
CN115858150A true CN115858150A (en) 2023-03-28

Family

ID=85665344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211474363.XA Pending CN115858150A (en) 2022-11-23 2022-11-23 Network function energy-saving method based on Openstack virtualization

Country Status (1)

Country Link
CN (1) CN115858150A (en)

Similar Documents

Publication Publication Date Title
US10542079B2 (en) Automated profiling of resource usage
EP2898410B1 (en) Automated profiling of resource usage
US7945913B2 (en) Method, system and computer program product for optimizing allocation of resources on partitions of a data processing system
US9442760B2 (en) Job scheduling using expected server performance information
US9135048B2 (en) Automated profiling of resource usage
US11074092B2 (en) Virtual machine batch live migration
JP5343523B2 (en) Job management apparatus, job management method, and job management program
US20230127141A1 (en) Microservice scheduling
US11755369B2 (en) Techniques for container scheduling in a virtual environment
US10884779B2 (en) Systems and methods for selecting virtual machines to be migrated
Chen et al. Elastic parameter server load distribution in deep learning clusters
Zhu et al. Deadline-constrained workflow scheduling in IaaS clouds with multi-resource packing
Xu et al. Prophet: Scheduling executors with time-varying resource demands on data-parallel computation frameworks
CN116089051A (en) Task allocation method, device and system
Rathinaraja et al. Dynamic ranking-based MapReduce job scheduler to exploit heterogeneous performance in a virtualized environment
CN113010309B (en) Cluster resource scheduling method, device, storage medium, equipment and program product
TW201327205A (en) Managing method for hardware performance and cloud computing system
CN115858150A (en) Network function energy-saving method based on Openstack virtualization
KR101639947B1 (en) Hadoop preemptive deadline constraint scheduling method, execution program thereof method and recorded medium of the program
CN113626162A (en) Data center task hybrid deployment method and system based on dynamic resource sharing
Zheng et al. Energy-efficient statistical live virtual machine placement for big data information systems in cloud computing environments
Nabavinejad et al. Data locality and VM interference aware mitigation of data skew in hadoop leveraging modern portfolio theory
Chen et al. Throughput enhancement through selective time sharing and dynamic grouping
CN117311910B (en) High-performance virtual password machine operation method
CN117149401A (en) Resource scheduling method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination