CN115617526A - Cloud data center energy-saving method based on cloud data center construction and virtual machine integration - Google Patents

Cloud data center energy-saving method based on cloud data center construction and virtual machine integration Download PDF

Info

Publication number
CN115617526A
CN115617526A CN202211386286.2A CN202211386286A CN115617526A CN 115617526 A CN115617526 A CN 115617526A CN 202211386286 A CN202211386286 A CN 202211386286A CN 115617526 A CN115617526 A CN 115617526A
Authority
CN
China
Prior art keywords
data center
operator
cloud data
virtual machine
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211386286.2A
Other languages
Chinese (zh)
Inventor
吕义飞
刘筱
夏云霓
吴曾
彭青蓝
朱治学
孙晓宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Jinyuyun Energy Technology Co ltd
Chongqing University
Original Assignee
Chongqing Jinyuyun Energy Technology Co ltd
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Jinyuyun Energy Technology Co ltd, Chongqing University filed Critical Chongqing Jinyuyun Energy Technology Co ltd
Priority to CN202211386286.2A priority Critical patent/CN115617526A/en
Publication of CN115617526A publication Critical patent/CN115617526A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/504Resource capping
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a cloud data center energy-saving method based on cloud data center construction and virtual machine integration, which comprises the following steps: s1, establishing a cloud data center resource model; s2, if the data center resources are insufficient, selecting a physical machine capacity expansion data center by using the DQN model; if the data center resources are sufficient, executing the next step; and S3, integrating the virtual machines to optimize idle resources and reduce the energy consumption of the cloud data center. The invention can reduce the quantity of resource fragments generated by the virtual machine and the system by selecting the type of the physical machine; and dynamically integrating the virtual machines on different host machines, and closing part of low-load physical machines, thereby reducing idle resources and reducing energy consumption.

Description

Cloud data center energy-saving method based on cloud data center construction and virtual machine integration
Technical Field
The invention relates to the field of cloud computing energy consumption, in particular to a cloud data center energy-saving method based on cloud data center construction and virtual machine integration.
Background
Cloud computing has important practical application significance as a service computing mode which develops gradually and maturely. In recent years, cloud computing technology architectures and business models are more and more mature, and good compatibility of the cloud computing technology architectures and the business models to numerous types of application programs and computing scenes enables the service range of the cloud computing technology architectures and the business models to cover various industries such as enterprise units, government organs, scientific research institutions and the like. The cloud computing technology provides convenience for daily work and life of people, the user group is increasingly huge, and the user requirements of the cloud data center are increasingly intensive. In order to cope with the huge user demand, cloud providers need to build a large number of cloud data centers. Nowadays, large-scale data centers are equipped with thousands or even tens of thousands of physical machines, and the physical machines have the problems of low resource utilization rate, large energy consumption and the like.
User requests are usually deployed on a host machine in a virtual machine or container manner, when one host machine carries multiple different virtual machines, the situation that other hardware resources cannot be used due to the limitation of the quantity of certain hardware resources easily occurs, and the idle resources can cause unnecessary resource fragments in a data center. An important reason for the generation of resource fragmentation is the affinity of the host and the virtual machine resource amount, i.e. the resource demand amount of the virtual machine and the resource ownership amount of the host are not always completely matched. Under actual industrial production conditions, indexes such as models and resource amounts of physical machines forming a cloud data center are often different, and the types of virtual machines rented and sold by a cloud provider are also different in resource amounts of each type of virtual machine. In the cloud data center cluster operation process, the type of physical machine is selected to bear the virtual machine, and the number of resource fragments generated by the system is closely related.
In the existing business mode, a user mostly requests service from a cloud provider in a lease mode, a virtual machine running on a host machine is released after the user request is expired, and the host machine has idle resources; a host in a cloud data center needs a large amount of energy consumption overhead in the operation process, and unnecessary power consumption is increased by resource fragments and idle resources, so that electric energy waste is caused. The virtual machines on different host machines are dynamically integrated, so that idle resources can be reduced, part of low-load physical machines are closed, and the energy consumption is reduced.
After extensive and intensive research, we find that the research on reducing resource fragments and idle resources in the current cloud environment has some disadvantages:
(1) According to an existing cloud data center optimization strategy, most research objects are set to be static cloud data centers, and few researches are conducted on dynamic cloud data centers with constantly changing resource holding capacity and bearing capacity of the research objects.
(2) The existing method does not consider the influence of user resource requirements on a physical machine model selection strategy in the cloud data center building process, and does not deeply research the problem of optimizing cloud data center resource fragments.
(3) Most of the existing virtual machine integration strategies only consider single resource limitation (such as the number of CPU cores or the size of a memory) but do not comprehensively consider multiple resource limitations.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly provides a cloud data center energy-saving method based on cloud data center construction and virtual machine integration.
In order to achieve the above object, the present invention provides a cloud data center energy saving method based on cloud data center construction and virtual machine integration, comprising the following steps:
s1, establishing a cloud data center resource model;
s2, if the data center resources are insufficient, selecting a physical machine capacity expansion data center by using the DQN model; if the data center resources are sufficient, executing the next step;
and S3, integrating the virtual machines to optimize idle resources and reduce the energy consumption of the cloud data center.
Further, the S1 includes the steps of:
s1-1, setting a cloud provider to provide L different types of virtual machines to rent to a user, wherein a resource request sequence of the user is defined as: r = { R = 1 ,r 2 ,r 3 ,...},r j E R, j = (1,2,3,. Cndot., n), wherein R j Indicating a jth resource request in the request sequence; each user request corresponds to one virtual machine
Figure BDA0003929937060000031
Wherein the virtual machine
Figure BDA0003929937060000032
The resource amount of
Figure BDA0003929937060000033
Figure BDA0003929937060000034
Represents a request r j The number of CPUs required for the request is,
Figure BDA0003929937060000035
represents a request r j The amount of memory requested;
s1-2, K types of physical machines with different resource quantities can be used for constructing a cloud data center, and each type of physical machine has different CPU core quantity, memory resource quantity and single-day maximum energy consumption;
s1-3, establishing a cloud data center resource model through constraint conditions:
constructing a physical machine sequence forming a cloud data center:
Figure BDA0003929937060000036
Figure BDA0003929937060000037
wherein
Figure BDA0003929937060000038
Representing the ith physical machine of type k, each physical machine containing three attributes
Figure BDA0003929937060000039
Wherein
Figure BDA00039299370600000310
Indicates the number of cpus of the physical machine,
Figure BDA00039299370600000311
indicating the amount of memory of the physical machine,
Figure BDA00039299370600000312
representing the single day energy consumption of the physical machine;
all user requests are deployed on corresponding physical machines in a virtual machine mode, so that the resource quantity owned by the physical machines for constructing the cloud data center always meets the following constraint conditions:
Figure BDA00039299370600000313
Figure BDA00039299370600000314
wherein m represents the total number of physical machines;
n represents the number of requests made by the user;
Figure BDA00039299370600000315
the representation is located at a physical machine
Figure BDA00039299370600000316
Virtual machine of
Figure BDA00039299370600000317
The cpu core number of (c);
Figure BDA00039299370600000318
the representation is located at a physical machine
Figure BDA00039299370600000319
Virtual machine of
Figure BDA00039299370600000320
The memory size of (d);
the constraint has the effect of ensuring that the resource quantity owned by the target physical machine is larger than the application resource quantity of the virtual machine, otherwise, the current physical machine cannot meet the condition of forming the cloud data center infrastructure.
The physical machine and the loaded virtual machine have a one-to-one correspondence relationship, the mapping relationship has uniqueness in the whole data center cluster, and a Boolean variable f is used i,j = {0,1} identification user request r j And physical machine
Figure BDA00039299370600000321
The mapping relationship of (c). And each physical machine
Figure BDA00039299370600000322
And its loaded virtual machine
Figure BDA00039299370600000323
Is a one-to-many relationship, for a given ith physical machine
Figure BDA00039299370600000324
Should satisfy the following load conditions:
Figure BDA0003929937060000041
Figure BDA0003929937060000042
wherein r is j Indicating order of requestsThe jth user request in the column;
Figure BDA0003929937060000043
representing the ith physical machine of type k.
The constraint is used for specifying each resource threshold value of a single physical machine and restricting the resource upper limit of the single physical machine in the model.
Further, the S2 includes the steps of:
s2-1, determining an agent state set, an action set and an award value in the DQN model;
and S2-2, carrying out physical model selection by using the DQN model, and regarding the model selection problem as a Markov decision process.
Further, the S2-1 comprises the following steps:
(1) Acquiring system state and state set of physical machine in cloud data center at time t
Figure BDA0003929937060000044
Wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003929937060000045
represents the average CPU utilization of a physical machine of type k in the cluster,
Figure BDA0003929937060000046
representing the average memory utilization rate of the physical machine of type k in the cluster;
(2) Setting an action set to cover all K types of physical machines to be selected, wherein an action a belongs to A, the action a comprises two states { add, pass }, namely the physical machines of the type are required to be added or not required to be added into the cluster; a is an action set of all models of physical machines, namely the action set;
(3) Deriving a reward function associated with the set of states and the set of actions:
firstly, respectively obtaining the amount of idle resources generated by each physical machine which has added into the cluster, wherein the calculation method is as follows:
Figure BDA0003929937060000047
Figure BDA0003929937060000048
wherein the content of the first and second substances,
Figure BDA0003929937060000049
indicating the number of idle CPUs of the physical machine;
Figure BDA00039299370600000410
representing the idle memory quantity of a physical machine with the type k;
Figure BDA00039299370600000411
the representation is located at a physical machine
Figure BDA00039299370600000412
Virtual machine of
Figure BDA00039299370600000413
The cpu core number of (c);
Figure BDA00039299370600000414
the representation is located at a physical machine
Figure BDA00039299370600000415
Virtual machine of
Figure BDA00039299370600000416
The memory size of (c).
Then, calculating the resource fragment quantity generated by the physical machine at a certain moment, and respectively recording the CPU fragment quantity and the memory fragment quantity:
Figure BDA0003929937060000051
the goal of DQN model optimization is to reduce the total amount of resource fragments, and to reduce the computational degree model of the algorithm, they are normalized:
Figure BDA0003929937060000052
wherein
Figure BDA0003929937060000053
Is the amount of resource fragmentation;
alpha is a parameter for adjusting the unit value between the CPU and the memory;
the reward value R for a physical machine of type k is set as follows, i.e. a physical machine that generates a smaller fragmentation resource can obtain a higher reward:
Figure BDA0003929937060000054
wherein the content of the first and second substances,
Figure BDA0003929937060000055
representing the prize value at time t for a physical machine of type k.
Further, the updating rule of the Q value table in the type selection process is as follows:
s2-2-1, performing feed forward once on the current state S, and obtaining the predicted Q values of all actions:
max a' Q(s',a')
s2-2-2, performing feedforward once on the next state S', and calculating the maximum output value of the whole network:
Figure BDA0003929937060000056
wherein
Figure BDA0003929937060000057
Representing the reward value of a physical machine with the model number k at the moment t + 1;
s2-2-3, setting a Q value for the action target;
s2-2-4, approximating a value function by using a deep convolutional neural network;
s2-2-5, training the learning process of reinforcement learning by using experience playback.
Further, the S3 includes the steps of:
s3-1, determining a source host list of the virtual machine to be migrated;
s3-1-1, establishing an emigration operator pool D = { D = 1 ,d 2 ,d 3 ,...,d z },d p ∈d z Wherein d is p Representing different migration operators, wherein each operator represents different physical machine migration priority determination strategies;
the migration operator comprises:
operator d 1 Determining the migration priority according to the load of the physical machines in the cluster, wherein the physical machines with low load are preferentially migrated;
operator d 2 Determining the migration priority according to the CPU resource utilization rate of the physical machines in the cluster, wherein the physical machines with the resource utilization rate are migrated preferentially;
operator d 3 Determining the migration priority according to the memory resource utilization rate of the physical machines in the cluster, wherein the physical machines with low memory utilization rate are migrated preferentially;
operator d 4 Determining the migration priority according to the difference between the CPU utilization rate and the memory utilization rate of the physical machines in the cluster, wherein the physical machines with large difference in the utilization rates are migrated preferentially.
S3-1-2, giving weight to each operator, and respectively recording the weight as: { w 1 ,w 2 ,w 3 ,...,w p In which w p Corresponding operator d p The weight of (c);
s3-1-3, determining the number of virtual machine migration stations to which each operator can be allocated according to the upper limit of the number of virtual machine migration in each day and the weight of each operator;
s3-1-4, updating the weight value of each operator;
s3-1-5, calculating each operator with a new weight w n New total energy consumption of system after distribution of migration timesThis E n And E, the E represents the total energy consumption cost after the pre-emigration is finished; if E is>E n If so, outputting a migration sequence L; if E is<E n Returning to the step S3-1-4, and updating the weight again;
s3-2, determining a target host list to be migrated into the virtual machine;
s3-2-1, acquiring the migrated virtual machine list L output in the step S5-1 as the input of the step;
s3-2-2, establishing an immigration operator pool R = { R = 1 ,r 2 ,r 3 ,...,r P },r p ∈r P Wherein r is p Representing different migration operators, wherein each operator represents different virtual machine migration priority determination strategies;
the migration operator comprises: operator r 1 : the target physical machines are sorted according to the CPU resource amount in priority, and the virtual machines are migrated to the physical machines with high CPU idle rate in priority;
operator r 2 : the target physical machines are sorted according to the memory resource amount, and the virtual machines are migrated to the physical machines with high memory idle rate preferentially;
operator r 3 : and the target physical machines are sorted according to the comprehensive resource amount, and the virtual machines are preferentially migrated into the physical machines with high total resource idle rate.
S3-2-3, storing the current state of the cluster, respectively using each migration operator to perform pre-migration, and calculating the increased energy consumption cost;
and S3-2-4, selecting an operator with the least energy consumption cost, and completing the immigration operation.
Further, the upper limit of the number of virtual machine migrations per day is:
Figure BDA0003929937060000071
wherein u is the number of virtual machines that can be migrated per day;
m represents the total number of physical machines;
n represents the number of requests made by the user;
f i,j = {0,1} represents user request r j And physical machines s i The mapping relationship of (2);
Figure BDA0003929937060000072
after the virtual machine migration is finished, the user requests r j And the mapping situation between the new physical machine;
nav is the number of virtual machines in the entire data center.
Further, the updating of the weight value of each operator comprises the following steps:
(1) Allocating the number of migratable virtual machines for each operator, performing virtual machine pre-migration, adding the virtual machines which have completed the pre-migration into a migration list L, and calculating the total energy consumption cost E saved after the pre-migration is completed;
(2) The operator weight is composed of a basic weight and a temporary weight, and the temporary weight of the operator is updated after each pre-migration;
(3) Calculating the energy consumption cost e saved by each operator, and selecting the operator with the most cost saving from the calculated energy consumption cost e
Figure BDA0003929937060000073
Updating its temporary weight according to the following formula:
Figure BDA0003929937060000081
wherein the content of the first and second substances,
Figure BDA0003929937060000082
representation operator
Figure BDA0003929937060000083
A temporary weight of (d);
Figure BDA0003929937060000084
representation operator
Figure BDA0003929937060000085
The starting weight of (a);
Figure BDA0003929937060000086
represents the operator
Figure BDA0003929937060000087
Reduced energy consumption overhead after premigration;
e' p operator d is indicated p Reduced energy consumption overhead after premigration;
z represents the total number of operators in the migration operator pool;
(4) Updating the temporary weights of other operators according to the following formula:
Figure BDA0003929937060000088
wherein, w t Represents the operator d p The starting weight of (a);
Figure BDA0003929937060000089
is the temporary weight of the operator;
e' p represents the operator d p Reduced energy consumption overhead after premigration;
z represents the total number of operators in the migration operator pool;
(5) The operator weights are updated according to the following formula:
w n =λw p +(1+λ)w
wherein, w n Is the weight value after the operator is updated;
w p is operator d p A base weight when not more recent;
Figure BDA00039299370600000810
is the temporary weight of the operator;
lambda epsilon (0,1) is a parameter, and the weight proportion of the weighted summation of the basic weight of the operator and the temporary weight during updating is adjusted to the optimal weight by using a dichotomy.
In summary, by adopting the above technical scheme, the invention can reduce the number of resource fragments generated by the virtual machine and the system by selecting the type of the physical machine; and dynamically integrating the virtual machines on different host machines, and closing part of low-load physical machines, thereby reducing idle resources and reducing energy consumption.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The following examples are illustrative only and are not to be construed as limiting the invention.
S1, establishing a cloud data center resource model.
S1-1, setting a cloud provider to provide L different types of virtual machines to rent to users. The resource request sequence of a user is defined as: r = { R = 1 ,r 2 ,r 3 ,...},r j E R, j = (1,2,3.., n), where R j Indicating the jth resource request in the request sequence. Each user request corresponds to one virtual machine
Figure BDA0003929937060000091
Wherein the virtual machine
Figure BDA0003929937060000092
The resource amount of
Figure BDA0003929937060000093
Figure BDA0003929937060000094
Represents a request r j The number of CPUs (unit: core) requested,
Figure BDA0003929937060000095
represents a request r j The amount of memory requested (unit: GB).
S1-2, setting K types of physical machines with different resource quantities to be used for constructing the cloud data center. Each type of physical machine has a different number of CPU cores, amount of memory resources, and maximum energy consumption per day.
S1-3, establishing a cloud data center resource model through constraint conditions.
Constructing a physical machine sequence forming a cloud data center:
Figure BDA0003929937060000096
wherein
Figure BDA0003929937060000097
Representing the ith physical machine of type k, each physical machine containing three attributes
Figure BDA0003929937060000098
Wherein
Figure BDA0003929937060000099
Represents the cpu number (unit: core) of the physical machine,
Figure BDA00039299370600000910
indicating the amount of memory (unit: GB) of the physical machine,
Figure BDA00039299370600000913
represents the energy consumption per day (unit: kw/h) of the physical machine.
All user requests are deployed on corresponding physical machines in a virtual machine mode, so that the resource quantity owned by the physical machines for constructing the cloud data center always meets the following constraint conditions:
Figure BDA00039299370600000911
Figure BDA00039299370600000912
wherein m represents the total number of physical machines;
n represents the number of requests made by the user;
Figure BDA0003929937060000101
the representation is located in a physical machine
Figure BDA0003929937060000102
Virtual machine of
Figure BDA0003929937060000103
The cpu core number of (c);
Figure BDA0003929937060000104
the representation is located in a physical machine
Figure BDA0003929937060000105
Virtual machine of
Figure BDA0003929937060000106
The memory size of (d);
the constraint function is to ensure that the resource quantity owned by the target physical machine is greater than the application resource quantity of the virtual machine, otherwise, the current physical machine cannot meet the condition of forming the cloud data center infrastructure.
The physical machine and the loaded virtual machine have a one-to-one correspondence relationship, the mapping relationship has uniqueness in the whole data center cluster, and a Boolean variable f is used i,j = {0,1} identification user request r j And physical machine
Figure BDA0003929937060000107
The mapping relationship of (c). And each physical machine
Figure BDA0003929937060000108
And the virtual of its loadSimulation machine
Figure BDA0003929937060000109
Is a one-to-many relationship, for a given ith physical machine
Figure BDA00039299370600001010
Should satisfy the following load conditions:
Figure BDA00039299370600001011
Figure BDA00039299370600001012
the constraint is used for specifying each resource threshold value of a single physical machine and restricting the resource upper limit of the single physical machine in the model.
And S2, selecting a physical machine capacity expansion data center by using a DQN model, wherein the DQN model is a deep reinforcement learning model.
The resources with insufficient quantity of cloud data centers are defined as short board resources. When expanding the capacity of the data center resources, the physical machine selection needs to be dynamically performed according to the request amount of the user for the virtual machines of different types and the remaining state of various resources in the current cloud data center. The core idea of the DQN in the virtual machine is to determine the agents in the model, make them execute the action sequence to interact with the surrounding environment to obtain the reward, and optimize the action sequence according to the reward value until the convergence of the model result is completed. In the model, the physical machine resource fragment amount is used as a learning intelligent agent to interact with indexes such as cloud data center resource amount, load change, user request conditions and the like to perform optimization learning.
And S2-1, determining an agent state set, an action set, a reward value and a system environment in the DQN model.
(1) Obtaining the system state and state set of a physical machine in a cloud data center at the moment t
Figure BDA00039299370600001013
Wherein the content of the first and second substances,
Figure BDA00039299370600001014
represents the average CPU utilization of a physical machine of type k in the cluster,
Figure BDA00039299370600001015
representing the average memory utilization rate of the physical machine of type k in the cluster;
(2) Setting an action set to cover all K types of physical machines to be selected, wherein the action a belongs to A, wherein a contains one-dimensional two states { add, pass }, namely the physical machines of the type are required to be added into the cluster or are not required to be added into the cluster; a is the action set of all models of physical machines, namely the action set.
(3) Deriving reward functions related to a set of states and a set of actions
Firstly, respectively obtaining the amount of idle resources generated by each physical machine which has added into the cluster, wherein the calculation method is as follows:
Figure BDA0003929937060000111
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003929937060000112
indicating the number of CPUs that the physical machine is idle,
Figure BDA0003929937060000113
representing the idle memory quantity of a physical machine with the type k;
then, calculating the resource fragment quantity generated by the physical machine at a certain moment, and respectively recording the CPU fragment quantity and the memory fragment quantity:
Figure BDA0003929937060000114
the goal of DQN model optimization is to reduce the total amount of resource fragments, and to reduce the computational degree model of the algorithm, they are normalized:
Figure BDA0003929937060000115
wherein
Figure BDA0003929937060000116
α is a parameter that adjusts the unit value between the CPU and the memory for the amount of resource fragmentation.
The reward value R for a physical machine of type k is set as follows, i.e. a physical machine that generates a smaller fragmentation resource can obtain a higher reward:
Figure BDA0003929937060000117
wherein the content of the first and second substances,
Figure BDA0003929937060000118
representing the reward value of a physical machine with the model k at the time t;
the action quantity in each state is related to the size of the k value and the sampling time t, in order to save memory space, a neural network is used for representing a Q value function, and the weight of each layer of network is the corresponding Q value function.
Example (c): assuming k =100 and the number of read time instants is 10, 2000Q values are generated in each set of states, the action with the highest Q value is decided by using the neural network, and the neural network is optimized by using a simple square error as a loss function:
Figure BDA0003929937060000121
wherein s' represents the next state after the s state;
a 'represents an action to be performed in the s' state;
the following update rule is used for the Q-value function:
s2-2, the type selection problem of the physical machine can be regarded as a Markov decision process, and the process is expressed by using < S, a, r, S' >. For a given Markov decision process, the updating rule of the Q value table in the physical model selection step based on the DQN model is as follows:
s2-2-1, performing feed forward once on the current state S, and obtaining the predicted Q values of all actions:
max a' Q(s',a')
s2-2-2, performing feedforward once on the next state S', and calculating the maximum output value of the whole network:
Figure BDA0003929937060000122
wherein
Figure BDA0003929937060000123
Representing the reward value of a physical machine with the model number k at the moment t + 1;
s2-2-3, setting a Q value for the action target;
s2-2-4, approximating a value function by using a deep convolutional neural network;
s2-2-5, training a learning process of reinforcement learning by using experience playback;
the specific implementation process is as follows:
1. initializing an experience pool D, wherein the capacity is N;
2. initializing an estimate Q network: q (s, a), where θ is a parameter of the neural network;
3. initializing the target Q network: q π (s, a) wherein θ - Parameters of the target network;
4. initializing an initial state s;
5. in the state s, randomly selecting an action a belonging to A by utilizing the probability epsilon; if the small probability event does not occur, greedy selecting the action with the maximum value function in the current state;
6. get the reward r and the new state s';
7. putting s, a, r, s' into an experience pool D, wherein the D can be continuously expanded along with the running of the program;
8. sampling from the D to obtain ss, aa, rr, ss';
9. calculating the reward value of the sampling action in the target Q network according to the following mode;
Figure BDA0003929937060000131
here ss' is the termination state, i.e., the maximum number of allowable cycles in the learning process.
10. With [ y-Q (ss, aa)] 2 Training an estimate Q network as a loss function;
11. update state s to s'
12. Assigning the theta parameter value in the estimated value Q network to the parameter theta in the target Q network every C steps -
13. Repeating the steps 5-12 until s is in a termination state (i.e. the quantity of the resource fragments is lowest)
14. And repeating the steps 4-13 until the estimated value Q network converges.
15. Output evaluation Q network Q (s, a)
16. And selecting the physical machine with the maximum Q value according to the output result.
And S3, integrating the virtual machines to optimize idle resources and reduce the energy consumption of the cloud data center.
The virtual machine integration is performed according to the resource surplus condition of the physical machine in the cloud data center, the approximate optimal solution in the target solution set space field is searched, the virtual machine deployment mode which enables the idle resource amount to be the minimum is obtained, and therefore the energy consumption of the cloud data center is reduced. The load state of the physical machine affects the power consumption of the physical machine, and in the case of an empty load, a cloud provider usually switches the physical machine to a standby state or a shutdown state to save power. Here, a Boolean variable b is used i = {0,1} identification physical machine
Figure BDA0003929937060000132
The on/off state of (1). When the energy consumption cost of the whole data center is counted, the energy consumption overhead generated by the physical machine in the shutdown state can be ignored, and the total energy consumption cost ec of the data center within a period of time d The calculation method is as follows:
Figure BDA0003929937060000133
Figure BDA0003929937060000134
representing the single day energy consumption of the ith physical machine.
Where Y represents the total energy consumption cost of the data center over successive Y days.
In an actual scenario, in order to reduce the system energy consumption cost, virtual machines inside a cluster are generally integrated, but the virtual machine integration work may bring time cost overhead, that is, the maximum number of virtual machines that can be migrated per day is fixed, and then the migration number of virtual machines in a data center per day is limited to:
Figure BDA0003929937060000141
wherein u is the number of virtual machines that can be migrated per day;
f i,j = {0,1} represents user request r j And physical machines s i The mapping relationship of (2);
Figure BDA0003929937060000142
after the virtual machine migration is finished, the user requests r j And mapping cases between new physical machines.
nav is the number of virtual machines in the whole data center;
the overall goal of the optimization of the problem is thus:
min tec
s.t.:
Figure BDA0003929937060000143
Figure BDA0003929937060000144
u≤nav*5%
wherein tec is the total energy consumption cost of the data center over a continuous period of time.
When the virtual machine is deployed for the first time, in order to meet the corresponding user requirements as soon as possible and ensure the service quality, a greedy algorithm adapted for the first time is adopted to deploy the virtual machine to the data center cluster. The algorithm inevitably causes idle resources in the cloud data center.
The virtual machine integration can greatly reduce energy consumption overhead brought by idle resources, the aim of the virtual machine integration is to reduce energy consumption cost, and the main idea is to migrate loads on physical machines with low resource utilization rate or severely unbalanced resource utilization to other physical machines so as to close the low-load physical machines and realize load balance of the physical machines so as to save energy consumption overhead. A key step in the virtual machine consolidation policy is to determine which physical machines are the source hosts that need to migrate a virtual machine and which physical machines are the target hosts that receive the virtual machine. Because the types of physical machines forming the cloud data center are not single, and the specific migration strategy is influenced by indexes such as the resource use condition of each physical machine, the total resource amount and the like, the reference indexes of the source host and the target host are determined to be dynamically changed, and the method comprises the following specific steps:
s3-1, determining a source host list of the virtual machine to be migrated;
s3-1-1, establishing an emigration operator pool D = { D = 1 ,d 2 ,d 3 ,...,d z },d p ∈d z Wherein d is p Representing different migration operators, each operator representing a different physical machine migration prioritization strategy.
For ease of understanding, some examples of operators are listed here:
operator d 1 Determining the migration priority according to the load of the physical machines in the cluster, wherein the physical machines with low load are preferentially migrated;
operator d 2 Determining the migration priority according to the CPU resource utilization rate of the physical machines in the cluster, wherein the physical machines with the resource utilization rate are migrated preferentially;
operator d 3 Determining the migration priority according to the utilization rate of the memory resources of the physical machines in the cluster, wherein the physical machines with low memory utilization rate are migrated preferentially;
operator d 4 Determining the migration priority according to the difference between the CPU utilization rate and the memory utilization rate of the physical machines in the cluster, wherein the physical machines with large utilization rate difference are migrated preferentially;
s3-1-2, giving weight to each operator, and respectively recording the weight as: { w 1 ,w 2 ,w 3 ,...,w p In which w p Corresponding operator d p The weight of (c);
and S3-1-3, determining the number of virtual machine migration stations to which each operator can be allocated according to the upper limit of the number of virtual machine migration in each day and the weight of each operator. (initial each operator has the same weight value)
S3-1-4, updating the weight value of each operator according to the following formula;
(1) Allocating the number of migratable virtual machines for each operator, performing virtual machine pre-migration, adding the virtual machines which have completed the pre-migration into a migration list L, and calculating the total energy consumption cost E saved after the pre-migration is completed;
(2) The operator weight is composed of a basic weight and a temporary weight, and the temporary weight of the operator is updated after each pre-migration;
(3) Calculating the energy consumption cost e saved by each operator, and selecting the operator with the most cost saving as the record
Figure BDA0003929937060000161
Its temporary weight is updated according to the following formula:
Figure BDA0003929937060000162
wherein the content of the first and second substances,
Figure BDA0003929937060000163
representation operator
Figure BDA0003929937060000164
A temporary weight of (d);
Figure BDA0003929937060000165
representing operators
Figure BDA0003929937060000166
The starting weight of (a);
Figure BDA0003929937060000167
represents the operator
Figure BDA0003929937060000168
Reduced energy consumption overhead after premigration;
e' p operator d is indicated p Reduced energy consumption overhead after premigration;
z represents the total number of operators in the migration operator pool.
(4) Updating the temporary weights of other operators according to the following formula:
Figure BDA0003929937060000169
wherein, w t Represents the operator d p The starting weight of (a);
Figure BDA00039299370600001610
is the temporary weight of the operator;
e' p represents the operator d p Reduced energy consumption overhead after premigration;
z represents the total number of operators in the migration operator pool.
(5) The operator weights are updated according to the following formula:
w n =λw p +(1+λ)w
wherein, w n Is the weight value after the operator is updated;
w p is operator d p A base weight when not more recent;
Figure BDA00039299370600001611
is the temporary weight of the operator;
lambda epsilon (0,1) is a parameter, and the weight proportion of the weighted sum of the operator basic weight and the temporary weight during updating is adjusted to the optimal weight by using a dichotomy.
S3-1-5, calculating each operator with a new weight w n New total energy consumption cost E of system after distribution of migration times n And E, the E represents the total energy consumption cost after the pre-emigration is finished; if E is>E n If so, outputting a migration sequence L; if E is<E n Then, the procedure returns to step S3-1-4, and the weight is updated again.
S3-2, determining a target host list to be migrated into the virtual machine;
s3-2-1, acquiring the migrated virtual machine list L output in the step S5-1 as an input of the step;
s3-2-2, establishing an immigration operator pool R = { R = 1 ,r 2 ,r 3 ,...,r P },r p ∈r P Wherein r is p Representing different migration operators, wherein each operator represents different virtual machine migration priority determination strategies;
for ease of understanding, some examples of operators are listed below:
operator r 1 : the target physical machines are sorted according to the priority of the CPU resource quantity, and the virtual machines are migrated to the physical machines with high CPU idle rate preferentially;
operator r 2 : the target physical machines are sorted according to the memory resource amount in priority, and the virtual machines are migrated to the physical machines with high memory idle rate in priority;
operator r 3 : the target physical machines are sorted according to the comprehensive resource amount, and the virtual machines are preferentially migrated into the physical machines with high total resource idle rate;
s3-2-3, storing the current state of the cluster, respectively using each migration operator to perform pre-migration, and calculating the increased energy consumption cost;
and S3-2-4, selecting an operator with the least energy consumption cost, and completing the immigration operation.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A cloud data center energy-saving method based on cloud data center construction and virtual machine integration is characterized by comprising the following steps:
s1, establishing a cloud data center resource model;
s2, if the data center resources are insufficient, selecting a physical machine capacity expansion data center by using the DQN model; if the data center resources are sufficient, executing the next step;
and S3, integrating the virtual machines to optimize idle resources and reduce the energy consumption of the cloud data center.
2. The cloud data center energy saving method based on cloud data center construction and virtual machine integration according to claim 1, wherein the S1 comprises the following steps:
s1-1, setting a cloud provider to provide L different types of virtual machines to rent to a user, wherein a resource request sequence of the user is defined as: r = { R = 1 ,r 2 ,r 3 ,...},r j E R, j = (1,2,3,. Cndot., n), wherein R j Indicating the jth resource request in the request sequence; each user request corresponds to one virtual machine
Figure FDA0003929937050000011
Wherein the virtual machine
Figure FDA0003929937050000012
The resource amount of
Figure FDA0003929937050000013
Figure FDA0003929937050000014
Represents a request r j The number of CPUs that are requested is,
Figure FDA0003929937050000015
represents a request r j The amount of memory requested;
s1-2, K types of physical machines with different resource quantities can be used for constructing a cloud data center, and each type of physical machine has different CPU core quantity, memory resource quantity and single-day maximum energy consumption;
s1-3, establishing a cloud data center resource model through constraint conditions:
constructing a physical machine sequence forming a cloud data center:
Figure FDA0003929937050000016
Figure FDA0003929937050000017
wherein
Figure FDA0003929937050000018
Representing the ith physical machine of type k, each physical machine containing three attributes
Figure FDA0003929937050000019
Wherein
Figure FDA00039299370500000110
Indicates the number of cpus of the physical machine,
Figure FDA00039299370500000111
indicating the amount of memory of the physical machine,
Figure FDA00039299370500000112
representing the single day energy consumption of the physical machine;
all user requests are deployed on corresponding physical machines in a virtual machine mode, so that the resource quantity owned by the physical machines for constructing the cloud data center always meets the following constraint conditions:
Figure FDA00039299370500000113
Figure FDA0003929937050000021
wherein m represents the total number of physical machines;
n represents the number of requests made by the user;
Figure FDA0003929937050000022
the representation is located in a physical machine
Figure FDA0003929937050000023
Virtual machine of
Figure FDA0003929937050000024
The cpu core number of (c);
Figure FDA0003929937050000025
the representation is located in a physical machine
Figure FDA0003929937050000026
Virtual machine of
Figure FDA0003929937050000027
The size of the memory of (c);
each physical machine
Figure FDA0003929937050000028
And its loaded virtual machine
Figure FDA0003929937050000029
Is a pair ofMultiple relationships, for a given ith physical machine
Figure FDA00039299370500000210
Should satisfy the following load conditions:
Figure FDA00039299370500000211
Figure FDA00039299370500000212
wherein r is j Representing the jth user request in the request sequence;
Figure FDA00039299370500000213
representing the ith physical machine of type k.
3. The cloud data center energy saving method based on cloud data center construction and virtual machine integration according to claim 1, wherein the step S2 comprises the following steps:
s2-1, determining an agent state set, an action set and an award value in the DQN model;
and S2-2, carrying out physical model selection by using the DQN model, and regarding the model selection problem as a Markov decision process.
4. The cloud data center energy saving method based on cloud data center construction and virtual machine integration according to claim 3, wherein the S2-1 comprises the following steps:
(1) Acquiring system state and state set of physical machine in cloud data center at time t
Figure FDA00039299370500000214
Wherein the content of the first and second substances,
Figure FDA00039299370500000215
represents the average CPU utilization of a physical machine of type k in the cluster,
Figure FDA00039299370500000216
representing the average memory utilization rate of the physical machine of type k in the cluster;
(2) Setting an action set to cover all K types of physical machines to be selected, wherein the action a belongs to A, wherein a contains one-dimensional two states { add, pass }, namely the physical machines of the type are required to be added into the cluster or are not required to be added into the cluster; a is an action set of all models of physical machines, namely the action set;
(3) Deriving a reward function associated with the set of states and the set of actions:
firstly, respectively obtaining the amount of idle resources generated by each physical machine which has added into the cluster, wherein the calculation method is as follows:
Figure FDA0003929937050000031
Figure FDA0003929937050000032
wherein the content of the first and second substances,
Figure FDA0003929937050000033
indicating the number of idle CPUs of the physical machine;
Figure FDA0003929937050000034
representing the idle memory quantity of a physical machine with the type k;
Figure FDA0003929937050000035
the representation is located in a physical machine
Figure FDA0003929937050000036
Virtual machine of
Figure FDA0003929937050000037
The cpu core number of (c);
Figure FDA0003929937050000038
the representation is located in a physical machine
Figure FDA0003929937050000039
Virtual machine of
Figure FDA00039299370500000310
The memory size of (d);
then, calculating the resource fragment quantity generated by the physical machine at a certain moment, and respectively recording the CPU fragment quantity and the memory fragment quantity:
Figure FDA00039299370500000311
the goal of DQN model optimization is to reduce the total amount of resource fragments, and to reduce the computational degree model of the algorithm, they are normalized:
Figure FDA00039299370500000312
wherein
Figure FDA00039299370500000313
Is the amount of resource fragmentation;
alpha is a parameter for adjusting the unit value between the CPU and the memory;
the reward value R of a physical machine of type k is set as follows, i.e., a physical machine that generates a smaller fragmentation resource can obtain a higher reward:
Figure FDA00039299370500000314
wherein the content of the first and second substances,
Figure FDA00039299370500000315
representing the prize value at time t for a physical machine of type k.
5. The energy-saving method for the cloud data center based on the cloud data center construction and the virtual machine integration according to claim 3, wherein the updating rule of the Q value table in the type selection process is as follows:
s2-2-1, performing feed forward once on the current state S, and obtaining the predicted Q values of all actions:
max a' Q(s',a')
s2-2-2, performing feedforward once on the next state S', and calculating the maximum output value of the whole network:
Figure FDA0003929937050000041
wherein
Figure FDA0003929937050000042
Representing the reward value of a physical machine with the model number k at the moment t + 1;
s2-2-3, setting a Q value for the action target;
s2-2-4, approximating a value function by using a deep convolutional neural network;
and S2-2-5, training the learning process of reinforcement learning by using experience playback.
6. The cloud data center energy saving method based on cloud data center construction and virtual machine integration according to claim 3, wherein the S3 comprises the following steps:
s3-1, determining a source host list of the virtual machine to be migrated;
s3-1-1, establishing a migration operator pool D = { D = 1 ,d 2 ,d 3 ,...,d z },d p ∈d z Wherein d is p Representing different emigrationsOperators, wherein each operator represents a different physical machine migration priority determination strategy;
s3-1-2, giving weight to each operator, and respectively recording the weight as: { w 1 ,w 2 ,w 3 ,...,w p In which w p Corresponding operator d p The weight of (c);
s3-1-3, determining the number of virtual machine migration stations to which each operator can be allocated according to the upper limit of the number of virtual machine migration stations per day and the weight of each operator;
s3-1-4, updating the weight value of each operator;
s3-1-5, calculating each operator with a new weight w n New total energy consumption cost E of system after distribution of migration times n And E, the E represents the total energy consumption cost after the pre-emigration is finished; if E is>E n If so, outputting a migration sequence L; if E is<E n Returning to the step S3-1-4, and updating the weight again;
s3-2, determining a target host list to be migrated into the virtual machine;
s3-2-1, acquiring the migrated virtual machine list L output in the step S5-1 as the input of the step;
s3-2-2, establishing an immigration operator pool R = { R = 1 ,r 2 ,r 3 ,...,r P },r p ∈r P Wherein r is p Representing different migration operators, wherein each operator represents different virtual machine migration priority determination strategies;
s3-2-3, storing the current state of the cluster, respectively using each migration operator to perform pre-migration, and calculating the increased energy consumption cost;
and S3-2-4, selecting an operator with the least energy consumption cost, and completing the immigration operation.
7. The cloud data center energy saving method based on cloud data center construction and virtual machine integration according to claim 6, wherein the upper limit of the number of virtual machine migrations per day is as follows:
Figure FDA0003929937050000051
wherein u is the number of virtual machines that can be migrated per day;
m represents the total number of physical machines;
n represents the number of requests made by the user;
f i,j = {0,1} represents user request r j And physical machines s i The mapping relationship of (2);
Figure FDA0003929937050000052
after the virtual machine migration is finished, a user requests r j And the mapping situation between the new physical machine;
nav is the number of virtual machines in the entire data center.
8. The cloud data center energy saving method based on cloud data center construction and virtual machine integration according to claim 6, wherein the updating of the weight value of each operator comprises the following steps:
(1) Allocating the number of migratable virtual machines for each operator, performing virtual machine pre-migration, adding the virtual machines which have completed the pre-migration into a migration list L, and calculating the total energy consumption cost E saved after the pre-migration is completed;
(2) The operator weight is composed of a basic weight and a temporary weight, and the temporary weight of the operator is updated after each pre-migration;
(3) Calculating the energy consumption cost e saved by each operator, and selecting the operator with the most cost saving from the calculated energy consumption cost e
Figure FDA0003929937050000053
Updating its temporary weight according to the following formula:
Figure FDA0003929937050000054
wherein the content of the first and second substances,
Figure FDA0003929937050000055
representation operator
Figure FDA0003929937050000056
A temporary weight of (d);
Figure FDA0003929937050000061
representation operator
Figure FDA0003929937050000062
The starting weight of (a);
Figure FDA0003929937050000063
represents the operator
Figure FDA0003929937050000064
Reduced energy consumption overhead after premigration;
e' p index operator d p Reduced energy consumption overhead after premigration;
z represents the total number of operators in the migration operator pool;
(4) Updating the temporary weights of other operators according to the following formula:
Figure FDA0003929937050000065
wherein, w t Represents the operator d p The starting weight of (a);
Figure FDA0003929937050000066
is the temporary weight of the operator;
e' p represents the operator d p Reduced energy consumption overhead after premigration;
z represents the total number of operators in the migration operator pool;
(5) The operator weights are updated according to the following formula:
w n =λw p +(1+λ)w
wherein, w n Is the weight value after the operator is updated;
w p is an operator d p A base weight when not more recent;
Figure FDA0003929937050000067
is the temporary weight of the operator;
lambda epsilon (0,1) is a parameter, and the weight proportion of the weighted summation of the basic weight of the operator and the temporary weight during updating is adjusted to the optimal weight by using a dichotomy.
CN202211386286.2A 2022-11-07 2022-11-07 Cloud data center energy-saving method based on cloud data center construction and virtual machine integration Pending CN115617526A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211386286.2A CN115617526A (en) 2022-11-07 2022-11-07 Cloud data center energy-saving method based on cloud data center construction and virtual machine integration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211386286.2A CN115617526A (en) 2022-11-07 2022-11-07 Cloud data center energy-saving method based on cloud data center construction and virtual machine integration

Publications (1)

Publication Number Publication Date
CN115617526A true CN115617526A (en) 2023-01-17

Family

ID=84878407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211386286.2A Pending CN115617526A (en) 2022-11-07 2022-11-07 Cloud data center energy-saving method based on cloud data center construction and virtual machine integration

Country Status (1)

Country Link
CN (1) CN115617526A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116627240A (en) * 2023-07-25 2023-08-22 腾讯科技(深圳)有限公司 Power consumption adjustment method, device, electronic equipment, storage medium and program product

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116627240A (en) * 2023-07-25 2023-08-22 腾讯科技(深圳)有限公司 Power consumption adjustment method, device, electronic equipment, storage medium and program product
CN116627240B (en) * 2023-07-25 2024-01-26 腾讯科技(深圳)有限公司 Power consumption adjustment method, device, electronic equipment, storage medium and program product

Similar Documents

Publication Publication Date Title
Wang et al. Task scheduling algorithm based on improved firework algorithm in fog computing
CN108829494B (en) Container cloud platform intelligent resource optimization method based on load prediction
CN110069341B (en) Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing
Shan et al. A survey on computation offloading for mobile edge computing information
Zhang et al. Joint edge server placement and service placement in mobile-edge computing
Dong et al. A ‘joint-me’task deployment strategy for load balancing in edge computing
CN113806018A (en) Kubernetes cluster resource hybrid scheduling method based on neural network and distributed cache
CN113076177B (en) Dynamic migration method of virtual machine in edge computing environment
Gu et al. A multi-objective fog computing task scheduling strategy based on ant colony algorithm
Dai et al. A learning algorithm for real-time service in vehicular networks with mobile-edge computing
CN115617526A (en) Cloud data center energy-saving method based on cloud data center construction and virtual machine integration
Liu et al. A data placement strategy for scientific workflow in hybrid cloud
Shao et al. A load balancing strategy based on data correlation in cloud computing
KR20230035024A (en) Apparatus and method for dynamic resource allocation in cloud radio access networks
Fu et al. An efficient VNF deployment scheme for cloud networks
CN111176784A (en) Virtual machine integration method based on extreme learning machine and ant colony system
CN113360245A (en) Internet of things equipment task downloading method based on mobile cloud computing deep reinforcement learning
CN109857562A (en) A kind of method of memory access distance optimization on many-core processor
CN112527450A (en) Super-fusion self-adaption method, terminal and system based on different resources
CN115086249B (en) Cloud data center resource allocation method based on deep reinforcement learning
CN113296893B (en) Cloud platform low-resource-loss virtual machine placement method based on hybrid sine and cosine particle swarm optimization algorithm
Swarnakar et al. A novel improved hybrid model for load balancing in cloud environment
Lin et al. A workload-driven approach to dynamic data balancing in MongoDB
CN115116879A (en) Dynamic weight optimization load balancing algorithm for wafer surface defect detection
Shi et al. Research on Multi-Objective Optimization Method of Edge Cloud Computing Virtual Machine Placement

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination