CN113342462A - Cloud computing optimization method, system and medium integrating limited periodic quasi-dormancy - Google Patents

Cloud computing optimization method, system and medium integrating limited periodic quasi-dormancy

Info

Publication number
CN113342462A
CN113342462A
Authority
CN
China
Prior art keywords
cloud
physical machine
task
service
average
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110613497.4A
Other languages
Chinese (zh)
Other versions
CN113342462B (en)
Inventor
金顺福
宋家
余靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lihe Newspaper Big Data Center Co.,Ltd.
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University
Priority to CN202110613497.4A
Publication of CN113342462A
Application granted
Publication of CN113342462B
Legal status: Active
Anticipated expiration

Classifications

    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F9/4418 Suspend and resume; Hibernate and awake
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a cloud computing optimization method integrating limited periodic quasi-dormancy, which comprises: obtaining the average response time T_C of cloud tasks offloaded to the cloud server for service, the average response time T_MD of cloud tasks served at the local processor and the average response time T of cloud tasks, as well as the average power consumption of a cloud task served on the i-th physical machine in the cloud and the cloud system average power E; constructing a system optimization function F; and solving it to obtain the optimal cloud task offloading probability combination (p*, q_1*, q_2*, ..., q_n*). By adopting the cloud computing optimization method provided by the invention and setting reasonable parameters for the cloud computing energy-saving strategy, the energy consumption level of the cloud system can be reduced while the quality of service of cloud tasks is guaranteed.

Description

Cloud computing optimization method, system and medium integrating limited periodic quasi-dormancy
Technical Field
The invention relates to the technical field of cloud computing and task offloading, and in particular to a cloud computing optimization method, system and readable storage medium integrating limited periodic quasi-dormancy.
Background
At present, the operation of data center networks in the cloud environment is increasingly flexible, and traditional data centers can no longer meet the requirements. Cloud data centers differ from traditional data centers in their high degree of virtualization, automated management and green energy saving. In general, a cloud data center adopts two kinds of energy-saving strategies: energy consumption reduction and power management. Energy consumption reduction mainly starts from two aspects, reducing idle energy consumption and reducing operating energy consumption. Power management strategies can be divided into dynamic power management and static power management; the dynamic load variation of the cloud data center is the prerequisite for dynamic power management, which saves energy by adjusting the hardware configuration in response to that load variation.
Numerous scholars at home and abroad have done a great deal of work on methods and strategies for saving energy in cloud data centers. Some methods adjust the resource utilization of virtual machines to achieve minimum energy consumption and maximum efficiency in cloud computing resource allocation, thereby reducing the energy consumption of the cloud system. Virtual machine consolidation algorithms balance the energy consumption, system performance and QoS requirements of the cloud system, achieve load balancing among physical machines, and shut down idle hosts to reduce overall energy consumption. In addition, resource re-allocation and dynamic resource provisioning methods realize flexible adjustment and efficient allocation of cloud resources, meet workload requirements, significantly improve performance and effectively reduce resource consumption; automatic elastic scaling of resources selects an optimal scaling plan for a given load and thus reduces energy consumption at the source. All of the above save energy in the cloud system through virtual machine consolidation and optimized resource configuration. However, cloud resource utilization is sometimes very low, and even an excellent consolidation and configuration scheme saves little compared with the energy wasted while resources sit idle. In this situation, some studies have begun to reduce energy consumption by applying a sleep mechanism. In the sleep mechanisms of these studies, however, a virtual machine in the sleep state does not work at all; although the energy consumption of the cloud system is effectively reduced, the quality of service of cloud tasks cannot be guaranteed. So far, research on cloud computing energy-saving strategies that introduce a semi-dormancy (quasi-dormancy) mechanism is very scarce. Therefore, in order to overcome the above defects, there is an urgent need to develop a cloud computing optimization method integrating limited periodic quasi-dormancy.
Disclosure of Invention
In order to reduce the energy consumption level of the cloud system while guaranteeing the quality of service of cloud tasks, the invention provides a cloud computing optimization method, system and readable storage medium integrating limited periodic quasi-dormancy, which are suited to real-time network applications with strict response-time requirements.
A first aspect of the invention provides a cloud computing optimization method integrating limited periodic quasi-dormancy, comprising the following steps:
Step 1: obtain the average response time T_C of cloud tasks offloaded to the cloud server for service, through the following sub-steps:
Step 11: obtain the average waiting time T_wi of a cloud task offloaded to the i-th physical machine (the expression is given as an equation image in the original), where e is a 2×1 all-ones column vector; (1 - p) denotes the probability that a cloud task is offloaded to the cloud for service; λ denotes the arrival rate of cloud tasks; q_i denotes the probability that a cloud task offloaded to the cloud is dispatched to the i-th physical machine for service, n being the number of physical machines deployed in the cloud; l denotes the number of cloud tasks in the i-th physical machine in the steady state, called the system level; c_i denotes the number of virtual machines deployed on the i-th physical machine; and π_i(l) = (π_i(l,0), π_i(l,1)) denotes the probability vector at system level l in the steady state.
Step 12: determine the average service time T_si of a cloud task on the i-th physical machine according to the state of the i-th physical machine at the instant the cloud task arrives at the cloud. The average service time T_si is expressed case by case (equation image in the original) in terms of μ_bi, the high-rate service rate on the i-th physical machine; μ_vi, the low-rate service rate on the i-th physical machine; θ_i, the semi-dormancy parameter on the i-th physical machine; the probability that the i-th physical machine in the cloud is in the normal state; the probability that the i-th physical machine in the cloud is in the semi-dormant state; and the number of cloud tasks in the buffer of the i-th physical machine.
The average response time T_C of cloud tasks offloaded to the cloud server for service is then obtained from T_wi and T_si (equation image in the original).
Step 2: obtain the average response time T_MD of cloud tasks served at the local processor. According to the M/M/1 queuing model, the average response time of a cloud task served at the local processor is
T_MD = 1/(μ_0 - pλ),
where μ_0 denotes the service rate of the local server, p denotes the probability that a cloud task is served at the local server, and λ denotes the arrival rate of cloud tasks.
Applying the law of total probability, the average response time T of cloud tasks is
T = p·T_MD + (1 - p)·T_C.
Step 3: according to the energy consumed per unit time by a server in its different working states, obtain the average power consumption E_MD of cloud tasks served at the local processor, the average power consumption of cloud tasks served on the i-th physical machine in the cloud, and the cloud system average power E.
E_MD = P_busy·(pλ/μ_0) + P_idle·(1 - pλ/μ_0),
where P_busy is the average running power of the local processor in the working state and P_idle is the average running power of the local processor in the idle state. The average power consumption on the i-th physical machine (equation image in the original) is expressed in terms of: when the i-th physical machine deployed in the cloud is in the semi-dormant state, the average running power the physical machine needs to keep one virtual machine deployed on it idling, and the average running power needed to keep one virtual machine deployed on it working; and, when the i-th physical machine deployed in the cloud is in the normal state, the average running power the physical machine needs to keep one virtual machine deployed on it idling, and the average running power needed to keep one virtual machine deployed on it working.
Applying the law of total probability, the cloud system average power E is obtained (equation image in the original) as the probability-weighted average of the local power consumption E_MD and the cloud-side power consumption on the individual physical machines.
Step 4: combining the average response time of cloud tasks and the average power of the cloud system, construct the system optimization function F:
F = β_1·T + β_2·E,
where β_1 denotes the influence factor (weight) of the cloud task average response time in the system optimization function and β_2 denotes the influence factor of the cloud system average power in the system optimization function.
Step 5: generate chaotic variables with the Logistic mapping method to improve the seagull intelligent optimization algorithm, and use MATLAB software to obtain the strategy optimization result that minimizes the system optimization function of Step 4, yielding the optimal cloud task offloading probability combination (p*, q_1*, q_2*, ..., q_n*), so that a cloud task generated at the mobile terminal is served at the local server with probability p* and offloaded to the cloud with probability (1 - p*), and a cloud task offloaded to the cloud is served on the i-th physical machine with probability q_i*.
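For illustration only, the following is a minimal Python sketch of how the weighted objective of Steps 4 and 5 could be evaluated for one candidate offloading combination (p, q_1, ..., q_n). The helper callables avg_response_cloud and avg_power_cloud stand in for the queuing-model computations of Steps 1 and 3 and are hypothetical names, not part of the patent; the local M/M/1 expressions follow the formulas above.

```python
import numpy as np

def objective(p, q, lam, mu0, P_busy, P_idle,
              avg_response_cloud, avg_power_cloud,
              beta1=0.5, beta2=0.5):
    """Evaluate F = beta1*T + beta2*E for one candidate (p, q_1..q_n).

    p   : probability a cloud task is served on the local processor
    q   : dispatch probabilities q_i over the n cloud physical machines
    lam : cloud-task arrival rate, mu0 : local-server service rate
    avg_response_cloud(q) / avg_power_cloud(q) : hypothetical callbacks
        returning T_C and the cloud-side average power for dispatch vector q.
    """
    q = np.asarray(q, dtype=float)
    assert abs(q.sum() - 1.0) < 1e-9 and 0.0 < p < 1.0
    assert p * lam < mu0                    # local M/M/1 stability

    rho = p * lam / mu0                     # local utilization
    T_MD = 1.0 / (mu0 - p * lam)            # local mean response time
    E_MD = P_busy * rho + P_idle * (1 - rho)

    T_C = avg_response_cloud(q)             # from Step 1 (queuing model)
    E_C = avg_power_cloud(q)                # from Step 3

    T = p * T_MD + (1 - p) * T_C            # law of total probability
    E = p * E_MD + (1 - p) * E_C
    return beta1 * T + beta2 * E
```

An optimizer such as the chaos-improved seagull algorithm of Step 5 would then search over p and q to minimize this function.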
In a preferred embodiment, the average service time T_si of a cloud task on the i-th physical machine determined in Step 12 distinguishes four cases:
(1) If the i-th physical machine is in the normal state, the current cloud task receives high-rate service, and the average service time is 1/μ_bi.
(2) If the i-th physical machine is in the semi-dormant state and all the cloud tasks queued in the buffer, as well as the current cloud task, are served before the semi-dormancy timer expires, the current cloud task receives low-rate service, and the average service time is 1/μ_vi.
(3) If the i-th physical machine is in the semi-dormant state and all the cloud tasks queued in the buffer are served before the semi-dormancy timer expires but the current cloud task is not, the current cloud task receives a period of low-rate service followed by a period of high-rate service, and the average service time combines the two rates (expression given as an image in the original).
(4) If the i-th physical machine is in the semi-dormant state and the semi-dormancy timer expires before the cloud tasks waiting in the buffer are served, the current cloud task receives high-rate service, and the average service time is 1/μ_bi.
Further, in Step 2, the average response time T_MD of a cloud task served at the local processor is the mean sojourn time in the M/M/1 queuing model: the average queue length of the model is computed as pλ/(μ_0 - pλ), from which the mean sojourn time 1/(μ_0 - pλ) is obtained.
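For completeness, the standard M/M/1 relations behind this statement can be written out as follows (a routine textbook derivation, not reproduced verbatim from the patent): with arrival rate pλ and service rate μ_0,

```latex
\rho_0 = \frac{p\lambda}{\mu_0}, \qquad
\mathrm{E}[N] = \frac{\rho_0}{1-\rho_0} = \frac{p\lambda}{\mu_0 - p\lambda}, \qquad
T_{MD} \overset{\text{Little}}{=} \frac{\mathrm{E}[N]}{p\lambda} = \frac{1}{\mu_0 - p\lambda},
\qquad \text{provided } p\lambda < \mu_0 .
```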
In a preferred embodiment, in the expression for E_MD in Step 3, pλ/μ_0 is the probability that the local processor is busy, obtained from the queue-length distribution of the M/M/1 queuing model, and 1 - pλ/μ_0 is the probability that the local processor is idle.
In a preferred embodiment, in Step 5 a chaotic mapping method is used to generate chaotic variables and thereby improve the seagull optimization algorithm. Initializing the seagull population with a chaotic method diversifies the population and avoids premature convergence. At present, the main methods for generating chaotic variables are the Logistic method and the Tent Map method. The traditional seagull optimization algorithm and the improved seagull optimization algorithms obtained by adding these two different chaotic-variable mechanisms were executed with the same parameters and the same random initialization conditions. Comparison of the timing results shows that the seagull optimization algorithm improved with the Logistic mapping method has a clear advantage.
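As a minimal sketch (not taken from the patent), chaotic initialization with the Logistic map can be written as follows; the population produced this way would replace the uniform random initialization of a standard seagull optimization algorithm, and the bounds, population size and control parameter r = 4 are illustrative assumptions.

```python
import numpy as np

def logistic_chaotic_population(pop_size, dim, lower, upper, r=4.0, seed=None):
    """Generate an initial population with the Logistic map x_{k+1} = r*x_k*(1-x_k).

    The chaotic sequence in (0, 1) is scaled to the search bounds; with r = 4
    the map is fully chaotic, which spreads individuals over the search space
    and helps avoid premature convergence.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = np.empty((pop_size, dim))
    # start strictly inside (0, 1), away from values that collapse the map
    x = rng.uniform(0.01, 0.99, size=dim)
    for k in range(pop_size):
        x = r * x * (1.0 - x)                   # one Logistic-map iteration per individual
        pop[k] = lower + x * (upper - lower)    # scale chaos values into [lower, upper]
    return pop

# Example: a 30-individual initial population in a 3-dimensional search space
init = logistic_chaotic_population(pop_size=30, dim=3, lower=[0.01]*3, upper=[0.99]*3)
```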
A second aspect of the invention provides a cloud computing optimization system, comprising: a mobile device terminal, an access port and a load balancer connected with the mobile device terminal, and a server. The access port collects cloud tasks from the mobile device terminal, the load balancer dispatches the cloud tasks from the access port according to the optimization method, and the server executes the above cloud computing optimization method integrating limited periodic quasi-dormancy.
Another aspect of the invention provides a readable storage medium for a computer, the readable storage medium storing a computer program which, when executed by a processor, implements the foregoing cloud computing optimization method integrating limited periodic quasi-dormancy.
In conclusion, the beneficial effects of the invention are as follows:
aiming at the problems of insufficient storage space, limited processing capacity and the like of the mobile equipment, the invention considers that a part of cloud tasks are migrated to the cloud end to relieve the current situations of overlong response time and excessive energy consumption. In order to give consideration to the service quality of cloud tasks and the energy consumption level of a cloud system, a cloud computing energy-saving strategy fusing a periodic semi-dormancy limiting mechanism is provided. And aiming at the service rates of the virtual machines under different load conditions, a multi-service-platform synchronous multi-level adaptive vacation queuing model is constructed. The time length of the physical machine in the semi-dormant state can be effectively controlled by setting a reasonable limit value of the continuous dormancy times of the physical machine by balancing the arrival rate of the cloud task and the state of the virtual machine deployed on the cloud physical machine. And obtaining the average response time of the cloud task and the average power consumption of the cloud system under a steady state by using a matrix geometric solution method and a simulated birth and death process. And (4) balancing the average response time of the cloud task and the average power of the cloud system, and constructing a system optimization function. And improving a seagull optimization algorithm to obtain an optimization result of the cloud task unloading probability.
Drawings
FIG. 1 is a schematic diagram of the cloud computing optimization method integrating limited periodic quasi-dormancy of the present invention;
FIG. 2 is a state transition diagram of a virtual machine deployed on a cloud physical machine in the cloud computing optimization method integrating limited periodic quasi-dormancy of the present invention;
FIGS. 3A, 3B and 3C are schematic diagrams of the variation trend of the cloud task average response time under the optimization method of the invention;
FIGS. 4A, 4B and 4C are schematic diagrams of the variation trend of the cloud system average power under the optimization method of the invention;
FIG. 5 is a schematic diagram of the variation trend of the system optimization function F under the cloud computing optimization method integrating limited periodic quasi-dormancy of the present invention;
FIG. 6 is a schematic diagram of the cloud computing optimization system integrating limited periodic quasi-dormancy of the present invention.
Detailed Description
The technical solution of the present invention will be further described with reference to the accompanying drawings and embodiments.
FIG. 1 is a schematic diagram of the cloud computing optimization method integrating limited periodic quasi-dormancy according to the present invention. The method specifically comprises the following steps:
Step 1: obtain the average response time T_C of cloud tasks offloaded to the cloud server for service, through the following sub-steps:
Step 11: obtain the average waiting time T_wi of a cloud task offloaded to the i-th physical machine (the expression is given as an equation image in the original), where e is a 2×1 all-ones column vector; (1 - p) denotes the probability that a cloud task is offloaded to the cloud for service; λ denotes the arrival rate of cloud tasks; q_i denotes the probability that a cloud task offloaded to the cloud is dispatched to the i-th physical machine for service, n being the number of physical machines deployed in the cloud; l denotes the number of cloud tasks in the i-th physical machine in the steady state, called the system level; c_i denotes the number of virtual machines deployed on the i-th physical machine; and π_i(l) = (π_i(l,0), π_i(l,1)) denotes the probability vector at system level l in the steady state.
Step 12: determine the average service time T_si of a cloud task on the i-th physical machine according to the state of the i-th physical machine at the instant the cloud task arrives at the cloud. The average service time T_si is expressed case by case (equation image in the original) in terms of μ_bi, the high-rate service rate on the i-th physical machine; μ_vi, the low-rate service rate on the i-th physical machine; θ_i, the semi-dormancy parameter on the i-th physical machine; the probability that the i-th physical machine in the cloud is in the normal state; the probability that the i-th physical machine in the cloud is in the semi-dormant state; and the number of cloud tasks in the buffer of the i-th physical machine.
The average response time T_C of cloud tasks offloaded to the cloud server for service is then obtained from T_wi and T_si (equation image in the original).
Step 2: obtain the average response time T_MD of cloud tasks served at the local processor. According to the M/M/1 queuing model, the average response time of a cloud task served at the local processor is
T_MD = 1/(μ_0 - pλ),
where μ_0 denotes the service rate of the local server, p denotes the probability that a cloud task is served at the local server, and λ denotes the arrival rate of cloud tasks.
Applying the law of total probability, the average response time T of cloud tasks is
T = p·T_MD + (1 - p)·T_C.
Step 3: according to the energy consumed per unit time by a server in its different working states, obtain the average power consumption E_MD of cloud tasks served at the local processor, the average power consumption of cloud tasks served on the i-th physical machine in the cloud, and the cloud system average power E.
E_MD = P_busy·(pλ/μ_0) + P_idle·(1 - pλ/μ_0),
where P_busy is the average running power of the local processor in the working state and P_idle is the average running power of the local processor in the idle state. The average power consumption on the i-th physical machine (equation image in the original) is expressed in terms of: when the i-th physical machine deployed in the cloud is in the semi-dormant state, the average running power the physical machine needs to keep one virtual machine deployed on it idling, and the average running power needed to keep one virtual machine deployed on it working; and, when the i-th physical machine deployed in the cloud is in the normal state, the average running power the physical machine needs to keep one virtual machine deployed on it idling, and the average running power needed to keep one virtual machine deployed on it working.
Applying the law of total probability, the cloud system average power E is obtained (equation image in the original) as the probability-weighted average of the local power consumption E_MD and the cloud-side power consumption on the individual physical machines.
Step 4: combining the average response time of cloud tasks and the average power of the cloud system, construct the system optimization function F:
F = β_1·T + β_2·E,
where β_1 denotes the influence factor (weight) of the cloud task average response time in the system optimization function and β_2 denotes the influence factor of the cloud system average power in the system optimization function.
Step 5: generate chaotic variables with the Logistic mapping method to improve the seagull intelligent optimization algorithm, and use MATLAB software to obtain the strategy optimization result that minimizes the system optimization function of Step 4, yielding the optimal cloud task offloading probability combination (p*, q_1*, q_2*, ..., q_n*), so that a cloud task generated at the mobile terminal is served at the local server with probability p* and offloaded to the cloud with probability (1 - p*), and a cloud task offloaded to the cloud is served on the i-th physical machine with probability q_i*.
The optimization method of the present invention will be described in detail below with reference to specific embodiments. Aiming at problems such as the insufficient storage space and limited processing capacity of mobile devices, part of the cloud tasks are migrated to the cloud. Cloud services rely on a cloud computing environment through distributed software. A cloud data service center usually deploys one or more physical machines, and several virtual machines can be deployed on one physical machine. When there is no cloud task to be executed on a physical machine, keeping all the virtual machines deployed on it in the idle state generates a large amount of energy consumption, and putting the idle virtual machines into a dormant state is a practical way to reduce the energy consumption of cloud computing. A full sleep mode, however, may degrade the quality of service of cloud tasks. Therefore, a limited periodic synchronous semi-dormancy mechanism is introduced at the cloud, and a new cloud computing energy-saving strategy is proposed.
Cloud tasks generated by various mobile devices first converge at the access point; under the scheduling of the local load balancer and according to the configured energy-saving strategy parameters, each task is then either dispatched with a certain probability to the local processor, i.e. the mobile device itself, for service, or offloaded to the cloud for service. A cloud task offloaded to the cloud is dispatched by the cloud load balancer, again according to the configured energy-saving strategy parameters, to one of the physical machines for service with a certain probability.
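Purely as an illustrative sketch (the function name and structure are assumptions, not part of the patent), the two-level probabilistic dispatch just described could look like this:

```python
import random

def dispatch(p_local, q, rng=random.Random()):
    """Route one cloud task according to the energy-saving strategy parameters.

    p_local : probability p that the task is served on the local processor.
    q       : list of probabilities q_1..q_n (summing to 1) with which the cloud
              load balancer assigns an offloaded task to physical machine i.
    Returns "local" or the 1-based index i of the chosen cloud physical machine.
    """
    if rng.random() < p_local:           # local load balancer: serve on the mobile device
        return "local"
    # cloud load balancer: pick physical machine i with probability q_i
    u, acc = rng.random(), 0.0
    for i, qi in enumerate(q, start=1):
        acc += qi
        if u < acc:
            return i
    return len(q)                        # guard against floating-point round-off

# Example with an optimized combination (p*, q_1*, q_2*)
print(dispatch(0.4, [0.3, 0.7]))
```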
FIG. 2 is a state transition diagram of a virtual machine deployed on a cloud physical machine under the cloud computing energy-saving strategy with limited periodic quasi-dormancy according to the present invention. The transitions between the different states of a virtual machine are as follows (a simplified sketch of the same state machine is given after this list):
(1) High-speed working state: the physical machine and all the virtual machines deployed on it work normally, and in this state at least one virtual machine on the physical machine is serving cloud tasks. Cloud tasks arriving at the buffer in the high-speed working state are served by the virtual machines in turn. When the last cloud task is served to completion, the physical machine starts the semi-dormancy timer, a semi-dormancy period begins, and all the virtual machines on the physical machine enter the low-speed idle state at the same time. If a virtual machine in the high-speed working state completes its current cloud task and is not assigned a new one while other virtual machines on the same physical machine are still in the high-speed working state, that virtual machine switches to the high-speed idle state.
(2) High-speed idle state: if a virtual machine in the high-speed idle state is assigned a new cloud task, it immediately switches to the high-speed working state to provide service. If the physical machine finishes serving its last cloud task, it starts the semi-dormancy timer and all its virtual machines enter the low-speed idle state at the same time.
(3) Low-speed idle state: a virtual machine in the low-speed idle state can provide service to a newly arriving cloud task at any time and is not controlled by the semi-dormancy timer. If no cloud task arrives within the period specified by the semi-dormancy timer and the number of consecutive semi-dormancy periods of the physical machine has not reached the limit H_i, the physical machine restarts the semi-dormancy timer to begin a new semi-dormancy period, and its virtual machines all remain in the low-speed idle state; if no cloud task arrives within the period specified by the semi-dormancy timer and the number of consecutive semi-dormancy periods has reached the limit H_i, the physical machine enters the normal state and all its virtual machines switch to the high-speed idle state. When a virtual machine is in the low-speed idle state, if the semi-dormancy timer expires while other virtual machines on the same physical machine have not finished serving their cloud tasks, the physical machine ends semi-dormancy and the virtual machine switches to the high-speed idle state; if the virtual machine is assigned a new cloud task, it immediately switches from the low-speed idle state to the low-speed working state.
(4) Low-speed working state: when a virtual machine completes its current cloud task at the low rate, if there is no other cloud task waiting in the buffer, it switches from the low-speed working state to the low-speed idle state; otherwise, it remains in the low-speed working state and continues to serve the cloud tasks waiting in the buffer at the low service rate. When the semi-dormancy timer expires, if some virtual machine is still serving its current cloud task, all the virtual machines on the physical machine enter the high-speed working state at the same time.
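The following Python enum is only a compact restatement of the four states and a few of the transitions listed above, intended as a reading aid; it is not an implementation taken from the patent.

```python
from enum import Enum, auto

class VMState(Enum):
    HIGH_WORKING = auto()   # normal state, serving at the high rate mu_bi
    HIGH_IDLE    = auto()   # normal state, no task assigned
    LOW_IDLE     = auto()   # semi-dormant, can still accept tasks at the low rate
    LOW_WORKING  = auto()   # semi-dormant, serving at the low rate mu_vi

# A few representative transitions from the list above (event -> new state):
#   HIGH_WORKING --last task finishes, timer started--> LOW_IDLE
#   HIGH_IDLE    --new task assigned-----------------> HIGH_WORKING
#   LOW_IDLE     --new task assigned-----------------> LOW_WORKING
#   LOW_IDLE     --H_i consecutive periods elapsed---> HIGH_IDLE
#   LOW_WORKING  --timer expires while serving-------> HIGH_WORKING
```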
For cloud tasks with higher real-time requirements, the quality of service should be further improved and the response time of cloud tasks shortened. To reduce the extra response time caused by the physical machine staying in the semi-dormant state, the limited periodicity and the semi-dormancy mode are fused, and a new cloud computing energy-saving strategy is proposed. When a physical machine finishes one semi-dormancy period and there is no cloud task in the system, it restarts the semi-dormancy timer to begin a new semi-dormancy period; once the number of consecutive semi-dormancy periods reaches the limit H_i, the physical machine switches to the normal state, waits for the arrival of cloud tasks and provides service. Following the working principle of this cloud computing energy-saving strategy, a multi-server synchronous multi-level adaptive working-vacation queuing model is constructed at the cloud, and the system performance of the energy-saving strategy is evaluated from the results of system experiments.
Considering that the utilization of most cloud resources is low, that is, when the cloud system has no cloud task to serve, a virtual machine deployed on a cloud physical machine conventionally remains in the high-speed idle state, which wastes energy. Letting an idle virtual machine enter a full sleep state would greatly reduce the energy consumption level of the cloud system, but would also seriously affect the quality of service of cloud tasks. Introducing the semi-dormancy mode at the cloud, so that a virtual machine deployed on a cloud physical machine can still provide low-rate service to cloud tasks while in the semi-dormant state, balances the quality of service of cloud tasks against the energy consumption of the cloud system.
To further improve the quality of service of cloud tasks and reduce the energy consumption level of the cloud system while taking the requirements of real-time network applications into account, the limited periodic semi-dormancy mode is considered on top of the periodic semi-dormancy mode, and a limit H_i on the number of consecutive semi-dormancy periods of a physical machine is introduced. When all the cloud tasks on a physical machine of the cloud data service center have been served and there is no cloud task waiting in the buffer, all the virtual machines deployed on that physical machine switch to the semi-dormant state, in which they can still provide low-rate service to randomly arriving cloud tasks. When a semi-dormancy period ends, if there are cloud tasks in the system, all the virtual machines on the physical machine switch to the normal state: the virtual machines that have been assigned cloud tasks enter the high-speed working state to provide service, and those that have not enter the high-speed idle state. If there is no cloud task in the system, the physical machine restarts the semi-dormancy timer and enters a new semi-dormancy period; once the number of consecutive semi-dormancy periods reaches the limit H_i, all the virtual machines deployed on the physical machine enter the high-speed idle state of the normal state, so as to provide timely high-rate service to cloud tasks that may arrive at any time in the future. On this basis, a cloud computing energy-saving strategy integrating the limited periodic semi-dormancy mode is proposed.
The cloud adopts the limited periodicity mechanism, and the number of consecutive semi-dormancy periods of a physical machine is referred to as the number of stages H_i. The CPU of the mobile device is taken as the local server, and a continuously working single-server queue is established for it. Regarding the semi-dormant state as a working vacation and the limited periodicity as multi-level adaptivity, a multi-server synchronous multi-level adaptive working-vacation queuing model is established for each cloud physical machine. The system model combines the continuously working single-server queue with the multi-server synchronous multi-level adaptive working-vacation queues.
Assume that the arrival of cloud tasks from the mobile devices follows a Poisson process with parameter λ (0 < λ < +∞), that a cloud task is dispatched to the local server for service with probability p under the scheduling of the local load balancer, and that it is offloaded to the cloud server for service with probability (1 - p). The arrival of cloud tasks at the local server therefore follows a Poisson process with parameter λ_0 = pλ (0 < λ_0 < λ). Assume that the service time of a cloud task served at the local server follows an exponential distribution with parameter μ_0 (0 < μ_0 < +∞). There is only one local server, the capacity of the waiting buffer is infinite, and the arrival process and the service process of cloud tasks are independent. Therefore, the process of the local server executing cloud tasks is regarded as an M/M/1 queue.
Under the scheduling of the cloud load balancer, a cloud task is dispatched to the i-th physical machine for service with probability q_i (0 < i ≤ n, where n is the number of physical machines deployed in the cloud). For the i-th physical machine, cloud tasks therefore arrive according to a Poisson process with parameter λ_i = (1 - p)q_iλ. Suppose c_i virtual machines are deployed on the i-th physical machine and work independently and in parallel. When a cloud task is dispatched to one of the physical machines, if that physical machine has a virtual machine in the high-speed idle state, the cloud task is served immediately; otherwise it enters the waiting buffer and queues. Once the last cloud task on the physical machine has been served, the c_i virtual machines simultaneously begin a vacation of random length V_i, where V_i follows an exponential distribution with parameter θ_i (0 < θ_i < +∞). Outside working vacations, the service time of a cloud task follows an exponential distribution with parameter μ_bi (0 < μ_bi < +∞); during working vacations, it follows an exponential distribution with parameter μ_vi (0 < μ_vi < μ_bi). Assume that all physical machines have infinite buffers, and that the inter-arrival times, service times and working-vacation lengths of cloud tasks and the random variable H_i are mutually independent. Therefore, the process of the i-th physical machine executing cloud tasks can be regarded as a synchronous multi-level adaptive working-vacation M/M/c_i queue.
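In summary, restating the assumptions above in formula form (with the natural normalization of the dispatch probabilities):

```latex
\begin{aligned}
&\text{Local processor:} && \text{M/M/1 queue, arrival rate } \lambda_0 = p\lambda,\ \text{service rate } \mu_0,\\
&\text{$i$-th physical machine:} && \text{M/M/}c_i \text{ queue with synchronous multi-level adaptive working vacations},\\
& && \text{arrival rate } \lambda_i = (1-p)\,q_i\,\lambda,\ \text{rates } \mu_{bi}\ \text{(normal)},\ \mu_{vi}\ \text{(vacation)},\ \text{vacation parameter } \theta_i,\\
& && \text{with } \textstyle\sum_{i=1}^{n} q_i = 1 .
\end{aligned}
```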
Let the random variable X_i(t) = x denote the number of cloud tasks in the i-th physical machine at time t, called the system level. Let the random variable Y_i(t) = y (y = 0, 1) denote the state of the physical machine at time t: y = 0 means the physical machine is in the working-vacation (semi-dormant) state, and y = 1 means it is in the normal busy state. Let the random variable Z_i(t) = z (z = 0, 1, 2, ..., H_i) denote the number of semi-dormancy periods the i-th physical machine has gone through. Then {(X_i(t), Y_i(t), Z_i(t)), t ≥ 0} forms a three-dimensional continuous-time Markov chain whose state space Ω_i is given by formula (1):
Ω_i = {(x, y, z) : x ≥ 0, y = 0, z = 1, 2, ..., H_i} ∪ {(x, y, z) : x ≥ 0, y = 1, z = 0}.   (1)
Let π_i(x, y, z) denote the steady-state probability that the system level is x, the physical-machine state is y and the number of consecutive semi-dormancy periods is z for this three-dimensional continuous-time Markov chain, i.e.
π_i(x, y, z) = lim_{t→∞} P{X_i(t) = x, Y_i(t) = y, Z_i(t) = z}, (x, y, z) ∈ Ω_i.   (2)
Define π_i(l) as the probability vector at system level l in the steady state; the steady-state probability distribution Π_i of the three-dimensional continuous-time Markov chain {(X_i(t), Y_i(t), Z_i(t)), t ≥ 0} is given by formula (3):
Π_i = (π_i(0), π_i(1), ...).   (3)
The key to studying the steady-state distribution of the system model is to construct the transition rate matrix. Let Q_i denote the one-step transition rate matrix (infinitesimal generator) of the three-dimensional Markov chain {(X_i(t), Y_i(t), Z_i(t)), t ≥ 0}. According to the system level, Q_i is partitioned into transition rate sub-matrices. Let Q_i(m, m') denote the transition rate sub-matrix from system level m (m = 0, 1, ...) to system level m' (m' = 0, 1, ...). For convenience, Q_i(m, m-1), Q_i(m, m) and Q_i(m, m+1) are denoted B_i(m), A_i(m) and C_i(m), respectively.
(1) When the system level decreases by 1, that is, the number of cloud tasks in the i-th physical machine changes from m to m - 1, the transition rate sub-matrix is B_i(m).
When m = 1: if the cloud task is completed at the low rate, the virtual machine switches from the low-speed working state to the low-speed idle state, i.e. from state (1, 0, z) (z = 1, 2, ..., H_i) to state (0, 0, z), with transition rate μ_vi; if the cloud task is completed at the high rate, the virtual machine switches from the high-speed working state to the low-speed idle state, i.e. from state (1, 1, 0) to state (0, 0, 1), with transition rate μ_bi. B_i(1) is an (H_i + 1) × (H_i + 1) matrix, see formula (4) (given as an image in the original).
When 2 ≤ m < c_i: if a cloud task is completed at the low rate, a virtual machine switches from the low-speed working state to the low-speed idle state, i.e. from state (m, 0, z) to state (m - 1, 0, z), with transition rate mμ_vi; if a cloud task is completed at the high rate, a virtual machine switches from the high-speed working state to the high-speed idle state, i.e. from state (m, 1, 0) to state (m - 1, 1, 0), with transition rate mμ_bi. B_i(m) is an (H_i + 1) × (H_i + 1) matrix, see formula (5) (given as an image in the original).
When m ≥ c_i: if a cloud task is completed at the low rate, the virtual machines remain in the low-speed working state, i.e. state (m, 0, z) changes to state (m - 1, 0, z), with transition rate c_iμ_vi; if a cloud task is completed at the high rate, the virtual machines remain in the high-speed working state, i.e. state (m, 1, 0) changes to state (m - 1, 1, 0), with transition rate c_iμ_bi. B_i(m) is an (H_i + 1) × (H_i + 1) matrix, see formula (6) (given as an image in the original).
(2) When the system level remains unchanged, that is, the number m of cloud tasks in the i-th physical machine does not change, the transition rate sub-matrix is A_i(m).
When m = 0, y = 0 and 0 < z < H_i, the i-th physical machine is in the semi-dormant state and all its virtual machines idle at the low rate. If there is no cloud task in the system when a semi-dormancy period ends, all the virtual machines deployed on the physical machine restart the semi-dormancy timer and enter a new semi-dormancy period, i.e. state (0, 0, z) changes to state (0, 0, z + 1), with transition rate θ_i. When m = 0, y = 0 and z = H_i, the i-th physical machine is in the semi-dormant state and all its virtual machines idle at the low rate; if there is still no cloud task in the system when the H_i-th semi-dormancy period ends, the number of consecutive semi-dormancy periods has reached the limit H_i, and all the virtual machines deployed on the physical machine switch from the low-speed idle state to the high-speed idle state, i.e. state (0, 0, H_i) changes to state (0, 1, 0), with transition rate θ_i. When m = 0, y = 0 and 0 < z ≤ H_i, the i-th physical machine is in the semi-dormant state and all its virtual machines idle at the low rate; if no cloud task arrives and the semi-dormancy timer has not expired, state (0, 0, z) remains unchanged, with (diagonal) rate -(λ_i + θ_i). When m = 0, y = 1 and z = 0, the i-th physical machine is in the high-speed idle state and all its virtual machines idle at the high rate; if no cloud task arrives, state (0, 1, 0) remains unchanged, with rate -λ_i. A_i(0) is an (H_i + 1) × (H_i + 1) matrix, see formula (7) (given as an image in the original).
When 1 ≤ m < c_i, y = 0 and 0 < z ≤ H_i, the i-th physical machine is in the semi-dormant state, with some of its virtual machines working at the low rate and the others idling at the low rate. If no cloud task arrives or leaves and the semi-dormancy timer has not expired, state (m, 0, z) remains unchanged, with rate -(λ_i + mμ_vi + θ_i); if the semi-dormancy timer expires, the physical machine changes from state (m, 0, z) to state (m, 1, 0), with transition rate θ_i. When 1 ≤ m < c_i, y = 1 and z = 0, the i-th physical machine is in the normal state, with some of its virtual machines working at the high rate and the others idling at the high rate. If no cloud task arrives or leaves, state (m, 1, 0) remains unchanged, with rate -(λ_i + mμ_bi). A_i(m) is an (H_i + 1) × (H_i + 1) matrix, see formula (8) (given as an image in the original).
When m ≥ c_i, y = 0 and 0 < z ≤ H_i, the i-th physical machine is in the semi-dormant state and all its virtual machines work at the low rate. If no cloud task arrives or leaves and the semi-dormancy timer has not expired, state (m, 0, z) remains unchanged, with rate -(λ_i + c_iμ_vi + θ_i); if the semi-dormancy timer expires, the physical machine changes from state (m, 0, z) to state (m, 1, 0), with transition rate θ_i. When m ≥ c_i, y = 1 and z = 0, the i-th physical machine is in the normal state and all its virtual machines work at the high rate. If no cloud task arrives or leaves, state (m, 1, 0) remains unchanged, with rate -(λ_i + c_iμ_bi). A_i(m) is an (H_i + 1) × (H_i + 1) matrix, see formula (9) (given as an image in the original).
(3) When the system level increases by 1, that is, the number of cloud tasks in the i-th physical machine changes from m to m + 1, the transition rate sub-matrix is C_i(m).
When m ≥ 0, whether the i-th physical machine is in the semi-dormant state or the normal state, if a cloud task arrives, the physical machine changes from state (m, 0, z) to state (m + 1, 0, z), or from state (m, 1, 0) to state (m + 1, 1, 0), with transition rate λ_i in either case. C_i(m) is an (H_i + 1) × (H_i + 1) matrix, see formula (10); in fact C_i(m) = λ_i I, where I is the (H_i + 1) × (H_i + 1) identity matrix.
To summarize, all the transition rate sub-matrices of the transition rate matrix Q_i have been described. It is easy to see that the sub-matrices B_i(m) (0 < i ≤ n) and A_i(m) (0 < i ≤ n) repeat from system level c_i onward, while the sub-matrices C_i(m) (0 < i ≤ n) repeat from system level 0 onward. Denoting the repeating B_i(m), A_i(m) and C_i(m) by B_i, A_i and C_i respectively, the one-step transition rate matrix Q_i of the three-dimensional Markov chain {(X_i(t), Y_i(t), Z_i(t)), t ≥ 0} can be written in block-tridiagonal form, see formula (11): Q_i has C_i on the super-diagonal at every level, A_i(m) on the main diagonal and B_i(m) on the sub-diagonal, where A_i(m) = A_i and B_i(m) = B_i for m ≥ c_i.
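As an illustration of the block-tridiagonal structure only (a sketch; the callables b_block, a_block and c_block stand for the sub-matrices B_i(m), A_i(m) and C_i defined above and are not given explicitly in the patent text):

```python
import numpy as np

def assemble_generator(levels, blk, b_block, a_block, c_block):
    """Assemble a finite truncation of the block-tridiagonal generator Q_i.

    levels  : number of system levels kept in the truncation
    blk     : block size, here H_i + 1
    b_block(m), a_block(m), c_block(m) : callables returning (blk, blk) arrays
    """
    Q = np.zeros((levels * blk, levels * blk))
    for m in range(levels):
        r = slice(m * blk, (m + 1) * blk)
        Q[r, r] = a_block(m)                                   # main diagonal A_i(m)
        if m + 1 < levels:
            Q[r, (m + 1) * blk:(m + 2) * blk] = c_block(m)     # super-diagonal C_i
        if m > 0:
            Q[r, (m - 1) * blk:m * blk] = b_block(m)           # sub-diagonal B_i(m)
    return Q
```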
The structure of the transition rate matrix Q_i shows that state transitions occur only between adjacent system levels, indicating that the three-dimensional continuous-time Markov chain {(X_i(t), Y_i(t), Z_i(t)), t ≥ 0} is a quasi birth-and-death (QBD) process. The condition for this process to be positive recurrent is that the matrix quadratic equation
C_i + R_i A_i + R_i² B_i = 0   (12)
has a minimal non-negative solution R_i with spectral radius Sp(R_i) < 1.
The minimal non-negative solution R_i is called the rate matrix. However, an exact expression for the rate matrix R_i is difficult to give in closed form, so R_i is obtained iteratively. The main steps of the algorithm are as follows (a Python sketch is given after these steps):
Step 1: initialize the system model parameters λ, θ_i, H_i, p, q_i, c_i, μ_bi, μ_vi, and initialize the maximum error ε;
Step 2: compute R_i' from R_i according to the iteration formula derived from equation (12) (given as an image in the original; I denotes the identity matrix);
Step 3: if ||R_i - R_i'|| > ε, set R_i = R_i' and return to Step 2;
Step 4: set R_i = R_i';
Step 5: output the obtained solution of the rate matrix R_i.
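Since the exact update formula is only shown as an image in the original, the sketch below uses the standard fixed-point iteration for the minimal non-negative solution of a QBD matrix quadratic of the form C + R·A + R²·B = 0, namely R ← -(C + R²B)A⁻¹; treating this as the iteration behind formula (12) is an assumption.

```python
import numpy as np

def rate_matrix(B, A, C, eps=1e-10, max_iter=100_000):
    """Iteratively compute the minimal non-negative solution R of C + R A + R^2 B = 0.

    B, A, C : the repeating (H_i+1) x (H_i+1) blocks of the QBD generator
              (level down, level unchanged, level up, respectively).
    """
    A_inv = np.linalg.inv(A)
    R = np.zeros_like(A)                       # start from R = 0 (monotone iteration)
    for _ in range(max_iter):
        R_new = -(C + R @ R @ B) @ A_inv       # fixed-point update
        if np.max(np.abs(R_new - R)) < eps:    # stop when successive iterates agree
            return R_new
        R = R_new
    raise RuntimeError("rate-matrix iteration did not converge")
```

The spectral radius of the returned matrix can then be checked against the stability condition Sp(R_i) < 1, e.g. via max(abs(np.linalg.eigvals(R))) < 1.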
using the resulting rate matrix RiConstructing a matrix of B [ R ]i]See equation (13).
Figure BDA0003096993510000182
The simulated extinction process { (X) can be known from the equilibrium equation and the normalization condition in the matrix geometric solutioni(t),Yi(t),Zi(t)), t ≧ 0} satisfies equation set (14).
Figure BDA0003096993510000183
Wherein e is1Is ci(Hi+1) x 1 full 1 column vector, e2Is (H)iA vector of all 1 columns of +1) × 1, I being (H)i+1)×(Hi+1) unit array.
Introducing an amplification matrix
Figure BDA0003096993510000184
The equation set (14) can be equivalently expressed as equation (15).
i(0),πi(1),...,πi(ci))Pi=(0,0,...,0,1) (15)
The formula (15) is obtained by the Gauss-Seidel method, and pi can be obtainedi(k)(0≤k≤ci) The numerical solution of (c).
Based on a matrix geometric solution method, fromi(ci) Further give pii(k)(k≥ci+1), see formula (16).
Figure BDA0003096993510000185
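The exact boundary matrix B[R_i] and augmented matrix P_i appear only as images in the original, so the following is a generic sketch of how the boundary probabilities and the matrix-geometric tail are typically obtained once those matrices are known; the argument names are assumptions rather than the patent's notation, and a direct linear solve is used here instead of the Gauss-Seidel iteration mentioned in the text.

```python
import numpy as np

def steady_state(P_boundary, R, c_i, blk, k_max):
    """Solve the boundary system (15) and extend it with the geometric tail (16).

    P_boundary : the augmented matrix P_i of formula (15), shape
                 ((c_i+1)*blk, (c_i+1)*blk), whose last column encodes the
                 normalization condition (so the right-hand side is (0,...,0,1)).
    R          : rate matrix from the iteration above.
    blk        : block size H_i + 1.
    Returns pi[k] for k = 0..k_max as rows of a (k_max+1, blk) array.
    """
    rhs = np.zeros((c_i + 1) * blk)
    rhs[-1] = 1.0
    # formula (15): (pi(0),...,pi(c_i)) P_i = (0,...,0,1); solve the transposed system
    boundary = np.linalg.solve(P_boundary.T, rhs)
    pi = np.zeros((k_max + 1, blk))
    pi[: c_i + 1] = boundary.reshape(c_i + 1, blk)
    for k in range(c_i + 1, k_max + 1):        # formula (16): geometric tail
        pi[k] = pi[c_i] @ np.linalg.matrix_power(R, k - c_i)
    return pi
```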
The cloud task average response time T is the average length of time from the moment a cloud task enters the system to the moment it finishes receiving service and leaves the system, i.e. the average sojourn time of a cloud task; it consists of the average waiting time of the cloud task and the average service time of the cloud task.
According to the M/M/1 queuing model, the average sojourn time of a cloud task served at the local processor is obtained as T_MD = 1/(μ_0 - pλ). The average response time of a cloud task offloaded to the cloud server for service, T_C, is obtained by the derivation above. Therefore, the cloud task average response time T is expressed in equation (17):
T = p·T_MD + (1 - p)·T_C,   (17)
where p (0 < p < 1) is the probability that a cloud task is served at the local processor and (1 - p) is the probability that a cloud task is offloaded to the cloud server for service.
The cloud system average power E is the weighted average of the average running power when a cloud task is dispatched to the local processor for service and the average running power when a cloud task is offloaded to the cloud server for service. The expression for the cloud system average power E is therefore given in equation (18) (given as an image in the original), in which the expression for the average power consumption E_MD of cloud tasks served at the local processor and the expression for the average power consumption of cloud tasks served at the cloud server have been given above.
The aim is to evaluate the average response time of cloud tasks and the energy consumption level of the cloud system under the newly proposed cloud computing energy-saving strategy. System experiment results are produced with MATLAB, and the performance indices are analysed as functions of the semi-dormancy parameter θ_i, the cloud task arrival rate λ and the local dispatch probability p of cloud tasks.
Under the condition that the system model remains stable, and because the experimental results are to be compared, the experimental parameters are set the same as in the comparison experiment; the newly added experimental parameters are: the first group of limits on the number of consecutive semi-dormancy periods of the physical machines is H_i = (3, 5), and the second group is H_i = (43, 45).
FIGS. 3A and 3B illustrate different scenarios based on set experimental parameters and limits on the number of consecutive sleep cycles of the first set of physical machinesSemi-sleep parameter θiAnd the change trend of the cloud task average response time T of the two cloud computing energy-saving strategies along with the cloud task local distribution probability p under the cloud task arrival rate lambda. Wherein θ in FIG. 3Ai=(0.5s-1,0.5s-1),Hi(3, 5); theta in FIG. 3Bi=(1.5s-1,1.5s-1),Hi=(3,5)。
When the cloud task local distribution probability p and the cloud task arrival rate λ are constant, it can be known from comparing fig. 3A and 3B that, along with the semi-sleep parameter θiThe cloud task average response time T decreases. Semi-sleep parameter θiWhen the cloud physical machine is larger, the time length of the cloud physical machine in the semi-dormant state is shortened. In this case, the cloud server can return to the normal state more timely to provide services for the cloud task at a high service rate. Therefore, the cloud task average response time T is reduced.
When semi-sleep parameter θiWhen the cloud task arrival rate λ is constant, it can be known from observing fig. 3A and 3B respectively that the change trend of the cloud task average response time T is first decreasing and then increasing with the increase of the cloud task local distribution probability p. On the left side of the lowest point of the curve, the local distribution probability p of the cloud tasks is small, the cloud tasks are unloaded to the cloud server to receive service with high probability after arriving at the system, and the response time of the cloud tasks at the cloud end becomes the dominant factor of the average response time T of the cloud tasks. With the increase of the local distribution probability p of the cloud tasks, the cloud tasks distributed to the local area gradually increase, the cloud tasks accumulated at the cloud end decrease, and the average response time T of the cloud tasks decreases accordingly. On the right side of the lowest point of the curve, the local distribution probability p of the cloud task is high, the cloud task is selected to be served on the local processor at a high probability after arriving at the system, and the response time of the cloud task for serving on the local processor becomes the dominant factor of the average response time T of the cloud task. With the increase of the local distribution probability p of the cloud tasks, more and more cloud tasks are distributed to the local, a large number of cloud tasks are accumulated in a cache region of the local processor, and the average response time T of the cloud tasks is increased.
When the semi-sleep parameter θi and the cloud task local allocation probability p are fixed, FIGS. 3A and 3B each show that the cloud task average response time T increases with the cloud task arrival rate λ. Whether a cloud task is served locally or at the cloud, a larger arrival rate λ means more cloud tasks entering the system, so the average response time T clearly increases.
For the two different groups of limits Hi on the number of consecutive semi-sleep periods of the physical machines, FIG. 3C gives the trend of the cloud task average response time T as the cloud task local allocation probability p varies.
Comparing FIGS. 3C and 3A with the cloud task arrival rate λ and the cloud task local allocation probability p fixed, it can be seen that the cloud task average response time T increases with the limit Hi on the number of consecutive semi-sleep periods of the physical machine. The smaller the limit Hi, the earlier all virtual machines deployed on the physical machine enter the high-speed idle state, the earlier a randomly arriving cloud task is provided with high-speed service, and hence the smaller the cloud task average response time.
Based on the set experimental parameters and the first group of limits on the number of consecutive semi-sleep periods of the physical machines, FIGS. 4A and 4B reveal the trend of the cloud system average power E of the cloud computing optimization method as the cloud task local allocation probability p varies, under different cloud task arrival rates λ.
When the cloud task local allocation probability p and the cloud task arrival rate λ are fixed, a comparison of FIGS. 4A and 4B shows that the cloud system average power E increases with the semi-sleep parameter θi. A larger θi shortens the time the cloud physical machine spends in the semi-sleep state, so the physical machine switches back to the normal state sooner; a physical machine in the normal state consumes more power than one in the semi-sleep state, and the cloud system average power E therefore increases. When the arrival rate λ is fixed and the local allocation probability p is large, few cloud tasks are offloaded to the cloud, the cloud physical machines have more opportunities to remain in the semi-sleep state, and the dependence of the cloud system average power E on the semi-sleep parameter θi weakens.
When the semi-sleep parameter θi and the cloud task arrival rate λ are fixed, FIGS. 4A and 4B each show that the cloud system average power E first decreases and then increases as the cloud task local allocation probability p grows. To the left of the lowest point of the curve, p is small and arriving cloud tasks are offloaded to the cloud server with high probability; as p increases, fewer cloud tasks are offloaded to the cloud, the cloud physical machines have more opportunities to enter the semi-sleep state, and the cloud system average power E decreases accordingly. To the right of the lowest point of the curve, p is large and arriving cloud tasks are served on the local processor with high probability; as p increases further, the power the local processor needs to provide service grows, and the cloud system average power E becomes larger.
When the semi-sleep parameter θi and the cloud task local allocation probability p are fixed, FIGS. 4A and 4B each show that the cloud system average power E increases with the cloud task arrival rate λ. Whether tasks are served locally or at the cloud, a larger arrival rate λ means more cloud tasks entering the system, so more power is consumed and the cloud system average power E rises.
For the two different groups of limits Hi on the number of consecutive semi-sleep periods of the physical machines, FIG. 4C gives the trend of the cloud system average power E as the cloud task local allocation probability p varies. With the semi-sleep parameter θi, the cloud task arrival rate λ and the cloud task local allocation probability p fixed, comparing FIGS. 4C and 4A shows that the cloud system average power E decreases as the limit Hi increases. The larger the limit Hi, the later all virtual machines deployed on a physical machine in the normal state enter the high-speed idle state; since a physical machine in the semi-sleep state consumes less power than one in the high-speed idle state, the cloud system average power E is reduced.
Combining the experimental results, it is found that the cloud task local allocation probability is an important factor affecting system performance. Taking the different requirements of the cloud tasks and of the cloud system into account, the cloud task offloading probability is optimized for different cloud task arrival rates.
For cloud tasks with higher response performance requirements, a limited periodic semi-sleep mode is introduced at the cloud, and a cloud computing energy-saving strategy integrating this mode is proposed. A queuing model with multiple servers and synchronous multi-level adaptive working vacations is constructed, and expressions for the cloud task average response time and the cloud system average power are given. System experiments reveal how system parameters such as the limit Hi on the number of consecutive semi-sleep periods of the physical machines, the cloud task local allocation probability p, and the cloud task arrival rate λ affect the performance of the cloud computing energy-saving strategy.
Taking β1 = 2, β2 = 0.2, θi = (0.5, 0.5) s^-1 and Hi = (3, 5) as an example, numerical experiments on the system optimization function are carried out for different cloud task arrival rates λ, the influence of the cloud task local allocation probability p on the system optimization function is revealed, and the cloud task offloading probability is optimized. The experimental results are shown in FIG. 5.
With the cloud task arrival rate λ fixed, FIG. 5 shows that the system optimization function F first decreases and then increases as the cloud task local allocation probability p grows. This is consistent with the trends in FIGS. 3 and 4: when λ is fixed, both the cloud task average response time and the cloud system average power first decrease and then increase as p grows.
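The following Python sketch illustrates this kind of numerical experiment on the system optimization function F = β1T + β2E. The functions avg_response_time(p) and avg_power(p) below are toy stand-ins with the same qualitative U-shape, not the patent's queueing expressions; only the grid search over p and the weights β1 = 2, β2 = 0.2 follow the text.

```python
# A minimal sketch (assumed stand-in model, not the patent's exact expressions):
# evaluate F = beta1*T + beta2*E over the local allocation probability p and
# locate the grid minimum.
import numpy as np

BETA1, BETA2 = 2.0, 0.2          # influence factors used in the numerical example

def avg_response_time(p: float) -> float:
    # toy stand-in: U-shaped in p, mimicking the "decrease then increase" trend
    return 1.0 / (1.05 - p) + 1.0 / (0.05 + p)

def avg_power(p: float) -> float:
    # toy stand-in with the same qualitative shape
    return 5.0 / (1.05 - p) + 3.0 / (0.05 + p)

def optimization_function(p: float) -> float:
    return BETA1 * avg_response_time(p) + BETA2 * avg_power(p)

p_grid = np.linspace(0.05, 0.95, 181)
f_values = np.array([optimization_function(p) for p in p_grid])
p_star = p_grid[np.argmin(f_values)]
print(f"grid minimum: F* = {f_values.min():.3f} at p* = {p_star:.3f}")
```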
An improved seagull optimization algorithm is used to obtain the optimal cloud task offloading probability. The optimal cloud task offloading probabilities (p*, q*) and the minimum values F* of the cloud system optimization function for different cloud task arrival rates λ are summarized in Table 1.
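As a hedged illustration of how such an improvement can be realized, the sketch below seeds an initial population of candidate offloading-probability vectors with Logistic-map chaotic variables (the mechanism recited in step 5 of claim 1); the population size, dimension and map parameters are assumptions, and the seagull optimization algorithm itself is not reproduced here.

```python
# Sketch of Logistic-map chaotic initialization for a population-based optimizer.
# Any optimizer accepting an initial population could consume these candidates.
import numpy as np

def logistic_chaos(n_agents: int, dim: int, mu: float = 4.0, x0: float = 0.37) -> np.ndarray:
    """Generate an (n_agents x dim) array of chaotic values in (0, 1)."""
    values = np.empty(n_agents * dim)
    x = x0
    for k in range(values.size):
        x = mu * x * (1.0 - x)      # Logistic map iteration
        values[k] = x
    return values.reshape(n_agents, dim)

# Example: 30 candidate solutions, each a vector (p, q_1, ..., q_n) with n = 2
population = logistic_chaos(n_agents=30, dim=3)
# Normalize q_1, ..., q_n so each candidate's cloud-side probabilities sum to 1
population[:, 1:] /= population[:, 1:].sum(axis=1, keepdims=True)
print(population[:3])
```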
Table 1 Optimization results of the cloud computing energy-saving strategy integrating the limited periodic semi-sleep mode
[Table 1 is rendered as an image in the original document.]
As shown in FIG. 6, a second aspect of the present invention provides a cloud computing optimization system, including: a mobile device terminal, an access port and a load balancer connected to the mobile device terminal, and a server. The access port collects cloud tasks from the mobile device terminal, the load balancer allocates the cloud tasks of the access port according to the optimization method, and the server executes the cloud computing optimization method integrating limited periodic quasi-dormancy.
Another aspect of the present invention provides a readable storage medium applied to a computer, where the readable storage medium stores a computer program which, when executed by a processor, implements the foregoing cloud computing optimization method integrating limited periodic quasi-dormancy.
The optimization method provided by the invention addresses problems such as insufficient storage space and limited processing capacity of mobile devices by migrating part of the cloud tasks to the cloud. Cloud services rely on a cloud computing environment through distributed software. A cloud data service center usually deploys one or more physical machines, and multiple virtual machines can be deployed on one physical machine. When a physical machine has no cloud task to execute, keeping all the virtual machines deployed on it idle generates a large amount of energy consumption, so putting the idle virtual machines into a sleep state is a practical way to reduce the energy consumption of cloud computing. A full sleep mode, however, may affect the quality of service of the cloud tasks.
Therefore, the invention introduces a limited periodic synchronous semi-sleep mechanism at the cloud and provides a new cloud computing optimization method. Cloud tasks generated by various mobile devices first converge at the access point; under the scheduling of the local load balancer and according to the set energy-saving strategy parameters, each task is either allocated with a certain probability to the local processor, that is, the mobile device, to receive service, or offloaded to the cloud to receive service. A cloud task offloaded to the cloud is allocated by the cloud load balancer, with a certain probability and according to the set energy-saving strategy parameters, to one of the physical machines to receive service.
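A toy Python simulation of this dispatching flow is sketched below; the values of p and qi are illustrative assumptions only.

```python
# Toy sketch of the dispatching flow: keep a task locally with probability p,
# otherwise offload it and route it to physical machine i with probability q_i.
import random
from collections import Counter

def dispatch(num_tasks: int, p: float, q: list[float]) -> Counter:
    """Return how many tasks were routed to 'local' and to each cloud machine."""
    routes = Counter()
    machines = list(range(len(q)))
    for _ in range(num_tasks):
        if random.random() < p:
            routes["local"] += 1
        else:
            i = random.choices(machines, weights=q, k=1)[0]
            routes[f"physical machine {i + 1}"] += 1
    return routes

print(dispatch(num_tasks=10_000, p=0.6, q=[0.7, 0.3]))
```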
To this end, performance indexes related to the cloud task arrival rate λ in the system, the service rate μ0 of the local server, the cloud high-speed service rate μbi and the cloud low-speed service rate μvi (where i denotes the i-th physical machine), and the limit Hi on the number of consecutive semi-sleep periods of the physical machines deployed at the cloud are obtained using queuing theory, probability theory, the birth-death process, computer networks, and the like.
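As a generic illustration of the birth-death computation underlying such performance indexes, the sketch below computes the steady-state distribution of a simple one-dimensional birth-death chain; it is not the patent's two-dimensional model, and all rates are assumed values.

```python
# Generic birth-death sketch: given per-state birth and death rates, compute the
# steady-state distribution by the standard product form, then a mean queue length.
import numpy as np

def birth_death_steady_state(birth: np.ndarray, death: np.ndarray) -> np.ndarray:
    """birth[k] is the rate from state k to k+1; death[k] is the rate from k+1 to k."""
    ratios = np.concatenate(([1.0], np.cumprod(birth / death)))
    return ratios / ratios.sum()

lam, c, mu = 4.0, 2, 3.0                 # assumed arrival rate, servers, service rate
states = 20                              # truncation level of the buffer
birth = np.full(states, lam)
death = np.array([min(k + 1, c) * mu for k in range(states)])
pi = birth_death_steady_state(birth, death)
mean_queue = sum(k * pi[k] for k in range(len(pi)))
print(f"steady-state mean number in system ~ {mean_queue:.3f}")
```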
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can easily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A cloud computing optimization method integrating limited periodic quasi-dormancy, characterized by comprising the following steps:
Step 1: obtaining the average response time TC of cloud tasks offloaded to the cloud server to receive service, comprising the following sub-steps:
Step 11: the average waiting time Twi of a cloud task offloaded to the i-th physical machine is:
[The expression for Twi is rendered as an image in the original document.]
wherein e is a 2 × 1 all-ones column vector; 1 − p denotes the probability that the cloud task is offloaded to the cloud to receive service; λ denotes the arrival rate of cloud tasks; qi denotes the probability that a cloud task offloaded to the cloud is allocated to the i-th physical machine to receive service, and n is the number of physical machines deployed at the cloud; l denotes the number of cloud tasks in the i-th physical machine in the steady state, called the system level; ci denotes the number of virtual machines deployed on the i-th physical machine; πi(l) = (πi(l,0), πi(l,1)) denotes the probability vector when the system level is l in the steady state;
Step 12: determining the average service time Tsi of the cloud task on the i-th physical machine according to the state of the i-th physical machine when the cloud task offloaded to it arrives; the average service time Tsi of the cloud task on the i-th physical machine is expressed as:
[The expression for Tsi is rendered as an image in the original document.]
wherein μbi denotes the high-speed service rate on the i-th physical machine, μvi denotes the low-speed service rate on the i-th physical machine, and θi denotes the semi-sleep parameter on the i-th physical machine; three further quantities [symbols rendered as images in the original] denote, respectively, the probability that the i-th physical machine at the cloud is in the normal state, the probability that the i-th physical machine at the cloud is in the semi-sleep state, and the number of cloud tasks in the buffer on the i-th physical machine;
and further obtaining the average response time TC of the cloud task offloaded to the cloud server to receive service as:
TC = Σ_{i=1}^{n} qi (Twi + Tsi);
Step 2: obtaining the average response time TMD of the cloud task receiving service at the local processor; according to the M/M/1 queuing model, the average response time TMD of the cloud task receiving service at the local processor is:
TMD = 1/(μ0 − pλ);
wherein μ0 denotes the service rate of the local server, p denotes the probability that the cloud task receives service at the local server, and λ denotes the arrival rate of cloud tasks;
By applying the total probability formula, the average response time T of the cloud task is:
T = p·TMD + (1 − p)·TC;
Step 3: according to the different amounts of energy consumed per unit time by the servers in their different working states, obtaining the average power consumption EMD of the cloud task receiving service at the local processor, the average power consumption of the cloud task receiving service on the i-th physical machine at the cloud [symbol rendered as an image in the original], and the cloud system average power E, wherein
EMD = Pbusy·(pλ/μ0) + Pidle·(1 − pλ/μ0),
and the expression for the average power consumption on the i-th physical machine is rendered as an image in the original document;
wherein Pbusy is the average running power of the local processor in the working state, and Pidle is the average running power of the local processor in the idle state; when the i-th physical machine deployed at the cloud is in the semi-sleep state, one quantity [rendered as an image in the original] is the average running power for the physical machine to keep a virtual machine deployed on it idle, and another is the average running power to keep a virtual machine deployed on it running; when the i-th physical machine deployed at the cloud is in the normal state, a further quantity is the average running power for the physical machine to keep a virtual machine deployed on it idle, and another is the average running power to keep a virtual machine deployed on it running;
The average power E of the cloud system is obtained by applying the total probability formula; the resulting expression is rendered as an image in the original document.
Step 4: combining the cloud task average response time and the cloud system average power, constructing a system optimization function F:
F = β1T + β2E;
wherein β1 denotes the influence factor of the cloud task average response time on the system optimization function, and β2 denotes the influence factor of the cloud system average power on the system optimization function;
Step 5: generating chaotic variables by the Logistic mapping method to improve the seagull intelligent optimization algorithm, and obtaining, with MATLAB software, the strategy optimization result that minimizes the system optimization function of step 4, thereby obtaining the optimal cloud task offloading probability combination (p*, q1*, ..., qn*), so that a cloud task generated by the mobile terminal is served at the local server with probability p* and offloaded to the cloud with probability (1 − p*), and a cloud task offloaded to the cloud is served on the i-th physical machine with probability qi*.
2. The cloud computing optimization method integrating limited periodic quasi-dormancy according to claim 1, wherein in step 12, the average service time Tsi of the cloud task on the i-th physical machine is determined according to four cases:
(1) If the i-th physical machine is in the normal state, the current cloud task receives high-rate service, and the average service time is 1/μbi;
(2) If the i-th physical machine is in the semi-sleep state, and all the cloud tasks queued in the buffer as well as the current cloud task are served before the semi-sleep timer expires, the current cloud task receives low-rate service, and the average service time is 1/μvi;
(3) If the i-th physical machine is in the semi-sleep state, and all the cloud tasks queued in the buffer are served before the semi-sleep timer expires but the current cloud task is not, the current cloud task undergoes a period of low-rate service followed by a period of high-rate service, and the average service time is given by an expression rendered as an image in the original document;
(4) If the i-th physical machine is in the semi-sleep state and the semi-sleep timer expires before the cloud tasks waiting in the buffer are served, the current cloud task receives high-rate service, and the average service time is 1/μbi.
3. The cloud computing optimization method integrating limited periodic quasi-dormancy according to claim 1, wherein in step 2, the average response time TMD of the cloud task served at the local processor is the average sojourn time in the M/M/1 queuing model; the average queue length in this model is calculated as pλ/(μ0 − pλ), from which the average sojourn time is obtained as 1/(μ0 − pλ).
4. The cloud computing optimization method integrating limited periodic quasi-dormancy according to claim 1, wherein in the expression for EMD in step 3, pλ/μ0 is the probability that the local processor is working, based on the queue length distribution of the M/M/1 queuing model, and 1 − pλ/μ0 is the probability that the local processor is idle.
5. The cloud computing optimization method integrating limited periodic quasi-dormancy according to claim 1, wherein in step 5, a mapping method is used to generate the chaotic variables that improve the seagull optimization algorithm.
6. A cloud computing optimization system, comprising: a mobile device terminal, an access port and a load balancer connected to the mobile device terminal, and a server; wherein the access port collects cloud tasks from the mobile device terminal, the load balancer allocates the cloud tasks of the access port according to an optimization method, and the server executes the cloud computing optimization method integrating limited periodic quasi-dormancy according to any one of claims 1 to 5.
7. A readable storage medium applied to a computer, wherein the readable storage medium stores a computer program which, when executed by a processor, implements the cloud computing optimization method integrating limited periodic quasi-dormancy according to any one of claims 1 to 5.
CN202110613497.4A 2021-06-02 2021-06-02 Cloud computing optimization method, system and medium integrating limitation periodic quasi-dormancy Active CN113342462B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110613497.4A CN113342462B (en) 2021-06-02 2021-06-02 Cloud computing optimization method, system and medium integrating limitation periodic quasi-dormancy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110613497.4A CN113342462B (en) 2021-06-02 2021-06-02 Cloud computing optimization method, system and medium integrating limitation periodic quasi-dormancy

Publications (2)

Publication Number Publication Date
CN113342462A true CN113342462A (en) 2021-09-03
CN113342462B CN113342462B (en) 2022-03-15

Family

ID=77473109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110613497.4A Active CN113342462B (en) 2021-06-02 2021-06-02 Cloud computing optimization method, system and medium integrating limitation periodic quasi-dormancy

Country Status (1)

Country Link
CN (1) CN113342462B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899103A (en) * 2015-07-03 2015-09-09 中国人民解放军国防科学技术大学 Energy-saving scheduling method and energy-saving scheduling device for mobile cloud computing terminal
CN106775933A (en) * 2016-11-29 2017-05-31 深圳大学 A kind of virtual machine on server cluster dynamically places optimization method and system
CN106598733A (en) * 2016-12-08 2017-04-26 南京航空航天大学 Three-dimensional virtual resource scheduling method of cloud computing energy consumption key
CN107423109A (en) * 2017-05-24 2017-12-01 兰雨晴 Virtual machine energy-saving scheduling method based on anonymous stochastic variable
AU2019206135A1 (en) * 2018-07-23 2020-02-13 Julius Industrial & Scientific Pty Ltd Organic radio network for internet of things (iot) applications
CN110149341A (en) * 2019-05-29 2019-08-20 燕山大学 Cloud system user access control method based on suspend mode
CN111625321A (en) * 2020-07-30 2020-09-04 上海有孚智数云创数字科技有限公司 Virtual machine migration planning and scheduling method based on temperature prediction, system and medium thereof
CN112491957A (en) * 2020-10-27 2021-03-12 西安交通大学 Distributed computing unloading method and system under edge network environment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HIREN KUMAR THAKKAR, PRASAN KUMAR SAHOO, BHARADWAJ VEERAVALLI: "Resource and Network Aware Data Placement Algorithm for Periodic Workloads in Cloud", IEEE *
HE Li, RAO Jun, ZHAO Fuqiang: "A task scheduling method for cloud computing systems based on energy consumption optimization", Computer Engineering and Applications *
YANG Peng, ZHANG Yifu, LI Zhidu, WU Dapeng, WANG Ruyan: "A delay guarantee model for reliable end-edge collaboration", Journal of Beijing University of Posts and Telecommunications *
WANG Xiaochen, WANG Yuting, ZHANG Liyuan, JIN Shunfu: "Energy-saving strategy and optimization of cloud computing centers based on the (N, T) sleep mechanism", High Technology Letters *
JIN Shunfu, QIE Xiuchen, WU Haixing, HUO Zhanqiang: "Clustered scheduling strategy and performance optimization of cloud virtual machines based on a novel sleep mode", Journal of Jilin University (Engineering and Technology Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150892A (en) * 2022-06-14 2022-10-04 燕山大学 VM-PM (virtual machine-to-PM) repair strategy method in MEC (media independent center) wireless system with bursty service
CN115150892B (en) * 2022-06-14 2024-04-09 燕山大学 VM-PM repair strategy method in MEC wireless system with bursty traffic

Also Published As

Publication number Publication date
CN113342462B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
Chen et al. Energy efficient dynamic offloading in mobile edge computing for internet of things
JP2017503261A (en) Multi-core dynamic workload management
CN106775949B (en) Virtual machine online migration optimization method capable of sensing composite application characteristics and network bandwidth
CN102508714A (en) Green-computer-based virtual machine scheduling method for cloud computing
CN113342462B (en) Cloud computing optimization method, system and medium integrating limitation periodic quasi-dormancy
CN114996001A (en) Distributed machine learning task GPU resource scheduling and distributing method and system
CN112559122A (en) Virtualization instance management and control method and system based on electric power special security and protection equipment
Li et al. Optimal dynamic spectrum allocation-assisted latency minimization for multiuser mobile edge computing
CN115878260A (en) Low-carbon self-adaptive cloud host task scheduling system
Terzopoulos et al. Bag-of-task scheduling on power-aware clusters using a dvfs-based mechanism
CN114172558B (en) Task unloading method based on edge calculation and unmanned aerial vehicle cluster cooperation in vehicle network
CN108228356A (en) A kind of distributed dynamic processing method of flow data
Jin et al. A virtual machine scheduling strategy with a speed switch and a multi-sleep mode in cloud data centers
CN105393518B (en) Distributed cache control method and device
CN112835684B (en) Virtual machine deployment method for mobile edge computing
CN110850957B (en) Scheduling method for reducing system power consumption through dormancy in edge computing scene
CN108075915A (en) A kind of RDMA communication connection pond management methods based on ADAPTIVE CONTROL
CN107729070B (en) Virtual machine scheduling system and method based on double speed and work dormancy
CN110308991B (en) Data center energy-saving optimization method and system based on random tasks
Zheng et al. Markov model based power management in server clusters
CN109144664B (en) Dynamic migration method of virtual machine based on user service quality demand difference
CN116954866A (en) Edge cloud task scheduling method and system based on deep reinforcement learning
CN111562837A (en) Power consumption control method for multi-CPU/GPU heterogeneous server
CN116302507A (en) Application service dynamic deployment and update method based on vacation queuing
CN110149341B (en) Cloud system user access control method based on sleep mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230104

Address after: 518000 Floor 12, Book Building, Shenzhen Newspaper Group, Qinghu Community, Longhua Street, Longhua District, Shenzhen, Guangdong

Patentee after: Shenzhen Lihe Newspaper Big Data Center Co.,Ltd.

Address before: 066004 No. 438 west section of Hebei Avenue, seaport District, Hebei, Qinhuangdao

Patentee before: Yanshan University