CN106293000B - Cloud-environment-oriented virtual machine storage subsystem power-saving method - Google Patents
Cloud-environment-oriented virtual machine storage subsystem power-saving method
- Publication number
- CN106293000B CN106293000B CN201610631537.7A CN201610631537A CN106293000B CN 106293000 B CN106293000 B CN 106293000B CN 201610631537 A CN201610631537 A CN 201610631537A CN 106293000 B CN106293000 B CN 106293000B
- Authority
- CN
- China
- Prior art keywords
- request
- time
- time slot
- disk
- virtual machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3275—Power saving in memory, e.g. RAM, cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Power Sources (AREA)
Abstract
The invention discloses a cloud-environment-oriented power-saving method for virtual machine storage subsystems. For the virtual machines in a cloud environment, the method divides the workload of each virtual machine on the same physical machine into time slots of equal length and predicts the number of requests each slot will receive, while scheduling the execution times of each slot's requests to the end of that slot. By reshaping the execution times of requests, the length of the effective idle period can be increased, and the energy consumption of the storage subsystems of the virtual machines in the cloud environment can then be reduced in combination with a disk power-management strategy. The request-prediction mechanism predicts the number of requests that will arrive in the next time slot and updates the time point at which the disk drive is woken, thereby maximizing the length of the idle period. The present invention can effectively reduce the energy consumption of storage-oriented virtual storage subsystems while guaranteeing the quality of service of the virtual machine system.
Description
Technical field
The present invention relates to the technical field of energy-efficient computer clusters, and in particular to a cloud-environment-oriented virtual machine storage subsystem power-saving method.
Background technique
In recent years, computer components, including processors, memory, and disk drives, have grown exponentially in performance and capacity. Despite this unprecedented growth, the resource utilization of large enterprise IT systems is roughly 35%, and in some enterprises as low as 15%. A Google report points out that servers are rarely completely idle and seldom run at peak utilization; most of the time they operate between 10% and 50% of peak. Modern computer systems have enough capacity to run multiple virtual machines simultaneously, each running an independent operating system instance. Through virtualization, multiple different operating system environments can coexist, strongly isolated from one another, on the same physical computing platform. The ability to share resources among virtual machines improves resource utilization, making virtualization a green technology; this is also a major reason for the revival of virtualization technology.
Saving energy has become one of the major challenges in computer system design. For a computer controlled by a single operating system, the operating system can directly manage system activity to meet energy constraints. However, in a virtual machine environment composed of multiple virtual machines and guest operating systems, the direct, centralized energy management used by a single operating system is inapplicable in three respects. First, guest operating systems do not access physical hardware directly; they work against virtual devices (e.g. virtual CPUs, virtual disk drives, and virtual network adapters). Second, a single guest operating system does not know whether the physical hardware is shared with other guest operating systems. Third, since each virtual machine is assigned a different share of the hardware, energy consumption cannot simply be divided among multiple virtual machines. Coordinated energy management involving multiple operating systems is therefore extremely important in a virtual machine environment.
Much research has been devoted to energy saving in virtual machine environments. Stoess et al. propose a new framework for managing energy in virtual machine environments. The framework provides a unified module for partitioning and allocating energy, together with an energy-aware resource accounting and allocation mechanism. It supports the independent, non-interfering operation of guest virtual machines on each virtualization platform and coordinates the different energy-management policies these virtual machines apply to virtual resources. Waldspurger proposes several techniques for memory overcommitment and reclamation to improve the efficiency of memory management. Stoess and Reinhardt propose that energy management for virtual servers should be designed in two layers, a layer inside the virtual machine connected to a layer at the virtual machine monitor. On the one hand, only the guest operating system knows applications and user information in detail; on the other hand, only the host operating system and its resource-management subsystem can oversee the global picture, namely the energy demands and conditions of all hardware components.
Most computer components support multiple energy states (e.g. active, idle, and standby), and the energy consumed differs between states. Bursty behavior means that events occur in short, irregular outbursts. The workloads of modern computers generally exhibit bursty behavior, which presents an opportunity: when a hardware component is switched from a high-power state to a low-power state, a large amount of energy can be saved. Traditional operating systems aim at fair allocation of resources among competing tasks, smoothing access patterns as much as possible to obtain maximum throughput and minimum delay.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a cloud-environment-oriented virtual machine storage subsystem power-saving method. In a virtual machine environment, the method superimposes and amplifies the I/O load of each virtual machine on the same physical machine within the same time period, and controls the switching of the disk power state by predicting the number of I/O requests within a period of fixed size, thereby extending the idle time of the virtual machine storage subsystem and saving energy.
The purpose of the invention is achieved by the following technical solution:
A cloud-environment-oriented virtual machine storage subsystem power-saving method, the power-saving method comprising:
S1, load aggregation and amplification: divide the workload of each virtual machine on the same physical machine in the cloud environment into time slots of equal length, align the time slots of the different virtual machines by start time and end time, and then schedule the execution times of all I/O requests from the multiple different virtual machines within a unit time slot to the end of that slot, where they are batched and sent to the disk, thereby amplifying the load;
S2, request-prediction mechanism: predict the number of I/O requests in the next unit time slot, and then, based on the disk's I/O processing capacity, calculate the wake-up time point W at which the disk is switched from the low-power state to the working state to serve those I/O requests;
The switching of S3, disk energy consumption state, the quantity of the I/O request based on prediction, disk are transferred into wakeup time point W
Working condition is finished, disk is transferred to low energy consumption shape immediately with servicing the I/O being actually reached request when above-mentioned I/O request is processed
State, until reaching the start time point S of next time slot.
Further, the length of the time slot must satisfy the following condition:

B > T_p + R_p / C

where B is the size of the time slot in seconds, C is the maximum I/O processing capacity of the disk in requests per second, T_p is the time the disk takes to switch from the working state to the low-power state and back to the working state, and R_p is the number of requests in a single time slot.
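As an illustration only (the function name and all numeric values are hypothetical, not from the patent), the slot-length condition can be checked as follows:

```python
def slot_length_ok(B, C, Tp, Rp):
    """Check that a slot of B seconds leaves room for the disk's
    state-switch round trip (Tp seconds) and still serves the Rp
    expected requests at C requests/second."""
    return B > Tp + Rp / C

# Hypothetical values: 10 s slots, a 100 req/s disk, a 4 s round
# trip, and 300 requests -> serving takes 3 s, so 10 > 4 + 3 holds.
assert slot_length_ok(10.0, 100.0, 4.0, 300)
# 700 requests would need 7 s of service; 10 > 4 + 7 fails.
assert not slot_length_ok(10.0, 100.0, 4.0, 700)
```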
Further, the number R_p of I/O requests in the next unit time slot is predicted from the predicted value and the true value of the number of I/O requests in the current time slot.
Further, the number of I/O requests in a single unit time slot is predicted by the following formula:

B_{n+1}^p = α · B_n^r + (1 − α) · B_n^p

where B_n^p is the predicted number of requests of the n-th time slot, B_n^r is the true number of requests of the n-th time slot, B_{n+1}^p is the predicted number of requests of the (n+1)-th time slot, and the factor α is a coefficient that adjusts the influence of historical data on the predicted value.
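The recursive predictor described above appears to be standard exponential smoothing over per-slot request counts; a minimal sketch under that assumption (the function name and sample values are hypothetical):

```python
def predict_next(alpha, true_n, pred_n):
    """One step of the recursive predictor: combine slot n's true
    request count (B_n^r) with slot n's prediction (B_n^p) to get
    the prediction for slot n+1:
        B_{n+1}^p = alpha * B_n^r + (1 - alpha) * B_n^p
    """
    return alpha * true_n + (1 - alpha) * pred_n

# With alpha = 0.5, a true count of 80 and a prior prediction of 60
# give a next-slot prediction of 70.
assert predict_next(0.5, 80, 60) == 70
```

A larger α weights the most recent observation more heavily; a smaller α smooths over a longer history.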
Further, the factor α is adjusted according to the accuracy of prediction by an adaptive mechanism, as follows: a sliding window stores the historical information that affects prediction accuracy. When predicting the number of I/O requests in the (n+1)-th time slot, the number of I/O requests in the n-th time slot is predicted first, with the value of α swept from 0 to 1 in steps of 0.01. The predictions for the n-th time slot are then compared one by one with its true number of I/O requests, and the α value whose prediction comes closest to the true number is used to predict the number of I/O requests in the (n+1)-th time slot.
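The sweep over α can be sketched as follows, assuming exponential smoothing as the underlying predictor; the function name `best_alpha` and the sample history are hypothetical, not from the patent:

```python
def best_alpha(history, step=0.01):
    """Sweep alpha from 0 to 1 in the given step and return the
    value whose recursive prediction of the most recent slot's
    request count comes closest to the observed count.

    history: true request counts for the slots in the sliding
    window, oldest first.
    """
    best, best_err = 0.0, float("inf")
    for i in range(int(round(1 / step)) + 1):
        alpha = i * step
        pred = history[0]  # seed the recursion with the first count
        for true in history[:-1]:
            pred = alpha * true + (1 - alpha) * pred
        err = abs(pred - history[-1])
        if err < best_err:
            best, best_err = alpha, err
    return best

# On a steadily rising load, a large alpha (tracking the most
# recent slot) predicts best.
alpha = best_alpha([10, 20, 30, 40, 50])
assert alpha > 0.9
```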
Further, within a single time slot, the time span B − R_p/C between the start of the time slot, point S, and the wake-up time point W at which the disk is woken from the low-power state to execute I/O requests must be greater than the time T_p required to switch the disk from the low-power state to the working state, that is:

B − R_p / C > T_p

where B is the size of the time slot and R_p/C is the time the disk drive spends serving R_p requests.
Further, the energy saved by switching the disk from the working state to the low-power state and back to the working state must be greater than the energy E_p consumed by the switches between power states, that is:

(P_i − P_s) · (B − R_p / C − T_p) > E_p

where R_p is the predicted number of requests arriving in the next time slot, P_i and P_s are the energy consumption of the disk drive in the idle and standby states respectively, T_p is the time the disk takes to switch from the working state to the low-power state and back to the working state, B is the size of the time slot, and R_p/C is the time the disk drive spends serving R_p requests.
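A sketch of this feasibility test, assuming the saved energy is the idle/standby power difference accumulated over the parked interval B − R_p/C − T_p (this form is inferred from the symbol definitions, since the patent page does not render the formula); all names and numeric values are hypothetical:

```python
def spin_down_saves_energy(B, C, Tp, Rp, Pi, Ps, Ep):
    """The disk is parked for the part of the slot not spent on the
    state-switch round trip (Tp) or on serving the Rp requests
    (Rp / C).  Parking pays off only if the idle-vs-standby power
    difference over that interval exceeds the transition cost Ep."""
    standby = B - Rp / C - Tp
    return standby > 0 and (Pi - Ps) * standby > Ep

# Hypothetical disk: 8 W idle, 1 W standby, 20 J per round trip.
# A 10 s slot serving 300 requests at 100 req/s with a 4 s round
# trip leaves 3 s of standby: (8 - 1) * 3 = 21 J > 20 J.
assert spin_down_saves_energy(10.0, 100.0, 4.0, 300, 8.0, 1.0, 20.0)
# A 25 J transition cost would not be recovered.
assert not spin_down_saves_energy(10.0, 100.0, 4.0, 300, 8.0, 1.0, 25.0)
```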
Further, the choice of the wake-up time point W is determined dynamically by the state of the virtual machine storage subsystem, as follows: the disk's I/O request queue is inspected periodically. If there is no I/O request in the queue, the disk remains in the low-power state until a new request arrives. If, before the wake-up time point W is reached, the number N_r of I/O requests actually present in the I/O request queue is greater than the predicted value B_{n+1}^p, then N_r is used to recalculate the wake-up time point W.
In conclusion the present invention proposes to use mould by the multiple different operating systems on different virtual machine of association
Formula, the burst behavior of amplification I/O load.I/O is loaded decentralized dispatch into the time slot of equal length by this method, then
It predicts upcoming I/O number of request in each time slot, rather than predicts the prediction free time section that conventional method uses
Length.This method simultaneously in each time slot I/O request execution time delay be aggregated to time slot finally,
Therefore expand the length of free time.By deliberately being remolded to I/O load, it is exaggerated between disk free time experienced
Every length, to save the energy of disk storage sub-system.Further, since having carried out the scheduling of I/O load deliberately, disk
When in running order, resource utilization is improved.
Compared with the prior art, the present invention has the following advantages:
(1) The present invention divides time into equal-sized time slots by way of superimposed amplification. When processing the requests in a time slot, all requests are moved to the end of the slot and processed together, which extends the idle period experienced by the disk within that slot; once the I/O requests have been processed, the disk is immediately switched to the low-power state, thereby reducing the energy consumption of the disk drive.
(2) The request-prediction mechanism proposed in the present invention replaces the idea of predicting the idle period the disk may experience purely from historical information (the idle lengths the disk experienced over a past period). Instead, the present invention predicts the number of I/O requests arriving in the next time slot and, from the disk's I/O processing capacity, calculates the wake-up time point at which the disk serves the I/O requests. The disk can then be switched to the low-power state immediately after the I/O requests have been executed, maximizing the length of the idle period.
Brief description of the drawings
Fig. 1 is a flowchart of the cloud-environment-oriented virtual machine storage subsystem power-saving method disclosed by the present invention.
Specific embodiment
The invention is described in further detail below with reference to embodiments and the accompanying drawing, but the embodiments of the present invention are not limited thereto.
Embodiment one
This embodiment discloses a cloud-environment-oriented virtual machine storage subsystem power-saving method. The method amplifies the disk's burst requests by pushing the execution times of all requests within a unit time slot to the end of that slot, concentrating and extending the disk's idle time and thereby saving disk energy. The method also provides a technique for predicting the length of the disk's idle time: instead of simply estimating the length of the next idle period, it predicts the number of tasks in the next unit time slot. This is more efficient, allows the prediction coefficient to be adjusted more flexibly, and achieves a more accurate prediction of the disk's running time.
As shown in Fig. 1, the cloud-environment-oriented virtual machine storage subsystem power-saving method specifically comprises the following steps:
S1, load aggregation and amplification: divide the workload of each virtual machine on the same physical machine in the cloud environment into time slots of equal length, align the time slots of the different virtual machines by start time and end time, and then schedule the execution times of all I/O requests from the multiple different virtual machines within a unit time slot to the end of that slot. The discrete I/O requests from multiple virtual machines that were scattered across a time slot are thus aggregated at the end of each slot to amplify the load, and are then sent to the disk in a batch. This step concentrates and extends the disk's idle time, saving disk energy.
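Step S1 can be illustrated with a small sketch (names and values are hypothetical; this is not the patent's implementation):

```python
def aggregate_to_slot_end(arrivals, slot_len):
    """Group I/O request arrival times (in seconds, from any number
    of VMs on one physical machine) into aligned slots of slot_len
    seconds, and defer every request to execute at the end of its
    slot.  Returns {slot_index: (arrival_times, execution_time)}."""
    slots = {}
    for t in sorted(arrivals):
        idx = int(t // slot_len)
        slots.setdefault(idx, []).append(t)
    # every request in a slot is released at that slot's boundary
    return {idx: (batch, (idx + 1) * slot_len)
            for idx, batch in slots.items()}

# Requests from three VMs scattered over one 10 s slot are all
# deferred to t = 10 s and issued to the disk as a single burst.
sched = aggregate_to_slot_end([1.2, 3.5, 3.9, 7.0, 9.1], 10.0)
assert sched[0][1] == 10.0 and len(sched[0][0]) == 5
```

Deferring every request to the slot boundary is what turns the scattered per-VM arrivals into one long idle interval followed by one burst.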
S2, request-prediction mechanism: this provides a method for predicting the disk's idle time. The method predicts the number of I/O requests in the next unit time slot and then, based on the disk's I/O processing capacity, calculates the wake-up time point W at which the disk is switched from the low-power state to the working state to serve those I/O requests. This step adjusts the prediction coefficient efficiently and flexibly to achieve a more accurate prediction of the disk's idle time.
In the request-prediction mechanism, requests are assigned to time slots of equal length, and each slot carries one group of consecutive requests. The number of requests in each slot is easy to compute in practice. In this way, the load is converted into a new time-series process. Based on this method, the number of requests in a single slot is predicted, rather than the length of the idle period in the load. Because the sudden peaks in the load are reduced, this method improves prediction accuracy. Compared with the traditional approach of directly estimating how long the disk is likely to be idle over the next period of time, it is efficient and simple, and it allows more accurate control of the disk's idle time by flexibly adjusting the prediction coefficient.
The switching of S3, disk energy consumption state, the quantity of the I/O request based on prediction, disk are transferred into wakeup time point W
Working condition is to service the I/O being actually reached request.It is finished when above-mentioned I/O request is processed, disk is transferred to low energy consumption shape immediately
State, until reaching the start time point S of next time slot.
So in each time slot, time point for being waken up from the start time point S of the time slot to disk
W, disk are all placed in low energy consumption state, to realize energy-efficient purpose.
In a concrete application, in step S1, the I/O request time series from each virtual machine storage subsystem is divided into time slots of equal length, and the requests within a slot are then superimposed and amplified. The size of the time slot is constrained: if the slot is too large, the delay with which the virtual machine storage subsystem responds to requests becomes very long; if the slot is too small, the disk cannot complete the transitions between the working state and the low-power state within the short time, and no energy can be saved. Let the maximum I/O processing capacity of the disk be C requests per second; let T_p be the time the disk takes to switch from the working state to the low-power state and back to the working state, with E_p joules consumed by this transition; let the size of the time slot be B seconds; and let R_p be the predicted total number of I/O requests sent to the disk by the multiple virtual machines in the next time slot. Then the length of the time slot must satisfy:

B > T_p + R_p / C
In a concrete application, in step S2, the number R_p of requests in the next time slot is predicted from the predicted value and the true value of the number of I/O requests in the current time slot. The method for predicting the number of I/O requests in a single time slot is a recursive prediction model: B_n^p is the predicted number of requests of the n-th time slot, B_n^r is the true number of requests of the n-th time slot, B_{n+1}^p is the predicted number of requests of the (n+1)-th time slot, and α is a coefficient that adjusts the influence of historical data on the predicted value. The number of I/O requests that will be received in the next time slot is predicted by the following formula:

B_{n+1}^p = α · B_n^r + (1 − α) · B_n^p

Depending on the predicted value, the time-slot mechanism takes two kinds of measures. The first is to decide whether the disk should be stopped. The second is to decide at which time point within the time slot the disk drive should be activated. For the first measure, the energy performance coefficient (EPC) is used as the metric for deciding whether to stop the disk: if the disk successfully switches energy states in each time slot, the energy saved is denoted E_ps and the delay introduced by the prediction process is denoted D_p, and EPC = D_p / E_ps. For the second measure, the wake-up time within the slot is adjusted dynamically by inspecting the request queue periodically. If there is no request in the queue, the disk remains in the low-power state until a new request arrives, which means the predicted wake-up time point W is postponed; in this way, long idle periods that span multiple time slots are easy to handle. If, before the wake-up time point is reached, the number of requests in the queue N_r is greater than the predicted value B_{n+1}^p, then N_r is used to recalculate the wake-up time point W, which means the wake-up time point is brought forward. After the disk starts, if some I/O requests cannot be answered before the current time slot ends because the disk's processing capacity is insufficient, their execution is postponed to the beginning of the next time slot. Once the requests are completed, the prediction and calculation for the next round of time slots begin immediately.
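The second measure, dynamic adjustment of the wake-up point, can be sketched as follows (a simplified illustration with hypothetical names; the rule for recomputing W from an over-full queue is an assumption based on the disk's service rate C):

```python
def adjust_wakeup(W, queue_len, predicted, C):
    """Dynamic wake-up adjustment: an empty queue postpones the
    wake-up (return None: stay in the low-power state until a
    request arrives); a queue longer than the predicted count
    brings the wake-up forward by the extra service time the
    backlog needs at C requests/second; otherwise W stands."""
    if queue_len == 0:
        return None                                   # postpone W
    if queue_len > predicted:
        return W - (queue_len - predicted) / C        # wake earlier
    return W                                          # prediction holds

assert adjust_wakeup(7.0, 0, 300, 100.0) is None      # keep sleeping
assert adjust_wakeup(7.0, 400, 300, 100.0) == 6.0     # 1 s earlier
assert adjust_wakeup(7.0, 250, 300, 100.0) == 7.0     # unchanged
```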
In a concrete application, α is adjusted according to the accuracy of prediction using an adaptive mechanism. The mechanism uses a sliding window to store the historical information that affects prediction accuracy. When predicting the number of I/O requests in the (n+1)-th time slot, the number of I/O requests in the n-th time slot is predicted first, with the value of α swept from 0 to 1 in steps of 0.01. The predictions for the n-th time slot are then compared one by one with its true number of I/O requests (which is already known at the time point of the (n+1)-th time slot), and the α value whose prediction comes closest to the true number is used to predict the number of I/O requests in the (n+1)-th time slot.
In a concrete application, the switching of the disk's energy state is controlled according to the processing capacity of the particular disk drive and how busy the system is in the virtual machine environment, so as to maximize the energy saved. Let the energy consumption of the disk in the active, idle, and standby states be P_a, P_i, and P_s respectively. Then, within a single time slot, the time span B − R_p/C between the start point S of the slot and the wake-up time point W at which the disk is woken from the low-power state to execute I/O requests must be greater than the time T_p required to switch the disk from the low-power state to the working state:

B − R_p / C > T_p

where R_p/C is the time the disk drive spends serving R_p requests. This condition says that the idle period squeezed out at the beginning of each time slot must be longer than the time the disk drive takes to switch from active to idle and back to active. In addition, the energy saved must be greater than the energy E_p consumed by the switches between power states:

(P_i − P_s) · (B − R_p / C − T_p) > E_p

where R_p is the predicted number of requests arriving in the next time slot and P_a, P_i, and P_s are the energy consumption of the disk drive in the active, idle, and standby states respectively. This condition says that the energy saved must outweigh the energy cost of starting the disk drive. When these two constraints are satisfied, the disk can be switched from the high-power working state to the low-power state.
In a concrete application, the choice of the disk's wake-up time point W is determined dynamically by the system state. The selection method: the disk's I/O request queue is inspected periodically. If there is no I/O request in the queue, the disk remains in the low-power state until a new request arrives; this means that the wake-up time point W calculated from the number of I/O requests predicted by this prediction mechanism is postponed. If, before the wake-up time point is reached, the number N_r of I/O requests actually present in the queue is greater than the predicted value B_{n+1}^p, then N_r is used to recalculate the wake-up time point W, which means the wake-up time point W is brought forward.
Predicting the idle intervals in the load and switching the disk to the low-power state during those intervals is the usual method for reducing disk drive energy consumption. However, because the workload is concentrated, it is difficult to find long idle periods, and even when long idle periods exist, they are difficult to predict accurately. In a cloud environment, the physical resources of a modern computer system can run multiple virtual machines. For the virtual machines in a cloud environment, the present invention proposes to divide the workload of each virtual machine into time slots of equal length and to predict the number of I/O requests that will be received in each slot, while scheduling the execution times of the I/O requests in each slot to the end of that slot. By rescheduling the execution times of the I/O requests, the length of the effective idle period can be increased, and combined with disk power-state switching, the energy consumption of the storage subsystems of the virtual machines in the cloud environment can be reduced. The present invention can effectively reduce the energy consumption of storage-oriented virtual storage subsystems while guaranteeing the quality of service of the virtual machine system.
In conclusion present embodiment discloses a kind of by way of superposition amplification I/O load to reduce virtual machine environment
The method of lower disk subsystem energy proposes to use mould by the multiple different operating systems on different virtual machine of association
Formula amplifies the burst I/O behavior in load.I/O load is distributed in time upper equal time slot by this method, then
Predict upcoming number of request in each time slot, rather than the length of the free time section in traditional prediction I/O load
Degree.This method executes the execution time scheduling requested in each time slot to the end of the time slot simultaneously, because
This expands the length of free time.By deliberately remolding to I/O load, it is exaggerated disk free time interval experienced
Length, to save the energy of disk storage sub-system.Further, since having carried out the scheduling of I/O load deliberately, disk exists
When in running order, resource utilization is improved.
Embodiment two
Predicting the idle intervals in the load and switching the disk to the low-power state during those intervals is the usual method for reducing disk drive energy consumption. However, because the workload is concentrated, it is difficult to find long idle periods, and even when long idle periods exist, they are difficult to predict accurately. In a cloud environment, the physical resources of a modern computer system can run multiple virtual machines. For the virtual machines in a cloud environment, this embodiment proposes to divide the workload of each virtual machine into time slots of equal length and to predict the number of requests each slot will receive, while squeezing the execution times of the requests in each slot to the end of the slot. By reshaping the execution times of the requests, the length of the effective idle period can be increased, and combined with a disk power-management strategy, the energy consumption of the storage subsystems of the virtual machines in the cloud environment can be reduced.
This embodiment discloses a power-saving method for a virtual machine base environment, where S and E denote the start time point and end time point of a time slot, respectively. Assume five requests are distributed across three virtual machines. If the number of requests in a time slot on each virtual machine can be predicted accurately in advance, the exact time point W at which the disk drive needs to be woken can be computed; the disk is then switched from the low-power state to the high-power state and serves all requests in the slot without delay. For example, suppose the predicted numbers of I/O requests of the three virtual machines are 2, 2 and 1. These counts are accumulated and, taking the disk drive's service capability into account, the five requests are served at the end of the slot. The time span from the wake-up point W to the end point E of the slot equals the time the disk drive needs to start up from the low-power state into the active state plus Rp/C.
The power-saving method disclosed in this embodiment can effectively reduce the energy consumption of the cloud-oriented virtual machine storage subsystem while guaranteeing the quality of service of the virtual machine system.
The embodiments above are preferred embodiments of the present invention. Although the invention has been illustrated and described in detail, they do not limit its scope: any modification or equivalent that does not depart from the spirit and principle of the present solution is intended to fall within the scope of the claims of the invention.
Claims (6)
1. A cloud-environment-oriented virtual machine storage subsystem power-saving method, characterized in that the power-saving method comprises:
S1, load aggregation and amplification: the workload of each virtual machine in the same physical machine in the cloud environment is divided into time slots of equal length, and the time slots of the different virtual machines are aligned by start time and end time; the execution times of all I/O requests from the multiple different virtual machines within a unit time slot are then scheduled so that the requests are concentrated at the end of that time slot and sent to the disk, thereby amplifying the load;
S2, request prediction mechanism: the number of I/O requests in the next unit time slot is predicted, and, based on the disk's I/O processing capability, the wake-up time point W at which the disk is switched from the low-power state to the working state to serve those I/O requests is computed;
The number of I/O requests in a unit time slot is predicted by the following formula:

B^p_{n+1} = α·B^r_n + (1 − α)·B^p_n

where B^p_n is the predicted number of requests of the n-th time slot, B^r_n is the actual number of requests of the n-th time slot, B^p_{n+1} is the predicted number of requests of the (n+1)-th time slot, and the factor α is a coefficient that adjusts the influence of historical data on the prediction;
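The smoothing formula is a single line of arithmetic; a minimal sketch (function name assumed):

```python
def predict_next(alpha, actual_n, predicted_n):
    """Exponential smoothing: B^p_{n+1} = alpha*B^r_n + (1-alpha)*B^p_n.
    alpha = 1 trusts only the latest slot; alpha = 0 ignores it."""
    return alpha * actual_n + (1 - alpha) * predicted_n

print(predict_next(0.5, actual_n=8, predicted_n=4))  # 6.0
```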
The factor α is adjusted by an adaptation mechanism according to the accuracy of the prediction, as follows: a sliding window stores the historical information that affects prediction accuracy; to predict the number of I/O requests in the (n+1)-th time slot, the number of I/O requests in the n-th time slot is first predicted while the value of α is increased from 0 toward 1 in steps of 0.01; the predictions for the n-th time slot are then compared one by one with its actual number of I/O requests, and the α value for which the predicted count of the n-th time slot is closest to the actual count is used to predict the number of I/O requests in the (n+1)-th time slot;
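This adaptation mechanism amounts to a grid search over α. The sketch below replays smoothing over a window of actual counts and keeps the α whose one-step prediction of the most recent slot was closest; the window layout and the seeding choice are assumptions, not stated in the patent.

```python
def best_alpha(history, step=0.01):
    """history: actual request counts of the slots in the sliding window,
    oldest first. Try alpha = step, 2*step, ..., 1 and return the value
    whose prediction for the last slot is closest to its actual count."""
    *past, actual_last = history
    best, best_err = step, float("inf")
    alpha = step
    while alpha <= 1.0 + 1e-9:
        pred = past[0]                    # seed with the oldest observation
        for actual in past[1:]:
            pred = alpha * actual + (1 - alpha) * pred
        err = abs(pred - actual_last)
        if err < best_err:
            best, best_err = alpha, err
        alpha = round(alpha + step, 2)    # keep the 0.01 grid exact
    return best

# A steadily growing load rewards trusting the newest slot (alpha -> 1).
print(best_alpha([1, 2, 4, 8, 16]))  # 1.0
```

Rounding at each step keeps the grid on exact two-decimal values so that α reaches 1.0 precisely despite floating-point accumulation.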
S3, disk power-state switching: based on the predicted number of I/O requests, the disk is switched into the working state at the wake-up time point W; once those I/O requests have been processed, the disk immediately returns to the low-power state, serving only the I/O requests that actually arrive, until the start time point S of the next time slot is reached.
2. The cloud-environment-oriented virtual machine storage subsystem power-saving method according to claim 1, characterized in that the length of the time slot must satisfy the following condition:

B ≥ T_p + R_p/C

where B is the size of the time slot in seconds, C is the maximum I/O processing capacity of the disk in requests per second, T_p is the time the disk takes to switch from the working state to the low-power state and then back to the working state, and R_p is the number of requests in a single time slot.
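Checking whether a candidate slot length B is feasible is then a one-line test. Note the inequality form B ≥ T_p + R_p/C is reconstructed from the symbol definitions and claim 4, so treat it as an assumption.

```python
def slot_long_enough(slot_len, t_switch, requests, capacity):
    """Feasibility check for the slot size B: the slot must fit the
    power-state round trip Tp plus the Rp/C seconds of service time."""
    return slot_len >= t_switch + requests / capacity

print(slot_long_enough(10.0, 2.0, 5, 100.0))   # True
print(slot_long_enough(2.0, 2.0, 5, 100.0))    # False
```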
3. The cloud-environment-oriented virtual machine storage subsystem power-saving method according to claim 1, characterized in that the number R_p of requests in the next time slot is predicted from the predicted value and the actual value of the number of I/O requests in the current time slot.
4. The cloud-environment-oriented virtual machine storage subsystem power-saving method according to claim 1, characterized in that, within a single time slot, the time span (B − R_p/C) between the start time point S of the time slot and the wake-up time point W, at which the disk is woken from the low-power state to execute the I/O requests, must be at least the time T_p required for the disk to switch from the low-power state to the working state, that is:

B − R_p/C ≥ T_p

where B is the size of the time slot and R_p/C is the length of time the disk drive spends serving the R_p I/O requests.
5. The cloud-environment-oriented virtual machine storage subsystem power-saving method according to claim 1, characterized in that the energy saved by switching the disk from the working state to the low-power state and back to the working state must be greater than the energy E_p the disk consumes in performing the power-state switches, i.e.:

(P_i − P_s)·(B − R_p/C − T_p) > E_p

where R_p is the predicted number of I/O requests arriving in the next time slot, P_i and P_s are the energy consumption of the disk drive in the idle state and the standby state respectively, T_p is the time the disk takes to switch from the working state to the low-power state and then back to the working state, B is the size of the time slot, and R_p/C is the length of time the disk drive spends serving the R_p requests.
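The break-even test of claim 5 can be sketched numerically. The product form (P_i − P_s)·(B − R_p/C − T_p) > E_p is a reconstruction from the symbol definitions, so the exact expression should be treated as an assumption; the numbers are illustrative.

```python
def spindown_saves_energy(slot_len, serve_time, t_switch,
                          p_idle, p_standby, e_switch):
    """Energy saved by sitting in standby instead of idle for the
    unused part of the slot must exceed the switch energy Ep."""
    standby_time = slot_len - serve_time - t_switch   # B - Rp/C - Tp
    return (p_idle - p_standby) * standby_time > e_switch

# 10 s slot, 0.05 s of service, 2 s round trip, 8 W idle vs 1 W standby:
# saved = 7 W * 7.95 s = 55.65 J, well above an assumed 10 J switch cost.
print(spindown_saves_energy(10.0, 0.05, 2.0, 8.0, 1.0, 10.0))   # True
```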
6. The cloud-environment-oriented virtual machine storage subsystem power-saving method according to claim 1, characterized in that the selection of the wake-up time point W is determined dynamically by the state of the virtual machine storage subsystem, as follows:
the disk's I/O request queue is inspected periodically; if the queue contains no I/O requests, the disk remains in the low-power state until a new request arrives;
if, before the wake-up time point W is reached, the number N_r of I/O requests actually present in the I/O request queue exceeds the predicted value B^p_{n+1}, N_r is used to recompute the wake-up time point W.
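Recomputing W when the queue outgrows the prediction is the same arithmetic as the original wake-up computation, run with N_r instead of the predicted count. A sketch with illustrative names and values:

```python
def recompute_wakeup(slot_end, spinup_time, queued, predicted, capacity):
    """If Nr requests are already queued and Nr exceeds the prediction,
    size the batch by Nr so the larger batch still finishes by slot end."""
    batch = max(queued, predicted)
    return slot_end - (spinup_time + batch / capacity)

# 8 queued vs 5 predicted: wake 0.03 s earlier than originally planned.
print(recompute_wakeup(10.0, 2.0, queued=8, predicted=5, capacity=100.0))
```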
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610631537.7A CN106293000B (en) | 2016-08-02 | 2016-08-02 | A kind of virtual machine storage subsystem power-economizing method towards cloud environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106293000A CN106293000A (en) | 2017-01-04 |
CN106293000B true CN106293000B (en) | 2019-03-26 |
Family
ID=57665109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610631537.7A Expired - Fee Related CN106293000B (en) | 2016-08-02 | 2016-08-02 | A kind of virtual machine storage subsystem power-economizing method towards cloud environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106293000B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108153684B (en) * | 2017-12-30 | 2021-06-04 | 广东技术师范学院 | Disk Cache prefetch space adjusting method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103064730A (en) * | 2012-12-20 | 2013-04-24 | 华中科技大学 | Two-stage disc scheduling method orienting cloud computing environment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9152214B2 (en) * | 2013-05-16 | 2015-10-06 | Qualcomm Innovation Center, Inc. | Dynamic load and priority based clock scaling for non-volatile storage devices |
- 2016-08-02 CN CN201610631537.7A patent/CN106293000B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103064730A (en) * | 2012-12-20 | 2013-04-24 | 华中科技大学 | Two-stage disc scheduling method orienting cloud computing environment |
Non-Patent Citations (1)
Title |
---|
"磁盘阵列节能技术的研究与实现";刘珂;《中国优秀硕士学位论文全文数据库 信息科技辑》;20111215;2011年第S2期,I137-88,第I、16-34页 |
Also Published As
Publication number | Publication date |
---|---|
CN106293000A (en) | 2017-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020206705A1 (en) | Cluster node load state prediction-based job scheduling method | |
EP3036625B1 (en) | Virtual hadoop manager | |
CN101488098B (en) | Multi-core computing resource management system based on virtual computing technology | |
CN104239152B (en) | Method and apparatus for improving the turbine accelerating ability of event handling | |
CN102929720B (en) | A kind of energy-conservation job scheduling system | |
US20130167152A1 (en) | Multi-core-based computing apparatus having hierarchical scheduler and hierarchical scheduling method | |
CN105868004B (en) | Scheduling method and scheduling device of service system based on cloud computing | |
CN103927225A (en) | Multi-core framework Internet information processing and optimizing method | |
CN101339521A (en) | Tasks priority dynamic dispatching algorithm | |
CN114816715B (en) | Cross-region-oriented flow calculation delay optimization method and device | |
CN109960591A (en) | A method of the cloud application resource dynamic dispatching occupied towards tenant's resource | |
KR101770736B1 (en) | Method for reducing power consumption of system software using query scheduling of application and apparatus for reducing power consumption using said method | |
CN101819459B (en) | Heterogeneous object memory system-based power consumption control method | |
CN115878260A (en) | Low-carbon self-adaptive cloud host task scheduling system | |
CN106293000B (en) | A kind of virtual machine storage subsystem power-economizing method towards cloud environment | |
US9195514B2 (en) | System and method for managing P-states and C-states of a system | |
CN102043676A (en) | Visualized data centre dispatching method and system | |
Wang et al. | Power saving design for servers under response time constraint | |
CN101685335A (en) | Application server based on SEDA as well as energy-saving device and method thereof | |
CN110825212B (en) | Energy-saving scheduling method and device and computer storage medium | |
CN108415766A (en) | A kind of rendering task dynamic dispatching method | |
CN105706022B (en) | A kind of method, processing unit and the terminal device of prediction processor utilization rate | |
Li et al. | An energy efficient resource management method in virtualized cloud environment | |
CN110308991A (en) | A kind of data center's energy conservation optimizing method and system based on Random Task | |
EP3982258A1 (en) | Method and apparatus for reducing power consumption of virtual machine cluster |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 511458, room 407, room 4 (office only), No. 4, building 56, Hexing Road, Guangdong, Guangzhou, Nansha District (MZ)
Applicant after: GUANGDONG UNITEDDATA HOLDING GROUP Co.,Ltd.
Address before: 511458 Guangzhou City, Nansha District province Hexing Road, No. 4, building, room 4, floor 407, room 56
Applicant before: GUANGDONG NANYUN NETWORK TECHNOLOGY CO.,LTD. |
|
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20190326 |
|
CF01 | Termination of patent right due to non-payment of annual fee |