CN103095788A - Cloud resource scheduling policy based on network topology - Google Patents

Cloud resource scheduling policy based on network topology

Info

Publication number
CN103095788A
CN103095788A CN2011103553737A CN201110355373A
Authority
CN
China
Prior art keywords
resource
policy
host
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103553737A
Other languages
Chinese (zh)
Inventor
丁保剑
谭任辉
邓任远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PCI Suntek Technology Co Ltd
Original Assignee
PCI Suntek Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PCI Suntek Technology Co Ltd filed Critical PCI Suntek Technology Co Ltd
Priority to CN2011103553737A priority Critical patent/CN103095788A/en
Publication of CN103095788A publication Critical patent/CN103095788A/en
Pending legal-status Critical Current

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The invention discloses a network-topology-based resource model for a cloud platform and provides resource scheduling policies for virtual machine creation under that model. The scheduling policies include a fill-in policy, a smoothing policy, and a network-topology-based scheduling policy. The fill-in policy concentrates the resources requested by virtual machines as far as possible, reducing resource fragmentation and improving resource utilization. The smoothing policy keeps the utilization of every server and shared storage device as similar as possible, balancing load and safeguarding the overall performance of each server. The network-topology-based scheduling policy minimizes, as far as possible, the network path between the server resources and the shared storage resources allocated to a virtual machine, crossing data centers or clusters when allocating servers and shared storage only when absolutely necessary.

Description

A network-topology-based cloud resource scheduling policy
Technical field
The present invention relates to the field of computer combinatorial optimization, and in particular to a scheduling and optimization strategy for cloud platform resources.
Background technology
Cloud computing builds on long-term accumulation of computer technology and comprises key technologies such as software as a service, virtualization, and resource scheduling.
Current scheduling algorithms for cloud resources mostly consider cloud resources together with task scheduling. This document instead considers scheduling cloud resources at the moment a virtual machine is created: how to arrange limited cloud resources rationally so that the interests of both customer and provider are maximized is a problem in urgent need of a solution. The scheduling policies presented here address this problem well, and the accompanying optimization strategies greatly improve both the safety and the efficiency of the scheduling algorithms.
Summary of the invention
The technical problem to be solved by the invention is as follows: in a cloud platform environment, creating a virtual machine requires the cloud platform to supply cloud resources, and under limited resources various policies must be provided to satisfy the demands of virtual machine creation.
To achieve the above object, the invention provides a cloud-platform resource scheduling policy comprising a network-topology-based cloud platform resource model, a fill-in policy, a smoothing policy, a network-topology-based resource scheduling policy, and a set of algorithmic optimization strategies.
The network-topology-based cloud platform resource model provides a complete mathematical model for the scheduling algorithms. It comprises:
Cloud platform resources: cloud platform resources are divided into two kinds here, host resources and shared storage resources. Allocating resources to a virtual machine is the process of assigning host resources and shared storage resources to the virtual machine.
The information of a host resource comprises host ID, host name, data center, cluster, memory size, free memory, CPU usage, and so on, and can be expressed as the one-dimensional array [HostID, DataCenter, Cluster, TotalMemory, FreeMemory, CpuUtilization]. Host utilization: the overall utilization of the current host is measured as HostUtilization = a*CpuUtilization + b*FreeMemory/TotalMemory, where a, b ∈ [0, 1] and a + b = 1. The information of a shared storage volume comprises storage volume ID, storage file label, capacity, remaining space, and so on, and can be expressed as the one-dimensional array [HostVolumeID, VolumeLabel, Capacity, FreeSpace]. Storage volume utilization: VolumeUtilization = FreeSpace/Capacity.
There is a many-to-many relation between hosts and storage: one host can access several shared storage volumes, and one shared storage volume can be accessed by several hosts. When allocating resources to a virtual machine, the chosen host and storage must be related, i.e. the storage volume must be accessible from the host. Let the host–storage-volume relation matrix be Relation_Host_Volume for n hosts and m storage volumes, where 0 means the corresponding storage volume and host are unrelated and 1 means the corresponding host can access the corresponding storage volume. When creating a virtual machine, the selected host and shared storage must be related, i.e. Relation_Host_Volume[i][j] = 1 (i ∈ [1, n], j ∈ [1, m]) (1).
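As an illustration, the utilization measures and the reachability constraint (1) can be sketched in Python as follows. This is a minimal sketch with illustrative field names and toy data, not the patent's implementation; the weights a = 0.7, b = 0.3 are assumed values.

```python
def host_utilization(cpu_utilization, free_memory, total_memory, a=0.5, b=0.5):
    """HostUtilization = a*CpuUtilization + b*FreeMemory/TotalMemory, a+b=1."""
    assert abs(a + b - 1.0) < 1e-9 and 0 <= a <= 1 and 0 <= b <= 1
    return a * cpu_utilization + b * free_memory / total_memory

def volume_utilization(free_space, capacity):
    """VolumeUtilization = FreeSpace/Capacity."""
    return free_space / capacity

def is_reachable(relation_host_volume, host, volume):
    """Constraint (1): host i may only be paired with volume j
    when Relation_Host_Volume[i][j] == 1 (0-based indices here)."""
    return relation_host_volume[host][volume] == 1

# Two hosts, three volumes: host 0 reaches volumes 0 and 1, host 1 reaches volume 2.
relation = [[1, 1, 0],
            [0, 0, 1]]
print(is_reachable(relation, 0, 1))   # True
print(is_reachable(relation, 1, 0))   # False
print(host_utilization(0.6, 4, 16, a=0.7, b=0.3))
```

A scheduler would call `is_reachable` before considering any (host, volume) pair, so that constraint (1) filters the search space up front.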
Four matrices measure the network topology: the data center distance matrix, the host distance matrix, the storage volume distance matrix, and the host–storage distance matrix.
Data center distance matrix DistanceDC: measures the network topology distance between data centers. With d data centers the matrix is of size d*d; the larger the value of DistanceDC[i][j], the larger the distance between the data centers, where i, j ∈ [1, d].
Host distance matrix DistanceHost: measures the network topology distance between hosts. With d hosts the matrix is of size d*d. If two hosts are in the same data center, DistanceHost[i][j] directly expresses the network distance between them; otherwise the network distance is W*DistanceDC[k][l] + DistanceHost[i][j], where W is a constant and k, l are the data center numbers of Host_i and Host_j respectively.
Storage volume distance matrix DistanceVolume: measures the network topology distance between storage volumes. With d storage volumes the matrix is of size d*d; the larger the value of DistanceVolume[i][j], the larger the distance between the storage volumes, where i, j ∈ [1, d].
Host–storage distance matrix DistanceHostVolume: measures the network topology distance between hosts and storage volumes. With n hosts and m storage volumes the matrix is of size n*m; the larger the value of DistanceHostVolume[i][j], the larger the distance between the host and the storage volume, where i ∈ [1, n] and j ∈ [1, m].
Solution space:
Suppose resources are allocated to k virtual machines. A solution is
[(host_h1, volume_v1), (host_h2, volume_v2), ..., (host_hk, volume_vk)], where h1, h2, ..., hk ∈ [1, n] and v1, v2, ..., vk ∈ [1, m].
Let f(s, j) be the resource requested by the s-th virtual machine from host j:
f(s, j) = vm_s.host_resource if h_s = j, and f(s, j) = 0 otherwise,
where vm_s is the s-th virtual machine, the attribute host_resource denotes the requested host resource, and h_s is the label of the host allocated to the s-th virtual machine.
Similarly, let g(s, j) be the resource requested by the s-th virtual machine from shared storage volume j:
g(s, j) = vm_s.volume_resource if v_s = j, and g(s, j) = 0 otherwise,
where vm_s is the s-th virtual machine, the attribute volume_resource denotes the requested storage resource, and v_s is the label of the storage volume allocated to the s-th virtual machine.
Every host j must satisfy Σ_{i=1..k} f(i, j) < host_j.resource (2), and likewise every storage volume j must satisfy Σ_{i=1..k} g(i, j) < volume_j.resource (3).
Hosts and storage must also be related: for any pair (host_hs, volume_vs) in the solution, with h_s ∈ [1, n] and v_s ∈ [1, m], it must hold that Relation_Host_Volume[h_s][v_s] = 1.
The problem to be solved is therefore to find, under the above restrictions, a solution set that satisfies the customers' demands.
Network distance of a solution:
Let the solution be [(host_h1, volume_v1), (host_h2, volume_v2), ..., (host_hk, volume_vk)], where h1, h2, ..., hk ∈ [1, n] and v1, v2, ..., vk ∈ [1, m].
1) Create a set S = {}, initialized empty; set the network distance distance = 0 and count = 1.
2) Add (host_hcount, volume_vcount) to S, i.e. S = S ∪ {(host_hcount, volume_vcount)}. Update distance: let host_min be the host in S nearest to host_hcount, min ∈ [1, count-1]; then distance = distance + DistanceHost[min][h_count] + DistanceHostVolume[h_count][v_count].
3) count++;
if (count > k) return;
else go to step 2).
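The three steps above can be sketched directly. The following is a minimal sketch under the assumption of 0-based matrix indices; the distance matrices are illustrative toy data, and for the first pair (when S is still empty) only the host–volume distance is accumulated.

```python
def solution_distance(solution, dist_host, dist_host_volume):
    """solution: list of (host, volume) pairs for the k virtual machines.
    Returns the accumulated network distance defined in steps 1)-3)."""
    placed_hosts = []          # hosts of the pairs already placed in S
    distance = 0
    for host, volume in solution:
        if placed_hosts:
            # distance to the nearest already-placed host
            distance += min(dist_host[h][host] for h in placed_hosts)
        distance += dist_host_volume[host][volume]
        placed_hosts.append(host)
    return distance

# Toy matrices: 3 hosts, 2 volumes.
dist_host = [[0, 2, 5],
             [2, 0, 4],
             [5, 4, 0]]
dist_host_volume = [[1, 3],
                    [2, 1],
                    [3, 2]]

# VMs placed on (host0, vol0) and (host1, vol1): 1 + 2 + 1 = 4.
print(solution_distance([(0, 0), (1, 1)], dist_host, dist_host_volume))  # 4
```

A smaller returned value indicates a placement whose hosts and volumes sit closer together in the network topology, which is exactly the quantity the topology-based policy later minimizes.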
Fill-in policy, used to concentrate the resources requested by virtual machines as far as possible, reduce resource fragmentation, and improve resource utilization. It comprises:
Data center selection policy: preferentially select the data center where the current user is located; the system keeps a priority queue of data centers, and a defined policy modifies this queue on each host allocation success or failure.
Host selection policy: with the data center determined as above, the system keeps a host priority queue initialized to 0; every successful allocation to a host increases the host's value in the queue linearly, and every failed allocation decreases it exponentially. The fill-in policy preferentially selects the hosts with the larger weights in the queue. Combined with the segmented selection strategy, host information is obtained one segment at a time; the hosts of a segment are sorted by resource utilization, and hosts with high utilization are allocated first.
Storage selection policy: with the host determined, obtain the corresponding storage list (hosts and storage stand in a correspondence relation) and sort it by utilization. Provided the resources requested by the virtual machine are satisfied, the storage volume with the larger utilization is preferentially allocated.
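The weight bookkeeping of the host selection policy can be sketched as follows: weights grow linearly on success, shrink exponentially on failure, and the fill-in order prefers the heaviest and most utilized hosts. STEP and DECAY are assumed tuning constants, not values from the patent.

```python
STEP, DECAY = 1.0, 0.5   # assumed constants

def on_allocation(weights, host, success):
    if success:
        weights[host] += STEP          # linear increase on success
    else:
        weights[host] *= DECAY         # exponential decrease on failure

def fill_in_order(weights, utilization):
    """Rank hosts by queue weight, then by utilization (both descending),
    so requests pack onto already-busy hosts and fragmentation shrinks."""
    hosts = range(len(weights))
    return sorted(hosts, key=lambda h: (weights[h], utilization[h]), reverse=True)

weights = [0.0, 0.0, 0.0]
on_allocation(weights, 1, success=True)
on_allocation(weights, 1, success=True)
on_allocation(weights, 2, success=True)
on_allocation(weights, 2, success=False)        # weight halves
print(weights)                                  # [0.0, 2.0, 0.5]
print(fill_in_order(weights, [0.9, 0.4, 0.7]))  # [1, 2, 0]
```

The smoothing policy described next inverts this ordering (and leaves weights unchanged on failure), so the same bookkeeping serves both policies with a different sort direction.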
Smoothing policy, used to keep the utilization of every server and the shared storage above it as similar as possible, balance load, and safeguard overall server performance. It comprises a data center selection policy, a host selection policy, and a storage selection policy.
Data center selection policy: preferentially select the data center where the current user is located; the system keeps a priority queue of data centers, and a defined policy modifies this queue on each host allocation success or failure.
Host selection policy: with the data center determined as above, the system keeps a host priority queue initialized to 0; every successful allocation increases the corresponding host's value linearly, while a failed allocation leaves it unchanged. The smoothing policy preferentially selects the hosts with the smaller weights in the queue, i.e. the hosts least used recently. Combined with the segmented selection strategy, host information is obtained one segment at a time; the hosts of a segment are sorted by resource utilization, and hosts with low utilization are allocated first.
Storage selection policy: with the host determined, obtain the corresponding storage list (hosts and storage stand in a correspondence relation) and sort it by utilization. Provided the resources requested by the virtual machine are satisfied, the storage volume with the smaller utilization is preferentially allocated.
The network-topology-based resource scheduling policy is used to ensure as far as possible that the network path between the server resources and the shared storage resources allocated to a virtual machine is minimal, crossing data centers and clusters when allocating servers and shared storage only as a last resort. Simulated annealing (SA) is combined with the model to compute the globally optimal solution with the minimum overall network topology path: as the temperature parameter falls continuously, a probabilistic jump characteristic lets the search randomly seek the global optimum of the objective function in the solution space, i.e. it can jump out of local optima with some probability and finally tend towards the global optimum. The neighborhood is defined by selecting the resources allocated to one virtual machine in the current solution vector and transforming that component: a host and one of the storage volumes associated with that host are selected at random (formulas (1), (2), (3) must be satisfied).
Algorithm optimization strategies, characterized in that they comprise cache management, a host and storage segment selection strategy, concurrency control, cache pre-reading, and a cache clearing strategy.
Cache management: the system maintains a cache registration table through which the caches designed for the scheduling policies are managed uniformly. Every operation that reads or writes a cache must pass through the cache management module, and only entries successfully registered in the cache registration table can be accessed.
Host and storage segment selection strategy: whether the scheduling policy is fill-in, smoothing, or network-topology-based, it must weigh all host and storage information when scheduling, and obtaining the information of all hosts and storage is usually too costly for scheduling. A segmented selection strategy is therefore adopted: the hosts are first ordered by queue weight and divided into segments, and the fill-in, smoothing, and other policies then make their preferred selection of hosts and storage within a segment.
Concurrency control: to guarantee thread safety, cache reads and writes are strictly controlled; every read or write of a resource cache must be protected by a mutex.
Cache pre-reading: to speed up the scheduling policies, the number of times the real-time state of resources is fetched during scheduling is reduced. The scheduling framework provides a method that obtains the cloud resource information uniformly and reads it into the cache in advance, while guaranteeing cache read-write safety with respect to the scheduling process.
Cache clearing strategy: the cache does not necessarily reflect the current cloud resource situation, since resources already allocated by past scheduling are also taken into account; the cache is thus a prediction of the remaining resources, and the real resources may well exceed what the cache records. A strategy is therefore proposed: after MAX_ERROR consecutive allocation failures the cache is emptied.
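The cache-clearing rule can be sketched in a few lines: after MAX_ERROR consecutive allocation failures the cached view of the cloud resources is assumed stale and flushed. MAX_ERROR = 3 and the cache structure are assumed for illustration.

```python
MAX_ERROR = 3   # assumed threshold

class ResourceCache:
    def __init__(self):
        self.entries = {}      # cached (predicted) resource state
        self.failures = 0      # consecutive failed allocations

    def record_allocation(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= MAX_ERROR:
                self.entries.clear()   # cache no longer trusted: flush it
                self.failures = 0

cache = ResourceCache()
cache.entries["host-1"] = {"free_memory": 2048}
for ok in (False, False, True, False, False, False):
    cache.record_allocation(ok)
print(cache.entries)   # {} — three consecutive failures emptied the cache
```

Note that a success in the middle of a failure run resets the counter, so only genuinely consecutive failures trigger the flush.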
As can be seen from the above scheme, the invention proposes a network-topology-based resource model of a cloud platform and, under this model, resource scheduling policies for virtual machine creation, including the fill-in, smoothing, and network-topology-based policies; on this basis, the efficiency and concurrency safety of the algorithms are optimized, and practice shows that the efficiency of the scheduling policies is greatly improved.
Description of drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the cloud resource model in an embodiment of the present invention;
Fig. 2 is a flow chart of the network-topology-based scheduling algorithm in an embodiment of the present invention.
Embodiment
To make the above objects, features, and advantages of the present invention more apparent, the invention is explained in further detail below with reference to the drawings and specific embodiments. The described embodiments are obviously only some, not all, of the embodiments of the invention; all other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the invention.
Embodiment, referring to Fig. 1, Fig. 2.
The invention provides a network-topology-based resource model of a cloud platform and proposes resource scheduling policies for virtual machine creation under this model, including the fill-in, smoothing, and network-topology-based policies, as follows:
Fill-in policy, used to concentrate the resources requested by virtual machines as far as possible, reduce resource fragmentation, and improve resource utilization. It comprises:
Data center selection policy: preferentially select the data center where the current user is located; the system keeps a priority queue of data centers initialized to 0, and every successful host allocation increases the corresponding data center's value in the queue linearly, while a failed host allocation decreases it exponentially.
Host selection policy: with the data center determined as above, the system keeps a host priority queue initialized to 0; every successful allocation increases the corresponding host's value linearly, and a failed allocation decreases it exponentially. The fill-in policy preferentially selects the hosts with the larger weights in the queue. Because allocation of host resources must be based on the current real-time state of the hosts, obtaining and comparing the information of all hosts just to allocate resources for one virtual machine would be very inadvisable. A segmented selection strategy is therefore adopted: the hosts are first ordered by queue weight and divided into segments. If a segment is too large, the fetched host list becomes too bulky and takes too long to read; if it is too small, the fetched host information is too local and loses its global character. Choosing a suitable segment size is therefore important for host selection, and the host segment size PARTITION_LEN is set here.
The host priority queue is sorted and host information is fetched one segment of size PARTITION_LEN at a time; the hosts of a segment are sorted by resource utilization, and hosts with high utilization are allocated first.
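The segmented selection can be sketched as follows: instead of fetching live state for every host, the weight-ordered host list is walked one segment of PARTITION_LEN hosts at a time. PARTITION_LEN = 4, the host names, and the `fits` predicate are assumed for illustration.

```python
PARTITION_LEN = 4   # assumed segment size

def segments(hosts_by_weight, length=PARTITION_LEN):
    """Yield successive segments of the weight-ordered host list."""
    for start in range(0, len(hosts_by_weight), length):
        yield hosts_by_weight[start:start + length]

def pick_host(hosts_by_weight, fetch_utilization, fits):
    """Walk the segments; inside each, sort by live utilization (descending,
    as the fill-in policy prefers) and return the first host that fits."""
    for segment in segments(hosts_by_weight):
        for host in sorted(segment, key=fetch_utilization, reverse=True):
            if fits(host):
                return host
    return None

hosts_by_weight = ["h5", "h2", "h7", "h1", "h3", "h4"]
util = {"h5": 0.5, "h2": 0.8, "h7": 0.3, "h1": 0.6, "h3": 0.9, "h4": 0.2}
print(pick_host(hosts_by_weight, util.get, fits=lambda h: h != "h2"))  # "h1"
```

Only one segment's worth of live utilization is fetched per iteration, which is the cost saving the segmentation is meant to provide; the smoothing variant would sort each segment ascending instead.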
Storage selection policy: with the host determined, obtain the corresponding storage list (hosts and storage stand in a correspondence relation) and sort it by utilization. Provided the resources requested by the virtual machine are satisfied, the storage volume with the larger utilization is preferentially allocated.
Smoothing policy, used to keep the utilization of every server and the shared storage above it as similar as possible, balance load, and safeguard overall server performance. It comprises a data center selection policy, a host selection policy, and a storage selection policy.
Data center selection policy: preferentially select the data center where the current user is located; the system keeps a priority queue of data centers initialized to 0, and every successful host allocation increases the corresponding data center's value in the queue linearly, while a failed host allocation decreases it exponentially.
Host selection policy: with the data center determined as above, the system keeps a host priority queue initialized to 0; every successful allocation increases the corresponding host's value linearly, while a failed allocation leaves it unchanged. The smoothing policy preferentially selects the hosts with the smaller weights in the queue, i.e. the hosts least used recently. Because allocation of host resources must be based on the current real-time state of the hosts, obtaining and comparing the information of all hosts just to allocate resources for one virtual machine would be very inadvisable. A segmented selection strategy is therefore adopted: the hosts are first ordered by queue weight and divided into segments. If a segment is too large, the fetched host list becomes too bulky and takes too long to read; if it is too small, the fetched host information is too local and loses its global character. Choosing a suitable segment size is therefore important for host selection, and the host segment size PARTITION_LEN is set here. The host priority queue is sorted and host information is fetched one segment of size PARTITION_LEN at a time; the hosts of a segment are sorted by resource utilization, and hosts with low utilization are allocated first.
Storage selection policy: with the host determined, obtain the corresponding storage list (hosts and storage stand in a correspondence relation) and sort it by utilization. Provided the resources requested by the virtual machine are satisfied, the storage volume with the smaller utilization is preferentially allocated.
The network-topology-based resource scheduling policy is used to ensure as far as possible that the network path between the server resources and the shared storage resources allocated to a virtual machine is minimal, crossing data centers and clusters when allocating servers and shared storage only as a last resort. Simulated annealing (SA) is combined with the model to compute the globally optimal solution with the minimum overall network topology path: as the temperature parameter falls continuously, a probabilistic jump characteristic lets the search randomly seek the global optimum of the objective function in the solution space, i.e. it can jump out of local optima with some probability and finally tend towards the global optimum.
The concrete flow of the algorithm is shown in Fig. 2.
1) Define the temperature schedule ST = (t1, t2, ..., tq), where the q entries are distinct temperature values.
2) Generate an initial solution with the fill-in algorithm; set i = 1 and T = t1, where T is the current temperature.
3) Produce a new solution in the neighborhood: select the resources allocated to one virtual machine in the current solution vector and transform that component, choosing at random a host and one of the storage volumes associated with it (formulas (1), (2), (3) must be satisfied); compute the network distance F2 of the new solution.
4) Compute the change between the objective values F1 and F2; let Δt be the difference between the new and old objective values. If Δt < 0, go to step 6); otherwise go to step 5).
5) Draw a uniform random number u ∈ [0, 1] and compute the acceptance probability P of the new solution; if u > P, keep the old solution and go to step 3).
6) Accept the newly produced objective value.
7) If the system has reached heat balance at the current temperature, set i = i + 1; if i > q, stop the computation, otherwise set T = t_i and go to step 3). If heat balance has not yet been reached, continue at step 3) with the temperature T unchanged.
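The annealing loop can be sketched as below. This is a hedged sketch, not the patent's implementation: the temperature table, the neighbour move, and a fixed iteration count standing in for the heat-balance test are all simplifications, and a toy 1-D objective replaces the network-distance function so the sketch is self-contained.

```python
import math
import random

def anneal(initial, objective, neighbour, temps, iters_per_temp=50, seed=0):
    rng = random.Random(seed)
    current, f1 = initial, objective(initial)
    best, best_f = current, f1
    for t in temps:                         # step 1): falling temperature table
        for _ in range(iters_per_temp):     # crude stand-in for heat balance
            cand = neighbour(current, rng)  # step 3): random neighbour move
            f2 = objective(cand)
            delta = f2 - f1                 # step 4): new minus old objective
            # steps 5)-6): always accept improvements; accept a worse solution
            # with probability exp(-delta / t) to escape local optima
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current, f1 = cand, f2
                if f1 < best_f:
                    best, best_f = current, f1
    return best, best_f

# Toy stand-in for the placement problem: minimise x^2, neighbour = +/-1 step.
best, best_f = anneal(
    initial=8,
    objective=lambda x: x * x,
    neighbour=lambda x, rng: x + rng.choice((-1, 1)),
    temps=[10.0, 5.0, 1.0, 0.1],
)
print(best, best_f)
```

In the patent's setting, `objective` would be the network distance of steps 1)-3) of the solution-distance calculation, and `neighbour` would reassign one virtual machine to a random host and an associated storage volume satisfying formulas (1), (2), (3).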
Algorithm optimization strategies, characterized in that they comprise cache management, a host and storage segment selection strategy, concurrency control, cache pre-reading, and a cache clearing strategy.
Cache management: the system maintains a cache registration table through which the caches designed for the scheduling policies are managed uniformly. Every operation that reads or writes a cache must pass through the cache management module, and only entries successfully registered in the cache registration table can be accessed. Considering that several hosts may access the same shared storage, when a storage volume is accessed the host–storage-volume relation is consulted first to determine which storage volumes correspond to the current host; the real-time information of a storage volume is fetched from the server only when the volume's information is absent from the cache or has expired.
Host and storage segment selection strategy: whether the scheduling policy is fill-in, smoothing, or network-topology-based, it must weigh all host and storage information when scheduling, and obtaining the information of all hosts and storage is usually too costly for scheduling. A segmented selection strategy is therefore adopted: the hosts are first ordered by queue weight and divided into segments, and the fill-in, smoothing, and other policies then make their preferred selection of hosts and storage within a segment. If a segment is too large, the fetched host list becomes too bulky and takes too long to read; if it is too small, the fetched host information is too local and loses its global character. Choosing a suitable segment size is therefore important for host selection, and the host segment size PARTITION_LEN is set here.
Concurrency control: to guarantee thread safety, cache reads and writes are strictly controlled; every read or write of a resource cache must be protected by a mutex.
Cache pre-reading: to speed up the scheduling policies, the number of times the real-time state of resources is fetched during scheduling is reduced. The scheduling framework provides a method that obtains the cloud resource information uniformly. Pre-reading refreshes all the caches and could therefore affect the correctness of a scheduling algorithm that is already running, so pre-reading and scheduling are made mutually exclusive: before a pre-read, the system must first check whether any thread is performing a scheduling operation and, if so, wait for scheduling to complete before pre-reading; conversely, before scheduling it must check whether a pre-read is in progress, and if so the scheduling operation is simply stopped and the user is prompted, since a pre-read generally takes a long time.
Cache clearing strategy: the cache does not necessarily reflect the current cloud resource situation, since resources allocated by past scheduling are also taken into account, i.e. the cache is a prediction of the remaining resources. Suppose a previous scheduling run succeeded: the cache then records the post-allocation state, yet the actual creation of the virtual machine may still fail even though the allocated resources were ample. The cached state is therefore a prediction of the remaining resources, and the real resources may exceed what the cache records. To handle this situation, after MAX_ERROR consecutive allocation failures the cache is cleared.
A cloud-platform resource scheduling policy provided by the invention therefore has the following advantages.
(1) Different user requests are taken into account, and different policies are provided to satisfy them. All policies implement the same interface, so different scheduling policies are realized simply by substituting the concrete implementation class; in concrete use the scheduling policy can be controlled directly through a configuration item, allowing flexible modification and replacement.
(2) Guarantees are made for the utilization of cloud resources and the overall performance of virtual machines.
The fill-in policy effectively improves the overall utilization of cloud resources; the smoothing policy keeps the gaps between resources such as hosts and storage as small as possible, safeguarding the overall performance of the cloud resources; and the network-topology-based policy reduces the communication cost between virtual machines, improving their overall performance.
The above are only specific embodiments of the present invention. It should be pointed out that those skilled in the art can make further improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (5)

1. A network-topology-based cloud platform resource model, characterized by comprising:
A definition and description of cloud platform resources: cloud platform resources are divided into two kinds, host resources and shared storage resources; allocating resources to a virtual machine is the process of assigning host resources and shared storage resources to that virtual machine;
A standardized definition of host resource information, host utilization, shared storage volume information, storage volume utilization, and the host-storage volume relation matrix;
A scheme for measuring network topology distance: the network distance of a candidate solution is computed from the data center distance matrix, the host distance matrix, the storage volume distance matrix, and the host-storage distance matrix, and is used to measure the quality of the solution; mathematical expressions are given for the legal solutions of the problem and for the formula computing a solution's network distance.
2. A filling strategy, characterized in that the resources requested by virtual machines are concentrated as much as possible, reducing resource fragmentation and improving resource utilization.
3. A smoothing strategy, characterized in that the utilization of every server and of the shared storage is kept as similar as possible, balancing load and guaranteeing the overall performance of the servers.
4. A network-topology-based resource scheduling strategy, characterized in that simulated annealing (SA) is used to compute a globally optimal solution minimizing the overall network topology path, reducing the network topology distance between virtual machines, lowering the cost of inter-virtual-machine communication, and improving overall virtual machine performance.
5. An algorithm optimization strategy, characterized by comprising optimizations such as cache management, a host and storage segmented selection strategy, concurrency control, cache pre-reading, and a cache clearing strategy;
Cache management: the system maintains a unified cache registry that manages all caches designed by the scheduling strategy. Every read or write of a cache must be controlled by the cache management module, and only entries successfully registered in the cache registry may be accessed;
Host and storage segmented selection strategy: whether filling, smoothing, or network-topology-based, every scheduling strategy must weigh all host and storage information, and obtaining all of that information is often too costly for scheduling. A segmented selection strategy is therefore adopted: the host queue is first partitioned into segments ordered by host weight, and within a segment the filling, smoothing, and other strategies then make their preferred selection of hosts and storage;
Concurrency control: to guarantee thread safety under concurrency, cache reads and writes are strictly controlled; every read or write of a resource cache must be protected by a mutex lock to guarantee its safety;
Cache pre-reading: speeds up the scheduling strategy and reduces the number of real-time resource status queries made during scheduling; Cache clearing strategy: the cached state is a prediction of the remaining resources, and the real resources may exceed what the cache records; to handle this situation, the cache is cleared after MAX_ERROR (a constant) consecutive allocation failures.
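Claim 4 names simulated annealing but the claims do not spell out the algorithm or the cost function. The following is a minimal sketch under stated assumptions: the cost of a placement is taken to be the sum of pairwise host distances (as from the host distance matrix of claim 1) between the hosts assigned to the virtual machines, and all function and parameter names are hypothetical.

```python
import math
import random

def anneal_placement(host_dist, n_vms, n_hosts,
                     steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Sketch of SA for VM placement; cost = sum of pairwise host
    distances between the VMs' assigned hosts (an assumed metric)."""
    rng = random.Random(seed)

    def cost(placement):
        return sum(host_dist[placement[i]][placement[j]]
                   for i in range(n_vms) for j in range(i + 1, n_vms))

    current = [rng.randrange(n_hosts) for _ in range(n_vms)]
    cur_cost = cost(current)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(steps):
        cand = list(current)
        cand[rng.randrange(n_vms)] = rng.randrange(n_hosts)  # move one VM
        delta = cost(cand) - cur_cost
        # Accept improvements always; accept worse moves with
        # Boltzmann probability exp(-delta / t), cooling t each step.
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            current, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return best, best_cost
```

In a real scheduler the cost would also incorporate the data center, storage volume, and host-storage distance matrices of claim 1, and moves would be constrained to hosts with sufficient free capacity.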
CN2011103553737A 2011-11-02 2011-11-02 Cloud resource scheduling policy based on network topology Pending CN103095788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103553737A CN103095788A (en) 2011-11-02 2011-11-02 Cloud resource scheduling policy based on network topology

Publications (1)

Publication Number Publication Date
CN103095788A true CN103095788A (en) 2013-05-08

Family

ID=48207916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103553737A Pending CN103095788A (en) 2011-11-02 2011-11-02 Cloud resource scheduling policy based on network topology

Country Status (1)

Country Link
CN (1) CN103095788A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090138887A1 (en) 2007-11-28 2009-05-28 Hitachi, Ltd. Virtual machine monitor and multiprocessor system
CN101800762A (en) * 2009-12-30 2010-08-11 中兴通讯股份有限公司 Service cloud system for fusing multiple services and service implementation method
CN101854667A (en) * 2010-05-19 2010-10-06 中兴通讯股份有限公司 Cloud computing supporting mobile terminal side load balancing processing method and device
CN101894050A (en) * 2010-07-28 2010-11-24 山东中创软件工程股份有限公司 Method, device and system for flexibly scheduling JEE application resources of cloud resource pool

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103414657A (en) * 2013-08-22 2013-11-27 浪潮(北京)电子信息产业有限公司 Cross-data-center resource scheduling method, super scheduling center and system
CN103812930A (en) * 2014-01-16 2014-05-21 华为技术有限公司 Method and device for resource scheduling
WO2015106618A1 (en) * 2014-01-16 2015-07-23 华为技术有限公司 Resource scheduling method and apparatus
CN103812930B (en) * 2014-01-16 2017-10-17 华为技术有限公司 A kind of method and device of scheduling of resource
CN103870339B (en) * 2014-03-06 2017-12-15 上海华为技术有限公司 A kind of cluster resource distribution method and device
CN104580447A (en) * 2014-12-29 2015-04-29 中国科学院计算机网络信息中心 Spatio-temporal data service scheduling method based on access heat
CN104580447B (en) * 2014-12-29 2019-04-09 中国科学院计算机网络信息中心 A kind of space-time data service scheduling method based on access temperature
CN105553741A (en) * 2015-12-28 2016-05-04 江苏省电力公司信息通信分公司 Automatic deployment method for application system based on cloud computing
WO2018001269A1 (en) * 2016-07-01 2018-01-04 华为技术有限公司 Method of processing cloud resource, and physical node
US10897431B2 (en) 2016-07-01 2021-01-19 Huawei Technologies Co., Ltd. Cloud resource processing method and physical node
CN107168797A (en) * 2017-05-12 2017-09-15 中国人民解放军信息工程大学 Resource regulating method based on dynamic game under cloud environment
CN107273308A (en) * 2017-06-12 2017-10-20 上海优刻得信息科技有限公司 A kind of shared buffer memory distribution method, device, medium and equipment based on CAT
CN107872517A (en) * 2017-10-23 2018-04-03 北京奇艺世纪科技有限公司 A kind of data processing method and device
CN107872517B (en) * 2017-10-23 2020-11-27 北京奇艺世纪科技有限公司 Data processing method and device
CN107748693A (en) * 2017-11-30 2018-03-02 成都启力慧源科技有限公司 Group's virtual machine scheduling policy under cloud computing environment
CN108170520A (en) * 2018-01-29 2018-06-15 北京搜狐新媒体信息技术有限公司 A kind of cloud computing resources management method and device
CN114424948A (en) * 2021-12-15 2022-05-03 上海交通大学医学院附属瑞金医院 Distributed ultrasonic scanning system and communication method
CN114424948B (en) * 2021-12-15 2024-05-24 上海交通大学医学院附属瑞金医院 Distributed ultrasonic scanning system and communication method

Similar Documents

Publication Publication Date Title
CN103095788A (en) Cloud resource scheduling policy based on network topology
US11237871B1 (en) Methods, systems, and devices for adaptive data resource assignment and placement in distributed data storage systems
US9542223B2 (en) Scheduling jobs in a cluster by constructing multiple subclusters based on entry and exit rules
CN102156665B (en) Differential serving method for virtual system competition resources
KR101502896B1 (en) Distributed memory cluster control apparatus and method using map reduce
US8972986B2 (en) Locality-aware resource allocation for cloud computing
US9798642B2 (en) Method for allocating a server amongst a network of hybrid storage devices
CN102521052B (en) Resource allocation method of virtualized data center and virtual machine monitor
US10356150B1 (en) Automated repartitioning of streaming data
US20110137805A1 (en) Inter-cloud resource sharing within a cloud computing environment
CN104679594B (en) A kind of middleware distributed computing method
CN110058932A (en) A kind of storage method and storage system calculated for data flow driven
CN107450855B (en) Model-variable data distribution method and system for distributed storage
CN102857548A (en) Mobile cloud computing resource optimal allocation method
CN106126340A (en) A kind of reducer system of selection across data center's cloud computing system
Ma et al. Dependency-aware data locality for MapReduce
CN111767139A (en) Cross-region multi-data-center resource cloud service modeling method and system
Pandya et al. Dynamic resource allocation techniques in cloud computing
Anan et al. SLA-based optimization of energy efficiency for green cloud computing
Sareen et al. Resource allocation strategies in cloud computing
Mirtaheri et al. Adaptive load balancing dashboard in dynamic distributed systems
US11336519B1 (en) Evaluating placement configurations for distributed resource placement
Lin et al. A workload-driven approach to dynamic data balancing in MongoDB
Wang et al. A Cloud‐Computing‐Based Data Placement Strategy in High‐Speed Railway
CN106027685A (en) Peak access method based on cloud computation system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130508