CN106970824A - Virtual machine migration compression method and system based on bandwidth awareness - Google Patents

Virtual machine migration compression method and system based on bandwidth awareness Download PDF

Info

Publication number
CN106970824A
CN106970824A
Authority
CN
China
Prior art keywords
compression
migration
virtual machine
speed
bandwidth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710129704.2A
Other languages
Chinese (zh)
Other versions
CN106970824B (en)
Inventor
冯丹
华宇
李春光
秦磊华
黄月
周玉坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201710129704.2A priority Critical patent/CN106970824B/en
Publication of CN106970824A publication Critical patent/CN106970824A/en
Application granted granted Critical
Publication of CN106970824B publication Critical patent/CN106970824B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4856Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/563Data redirection of data network streams
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a virtual machine migration compression method and system based on bandwidth awareness, belonging to the field of computer virtualization. The method of the invention detects the network bandwidth at a preset frequency, computes a migration speed for each compression-ratio/compression-speed pair in a compression policy table using the measured bandwidth, and selects the compression method corresponding to the largest migration speed for the compressed migration. Before the memory data are compressed, multiple pages are first merged into one packet and the packet is compressed as a whole, and this continues until the compressed migration is complete. The invention also provides a virtual machine migration compression system based on bandwidth awareness. By dynamically adjusting the compression method according to the bandwidth, the technical scheme of the invention lets the migration system obtain a higher migration speed and thus a shorter migration time, while reducing the amount of transmitted data and saving network resources.

Description

Virtual machine migration compression method and system based on bandwidth awareness
Technical field
The invention belongs to the field of computer virtualization, and more particularly relates to a virtual machine migration compression method and system based on bandwidth awareness.
Background technology
In recent years, with the development of cloud computing and virtualization technology, virtual machines have been deployed more and more widely in data centers and cluster environments. Because a virtual machine abstracts and simulates computer resources, virtual hardware resources can be emulated on top of existing computer hardware resources; virtualization therefore offers many advantages, such as simulating different platforms, improving the utilization of computer resources, easing management, and isolating applications.
Virtual machine migration refers to moving a virtual machine between different physical hosts while the services running in the virtual machine keep operating normally. To guarantee the availability of the virtual machine's services, the migration process involves only a very brief downtime. Because the switch-over downtime is so short, users do not perceive any service interruption, and the migration is transparent to them. Virtual machine migration is used in many scenarios such as load balancing, energy saving, and system maintenance in data centers, and is therefore a very important feature of virtualization technology.
Virtual machine migration is usually performed within a local area network, where virtual machines access external storage through shared storage, so only the device state, such as the memory data and the virtual CPU state, needs to be migrated, and the virtual machine memory accounts for the overwhelming majority of the data to be migrated. Pre-copy is the main migration algorithm widely adopted by the various virtualization platforms. The pre-copy migration process first copies the complete virtual machine memory image to the destination host. Because the virtual machine keeps running during this process, some pages are modified, and these dirty memory pages must be transferred to the destination host again in the next round of iteration. Dirty pages produced in each round of iteration likewise have to be retransmitted in the following round, so that the memory state remains consistent. After several rounds of iteration the number of remaining dirty pages becomes small; once it falls below a preset threshold, the stop-and-copy phase is performed and the iterative copying ends.
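For illustration, the pre-copy flow described above can be sketched in Python as follows; the helpers vm.all_page_numbers, vm.collect_dirty_pages, send_pages, and vm.resume_on_destination are hypothetical stand-ins for hypervisor internals, not names taken from the patent:

    def precopy_migrate(vm, send_pages, dirty_threshold):
        # Round 0: transfer the complete memory image while the VM keeps running.
        pending = set(vm.all_page_numbers())
        while True:
            send_pages(vm, pending)              # transmit this round's pages
            pending = vm.collect_dirty_pages()   # pages modified during the transfer
            if len(pending) <= dirty_threshold:  # few enough dirty pages remain
                break                            # iterative copying has converged
        vm.pause()                               # stop-and-copy: brief downtime begins
        send_pages(vm, pending)                  # final dirty pages and CPU state
        vm.resume_on_destination()               # downtime ends at the destination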
Although the existing pre-copy migration mode can achieve a short downtime, it has the following problems. Because the memory data need many rounds of iterative transmission, the amount of data transmitted over the network is large and the migration time is long. In addition, if the load running in the virtual machine is write-intensive, the memory may be dirtied too fast; in that case the pre-copy migration cannot converge to the stop-and-copy phase and the migration cannot complete normally. These problems severely affect the performance of virtual machine migration, so that the expected benefits of virtual machine migration technology cannot be obtained in the data center.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a virtual machine migration compression method and system based on bandwidth awareness. Its purpose is to first measure the compression ratio and compression speed of a variety of compression methods applied to a variety of typical loads and build a compression policy table; then to sense the bandwidth at a preset frequency, compute the migration speed of each compression method under the current bandwidth, and perform the compressed migration with the compression method corresponding to the largest migration speed, thereby solving the problems of conventional compressed migration techniques.
To achieve the above object, according to one aspect of the present invention, there is provided a virtual machine migration compression method based on bandwidth awareness. The method detects the network bandwidth at a preset frequency, computes a migration speed from the bandwidth and each compression-ratio/compression-speed pair in a compression policy table, and compresses and migrates the current memory data of the virtual machine with the compression method corresponding to the largest migration speed.
Further, the method of the invention specifically comprises the following steps:
(1) Monitor the network bandwidth and obtain the real-time network bandwidth St available to the virtual machine migration;
(2) Using the compression ratio ρi and compression speed Sci of each compression method in the compression policy table, compute the migration speed Smgti,
Smgti = min(Sci, St × ρi),
obtain the migration speeds of all compression methods, and find the maximum by comparison;
(3) Find the compression ratio and compression speed that yield the maximum migration speed, and compress and migrate the current memory data of the virtual machine with the corresponding compression method in the compression policy table.
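For illustration, steps (2) and (3) can be sketched as follows, assuming the compression policy table is a list of (name, average compression ratio, average compression speed) entries; the names and numbers in the usage example are illustrative, not values from the patent:

    def select_compression_method(policy_table, bandwidth):
        # policy_table: list of (name, rho_i, Sc_i); bandwidth: measured St in bytes/s.
        # Migration speed of method i is Smgt_i = min(Sc_i, St * rho_i):
        # either the compressor or the network link is the bottleneck.
        best_name, best_speed = None, -1.0
        for name, ratio, comp_speed in policy_table:
            migration_speed = min(comp_speed, bandwidth * ratio)
            if migration_speed > best_speed:
                best_name, best_speed = name, migration_speed
        return best_name, best_speed

    # On a 500 MB/s link, a 1 GB/s compressor with ratio 2.0 beats a
    # 3 GB/s compressor with ratio 1.3 (1.0e9 versus 6.5e8 bytes/s).
    table = [("slow-strong", 2.0, 1.0e9), ("fast-weak", 1.3, 3.0e9)]
    print(select_compression_method(table, 5.0e8))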
Further, the compression policy table is obtained in advance by the following method:
A plurality of typical loads of the data center running environment are run in the virtual machine one after another, and for each load the memory data are compressed with each of a plurality of compression methods; each compression method yields a pair consisting of a compression ratio and a compression speed, and all compression methods together with their compression ratios and compression speeds form the compression policy table.
Further, the method of obtaining the compression policy table specifically includes the following sub-steps:
(31) Select one typical data-center load and run it in the virtual machine;
(32) Select one compression method and perform a compressed migration; after each round of compression, mark all pages as dirty and switch to another compression method for the next round, so that the m compression methods are iterated over m rounds in total; record the total compression time and the compressed data size of each round;
(33) Switch to another typical load and return to step (31), until the n typical data-center loads have all been compressed;
(34) Compute the compression ratio ρij of the i-th compression method for the j-th load,
ρij = (data size before compression) / (data size after compression),
and the compression speed Scij of the i-th compression method for the j-th load,
Scij = (data size before compression) / (total compression time),
where 1 ≤ i ≤ m and 1 ≤ j ≤ n;
(35) Compute the average compression ratio ρi of the i-th compression method over the n loads,
ρi = (ρi1 + ρi2 + … + ρin) / n,
and the average compression speed Sci of the i-th compression method over the n loads,
Sci = (Sci1 + Sci2 + … + Scin) / n.
This yields m pairs of average compression ratio ρi and average compression speed Sci, one for each of the m compression methods; the compression methods together with the corresponding average compression ratios and average compression speeds form the compression policy table.
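Steps (34) and (35) amount to the averaging sketched below; the measurement layout (for each method, a list over the n loads of before-size, after-size, and compression time) is an assumption made for illustration:

    def build_policy_table(measurements):
        # measurements: dict mapping method name -> list, over the n loads,
        #               of (bytes_before, bytes_after, compress_seconds) tuples.
        table = []
        for method, per_load in measurements.items():
            ratios = [before / after for before, after, _ in per_load]       # rho_ij
            speeds = [before / seconds for before, _, seconds in per_load]   # Sc_ij
            avg_ratio = sum(ratios) / len(ratios)                            # rho_i
            avg_speed = sum(speeds) / len(speeds)                            # Sc_i
            table.append((method, avg_ratio, avg_speed))
        return table                                                         # the policy table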
Further, the method of the invention also includes a merging step:
Merging step: before the memory data are compressed, multiple pages are first merged into one packet, and the packet is then compressed as a whole.
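A minimal sketch of the merging step follows, assuming 4 KiB pages and using zlib merely as a stand-in for whichever compression method the policy table selects:

    import zlib

    PAGE_SIZE = 4096   # assumed guest page size

    def pack_and_compress(pages, level=6):
        # Merge several memory pages into one buffer (e.g. 64 pages, a 256 KiB
        # packet) and compress it as a whole, so the compressor's window can
        # exploit redundancy across page boundaries.
        buffer = b"".join(pages)
        return zlib.compress(buffer, level)

    def decompress_and_unpack(packet):
        # Destination side: decompress the packet and split it back into pages.
        buffer = zlib.decompress(packet)
        return [buffer[i:i + PAGE_SIZE] for i in range(0, len(buffer), PAGE_SIZE)]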
According to another aspect of the present invention, there is provided a virtual machine migration compression system based on bandwidth awareness. The system detects the network bandwidth at a preset frequency, computes a migration speed from the bandwidth and each compression-ratio/compression-speed pair in a compression policy table, and compresses and migrates the current memory data of the virtual machine with the compression method corresponding to the largest migration speed.
Further, the system of the invention specifically includes the following parts:
A bandwidth detection module, configured to monitor the network bandwidth and obtain the real-time network bandwidth St available to the virtual machine migration;
A migration speed computing module, configured to compute the migration speed Smgti from the compression ratio ρi and compression speed Sci of each compression method in the compression policy table,
Smgti = min(Sci, St × ρi),
obtain the migration speeds of all compression methods, and find the maximum by comparison;
A compressed migration module, configured to find the compression ratio and compression speed yielding the maximum migration speed and to compress and migrate the current memory data of the virtual machine with the corresponding compression method in the compression policy table.
Further, the compression policy table is obtained in advance by the following module:
A compression policy table module, configured to run a plurality of typical loads of the data center running environment in the virtual machine one after another and, for each load, compress the memory data with each of a plurality of compression methods; each compression method yields a pair consisting of a compression ratio and a compression speed, and all compression methods together with their compression ratios and compression speeds form the compression policy table.
Further, the compression policy table module specifically includes the following parts:
A load running unit, configured to select one typical data-center load and run it in the virtual machine;
An iterative compression unit, configured to select one compression method and perform a compressed migration, mark all pages as dirty after each round of compression and switch to another compression method, so that the m compression methods are iterated over m rounds in total, and record the total compression time and the compressed data size of each round;
A load changing unit, configured to switch to another typical load and return to the load running unit, until the n typical data-center loads have all been compressed;
A computing unit, configured to compute the compression ratio ρij of the i-th compression method for the j-th load,
ρij = (data size before compression) / (data size after compression),
and the compression speed Scij of the i-th compression method for the j-th load,
Scij = (data size before compression) / (total compression time),
where 1 ≤ i ≤ m and 1 ≤ j ≤ n;
A policy table construction unit, configured to compute the average compression ratio ρi of the i-th compression method over the n loads,
ρi = (ρi1 + ρi2 + … + ρin) / n,
and the average compression speed Sci of the i-th compression method over the n loads,
Sci = (Sci1 + Sci2 + … + Scin) / n,
yielding m pairs of average compression ratio ρi and average compression speed Sci, one for each of the m compression methods; the compression methods together with the corresponding average compression ratios and average compression speeds form the compression policy table.
Further, the system of the invention also includes a merging module:
A merging module, configured to merge multiple pages into one packet before the memory data are compressed, and then compress the packet as a whole.
In general, compared with the prior art, the above technical scheme conceived by the present invention has the following technical features and beneficial effects:
(1) The present invention dynamically adjusts the compression method according to the real-time migration bandwidth, and can therefore obtain a shorter migration time than a single fixed compression method. When the network bandwidth is high, the invention selects a fast compression method with a lower compression ratio; conversely, when the bandwidth is low, it selects a slower compression method with a higher compression ratio. This dynamic adjustment lets the migration system obtain higher throughput and thus a shorter migration time;
(2) When different loads run in the virtual machine, the data content in the virtual machine memory differs greatly, so the same compression method yields different compression ratios and compression speeds on the memory of different loads. The present invention obtains an average compression ratio and compression speed from several typical loads and builds a single compression policy table from them, avoiding the use of a different compression policy table for each load; this makes the implementation of the invention more feasible and simpler;
(3) The present invention packs multiple virtual machine memory pages together before compression, exploiting the fact that the compression window of the compression algorithm is much larger than a single page. Since the compression algorithm finds redundant data within the range of its compression window, the larger compression granularity proposed by the invention further raises the compression ratio of the memory data, thereby reducing the amount of transmitted data and saving network resources.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the method of the invention;
Fig. 2 is a system structure diagram of an embodiment of the method of the invention;
Fig. 3 is a schematic diagram of the variation of the compression ratio and compression speed of the LZ4 compression algorithm;
Fig. 4a is a schematic comparison of the migration time of the method of the invention and of existing pre-copy migration;
Fig. 4b is a schematic comparison of the amount of migrated data of the method of the invention and of existing pre-copy migration.
Embodiment
In order to make the object, technical scheme, and advantages of the present invention clearer, the invention is further described below in conjunction with the drawings and the embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it. In addition, the technical features involved in the embodiments of the invention described below may be combined with each other as long as they do not conflict.
As shown in Fig. 1, the method of the invention detects the network bandwidth at a preset frequency, computes a migration speed from the bandwidth and each compression-ratio/compression-speed pair in the compression policy table, and compresses and migrates the current memory data of the virtual machine with the compression method corresponding to the largest migration speed. At the source end, before compression, multiple pages are first merged into one packet and the packet is compressed as a whole; at the destination end, the received network packets are decompressed to recover the virtual machine pages.
The method is specifically divided into the following sub-steps:
(1) Monitor the network bandwidth and obtain the real-time network bandwidth St available to the virtual machine migration;
(2) Using the compression ratio ρi and compression speed Sci of each compression method in the compression policy table, compute the migration speed Smgti,
Smgti = min(Sci, St × ρi),
obtain the migration speeds of all compression methods, and find the maximum by comparison;
(3) Find the compression ratio and compression speed that yield the maximum migration speed, and compress and migrate the current memory data of the virtual machine with the corresponding compression method in the compression policy table;
At the source end, a buffer is set up; the virtual machine memory pages to be transmitted are first copied into the buffer, and when the buffer is full the whole buffer is compressed at once and the compressed packet is transferred to the destination end;
At the destination end, a buffer of the same size as the one at the source end is set up; the pages obtained by decompressing a received packet are placed into the buffer, and each page is then placed into the corresponding virtual machine address space;
The compression method chosen in this way is the one that maximizes the migration speed at the current stage. Dynamically adjusting the compression method according to the network bandwidth means that every stage of the whole migration process uses the compression method with the highest throughput at that stage, which significantly shortens the migration time.
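The per-stage adjustment described above can be sketched as a source-side loop; get_dirty_pages, measure_bandwidth, and send_packet are hypothetical stand-ins for hypervisor internals, and each policy table entry is assumed to carry a compression callable:

    import time

    def source_send_loop(get_dirty_pages, send_packet, measure_bandwidth,
                         policy_table, pages_per_packet=64, sense_interval=1.0):
        # policy_table: list of (name, rho_i, Sc_i, compress_fn) entries.
        # Every sense_interval seconds, re-select the method maximizing
        # min(Sc_i, St * rho_i); merge pages_per_packet pages per packet.
        last_sense = float("-inf")
        name = compress = None
        while True:
            pages = get_dirty_pages(pages_per_packet)   # next batch of pages to send
            if not pages:
                break                                   # iteration has converged
            now = time.monotonic()
            if now - last_sense >= sense_interval:
                st = measure_bandwidth()                # bytes/s available to migration
                name, _, _, compress = max(
                    policy_table, key=lambda m: min(m[2], st * m[1]))
                last_sense = now
            packet = compress(b"".join(pages))          # whole-buffer compression
            send_packet(name, packet)                   # tag packet with the method used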
The compression policy table described above is obtained in advance by the following method: a plurality of typical loads of the data center running environment are run in the virtual machine one after another, and for each load the memory data are compressed with each of a plurality of compression methods; each compression method yields a pair consisting of a compression ratio and a compression speed, and all compression methods together with their compression ratios and compression speeds form the compression policy table.
This is specifically divided into the following steps:
(31) Select one typical data-center load and run it in the virtual machine;
(32) Select one compression method and perform a compressed migration. Before compression, a buffer is set up; the virtual machine pages to be transmitted are first copied into the buffer, and when the buffer is full the whole buffer is compressed at once. After each round of compression, all pages are marked as dirty and another compression method is used for the next round, so that the m compression methods are iterated over m rounds in total; the total compression time and the compressed data size of each round are recorded;
In this way the modification to the pre-copy migration causes all pages in the virtual machine address space to be transmitted over many rounds, and in each round the compression ratio and compression speed of one compression method are measured, so a single migration operation is enough to measure the compression ratio and compression speed of every compression method for a given load; this simplifies the process of obtaining the compression policy table;
In the specific embodiment, the LZ4 algorithm is used and 16 different LZ4 acceleration values are chosen, namely 1, 3, 5, 7, …, 31, which constitute 16 compression methods; because two adjacent LZ4 acceleration values (for example 1 and 2) produce only small differences in compression ratio and compression speed, this embodiment selects the acceleration values with a step of 2 (a measurement sketch is given after step (35));
(33) Switch to another typical load and return to step (31), until the n typical data-center loads have all been compressed;
(34) Compute the compression ratio ρij of the i-th compression method for the j-th load,
ρij = (data size before compression) / (data size after compression),
and the compression speed Scij of the i-th compression method for the j-th load,
Scij = (data size before compression) / (total compression time),
where 1 ≤ i ≤ m and 1 ≤ j ≤ n;
(35) Compute the average compression ratio ρi of the i-th compression method over the n loads,
ρi = (ρi1 + ρi2 + … + ρin) / n,
and the average compression speed Sci of the i-th compression method over the n loads,
Sci = (Sci1 + Sci2 + … + Scin) / n.
This yields m pairs of average compression ratio ρi and average compression speed Sci, one for each of the m compression methods; the compression methods together with the corresponding average compression ratios and average compression speeds form the compression policy table.
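The per-load, per-method measurement for the 16 LZ4 acceleration values can be sketched as below; it assumes the python-lz4 package's block API as a user-space stand-in for the compressor used inside the hypervisor, and the method names are illustrative:

    import time
    import lz4.block  # assumes the python-lz4 package is installed

    ACCELERATIONS = range(1, 32, 2)   # 1, 3, 5, ..., 31: the 16 LZ4 variants

    def measure_lz4_methods(memory_snapshot):
        # Measure compression ratio and speed of each LZ4 acceleration value on
        # one load's memory snapshot (one column of the rho_ij / Sc_ij matrices).
        rows = []
        for accel in ACCELERATIONS:
            start = time.perf_counter()
            compressed = lz4.block.compress(memory_snapshot, mode="fast",
                                            acceleration=accel)
            elapsed = time.perf_counter() - start
            rows.append((f"lz4-accel-{accel}",
                         len(memory_snapshot) / len(compressed),   # rho_ij
                         len(memory_snapshot) / elapsed))          # Sc_ij in bytes/s
        return rows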
As shown in Fig. 2, the embodiment of the present invention is implemented on the open-source KVM/QEMU virtualization platform. At the source end, a bandwidth monitoring module obtains the current migration network bandwidth once per second and submits it to the bandwidth-aware compression module; after obtaining the network bandwidth, the compression module dynamically adjusts the compression method using the compression policy table. Meanwhile, at the source end, a buffer is set up and the virtual machine memory pages to be transmitted are first copied into it; when the buffer is full, the data in the buffer are packed and compressed, and the compressed packet is transferred to the destination end. After the destination end receives the packed and compressed data into its buffer, the decompression module decompresses them and the resulting pages are placed into the corresponding virtual machine address space.
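The once-per-second bandwidth monitor can be approximated in user space by sampling the transmit byte counter of the migration interface, as in the Linux-only sketch below; the interface name is an assumption, and inside QEMU the monitor would instead use the hypervisor's own transfer statistics:

    import time

    def measure_tx_bandwidth(interface="eth0", interval=1.0):
        # Estimate outgoing bandwidth (bytes/s) by reading /proc/net/dev twice,
        # `interval` seconds apart, and differencing the TX byte counter.
        def tx_bytes():
            with open("/proc/net/dev") as f:
                for line in f:
                    if line.strip().startswith(interface + ":"):
                        return int(line.split(":", 1)[1].split()[8])  # TX bytes field
            raise ValueError("interface %r not found" % interface)

        first = tx_bytes()
        time.sleep(interval)
        return (tx_bytes() - first) / interval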
Dynamically adjusting the compression method according to the real-time migration bandwidth, as proposed by the present invention, greatly increases the throughput of the migration system and thus significantly shortens the migration time. Different compression algorithms have different compression ratios and compression speeds; generally speaking, an algorithm with a high compression ratio has a low compression speed, and vice versa. In addition, many compression algorithms provide parameters that let users trade compression ratio against compression speed. Taking the LZ4 compression algorithm used in the specific implementation of the invention as an example, the algorithm provides an acceleration value parameter. Fig. 3 shows how the compression effect of LZ4 on virtual machine memory changes with different acceleration values: as the acceleration value grows, the compression speed rises continuously, at the cost of a continuously decreasing compression ratio.
It can thus be seen that, under a given bandwidth, migrating with compression methods of different compression ratios and compression speeds, which in this embodiment are LZ4 variants with different acceleration values, yields different migration speeds. The method proposed by the invention finds the compression method that brings the largest migration speed to the migration system under a given bandwidth and, as the bandwidth changes, dynamically selects the optimal compression method, thereby obtaining the shortest migration time.
The present invention obtains an average compression ratio and compression speed from several typical loads and builds the compression policy table from them, which avoids using a different compression policy table for each load and makes the implementation of the invention simpler and more feasible. When different loads run in the virtual machine, the data content in their memory differs, so compressing the memory of different loads with LZ4 at a given acceleration value yields different compression ratios and compression speeds, and therefore different migration speeds. However, our experiments show that different loads reach the maximum migration speed at identical or neighbouring acceleration values. The invention therefore proposes to choose several typical loads, measure their compression ratios and compression speeds under the different compression methods, and average these data to obtain the average compression speeds and compression ratios that constitute the compression policy table, from which the optimal compression method under a given bandwidth is computed. Experiments show that the compression policy table obtained in this way is, within a certain error range, applicable to all loads. As a result, only one compression policy table needs to be stored when the invention is implemented, which simplifies the system implementation and improves the feasibility of the invention.
As shown in Fig. 4a, comparing the migration time of the method of the invention with that of existing pre-copy migration, the method of the invention takes less time to complete the migration;
As shown in Fig. 4b, comparing the amount of migrated data of the method of the invention with that of existing pre-copy migration, the method of the invention transmits less data to complete the migration.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A virtual machine migration compression method based on bandwidth awareness, characterized in that the method comprises: detecting the network bandwidth at a preset frequency, computing a migration speed from the bandwidth and each compression-ratio/compression-speed pair in a compression policy table, and compressing and migrating the current memory data of the virtual machine with the compression method corresponding to the largest migration speed.
2. The virtual machine migration compression method based on bandwidth awareness according to claim 1, characterized in that the method specifically comprises the following steps:
(1) monitoring the network bandwidth and obtaining the real-time network bandwidth St available to the virtual machine migration;
(2) computing the migration speed Smgti from the compression ratio ρi and compression speed Sci of each compression method in the compression policy table,
Smgti = min(Sci, St × ρi),
obtaining the migration speeds of all compression methods, and finding the maximum by comparison;
(3) finding the compression ratio and compression speed that yield the maximum migration speed, and compressing and migrating the current memory data of the virtual machine with the corresponding compression method in the compression policy table.
3. The virtual machine migration compression method based on bandwidth awareness according to claim 1 or 2, characterized in that the compression policy table is obtained in advance by the following method:
running a plurality of typical loads of the data center running environment in the virtual machine one after another, and for each load compressing the memory data with each of a plurality of compression methods; each compression method yields a pair consisting of a compression ratio and a compression speed, and all compression methods together with their compression ratios and compression speeds form the compression policy table.
4. The virtual machine migration compression method based on bandwidth awareness according to claim 3, characterized in that the method of obtaining the compression policy table specifically includes the following sub-steps:
(31) selecting one typical data-center load and running it in the virtual machine;
(32) selecting one compression method and performing a compressed migration, marking all pages as dirty after each round of compression and switching to another compression method, so that the m compression methods are iterated over m rounds in total; recording the total compression time and the compressed data size of each round;
(33) switching to another typical load and returning to step (31), until the n typical data-center loads have all been compressed;
(34) computing the compression ratio ρij of the i-th compression method for the j-th load,
ρij = (data size before compression) / (data size after compression),
and the compression speed Scij of the i-th compression method for the j-th load,
Scij = (data size before compression) / (total compression time),
where 1 ≤ i ≤ m and 1 ≤ j ≤ n;
(35) computing the average compression ratio ρi of the i-th compression method over the n loads,
ρi = (ρi1 + ρi2 + … + ρin) / n,
and the average compression speed Sci of the i-th compression method over the n loads,
Sci = (Sci1 + Sci2 + … + Scin) / n,
yielding m pairs of average compression ratio ρi and average compression speed Sci, one for each of the m compression methods; the compression methods together with the corresponding average compression ratios and average compression speeds form the compression policy table.
5. The virtual machine migration compression method based on bandwidth awareness according to claim 1, characterized in that the method further comprises a merging step:
merging step: before the memory data are compressed, merging multiple pages into one packet, and then compressing the packet as a whole.
6. A virtual machine migration compression system based on bandwidth awareness, characterized in that the system is configured to detect the network bandwidth at a preset frequency, compute a migration speed from the bandwidth and each compression-ratio/compression-speed pair in a compression policy table, and compress and migrate the current memory data of the virtual machine with the compression method corresponding to the largest migration speed.
7. The virtual machine migration compression system based on bandwidth awareness according to claim 6, characterized in that the system specifically includes the following parts:
a bandwidth detection module, configured to monitor the network bandwidth and obtain the real-time network bandwidth St available to the virtual machine migration;
a migration speed computing module, configured to compute the migration speed Smgti from the compression ratio ρi and compression speed Sci of each compression method in the compression policy table,
Smgti = min(Sci, St × ρi),
obtain the migration speeds of all compression methods, and find the maximum by comparison;
a compressed migration module, configured to find the compression ratio and compression speed yielding the maximum migration speed and compress and migrate the current memory data of the virtual machine with the corresponding compression method in the compression policy table.
8. The virtual machine migration compression system based on bandwidth awareness according to claim 6 or 7, characterized in that the compression policy table is obtained in advance by the following module:
a compression policy table module, configured to run a plurality of typical loads of the data center running environment in the virtual machine one after another and, for each load, compress the memory data with each of a plurality of compression methods; each compression method yields a pair consisting of a compression ratio and a compression speed, and all compression methods together with their compression ratios and compression speeds form the compression policy table.
9. The virtual machine migration compression system based on bandwidth awareness according to claim 8, characterized in that the compression policy table module specifically includes the following parts:
a load running unit, configured to select one typical data-center load and run it in the virtual machine;
an iterative compression unit, configured to select one compression method and perform a compressed migration, mark all memory pages as dirty after each round of compression and switch to another compression method, so that the m compression methods are iterated over m rounds in total, and record the total compression time and the compressed data size of each round;
a load changing unit, configured to switch to another typical load and return to the load running unit, until the n typical data-center loads have all been compressed;
a computing unit, configured to compute the compression ratio ρij of the i-th compression method for the j-th load,
ρij = (data size before compression) / (data size after compression),
and the compression speed Scij of the i-th compression method for the j-th load,
Scij = (data size before compression) / (total compression time),
where 1 ≤ i ≤ m and 1 ≤ j ≤ n;
a policy table construction unit, configured to compute the average compression ratio ρi of the i-th compression method over the n loads,
ρi = (ρi1 + ρi2 + … + ρin) / n,
and the average compression speed Sci of the i-th compression method over the n loads,
Sci = (Sci1 + Sci2 + … + Scin) / n,
yielding m pairs of average compression ratio ρi and average compression speed Sci, one for each of the m compression methods; the compression methods together with the corresponding average compression ratios and average compression speeds form the compression policy table.
10. The virtual machine migration compression system based on bandwidth awareness according to claim 6, characterized in that the system further includes a merging module:
a merging module, configured to merge multiple pages into one packet before the memory data are compressed, and then compress the packet as a whole.
CN201710129704.2A 2017-03-07 2017-03-07 Virtual machine migration compression method and system based on bandwidth sensing Active CN106970824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710129704.2A CN106970824B (en) 2017-03-07 2017-03-07 Virtual machine migration compression method and system based on bandwidth sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710129704.2A CN106970824B (en) 2017-03-07 2017-03-07 Virtual machine migration compression method and system based on bandwidth sensing

Publications (2)

Publication Number Publication Date
CN106970824A true CN106970824A (en) 2017-07-21
CN106970824B CN106970824B (en) 2019-12-17

Family

ID=59329092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710129704.2A Active CN106970824B (en) 2017-03-07 2017-03-07 Virtual machine migration compression method and system based on bandwidth sensing

Country Status (1)

Country Link
CN (1) CN106970824B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609361A (en) * 2012-01-16 2012-07-25 北京邮电大学 Method and device for transferring storage data of virtual machine
CN103353850A (en) * 2013-06-13 2013-10-16 华为技术有限公司 Virtual machine thermal migration memory processing method, device and system
CN103685256A (en) * 2013-12-06 2014-03-26 华为技术有限公司 Virtual machine migration management method, device and system
CN106293870A (en) * 2015-06-29 2017-01-04 联发科技股份有限公司 Computer system and strategy thereof guide compression method
CN105302494A (en) * 2015-11-19 2016-02-03 浪潮(北京)电子信息产业有限公司 Compression strategy selecting method and device
US9336042B1 (en) * 2015-11-19 2016-05-10 International Business Machines Corporation Performing virtual machine live migration within a threshold time by adding available network path in multipath network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Petter Svärd et al.: "Evaluation of Delta Compression Techniques for Efficient Live Migration of Large Virtual Machines", Proceedings of the 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments *
Ma Lulu et al.: "Optimization of online virtual machine migration in a cloud computing environment", Information Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108469982A (en) * 2018-03-12 2018-08-31 华中科技大学 A kind of online moving method of container
CN108469982B (en) * 2018-03-12 2021-03-26 华中科技大学 Online migration method for container

Also Published As

Publication number Publication date
CN106970824B (en) 2019-12-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant