TECHNICAL FIELD
The present disclosure is generally related to virtualized computer systems, and is more specifically related to systems and methods for virtual machine live migration.
BACKGROUND
The use of virtualization is becoming widespread. Virtualization describes a software abstraction that separates a computer resource and its use from an underlying physical device. Generally, a virtual machine (VM) provides a software execution environment and may have a virtual processor, virtual system memory, virtual storage, and various virtual devices. Virtual machines have the ability to accomplish tasks independently of particular hardware implementations or configurations.
Virtualization permits multiplexing of an underlying host machine (associated with a physical CPU) between different virtual machines. The host machine or “host” allocates a certain amount of its resources to each of the virtual machines. Each virtual machine may then use the allocated resources to execute applications, including operating systems (referred to as guest operating systems (OS) of a “guest”). The software layer providing the virtualization is commonly referred to as a hypervisor and is also known as a virtual machine monitor (VMM). The hypervisor emulates the underlying hardware of the virtual machine, making the use of the virtual machine transparent to the guest operating system and the user of the VM.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram that illustrates an example source host computer system and a destination host computer system in which examples of the present disclosure may operate.
FIG. 2 is a block diagram that illustrates one example of a configuration of a plurality of migration count registers managed by a hypervisor.
FIG. 3 is a flow diagram illustrating an example of a method for permitting an application running in a virtual machine to determine whether the virtual machine has migrated during a measurement interval, and based on that knowledge, determine whether values of a performance monitoring unit obtained during the measurement interval are valid or invalid.
FIG. 4 is an example of a performance monitoring tool application employing the method of FIG. 3 to program and read hardware parameters stored in PMU registers while running in a virtual machine.
FIG. 5 is another example of a performance monitoring tool application employing the method of FIG. 3 to auto-tune the application.
FIG. 6 is a further example of a performance monitoring tool application employing the method of FIG. 3 to program and read hardware parameters stored in PMU registers while running in a virtual machine.
FIG. 7 illustrates a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
DETAILED DESCRIPTION
Methods and systems for permitting an application running on a virtual machine to determine whether performance monitoring unit (PMU) values measured at different time intervals are valid or invalid are disclosed. For example, a performance monitoring tool application running on the virtual machine may need to determine if one or more PMU values remain valid before, during, and/or after a migration event. Examples of PMU values may include the time of execution of a specific instruction or group of instructions, the number of cache misses per unit time, etc. The PMU values may depend on one or more underlying hardware parameters including processor frequency, cache line size of a cache, etc., respectively.
When the performance monitoring tool application attempts to read a hardware parameter associated with one or more virtual devices, the read call may be trapped by an underlying hypervisor. The hypervisor may access corresponding model specific registers (MSRs) of the underlying central processing unit (CPU). The MSRs may provide the one or more hardware parameters associated with one or more of corresponding physical devices.
If, for example, a hardware parameter (e.g., processor frequency) has changed during a time interval, the performance monitoring tool application may not “know” whether the change was due to a migration event occurring during the time interval. Similarly, a hardware parameter may appear to not change as a result of two successive migration events during the time interval. In one example, the virtual machine may attempt to read two values of processor frequency spaced apart in time. If the time interval is sufficiently large, the virtual machine may have been migrated from a first CPU to a second CPU and back to the first CPU with the operating frequency of the second CPU differing from the processor frequency of the first CPU. Similarly, the virtual machine may have been migrated from a first CPU with a first operating frequency and/or processor type to a second CPU with a second operating frequency/second processor type and then to a third CPU with the same operating frequency/processor type as the first CPU. In both cases, PMU values derived from the hardware parameter measured during the same time interval would be invalid, although the performance monitoring tool application takes the measurements to be valid measurements.
To permit an application (e.g., a performance monitoring tool) to determine whether or not to discard PMU values during a live migration, the application is provided with a migration counter for each virtual machine under the control of a hypervisor. An application associated with a host processing device reads a first value of a counter and a second value of the counter. The counter is indicative of a migration status of the application with respect to the host processing device. Responsive to determining that the first value of the counter does not equal the second value of the counter, the application ascertains whether a value of a hardware parameter associated with the host processing device has changed during a time interval. The migration status indicates a count of the number of times the application has migrated from one host processing device to another host processing device. The number of times the application has migrated may be employed by the application to determine whether the value of a performance monitoring unit derived from the hardware parameter is valid or not.
A hypervisor controlling the virtual machine may be configured to maintain, manipulate (e.g., increment, reset), and provide the application with access to the migration counter associated with the virtual machine. The migration counter may be stored by the hypervisor in a synthesized paravirtualized migration count register. The migration count register may be a model-specific register (MSR).
In one example, the migration count values for each of the virtual machines may be stored in a plurality of paravirtualized migration count registers by the hypervisor running on the host.
In one example, the migration counter may be stored by the hypervisor in the memory space of the application. In another example, reading may comprise performing a system call to the hypervisor. In an example, the hardware parameter may be at least one of an operating frequency of a CPU of the host, a cache-line size of the CPU, etc. In an example, the PMU counters derived from the hardware parameter may be at least one of a time stamp count of a CPU of the host, a count of cache misses, respectively, etc.
If the application determines that the value of the migration counter has not changed during the time interval, indicating that no migration event has occurred, then the application may declare the value of the performance monitoring unit derived from the hardware parameter to be valid.
If the application ascertains that the second value of the migration counter differs from the first value of the migration counter by more than one count, indicating two or more migration events, then the application may declare the value of the performance monitoring unit to be invalid.
If, however, the value of the migration counter changes by one count between a reading of the first value of the migration counter and the second value of the migration counter during the time interval, it is not certain whether the value of the performance monitoring unit is valid or not valid. In such circumstances, in one example, the application may be configured to read a third value of the migration counter.
If the application ascertains that the third value of the migration counter differs from the first value of the migration counter by one count and the value of hardware parameter has not changed, then the application may declare the value of the performance monitoring unit to be valid. If the application ascertains that the third value of the migration counter differs from the first value of the counter by one count and the value of hardware parameter has changed, then the application may declare the value of the performance monitoring unit to be invalid.
Accordingly, an efficient method and system are provided that enable an application to determine whether values of one or more performance monitoring units, and the underlying hardware parameter values, measured at different time intervals are valid or invalid. The paravirtualized migration counter described herein permits performance tools running in a virtual machine to determine whether the virtual machine has migrated during its latest measurement interval. Based on that knowledge, the application can flag performance results that are not valid. Additionally, performance tools can display the results of measurement intervals taken during live migration if no important attributes of a physical processor of the source host have changed on a physical processor of the destination host.
In the following description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
FIG. 1 is a block diagram that illustrates an example source host computer system 100 a (the “source host 100 a”) and a destination host computer system 100 n (the “destination host 100 n”) in which examples of the present disclosure may operate. In one example, the source host 100 a may access the destination host 100 n over a network 110, which may be, for example, a local area network (LAN), a wide area network (WAN), an intranet, the Internet, etc. The source host 100 a and the destination host 100 n may each include hardware components such as one or more central processing units (CPUs) 170 a-170 n, memory 180 a-180 n, and other hardware components 190 a-190 n (a network interface card (NIC), a disk, a virtual central processing unit, etc.), respectively. The source host 100 a and/or the destination host 100 n may be a server, a workstation, a personal computer (PC), a mobile phone, a palm-sized computing device, a personal digital assistant (PDA), etc.
Throughout the following description, the term “virtualization” herein shall refer to abstraction of some physical components into logical objects in order to allow running various software modules, for example, multiple operating systems, concurrently and in isolation from other software modules, on one or more interconnected physical computer systems. Virtualization allows, for example, consolidating multiple physical servers into one physical server running multiple virtual machines in order to improve the hardware utilization rate. Virtualization may be achieved by running a software layer, often referred to as “hypervisor,” above the hardware and below the virtual machines. A hypervisor may run directly on underlying hardware without an operating system beneath it or as an application running under a traditional operating system. A hypervisor may abstract the physical layer and present this abstraction to virtual machines to use, by providing interfaces between the underlying hardware and virtual devices of virtual machines. Processor virtualization may be implemented by the hypervisor scheduling time slots on one or more physical processors for a virtual machine, rather than a virtual machine actually having a dedicated physical processor. Memory virtualization may be implemented by employing a page table (PT) which is a memory structure translating virtual memory addresses to physical memory addresses.
“Physical processor” or “processor” or “central processing unit” (CPU) or “host processing device” herein shall refer to a device capable of executing instructions encoding arithmetic, logical, or I/O operations. In one illustrative example, a processor may follow Von Neumann architectural model and may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers. In a further aspect, a processor may be a single core processor which is typically capable of executing one instruction at a time (or process a single pipeline of instructions), or a multi-core processor which may simultaneously execute multiple instructions. In another aspect, a processor may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). A processor may also be referred to as a CPU. “Memory device” herein shall refer to a volatile or non-volatile memory device, such as RAM, ROM, EEPROM, or any other device capable of storing data. “I/O device” herein shall refer to a device capable of providing an interface between one or more processor pins and an external device capable of inputting and/or outputting binary data.
As noted herein above, the source host 100 a and the destination host 100 n may run multiple virtual machines 130 a-130 n, by executing a software layer (e.g., 150 a, 150 n), often referred to as the “hypervisor,” above the hardware and below the virtual machines, as schematically shown in FIG. 1. In one illustrative example, the hypervisor (e.g., 150 a, 150 n) may be a component of a host operating system (e.g., 120 a-120 n) executed by the source host 100 a and the destination host 100 n. Alternatively, the hypervisor (e.g., 150 a, 150 n) may be provided by an application running under the host operating system (e.g., 120 a-120 n), or may run directly on the source host 100 a and the destination host 100 n without an operating system beneath it. The hypervisor (e.g., 150 a, 150 n) may abstract the physical layer, including processors, memory, and I/O devices, and present this abstraction to virtual machines (e.g., 130 a-130 n) as virtual devices (e.g., 155 a-155 n), including virtual processors, virtual memory, and virtual I/O devices.
A virtual machine (e.g., 130 a-130 n) may execute a guest operating system (e.g., 140 a-140 n) which may utilize the underlying virtual devices (e.g., 155 a-155 n), each of which may map to a device of the host machine (e.g., a network interface device, a CD-ROM drive, etc.). One or more applications (e.g., 145 a-145 n) may be running on a virtual machine (e.g., 130 a-130 n) under the guest operating system (e.g., 140 a-140 n). In an example, an application (e.g., 145 a-145 n) may include the corresponding complete virtual machine (e.g., 130 a-130 n) running on a respective hypervisor (e.g., 150 a, 150 n).
A virtual machine (e.g., 130 a-130 n) may include multiple virtual processors (not shown). Processor virtualization may be implemented by the hypervisor (e.g., 150 a, 150 n) scheduling time slots on one or more CPUs (e.g., 170 a-170 n) such that from the guest operating system's perspective those time slots are scheduled on a virtual processor. Memory virtualization may be implemented by a page table (PT) which is a memory structure translating virtual memory addresses to physical memory addresses.
The term “paravirtualization” refers to a virtualization technique that presents to virtual machines an interface that is similar but not identical to that of the underlying hardware, with the purpose of improving the overall system performance, e.g., by adding a register (MSR) that software in the virtual machine may read to obtain information that does not exist on physical CPUs.
The source host 100 a and the destination host 100 n may each instantiate, run, migrate, and/or terminate one or more virtual machines 130 a-130 n. A virtual machine (e.g., 130 a) may run a guest operating system (e.g., 140 a) to manage its resources. Each virtual machine may run the same or a different guest operating system (e.g., guest OS 140 a), such as Microsoft Windows®, Linux®, Solaris®, Mac® OS, etc.
In one example, the source host 100 a and the destination host 100 n may each instantiate and run a hypervisor (e.g., 150 a, 150 n) to virtualize access to the underlying host hardware (e.g., CPUs 170 a-170 n, memory 180 a-180 n, other physical devices 190 a-190 n, etc.), making the use of the one or more virtual machines 130 a-130 n transparent to the guest OSs 140 a-140 n and users (e.g., a system administrator) of the source host 100 a and the destination host 100 n. A virtual machine (e.g., 130 a) may not have direct access to the underlying host hardware 170 a-170 n, 180 a-180 n, 190 a-190 n, etc.
Access to or emulation of the underlying hardware of the source host 100 a and the destination host 100 n (e.g., CPUs 170 a-170 n, memory 180 a-180 n, other physical devices 190 a-190 n, etc.), may be indirectly handled by a corresponding hypervisor (e.g., 150 a, 150 n). A guest OS (e.g., 140 a-140 n) may be configured to load device-specific modules (guest device drivers, not shown) associated with one or more virtual devices 155 a-155 n. A hypervisor (e.g., 150 a, 150 n) may be configured to emulate (e.g., provide the guest OS (e.g., 140 a-140 n) with access to) the one or more virtual devices 155 a-155 n in cooperation with the guest device drivers (not shown) residing on a virtual machine (e.g., 130 a-130 n).
Initially a virtual machine (e.g., 130 a) running a guest OS (e.g., 140 a) is managed by a source hypervisor (e.g., 150 a). In one example, a process is provided wherein the virtual machine (e.g., 130 a) is migrated from the source hypervisor (e.g., 150 a) residing on a first host operating system (OS) (e.g., 120 a) to one or more destination hypervisors (e.g., 150 a,150 n).
A performance engineer may analyze performance statistics of a computer-based processing system (e.g., the source host 100 a and/or the destination host 100 n). The processing system (e.g., the source host 100 a and/or the destination host 100 n) may provide performance tools to the performance engineer. The performance tools may rely on one or more values of performance monitoring unit (PMU) counters to provide the performance engineer with a report on performance statistics of the underlying host CPU (e.g., 170 a-170 n). PMU counts may be provided by one or more PMU registers or model specific registers (MSRs) of the host CPU (e.g., 170 a-170 n). Example PMU counters may include, but are not limited to, timestamp counts, the number of cache misses, etc.
PMU counters may be provided with virtual machines (e.g., 130 a-130 n). Typically, a PMU counter of a host CPU (e.g., 170 a-170 n) may be used directly by the virtual machine (e.g., 130 a-130 n). The hypervisor (e.g., 150 a, 150 n) ensures that PMU register values are provided to the right context, either the host or any number of guests.
Virtual machines (e.g., 130 a-130 n) may be migrated between a source host computing platform (e.g., the source host 100 a) and a destination host computing platform (e.g., the destination host 100 n) connected over a network 110, which may be a local-area network or a wide-area network that may include the Internet. Migration permits a clean separation between hardware and software, thereby facilitating fault management, load balancing, and low-level system maintenance.
Unfortunately, because virtual machines (e.g., 130 a-130 n) can be migrated, while running, from the source host 100 a to the destination host 100 n with different CPUs (e.g., 170 a-170 i, 170 j-170 n, respectively), performance tools typically cannot be used during the live migration process. The source host 100 a and the destination host 100 n may contain CPUs with different frequencies and cache sizes. After the virtual machine (e.g., 130 a-130 n) is migrated, performance calculations relying on the underlying PMU counters of the destination host CPUs (e.g., 170 j-170 n) may be incorrect and misleading.
For example, a performance engineer running a typical performance tool unmodified on a virtual machine (e.g., 130 a) that has been migrated may not know whether the destination host CPU (e.g., 170 n) is of the same type as the source host CPU (e.g., 170 a). Even if the destination host CPU (e.g., 170 n) is of the same type as the source host CPU (e.g., 170 a), it may run at a different frequency. Even if the performance tool provides a notification to the performance engineer that the virtual machine (e.g., 130 a) has migrated, the performance monitoring tool cannot determine which measurement intervals are valid or invalid. Nor can it report valid processor attributes after the migration notification in situations where two migrations have happened: the virtual machine may have migrated back to the source host 100 a, which would mislead the tool into concluding that the measurements were valid when they were not.
To provide indications of valid and/or invalid measurements of PMU values performed during a live migration, in one example, the source hypervisor (e.g., 150 a) and the destination hypervisor (e.g., 150 n) may be provided with corresponding host migration agents (160 a,160 n). It should be noted that the “source” and “destination” designations for the hypervisors (e.g., 150 a, 150 n) and host migration agents (160 a,160 n) are provided for reference purposes in illustrating an exemplary implementation of the migration process according to examples of the present disclosure. It will be further appreciated that depending on the particulars of a given migration event, a hypervisor may at one time serve as the source hypervisor, while at another time the hypervisor may serve as the destination hypervisor.
The host migration agents (e.g., 160 a,160 n) are components (e.g., a set of instructions executable by a processing device of the source host 100 a and the destination host 100 n, such as CPUs 170 a-170 n). Although shown as discrete components of the hypervisors (e.g., 150 a,150 n), the host migration agents (e.g., 160 a,160 n) may be a separate component externally coupled to hypervisors (e.g., 150 a,150 n).
In one example, the virtual machine 130 a may be migrated to a destination host (e.g., 100 n). In another example, the virtual machine 130 a may be migrated concurrently to a plurality of destination hosts (not shown).
In one example, the source host 100 a may migrate a virtual machine (e.g., 130 a) residing on a source CPU (e.g., 170 a) to a CPU (e.g., 170 n) of the destination host 100 n. In another example, the source host 100 a may migrate the virtual machine (e.g., 130 a) residing on a source CPU (e.g., 170 a) to another CPU (e.g., 170 i) of the source host 100 a. A performance monitoring tool application (e.g., 145 a) running on a guest OS (e.g., 140 a) running on the virtual machine (e.g., 130 a) may need to determine if one or more values of a performance monitoring unit (PMU) remain valid before, during, and/or after a migration event. The values of a PMU may rely upon an underlying hardware parameter. If the underlying hardware parameter changed during a measurement time interval, then the PMU value derived from the hardware parameter is invalid. Examples of the hardware parameter may be at least one of an operating frequency of the processing device, a cache-line size, etc. In an example, the PMU counters derived from the hardware parameter may be at least one of a time stamp count, a count of cache misses, respectively, etc.
When an application (e.g., 145 a) attempts to read a hardware parameter associated with one or more of the virtual devices 155 a-155 n, the read call is trapped by the underlying hypervisor (e.g., 150 a). The hypervisor (e.g., 150 a) may access corresponding model specific registers (MSRs) 175 a-175 i of the underlying CPU (e.g., 170 a). The MSRs 175 a-175 i may provide the one or more hardware parameters associated with one or more of the corresponding physical devices 190 a-190 i.
If, for example, a hardware parameter (e.g., processor frequency) has changed during a time interval, the performance monitoring tool application (e.g., 145 a) may not “know” whether the change was due to a migration event during the time interval. Similarly, a hardware parameter may appear to not change as a result of two successive migration events during the time interval. In one example, the virtual machine may attempt to read two values of processor frequency spaced apart in time. If the time interval is sufficiently large, the virtual machine (e.g., 130 a) may have been migrated from CPU 170 a to CPU 170 n and back to CPU 170 a with the processor frequency of CPU 170 n differing from the processor frequency of CPU 170 a. Similarly, the virtual machine (e.g., 130 a) may have been migrated from CPU 170 a with a first clock frequency and/or processor type to CPU 170 i with a second clock frequency/second processor type and then to a third CPU 170 n with the same clock frequency/processor type as CPU 170 a. In both cases, the two obtained PMU values would be invalid, although the performance monitoring tool application (e.g., 145 a) takes the two measurements to be valid measurements.
To discard invalid measurements of hardware parameters and PMU values derived from the underlying hardware parameters during migration events, the hypervisors 150 a, 150 n may be provided with corresponding host migration agents 160 a, 160 n, respectively. The host migration agents 160 a, 160 n may be configured to maintain, manipulate (e.g., increment, reset), and provide access to migration counters associated with corresponding ones of the virtual machines 130 a-130 n. Each migration counter may be indicative of the migration status of a corresponding application (e.g., 145 a-145 n). The application may be a separate executable program running on a virtual machine (e.g., 130 a-130 n, respectively), or may be the virtual machine itself (e.g., 130 a-130 n, respectively). Each migration counter may indicate a count of the number of times the application/virtual machine has migrated from one host processing device (e.g., CPU 170 a) to another host processing device (e.g., CPU 170 i, CPU 170 n, etc.). The paravirtualized migration counters may be stored by each of the hypervisors (150 a, 150 n) in a plurality of synthesized paravirtualized migration count registers 165 a-165 n. Each of the migration count registers 165 a-165 n may correspond to each active virtual machine (e.g., 130 a-130 n) associated with a hypervisor (e.g., 150 a, 150 n).
In one example, the application (e.g., 145 a) may be configured to read a first value of the migration counter and a second value of the migration counter. Responsive to the application (e.g., 145 a) determining that the first value of the migration counter does not equal the second value of the migration counter, the application (e.g., 145 a) may ascertain whether a value of a hardware parameter associated with a host processing device (e.g., CPU 170 a, CPU 170 i, CPU 170 n, etc.) associated with the application (e.g., 145 a) has changed during a time interval.
If the value of the migration counter does not change between readings of the first value of the migration counter and the second value of the migration counter during the time interval, then the application (e.g., 145 a) may be configured to declare the value of a performance monitoring unit derived from the hardware parameter to be valid. If the application (e.g., 130 a, 145 a) ascertains that the second value of the migration counter differs from the first value of the migration counter by more than one count (e.g., indicating two or more migrations of the application 145 a), then the application (e.g., 130 a, 145 a) may be configured to declare the value of the performance monitoring unit to be invalid.
However, if the second value of the migration counter differs from the first value of the migration counter by one count, it is not certain whether the value of the performance monitoring unit derived from the hardware parameter is valid or not valid. In such circumstances, the application (e.g., 145 a) may be configured to read a third value of the counter.
The application (e.g., 145 a) may be configured to declare the value of the performance monitoring unit to be valid responsive to determining that the third value of the migration counter differs from the first value of the migration counter by one count and the value of hardware parameter has not changed during the time interval. The application (e.g., 145 a) may be configured to declare the value of the performance monitoring unit to be invalid responsive to determining that a third value of the counter differs from the first value of the counter by one count and the value of hardware parameter has changed during the time interval.
FIG. 2 is a block diagram that illustrates one example of a configuration of a plurality of migration count registers (e.g., 165 a-165 i) managed by a hypervisor (150 a). The hypervisor (e.g., 150 a, 150 n) may manage the migration count registers (e.g., 165 a-165 i, 165 j-165 n, respectively). An application (e.g., 145 a) may be provided by the hypervisor (e.g., 150 a) with a value of a migration counter (e.g., 165 a) associated with the virtual machine (e.g., 130 a) on which it resides and not the migration counter (e.g., 165 i, 165 n) of another virtual machine (e.g., 130 i, 130 n). In one example, the hypervisors (e.g., 150 a, 150 n) may be configured to store/write/retrieve/update one or more counters in the migration count registers 165 a-165 n corresponding to paravirtualized migration counts for each of its active virtual machine (e.g., 130 a-130 n). In an example, the hypervisor (e.g., 150 a, 150 n) may maintain a migration counter in a data structure (e.g., the entry 205 a) associated with a virtual machine (e.g., 130 a). In one example, the virtual machine structure (e.g., 205 a) may be constructed as follows:
Guest structure:
<other fields>
uint64_t migration_count;
<other fields>
Within this virtual machine structure (e.g., 205 a), a field (e.g., 210 a) (e.g., uint64_t migration_count) contains the migration counter. The hypervisor (e.g., 150 a) may initialize the migration counter to 0 when a corresponding virtual machine (e.g., 130 a) is started.
Each guest identifier (e.g. 215 a) identifies a virtual machine structure (e.g., 205 a). Each virtual machine structure (e.g., 205 a) may include, for example, a paravirtualized migration counter 210 a (e.g., Migration_Count 210 a) corresponding to an associated virtual machine (e.g., 130 a).
The virtual machine structure (e.g., 205 a) may include other parameters (e.g., 220 a, 225 a) that may provide a virtual machine (e.g., 130 a-130 i) with access to one or more hardware parameters and/or PMU counters (e.g., processor frequency, TSCs, cache line size of a cache (e.g., 185 a-185 i), cache misses, etc.) of the underlying hardware of the host (e.g., of the CPUs 170 a-170 i, of the memory 180 a-180 i, of the other physical devices 190 a-190 i, etc.). In another example, the hypervisor (e.g., 150 a) may paravirtualize or expose (e.g., directly provide access to) the one or more hardware parameters/PMU counters from corresponding MSRs (e.g., 175 a-175 i) of the underlying hardware to a virtual machine (e.g., 130 a-130 i). In another example, the hypervisor (e.g., 150 a) may make its presence known to a guest application (e.g., 145 a-145 i) at virtual machine boot time. The hypervisor (e.g., 150 a) may advertise the availability of a paravirtualized migration counter (e.g., 210 a) and/or the one or more hardware parameters/PMU counters from corresponding MSRs (e.g., 175 a-175 i) of the underlying hardware to the virtual machine (e.g., 130 a) at boot time. The application (e.g., 145 a) running on the virtual machine (e.g., 130 a) may read the paravirtualized migration counter (e.g., 210 a) and/or the one or more hardware parameters/PMU counters either by using a paravirtualized system call or by accessing its corresponding “synthetic” MSR (e.g., 165 a) in the migration count registers (e.g., 165 a-165 i) managed by the hypervisor (150 a).
In one example, the hypervisors (e.g., 150 a, 150 n) may read “real” MSRs of the underlying hardware and place their values in the “synthetic” MSRs (e.g., 165 a-165 n) to produce the paravirtualized migration counters associated with the virtual machines (e.g., 130 a-130 n). In another example, the hypervisors (e.g., 150 a, 150 n) may read “real” MSRs of the underlying hardware and place their values in the migration count registers (e.g., 165 a-165 n). In another example, a migration count register (e.g., 165 a) associated with a virtual machine (e.g., 130 a) may be mapped by the hypervisor (e.g., 150 a) directly into the address space of the virtual machine (e.g., 130 a), for example, at virtual machine boot time. When the migration count registers (e.g., 165 a-165 n) are mapped by the hypervisors (e.g., 150 a, 150 n) into the address space of the corresponding virtual machines (e.g., 130 a-130 n), an associated application (e.g., 145 a-145 n) may read the migration_count and/or the one or more hardware parameters/PMU counters from corresponding MSRs (e.g., 175 a-175 n) directly. There are well-known mechanisms by which guest software may discover whether the hypervisor (e.g., 150 a) supports synthetic MSRs or paravirtualized system calls.
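A guest-side read of the migration counter, covering both the direct-mapping case and the paravirtualized-call case, might look like the following sketch; the function-pointer interface and names are assumptions for illustration, not a real hypervisor ABI:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical paravirtualized read of a synthetic MSR by index. */
typedef uint64_t (*pv_read_msr_fn)(uint32_t msr_index);

/* If the hypervisor mapped the migration count register into the guest
 * address space, read it directly; otherwise fall back to a
 * paravirtualized call against the synthetic MSR. */
static uint64_t read_migration_count(const volatile uint64_t *mapped_counter,
                                     pv_read_msr_fn pv_read_msr,
                                     uint32_t synthetic_msr_index)
{
    if (mapped_counter != NULL)
        return *mapped_counter;              /* direct-mapped register path */
    return pv_read_msr(synthetic_msr_index); /* paravirtualized system call path */
}
```

The `volatile` qualifier reflects that the hypervisor, not the guest, updates the mapped register, so each read must go to memory.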
In an example, when a source host (e.g., 100 a) migrates a virtual machine (VM) (e.g., 130 a) to a destination host (e.g., 100 n), the source host (e.g., 100 a) transfers the migration_count field in the migration count register (e.g., 165 a) with the VM's metadata to a corresponding migration count register (e.g., 165 n) of the destination host (e.g., 100 n). In an example, either the hypervisor (e.g., 150 a) of the source host (e.g., 100 a) or the hypervisor (e.g., 150 n) of the destination host (e.g., 100 n) may increment the migration_count field to signify a migration event.
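The transfer-and-increment step can be summarized in a few lines; the function name and the convention that the incoming value is incremented on arrival are illustrative assumptions (the disclosure permits either hypervisor to perform the increment):

```c
#include <stdint.h>

/* Hypothetical migration-side handling: the source's counter value travels
 * with the VM metadata, and the receiving hypervisor stores it incremented
 * by one to signify the migration event. */
static void transfer_migration_count(const uint64_t *src_register,
                                     uint64_t *dst_register)
{
    *dst_register = *src_register + 1; /* one migration event recorded */
}
```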
FIG. 3 is a flow diagram illustrating an example of a method 300 for permitting an application running in a virtual machine (e.g., 130 a), to determine whether the virtual machine (e.g., 130 a) has migrated during a measurement interval, and based on that knowledge, determine whether values of a performance monitoring unit obtained during the measurement interval are valid or invalid. The method 300 may be performed by the source host 100 a and/or the destination host 100 n of FIG. 1 and may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), or a combination thereof. In one example, the method 300 is performed by an application (e.g., 145 a-145 n) of the source host 100 a and/or the destination host 100 n of FIG. 1.
As shown in FIG. 3, to permit the application (e.g., 145 a) running on a virtual machine (e.g., 130 a) to determine whether values of a performance monitoring unit obtained at different time intervals are valid or invalid, at block 305, the application (e.g., 145 a) reads a first value of a migration counter and a second value of the migration counter. In an example, the application may run as an executable program on an associated virtual machine (e.g., 130 a), or the application may be the virtual machine (e.g., 130 a) itself. The application (e.g., 130 a, 145 a) may be, for example, associated with a host processing device (e.g., the CPU 170 a).
The migration counter may be indicative of a migration status of the application (e.g., 145 a) with respect to the host processing device (e.g., the CPU 170 a). The migration status indicates a count of the number of times the application (e.g., 145 a) has migrated from one host processing device (e.g., the CPU 170 a) to another host processing device (e.g., the CPU 170 i or the CPU 170 n). A hypervisor (e.g., 150 a) associated with a virtual machine (e.g., 130 a) may be configured to maintain, manipulate (e.g., increment, reset), and provide the application (e.g., 130 a, 145 a) with access to the migration counter associated with the virtual machine (e.g., 130 a). The migration counter may be stored by the hypervisor (e.g., 150 a) in a synthesized paravirtualized migration count register (e.g., 165 a). The migration count register may be a model-specific register (MSR).
In one example, the migration count values for a plurality of virtual machines (e.g., 130 a-130 n) may be stored in a plurality of paravirtualized migration count registers (165 a-165 n) by an associated host migration agent (e.g., 160 a, 160 n) of a hypervisor (e.g., 150 a, 150 n). Each register of the plurality of paravirtualized migration count registers (165 a-165 n) may correspond to each active virtual machine (e.g., 130 a-130 n) associated with a hypervisor (e.g., 150 a, 150 n).
In one example, the migration counter may be stored by the host migration agent (e.g., 160 a, 160 n) in the memory space of the application (e.g., 145 a, 130 a). In another example, application (e.g., 145 a, 130 a) may read the migration counter from one register of the plurality of paravirtualized migration count registers (165 a-165 n) through a system call to the hypervisor (e.g., 150 a).
At block 310, responsive to the application (e.g., 130 a, 145 a) determining that the first value of the migration counter does not equal the second value of the migration counter, the application (e.g., 130 a, 145 a) ascertains whether a value of a hardware parameter associated with the host processing device (e.g., the CPU 170 a) has changed during a time interval. In an example, the hardware parameter may be at least one of an operating frequency of the host processing device, a cache-line size of the host processing device, etc. In an example, a performance monitoring unit counter may be at least one of a time stamp count (TSC), a count of cache misses, etc.
At block 315, the application (e.g., 130 a, 145 a) determines the validity of a value of a performance monitoring unit derived from the hardware parameter in view of the application (e.g., 130 a, 145 a) ascertaining whether the value of the hardware parameter has changed during the time interval.
In one example, if the second value of the migration counter does not differ from the first value of the migration counter, then the application (e.g., 130 a, 145 a) may declare the value of the performance monitoring unit to be valid. In one example, if the application (e.g., 130 a, 145 a) ascertains that the second value of the migration counter differs from the first value of the migration counter by more than one count (e.g., indicating two or more migrations of the application 130 a, 145 a), then the application (e.g., 130 a, 145 a) may declare the value of the performance monitoring unit to be invalid.
However, if the second value of the migration counter differs from the first value of the migration counter by one count, then the application (e.g., 130 a, 145 a) may read a third value of the migration counter. If the application (e.g., 130 a, 145 a) ascertains that the third value of the migration counter differs from the first value of the migration counter by one count and the value of the hardware parameter has not changed, then the application (e.g., 130 a, 145 a) declares the value of the performance monitoring unit to be valid. If the application (e.g., 130 a, 145 a) ascertains that the third value of the migration counter differs from the first value of the migration counter by one count and the value of the hardware parameter has changed, then the application (e.g., 130 a, 145 a) declares the value of the performance monitoring unit to be invalid. Otherwise (e.g., the third value differs from the first value by more than one count, indicating further migrations), the application declares the value of the performance monitoring unit to be invalid.
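The decision procedure of blocks 305 through 315 can be sketched as a single pure function; the function name, and the choice to pass the successive counter and hardware-parameter reads as plain integers, are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* m1, m2, m3: successive reads of the migration counter.
 * f1, f3: hardware-parameter reads (e.g., processor frequency) taken
 * before the measurement interval and at the final re-check.
 * Returns whether the PMU values gathered in between are valid. */
static bool measurement_valid(uint64_t m1, uint64_t m2, uint64_t m3,
                              uint64_t f1, uint64_t f3)
{
    if (m2 == m1)
        return true;       /* no migration event: values are valid */
    if (m2 - m1 > 1)
        return false;      /* two or more migrations: values are invalid */
    /* exactly one migration seen so far: consult the third counter read */
    if (m3 - m1 == 1)
        return f1 == f3;   /* valid only if the hardware parameter held steady */
    return false;          /* further migrations occurred: invalid */
}
```

A caller would read the counter, read the hardware parameter, take its PMU measurements, then read the counter again (and, if needed, the parameter and counter a final time) before consulting this predicate.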
FIG. 4 is an example of a performance monitoring tool application (e.g., 145 a) employing the method of FIG. 3 to program and read hardware parameters stored in PMU registers while running in a virtual machine. The application (e.g., 145 a) determines if it may employ the PMU registers (e.g., TSCs/counts of cache misses, etc.) read during a measurement interval. The application (e.g., 145 a) reads a first paravirtualized migration count (M1). The application (e.g., 145 a) reads a hardware parameter, for example, a processor frequency (F1), then performs a first measurement of one or more performance monitoring unit values during a time interval. The application (e.g., 145 a) reads a second paravirtualized migration count (M2). If M1 equals M2, indicating no migration event has occurred, then the application (e.g., 145 a) declares the measurement of one or more performance monitoring units to be valid. If M1 differs from M2 by more than one count, indicating two or more migration events have occurred, then the application (e.g., 145 a) declares the measurement of one or more performance monitoring units to be invalid.
If M2 differs from M1 by one count, then the application (e.g., 145 a) performs a second measurement of the hardware parameter (F2) and a third measurement of the paravirtualized migration counter (M3). If the difference between M3 and M1 is one count, indicating one migration event has occurred, and F1 is equal to F2, indicating no change in the measured parameter, then the application declares the measurement of the one or more performance monitoring units to be valid; otherwise, the application declares the measurement of the one or more performance monitoring units to be invalid.
Note: If (M3−M1)>1 && F1==F2, the virtual machine (e.g., 130 a) may have migrated from the source host 100 a to the destination host 100 b and back to the source host 100 a. The frequency would be the same and could mislead the performance monitoring tool (145 a) to report the measurement of one or more performance monitoring units as valid when it may or may not be valid. The performance monitoring tool (145 a) has no processor information about destination host 100 b, so it cannot make that determination.
FIG. 5 is another example of a performance monitoring tool application (e.g., 145 a) employing the method of FIG. 3 to auto-tune the application. The performance monitoring tool application (e.g., 145 a) may obtain the paravirtualized migration counter associated with the virtual machine (e.g., 130 a) and any important processor attributes (e.g., hardware parameters) of its associated processor (e.g., the CPU 170 a) before and after performing its tuning calculations. The performance monitoring tool application (e.g., 145 a) may periodically read the paravirtualized migration counter. If the paravirtualized migration counter has changed, the performance monitoring tool application (e.g., 145 a) may read the associated processor attributes and decide whether to re-measure and adjust the tuning for a new host system (e.g., the CPU 170 n).
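The periodic auto-tuning check of FIG. 5 reduces to comparing the last-seen counter value against a fresh read; the helper below is a minimal sketch, with an assumed name and the assumption that the caller re-measures processor attributes whenever it returns true:

```c
#include <stdbool.h>
#include <stdint.h>

/* Compare the cached migration count against a fresh read. On a change,
 * update the cache and signal that the tool should re-read processor
 * attributes and re-tune for the (possibly new) host. */
static bool needs_retune(uint64_t *last_seen_count, uint64_t current_count)
{
    if (current_count == *last_seen_count)
        return false;                 /* same host: keep current tuning */
    *last_seen_count = current_count; /* remember the new counter value */
    return true;                      /* migrated: re-measure and re-tune */
}
```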
FIG. 6 is an example of a performance monitoring tool application (e.g., 145 a) employing the method of FIG. 3 to program and read hardware parameters stored in PMU registers while running in a virtual machine. The application (e.g., 145 a) determines if it may employ the PMU registers (e.g., TSCs/counts of cache misses, etc.) read during a measurement interval. The application (e.g., 145 a) reads a first paravirtualized migration count (M1). The application (e.g., 145 a) reads a hardware parameter, for example, a processor frequency (F1), then performs a first measurement of one or more performance monitoring unit values during a time interval. The application (e.g., 145 a) performs a second measurement of the hardware parameter (F2) and reads a second paravirtualized migration count (M2). If M1 equals M2, indicating no migration event has occurred, then the application (e.g., 145 a) declares the measurement of one or more performance monitoring units to be valid. If M1 differs from M2, then the application (e.g., 145 a) declares the measurement of one or more performance monitoring units to be invalid.
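Because the FIG. 6 variant takes its second hardware-parameter reading before the second counter read, any counter change at all invalidates the interval, and the validity test collapses to one comparison. A minimal sketch (the function name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* FIG. 6 variant: the second hardware-parameter read precedes the second
 * counter read, so any migration event during the interval invalidates
 * the measurement regardless of whether the parameter changed. */
static bool measurement_valid_strict(uint64_t m1, uint64_t m2)
{
    return m1 == m2; /* valid only if no migration event occurred */
}
```

This variant trades the third counter read of FIG. 4 for a stricter rule: it may discard some measurements that FIG. 4 would accept, but it never needs the follow-up reads.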
FIG. 7 illustrates a diagrammatic representation of a machine in the example form of a computer system 700 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In some examples, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server machine in a client-server network environment. The machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 700 includes a processing device (processor) 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 706 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 716, which communicate with each other via a bus 708.
Processor 702 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 702 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The application 145 a-145 n and the host migration agent 160 a, 160 n shown in FIG. 1 may be executed by processor 702 configured to perform the operations and steps discussed herein.
The computer system 700 may further include a network interface device 722. The computer system 700 also may include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 712 (e.g., a keyboard), a cursor control device 714 (e.g., a mouse), and a signal generation device 720 (e.g., a speaker).
The data storage device 716 may include a computer-readable medium 724 on which is stored one or more sets of instructions (e.g., instructions of the application 145 a-145 n and the host migration agent 160 a, 160 n) embodying any one or more of the methodologies or functions described herein. The instructions of the application 145 a-145 n and the host migration agent 160 a, 160 n may also reside, completely or at least partially, within the main memory 704 and/or within the processor 702 during execution thereof by the computer system 700, the main memory 704 and the processor 702 also constituting computer-readable media. The instructions of the application 145 a-145 n and the host migration agent 160 a, 160 n may further be transmitted or received over a network 726 via the network interface device 722.
While the computer-readable storage medium 724 is shown in an example to be a single medium, the term “computer-readable storage medium” should be taken to include a single non-transitory medium or multiple non-transitory media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
In the above description, numerous details are set forth. It is apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that examples of the disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description.
Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving”, “writing”, “maintaining”, or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Examples of the disclosure also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. Example structure for a variety of these systems appears from the description herein. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other examples will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.