CN110990128A - Virtualization acceleration method and device, memory and equipment - Google Patents


Info

Publication number
CN110990128A
Authority
CN
China
Prior art keywords
cache
virtual machine
cache memory
memory
calling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911344664.9A
Other languages
Chinese (zh)
Inventor
姜哲
邹仕洪
朱睿
李翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuanxin Science and Technology Co Ltd
Original Assignee
Beijing Yuanxin Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuanxin Science and Technology Co Ltd filed Critical Beijing Yuanxin Science and Technology Co Ltd
Priority to CN201911344664.9A priority Critical patent/CN110990128A/en
Publication of CN110990128A publication Critical patent/CN110990128A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893 Caches characterised by their organisation or structure
    • G06F12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45583 Memory management, e.g. access or allocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 Providing a specific technical effect
    • G06F2212/1016 Performance improvement
    • G06F2212/1024 Latency reduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15 Use in a specific computing environment
    • G06F2212/152 Virtualized environment, e.g. logically partitioned system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a virtualization acceleration method, apparatus, memory, and device. Each cache (Cache) is given a tag, where the caches are formed when a virtual machine running in the system performs address translation; by parsing the tag, the cache corresponding to the tag is invoked for the virtual machine performing the operation. This technical solution overcomes the problems of existing virtual machine operation, in which the cache must be flushed and reloaded on every operation, resulting in long processing times and severely degraded CPU performance.

Description

Virtualization acceleration method and device, memory and equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular to a virtualization acceleration method, apparatus, memory, and device.
Background
Memory virtualization refers to sharing physical system memory and dynamically allocating it to virtual machines. The guest operating system continues to control the mapping of virtual addresses to guest physical addresses (VA -> PA), but it cannot directly access real machine memory, so the Hypervisor layer must be responsible for mapping guest physical memory to real machine memory (PA -> MA); the full translation chain is GVA (Guest Virtual Address) -> GPA (Guest Physical Address) -> HPA (Host Physical Address). When the Hypervisor layer starts and configures a virtual machine according to a request, and according to the QoS requirement carried by the request's tag, the prior art adopts either a pure software approach, such as "shadow page tables", or a hardware-assisted approach, such as the EPT (Extended Page Table) mechanism.
In a virtualized environment, a single logical CPU can run multiple virtual CPUs (VCPUs) from different virtual machines. To prevent interference between different VCPUs, and between the address-translation caches (TLBs and paging-structure caches) of the VCPUs and the logical CPU, the caches used for address translation must be flushed on every VM Entry (virtual machine load) or VM Exit operation, and the address-translation cache of the target CPU/VCPU must then be reloaded. However, flushing and reloading these caches are time-consuming operations that degrade CPU/VCPU performance.
Disclosure of Invention
In view of the problems in existing virtual machine load and exit operations, a virtualization acceleration method, apparatus, memory, and device are provided, which aim to reserve a unique cache (Cache) for each virtual machine running in the system, thereby overcoming the long processing time and severe CPU (Central Processing Unit) performance impact caused by having to flush and reload the cache every time a virtual machine is operated.
The specific technical solution is as follows:
A virtualization acceleration method comprises the following steps:
tagging each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
Preferably, the tagged caches are kept spatially independent of each other.
Preferably, the method for invoking the cache corresponding to the tag for the current virtual machine comprises:
after the system loads the virtual machine, a system logical CPU allocates the corresponding cache to the virtual machine according to the tag.
Preferably, the tag includes a priority with which an application in the virtual machine invokes the cache.
Preferably, the number of times an application in the virtual machine fails to invoke the cache is recorded, and if the calculated failure rate is higher than a predetermined threshold, the priority with which the current application invokes the cache is dynamically adjusted.
Also included is a virtualization acceleration apparatus, comprising:
a tag module, configured to tag each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
a parsing module, configured to parse the tag;
an invoking module, configured to invoke, according to the parsing result, the cache corresponding to the tag for the loaded virtual machine performing the operation.
Preferably, the tag module is further configured to set a priority with which an application in the virtual machine invokes the cache.
Preferably, the apparatus further comprises:
a statistics module, configured to record the number of times an application in the virtual machine fails to invoke the cache;
a dynamic adjustment module, configured to dynamically adjust the priority with which the current application invokes the cache if the calculated failure rate is higher than a predetermined threshold.
Also included is a memory, in which software is executed, the software being configured to perform the following steps:
tagging each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
Also included is a device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the steps of the above method are implemented when the processor executes the program.
The beneficial effects of the above technical solution are: cache resources are managed by means of tags. A tag is transitive from top to bottom: once applied by upper-layer software, it is carried throughout the function-call chain, and the hardware CPU can parse it and execute the policy corresponding to it. Because each virtual machine running in the system keeps a unique cache, the address-translation caches used by different VCPUs/CPUs are no longer mixed together; the address-translation cache need not be flushed on every VM Entry or VM Exit, since the previously reserved cache can simply be invoked again, effectively improving CPU performance.
Drawings
FIG. 1 is a flowchart illustrating a virtualization acceleration method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a virtualization acceleration device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a virtualized acceleration device according to another embodiment of the invention;
fig. 4 is a schematic structural diagram of an embodiment of an apparatus of the present invention.
The above reference numerals denote:
1. a label module; 2. an analysis module; 3. calling a module; 4. a statistical module; 5. a dynamic adjustment module; A. a memory; B. a processor.
Detailed Description
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
It should be noted that the embodiments described below and the technical features in the embodiments may be combined with each other without conflict.
The technical solution of the invention provides a virtualization acceleration method, comprising the following steps:
tagging each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
As shown in Fig. 1, the specific steps include:
step S1, tagging each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
step S2, parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
In the above technical solution, the tag may be applied by upper-layer software. The tag is transitive from top to bottom: once applied by the upper-layer software, it is carried throughout the function-call chain, and the hardware CPU can parse it and execute the policy corresponding to it.
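The top-down transitivity described above can be sketched in software. The following is a minimal illustration only, not the patented hardware mechanism: the thread-local tag carrier, the `POLICIES` table, and all function names are hypothetical, invented for this example. The tag is applied once at the top layer and is still visible at the bottom layer without any intermediate layer passing it explicitly.

```python
import threading

# Hypothetical sketch of top-down tag transitivity: a tag applied by
# upper-layer software rides along implicitly through the call chain, and
# the lowest layer (standing in for the hardware CPU) parses it to pick
# the policy associated with that tag.
_current = threading.local()

POLICIES = {  # tag -> policy resolved at the "hardware" level (assumed)
    "vm-1": "use reserved cache #1",
    "vm-2": "use reserved cache #2",
}

def with_tag(tag, fn, *args):
    """Apply a tag at the top layer; it stays attached for the whole call."""
    _current.tag = tag
    try:
        return fn(*args)
    finally:
        _current.tag = None

def middle_layer(x):
    # This layer never mentions the tag, yet the tag is still in effect.
    return bottom_layer(x)

def bottom_layer(x):
    # The "hardware" parses the carried tag and executes its policy.
    policy = POLICIES[_current.tag]
    return f"{x}: {policy}"

print(with_tag("vm-1", middle_layer, "translate GVA"))
# -> translate GVA: use reserved cache #1
```

The design point illustrated is that only the outermost caller needs to know about tagging; every lower layer inherits the tag for free, which is what lets the hardware resolve a policy without changing intermediate interfaces.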
In a preferred embodiment, the tagged caches are kept spatially independent of each other.
In the above technical solution, because each tagged cache is spatially independent, each VCPU/CPU uses its own unique cache, preventing interference between different VCPUs, and between the address-translation caches (TLB and paging-structure cache) of the VCPUs and the logical CPU.
In a preferred embodiment, the method for invoking the cache corresponding to the tag for the current virtual machine comprises:
after the system loads the virtual machine, the system logical CPU allocates the corresponding cache to the virtual machine according to the tag.
In the above technical solution, when a cache cannot satisfy the current application, the cache needs to be updated, and the above allocation steps are performed again, which is not repeated here.
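The per-tag reservation described above can be sketched as follows. This is a toy model under assumptions, not the patent's implementation: the `LogicalCPU` class and the dict-based TLB stand-in are invented for illustration. The key behavior shown is that VM Entry switches to the tag's reserved cache instead of flushing a shared one, so earlier translations survive a VM switch.

```python
# Minimal sketch (assumed names and data layout) of a logical CPU that
# reserves one spatially independent address-translation cache per VM tag.
# On VM Entry it switches to the tag's reserved cache; nothing is flushed.
class LogicalCPU:
    def __init__(self):
        self._caches = {}   # tag -> private TLB-like dict, one per VM
        self.active = None  # cache of the currently loaded VM

    def vm_entry(self, tag):
        # Allocate on first use; afterwards the reserved cache is reused.
        self.active = self._caches.setdefault(tag, {})

    def translate(self, gva, slow_path):
        if gva not in self.active:        # miss: fill via page walk
            self.active[gva] = slow_path(gva)
        return self.active[gva]

cpu = LogicalCPU()
cpu.vm_entry("vm-1")
cpu.translate(0x1000, lambda gva: gva + 0x8000)  # fill vm-1's cache
cpu.vm_entry("vm-2")                             # switch; vm-1 not flushed
cpu.vm_entry("vm-1")                             # vm-1's entries survive
assert 0x1000 in cpu.active
```

Compare this with the flush-on-every-entry baseline from the Background section, where the fill would have to be repeated after each switch; here the reserved cache is simply re-attached.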
In a preferred embodiment, the tag includes a priority with which an application within the virtual machine invokes the cache.
In a preferred embodiment, the number of times an application in the virtual machine fails to invoke the cache is recorded, and if the calculated failure rate is higher than a predetermined threshold, the priority with which the current application invokes the cache is dynamically adjusted.
In the above technical solution, while a corresponding cache is provided for each virtual machine, a priority for invoking the cache can also be set for the applications within it, improving specificity.
The following is a specific example:
for example, there are multiple applications, and 2 of them have high requirements on real-time performance, then Cache hit rate of a Cache memory can be guaranteed by marking the tags as high priority, and Cache failure rates of all applications are also counted. In order to avoid the situation that the hit rate of some applications is very high, the hit rate of some applications is too low, and the overall performance of the system is deteriorated, the tag can mark the network card peripheral in addition to the cache, for example, when a data packet needs to be transmitted in real time, the data packet can be preferentially sent through the tag and the network card device. The label strategy modification and the cache are the same mechanism, and the software callback function is triggered by a threshold value.
The technical solution of the invention also includes a virtualization acceleration apparatus.
As shown in Fig. 2, an embodiment of the virtualization acceleration apparatus comprises:
a tag module 1, configured to tag each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
a parsing module 2, configured to parse the tag;
an invoking module 3, configured to invoke, according to the parsing result, the cache corresponding to the tag for the loaded virtual machine performing the operation.
In the above technical solution, the apparatus works in the same way as the above method, which is not repeated here. By means of tagging, the apparatus achieves cooperation between software and hardware: the hardware parses the tag to complete the allocation and reservation of hardware resources, effectively improving virtualization efficiency.
In a preferred embodiment, the tag module is further configured to set a priority with which an application within the virtual machine invokes the cache.
In a preferred embodiment, as shown in Fig. 3, the virtualization acceleration apparatus further comprises:
a statistics module 4, configured to record the number of times an application in the virtual machine fails to invoke the cache;
a dynamic adjustment module 5, configured to dynamically adjust the priority with which the current application invokes the cache if the calculated failure rate is higher than a predetermined threshold.
In the above technical solution, when a hardware statistics parameter reaches the threshold (the predetermined threshold), the software function is called back, balancing the execution efficiency of the whole system.
The technical solution of the invention also includes a memory.
An embodiment of a memory, in which software is executed to perform the following steps:
tagging each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
The technical solution of the invention also includes a device.
As shown in Fig. 4, an embodiment of a device comprises a memory A, a processor B, and a computer program stored on the memory A and executable on the processor B, wherein the processor B implements the steps of the above method when executing the program.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be accomplished by hardware driven by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A virtualization acceleration method, characterized by comprising the following steps:
tagging each cache (Cache), wherein the caches are formed when a virtual machine running in a system performs address translation;
parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
2. The virtualization acceleration method of claim 1, wherein the tagged caches are kept spatially independent of each other.
3. The virtualization acceleration method of claim 1, wherein invoking the cache corresponding to the tag for the current virtual machine comprises:
after the system loads the virtual machine, a system logical CPU allocates the corresponding cache to the virtual machine according to the tag.
4. The virtualization acceleration method of claim 1, wherein the tag comprises a priority with which an application within the virtual machine invokes the cache.
5. The virtualization acceleration method of claim 1 or 4, wherein the number of times an application within the virtual machine fails to invoke the cache is recorded, and if the calculated failure rate is higher than a predetermined threshold, the priority with which the current application invokes the cache is dynamically adjusted.
6. A virtualization acceleration apparatus, characterized by comprising:
a tag module, configured to tag each cache, wherein the caches are formed when a virtual machine running in the system performs address translation;
a parsing module, configured to parse the tag;
an invoking module, configured to invoke, according to the parsing result, the cache corresponding to the tag for the virtual machine performing the operation.
7. The virtualization acceleration apparatus of claim 6, wherein the tag module is further configured to set a priority with which an application within the virtual machine invokes the cache.
8. The virtualization acceleration apparatus of claim 6, further comprising:
a statistics module, configured to record the number of times an application within the virtual machine fails to invoke the cache;
a dynamic adjustment module, configured to dynamically adjust the priority with which the current application invokes the cache if the calculated failure rate is higher than a predetermined threshold.
9. A memory, in which software is executed to perform the following steps:
tagging each cache, wherein the caches are formed when a virtual machine running in a system performs address translation;
parsing the tag, and invoking the cache corresponding to the tag for the virtual machine performing the operation.
10. A device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any one of claims 1-5 are implemented when the processor executes the program.
CN201911344664.9A 2019-12-23 2019-12-23 Virtualization acceleration method and device, memory and equipment Pending CN110990128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911344664.9A CN110990128A (en) 2019-12-23 2019-12-23 Virtualization acceleration method and device, memory and equipment


Publications (1)

Publication Number Publication Date
CN110990128A true CN110990128A (en) 2020-04-10

Family

ID=70076109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911344664.9A Pending CN110990128A (en) 2019-12-23 2019-12-23 Virtualization acceleration method and device, memory and equipment

Country Status (1)

Country Link
CN (1) CN110990128A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102483718A * 2009-08-25 2012-05-30 International Business Machines Corporation Cache partitioning in virtualized environments
CN108694068A * 2017-03-29 2018-10-23 Juniper Networks, Inc. Method and system for use in a virtual environment
CN108694072A * 2017-04-07 2018-10-23 Intel Corporation Apparatus and method for efficient graphics virtualization



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410
