CN114595038A - Data processing method, computing device and computer storage medium - Google Patents

Data processing method, computing device and computer storage medium Download PDF

Info

Publication number
CN114595038A
CN114595038A CN202210455256.6A
Authority
CN
China
Prior art keywords
memory
page
target
virtual machine
kernel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210455256.6A
Other languages
Chinese (zh)
Inventor
韦梦泽
杨伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd filed Critical Alibaba Cloud Computing Ltd
Priority to CN202210455256.6A priority Critical patent/CN114595038A/en
Publication of CN114595038A publication Critical patent/CN114595038A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • G06F12/0882Page mode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1009Address translation using page tables, e.g. page table structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application provides a data processing method, a computing device, and a computer storage medium. The data processing method comprises the following steps: determining, in response to a kernel dump instruction, at least one memory mapping address obtained by a driver loaded into the virtual machine; acquiring at least one page descriptor stored by the virtual machine according to the at least one memory mapping address; acquiring at least one target memory page belonging to a target type according to the memory page type indicated by the at least one page descriptor; and storing the at least one target memory page as a kernel dump file. The technical scheme provided by the embodiment of the application achieves the technical effects of reducing the generation time of the kernel dump file and the memory occupation of the kernel dump file.

Description

Data processing method, computing device and computer storage medium
Technical Field
The embodiment of the application relates to the technical field of computers and virtualization, and in particular to a data processing method, a computing device, and a computer storage medium.
Background
When the running condition of a virtual machine (guest) running on a host (host) needs to be analyzed, the running data of the kernel of the virtual machine can be stored into a kernel dump file through the kernel dump (dump), so that the kernel dump file can be analyzed by using a debugging tool to determine the running condition of the virtual machine.
The dump is generally implemented by having the host capture the running data of the guest kernel and save the generated kernel dump file on the host side.
In the related art, the running data of the guest is usually saved in the memory page of the guest, so the dump on the host side usually generates the kernel dump file by fetching all the memory pages of the guest kernel.
For a large virtual machine that has run for a long time, the running data is large; with the kernel dump file generation method of the related art, generating the kernel dump file takes a long time, the generated kernel dump file occupies a large amount of space, and the efficiency of generating the kernel dump file is low.
Disclosure of Invention
The embodiment of the application provides a data processing method, a computing device, and a computer storage medium.
In a first aspect, an embodiment of the present application provides a data processing method, including:
determining at least one memory mapped address obtained by a driver loaded into the virtual machine in response to a kernel dump instruction;
acquiring at least one page descriptor stored by the virtual machine according to the at least one memory mapping address;
acquiring at least one target memory page belonging to a target type according to the memory page type indicated by the at least one page descriptor;
and storing the at least one target memory page as a kernel dump file.
In a second aspect, an embodiment of the present application provides a data processing method, including:
creating a second memory area in the virtual machine;
acquiring at least one memory mapping address corresponding to at least one page descriptor from the virtual machine;
storing the at least one memory mapped address to the second memory region;
and sending the block address of the second memory area to a host machine, so that the host machine accesses the memory mapping address according to the block address to acquire a page descriptor under the condition that the host machine receives a kernel dump instruction, acquires at least one target memory page belonging to a target type according to the memory page type indicated by the page descriptor, and stores the at least one target memory page as a kernel dump file.
In a third aspect, an embodiment of the present application provides a host, including:
the address determination module is used for determining at least one memory mapping address obtained by using a driver loaded into the virtual machine in response to a kernel dump instruction;
a page descriptor obtaining module, configured to obtain at least one page descriptor stored in the virtual machine according to the at least one memory mapping address;
a memory page determining module, configured to obtain at least one target memory page belonging to a target type according to a memory page type indicated by the at least one page descriptor;
and the dumping module is used for storing the at least one target memory page as a kernel dumping file.
In a fourth aspect, an embodiment of the present application provides a virtual machine, including:
the creating module is used for creating a second memory area in the virtual machine;
an address obtaining module, configured to obtain at least one memory mapping address corresponding to at least one page descriptor from the virtual machine;
a storage module, configured to store the at least one memory mapped address to the second memory area;
an address sending module, configured to send a block address of the second memory area to a host, so that the host accesses the memory mapping address according to the block address to obtain a page descriptor when receiving a kernel dump instruction, obtains at least one target memory page belonging to a target type according to a memory page type indicated by the page descriptor, and stores the at least one target memory page as a kernel dump file.
In a fifth aspect, embodiments of the present application provide a computing device, comprising a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions are used for being called and executed by the processing component to realize the data processing method provided by the embodiment of the invention.
In a sixth aspect, an embodiment of the present application provides a computer storage medium, where a computer program is stored, and when the computer program is executed by a computer, the data processing method provided in the embodiment of the present invention is implemented.
The embodiment of the invention provides a data processing method, which comprises the steps of: determining, in response to a kernel dump instruction, at least one memory mapping address obtained by using a driver loaded into a virtual machine; acquiring at least one page descriptor stored by the virtual machine according to the at least one memory mapping address; and acquiring at least one target memory page belonging to a target type according to the memory page type indicated by the at least one page descriptor. According to this technical scheme, when the kernel dump file is generated, the page descriptor can be used to determine the memory page type of the corresponding memory page, so that only the memory pages matching the target type are stored as the kernel dump file, which achieves the technical effects of reducing the generation time of the kernel dump file and the memory occupation of the kernel dump file.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is an architecture diagram of a physical host using virtualization technology according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the determination of at least one memory mapped address obtained by a driver loaded into a virtual machine in response to a kernel dump instruction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a data processing method provided by an embodiment of the invention;
FIG. 5 is a flow chart of a data processing method according to another embodiment of the present invention;
FIG. 6 is a block diagram of a host provided by an embodiment of the present invention;
FIG. 7 is a block diagram of a driver provided by an embodiment of the invention;
fig. 8 is a block diagram of a computing device provided by an embodiment of the invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, claims, and figures of this application, a number of operations are included that occur in a particular order. It should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. The operation numbers, e.g., 101, 102, etc., are merely used to distinguish the various operations, and the numbers themselves do not represent any order of execution. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that the descriptions of "first", "second", etc. in this document are used to distinguish different messages, devices, modules, etc.; they do not represent a sequential order, nor do they limit "first" and "second" to different types.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Thus, the present invention may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
In this context, terms that may be used to implement portions of the invention are explained below. For example, the terms may include:
cloud-native: representative technologies of cloud-native include containers, service grids, microservices, immutable infrastructure, and Application Programming Interfaces (API). The cloud native technology is beneficial to the establishment and the operation of elastically expandable application of various organizations in novel dynamic environments such as public cloud, private cloud, mixed cloud and the like.
Virtualization technology: virtualization technology is the basis of cloud computing. In short, virtualization enables multiple virtual machines to run on one physical machine, and the virtual machines share the CPU, memory, and IO hardware resources of the physical machine, but are logically isolated from each other. The physical machine is generally called a host (host), and the virtual machine running on the host is called a guest (guest).
A safety container: a secure container is a runtime technology that provides a complete operating system execution environment for container applications, but isolates the execution of applications from the host operating system, avoiding applications from directly accessing host resources, and thus can provide additional protection between container hosts or between containers.
Dump (dump): in the field of computers, dump is generally translated and has two scenes, namely verb and noun. Verb scenarios generally refer to exporting, dumping data into file or static form, such as may be understood as: the contents of memory at a certain time, dump (unloading, exporting, saving) are converted into files. Noun scenes generally refer to files obtained in the above process or static forms, i.e., result files of verbs.
gdb: a powerful program debugging tool based on command lines.
crash: a widely used analysis tool for linux kernel crash dump files.
Kernel panic: refers to the action taken by the operating system when it detects an internal fatal error that it cannot safely handle.
Kdump: kjump is a function of the Linux kernel and can create a core dump when a kernel error occurs. When triggered, kdump exports a kernel dump file (also known as vmcore) that may be used for debugging using a debugging tool such as gdb or crash to determine the cause of the kernel error.
QEMU: an open-source machine emulator and virtual machine monitor (VMM).
Qemu dump: a set of mechanism for dumping the kernel memory is realized in Qemu, and a kernel dump file of a client can be generated by executing a dump-guest-memory command or opening pvpanic through a Qemu monitor, a kernel error monitoring device and automatic dumping in case of configuration crash.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The virtualization technology is a core technology of a cloud scene, and in a system in which a plurality of Virtual Machines (VMs) are deployed, the plurality of VMs share the same Physical host (PM), that is, Physical resources of the host, such as a processor, a memory, a disk, and network devices. Therefore, the physical resources of one physical host can be shared to a plurality of users by taking the VM as the granularity, so that the physical resources can be conveniently and flexibly used by the users on the premise of safety isolation, and the utilization rate of the physical resources is greatly improved.
In the cloud-native scenario, container technology is a representative cloud-native technology. During operation of containers, isolation is generally required between containers and between the containers and the operating system of the physical machine. However, with the large-scale use of containers, particularly in container orchestration systems such as Kubernetes, operating-system-level isolation is no longer satisfactory. In a public cloud scenario, the same physical machine needs to run different container groups (pods), and since pods still rely on an ordinary shared-kernel mechanism and are vulnerable, secure container technology has been proposed and adopted. Through secure container technology, a pod can be configured to run in a VM, which provides a complete operating-system execution environment for the pod; the execution of the applications in the pod is isolated from the operating system of the physical machine, and the applications are prevented from directly accessing the resources of the physical machine, so that protection can be provided between a pod and the physical machine or between pods.
Fig. 1 is an architecture diagram of a physical host employing virtualization technology according to an embodiment of the present invention.
As shown in fig. 1, the physical host may include: a hardware layer 101, a host operating system (host os) 102 running on the hardware layer 101, a Virtual Machine Monitor (VMM) 103 running in the host os 102, and a plurality of VMs 104 running on the VMM 103. For example, two VMs, VM a and VM b, are shown in FIG. 1. Each VM 104 includes a guest operating system (guest os) 105 and at least one container group (pod) 106 running on the guest os 105, in which at least one Application (APP) 107 can run, and a container engine 108 is configured in each VM 104 to support the running of the pod 106. For example, in the architecture shown in fig. 1, two pods 106 run in each VM 104, and each pod 106 includes one APP 107.
The hardware layer 101 may include one or more hardware devices such as a physical processor (physical CPU), a physical storage device (e.g., a memory and a hard disk), a network interface, and a peripheral device. The host os 102 may be a Linux operating system. The VMM 103, also known as a hypervisor, is software middleware running between the hardware layer 101 and each VM 104, which coordinates the hardware resources of the hardware layer 101 to each VM 104 for the guest os 105 and the APP 107. For example, the VMM 103 may coordinate the processing resources of the physical CPU to provide them to the various VMs 104.
Referring to fig. 1, the VMM 103 may run in the host os 102. Alternatively, the VMM 103 may be deployed independently of the host os 102, i.e., the VMM 103 may run directly on the hardware layer 101.
The guest os 105 may be a library operating system (library OS), also known as a unikernel, which is a lightweight virtualization technology that employs an OS exokernel architecture to abstract OS functionality into libraries, providing library files, a federated file system, common file systems, and the functionality to perform various operations and interact with other modules.
In the process of implementing the concept of the present invention, the inventors found that, during operation of the physical host shown in fig. 1, an unexpected guest os kernel crash or guest os exception can make the serial port impossible to log in to, and the container is automatically destroyed in this scenario, which greatly increases the debugging and maintenance difficulty for secure-container operations personnel. Therefore, it is necessary to provide an effective means to obtain the kernel dump file (vmcore) when the guest os crashes abnormally, so that operations personnel can analyze the cause of the abnormal crash using a debugging tool such as gdb or crash.
In the current virtualization scenario, dump schemes can be divided into two types according to the side on which the vmcore dump is implemented: 1) guest-side dump mechanisms represented by Kdump; 2) host-side dump mechanisms represented by QEMU dump. Specifically, the method comprises the following steps:
in a virtualization scenario, the guest side kernel dump scheme refers to a scheme of directly performing kernel dump in a VM by using existing characteristics of a guest kernel.
Although Kdump is an existing kernel feature that can support the generation of a vmcore, this technology has the following disadvantages when applied in a sandbox container scenario:
1. the overhead of the reserved memory is amplified in a cloud native scene;
2. when the guest os serial port becomes abnormal during operation and cannot be connected, the VM cannot be logged into to debug, and no vmcore can be obtained.
Specifically, turning on Kdump requires reserving a portion of memory at kernel startup of the VM. The size of the reserved memory is usually set to 64-128 MB, and the larger the total memory, the more memory needs to be reserved. For a single large virtual machine, this memory overhead may seem acceptable at first sight; however, in the cloud-native scenario, many small containers carrying microservices are deployed at high density on a single machine, and from the perspective of the whole machine the memory overhead is extremely large.
For example, in a function-compute scenario, assume that 2000 pods are deployed on a physical machine, each pod corresponds to a container sandbox, the memory specification of each pod is 512 MB, and Kdump is configured with a reserved memory of 64 MB. The overhead of the reserved memory is then 2000 × 64 MB = 125 GB at the same time. Moreover, since the guest kernel does not crash under most conditions, this reserved memory goes unused in normal operation. Occupying such a large amount of memory to guard against a one-in-ten-thousand situation wastes resources. On the other hand, when the sandbox container is running, if the serial port between the VM and the physical machine is abnormally disconnected, the VMM cannot acquire the running information of the VM, and without a crash Kdump cannot generate a vmcore, so a kernel dump cannot be performed actively.
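The reserved-memory arithmetic above can be checked with a short sketch (the pod count, pod size, and reservation size are the figures given in this example):

```python
MB = 1024 * 1024
GB = 1024 * MB

pods = 2000                  # sandbox container VMs on one physical machine
reserved_per_pod = 64 * MB   # Kdump reserved memory configured per guest

# Total crash-kernel reservation across the machine, unused in normal operation
total_reserved = pods * reserved_per_pod
print(total_reserved / GB)   # 125.0
```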
In summary, the guest-side kernel dump scheme, which needs to reserve guest memory, is not applicable in the cloud-native scenario.
Compared with the guest-side kernel dump scheme in a virtualization scenario, the host-side dump technology has the advantage that, in a cloud-native scenario, a vmcore can be generated for debugging without reserving memory in the guest and even when the serial port is abnormal. The host-side vmcore dump technique is represented by the implementation in QEMU. In this technology, the VMM can directly access all of the guest's physical memory, and a panic notification chain is registered in the guest kernel through the pvpanic device, so that a vmcore dump can be automatically generated after a kernel panic occurs in the guest, with no need to reserve memory in the guest.
However, the dump on the host side typically generates a kernel dump file by fetching all memory pages of the guest kernel.
For a large virtual machine that has run for a long time, the running data is large; with the kernel dump file generation method of the related art, generating the kernel dump file takes a long time, the generated kernel dump file occupies a large amount of space, and the efficiency of generating the kernel dump file is low.
In order to at least partially solve the technical problems in the related art, an embodiment of the present invention provides a data processing method, which determines at least one memory mapped address obtained by using a driver loaded into a virtual machine in response to a kernel dump instruction; acquiring at least one page descriptor stored by the virtual machine according to the at least one memory mapping address; acquiring at least one target memory page belonging to a target type according to the memory page type indicated by the at least one page descriptor; according to the technical scheme, when the kernel dump file is generated, the page descriptor can be used for determining the memory page type of the memory page corresponding to the page descriptor, so that only the memory page matched with the target type is stored as the kernel dump file, and the technical effects of reducing the generation time of the kernel dump file and the memory occupation amount of the kernel dump file are achieved.
Fig. 2 is a flowchart of a data processing method according to an embodiment of the present invention, where the method may be executed by a virtual machine monitor on a host side, and includes the following steps:
at least one memory mapped address obtained with a driver loaded into a virtual machine is determined 201 in response to a kernel dump instruction.
According to the embodiment of the invention, the VMM may receive a kernel dump instruction, where the kernel dump instruction is used to instruct the VMM to dump a memory page of a process in the memory system of the VM, and the kernel dump instruction may be sent by any network entity when the running condition of the VM needs to be analyzed and debugged.
According to an embodiment of the present invention, the VMM may access a physical storage device of the physical host to obtain the at least one memory mapped address in response to a kernel dump instruction, but is not limited thereto. The VMM may also access a virtual memory space configured in the VMM to obtain at least one memory mapped address.
According to the embodiment of the invention, the memory mapping address can be obtained from the physical memory of the virtual machine by the driver loaded in the virtual machine and sent to the host machine, so that the host machine can store the memory mapping address conveniently.
According to the embodiment of the present invention, the memory mapping address may be a storage address of a memory mapping space in the physical memory of the virtual machine, and a page descriptor that represents a memory page type of the memory page is stored in the memory mapping space.
According to an embodiment of the present invention, determining the at least one memory mapped address obtained by the driver loaded into the virtual machine may further be implemented as:
in response to the kernel dump instruction, at least one memory mapped address is obtained from the virtual machine using a driver loaded into the virtual machine.
According to the embodiment of the invention, in addition to obtaining the memory-mapped address from the storage space of the VMM, after receiving the kernel dump instruction, the VMM may send an address obtaining request to the driver loaded into the virtual machine, so that the driver returns the memory-mapped address obtained by the driver to the VMM in response to the address obtaining request.
202, obtaining at least one page descriptor stored in the virtual machine according to the at least one memory mapped address.
According to the embodiment of the present invention, after at least one memory mapping address is obtained, the at least one memory mapping address may be used to access the memory mapping space in the physical memory of the virtual machine, so as to obtain the page descriptor in the memory mapping space.
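Steps 201 and 202 above can be sketched as follows. This is an illustrative model only: the names (`PageDescriptor`, `fetch_descriptors`) and the dictionary standing in for guest physical memory are assumptions for illustration, not structures defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class PageDescriptor:
    page_frame: int   # guest physical frame the descriptor covers
    page_type: str    # e.g. "user", "private_cache", "free", ...

def fetch_descriptors(guest_memory, mapped_addresses):
    """Resolve each memory-mapped address to the page descriptor stored there."""
    return [guest_memory[addr] for addr in mapped_addresses]

# Toy guest memory-mapping space: address -> descriptor
guest_memory = {
    0x1000: PageDescriptor(page_frame=0, page_type="user"),
    0x1008: PageDescriptor(page_frame=1, page_type="free"),
}

# Addresses the guest driver reported to the VMM
descriptors = fetch_descriptors(guest_memory, [0x1000, 0x1008])
```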
203, obtaining at least one target memory page belonging to the target type according to the memory page type indicated by the at least one page descriptor.
At least one target memory page is stored as a kernel dump file 204.
According to the embodiment of the present invention, after the page descriptor is obtained, before the kernel dump file is generated, the page descriptor may be used to screen the memory pages of the memory system of the VM, and only at least one target memory page matching the target type is dumped.
According to the embodiment of the present invention, the target type may be a preset memory page type of a memory page required by the debugging tool when the kernel dump file is debugged. In some embodiments of the present invention, the target type may be carried in a kernel dump instruction, and sent to the VMM together with the kernel dump instruction, but is not limited thereto, and the target type may also be stored in the VMM in advance, and after the VMM receives the kernel dump instruction, the target type corresponding to the kernel dump instruction is determined.
According to an embodiment of the present invention, the memory page types may include, for example, a zero page, a user page, a private cache page, a non-private cache page, and a free page. In some application scenarios, the initiator of the kernel dump instruction expects to dump only those memory pages in the VM memory system whose type is user page or private cache page, so when sending the kernel dump instruction to the VMM, it can send the target types, i.e., user page and private cache page, along with the instruction. After receiving the kernel dump instruction, the VMM may use a memory mapping address obtained by the driver loaded into the virtual machine to access the VM and obtain the page descriptors of the memory pages. Based on these page descriptors, the VMM determines which of all the memory pages are of type user page or private cache page, treats those memory pages as target memory pages, and dumps the target memory pages as a kernel dump file.
Before dumping, the memory pages stored by the VM memory system can be screened according to the acquired page descriptors and the target types, so that only the subset of memory pages belonging to the target types is dumped. This screening effectively clips the kernel dump file, reducing both the memory occupied by the kernel dump file and the time needed to generate it, and thereby improving the efficiency of kernel dump file generation.
According to the embodiment of the present invention, the target type may be configured in the form of an array in which the number of bits corresponds to the number of memory page types, each bit in the array corresponds to at least one of the plurality of memory page types, and whether the memory page type corresponding to the bit is the target type is determined by setting the value of each bit in the array.
The memory page types may include, for example, five types: zero page, user page, private cache page, non-private cache page, and free page. An array of five bits can then be configured, where the bits correspond respectively to the zero page, user page, private cache page, non-private cache page, and free page. Each bit is assigned a value of 0 or 1 to indicate whether the corresponding memory page type is a target type: a bit with value 0 marks a non-target type, and a bit with value 1 marks a target type.
In some embodiments of the present invention, the kernel dump instruction carries the array [0, 0, 1, 1, 0], where the bits from left to right represent the zero page, user page, private cache page, non-private cache page, and free page. By parsing the array, the private cache page and the non-private cache page can be determined as the target types, so that, of all the memory pages stored in the VM memory system, only the private cache pages and non-private cache pages are dumped.
According to the embodiment of the invention, a plurality of virtual machines can run on the host machine, and the kernel dump instruction can carry the target virtual machine identifier.
According to an embodiment of the present invention, the data processing method further includes:
in response to the kernel dump instruction, a page descriptor stored by the virtual machine corresponding to the target virtual machine identification is obtained.
According to the embodiment of the invention, under a virtualization scene, a plurality of virtual machines are generally operated on a single host machine. When performing kernel dumping, it is not necessary to perform kernel dumping on each virtual machine running on the host machine. Therefore, when initiating the kernel dump instruction, the initiator of the kernel dump instruction may write the identification information of the virtual machine that desires to perform the kernel dump into the kernel dump instruction, so that the VMM performs the kernel dump operation only on the virtual machine corresponding to the identification information in the kernel dump instruction after receiving the kernel dump instruction.
According to an embodiment of the present invention, the data processing method further includes:
receiving a kernel dump instruction generated by the virtual machine under the condition of abnormal state;
or,
a kernel dump instruction initiated with a command line interface is received.
According to the embodiment of the invention, the kernel dump of the VMM can have two triggering conditions, including active dump and passive dump. The passive dump means that when the virtual machine is in an abnormal state, the virtual machine sends a kernel dump instruction to the VMM. The active dump is a kernel dump instruction initiated by an operation and maintenance person through a command line interface when the virtual machine is in a normal state.
According to the embodiment of the present invention, for the active dump process, the following may be specifically implemented:
A dump-guest-memory command is executed through a command-line interface (cli); the command sends a request to a serial port of the physical host through a Remote Procedure Call (RPC), and the serial port in turn sends the kernel dump instruction corresponding to the request to the VMM.
According to the embodiment of the invention, after receiving a kernel dump instruction initiated through the cli, the VMM can suspend the operation of the VM before the dump and resume normal operation of the VM after the kernel dump file is generated, in order to avoid content errors in the kernel dump file caused by changes to the VM physical memory during the dump.
According to the embodiment of the present invention, the passive dump process may be specifically implemented as follows:
when the guest os of the VM is started, the driver is loaded to the guest os, and after the driver is loaded to the guest os, the notification chain may be registered to the guest os.
After the notification chain is registered, the driver can monitor the running state of the guest os in real time. When a guest os kernel error is detected, the driver invokes a callback function that writes to a designated port of the VMM, notifying the VMM of the guest os exception event.
The management module of the VCPU in the VMM may monitor the designated port, and after obtaining the abnormal event returned by the driver, the management module may terminate the operation of the VCPU and start the dump.
According to an embodiment of the present invention, determining, in response to the kernel dump instruction, at least one memory mapped address obtained by using the driver loaded into the virtual machine may be specifically implemented as:
responding to a kernel dump instruction, and acquiring a block address from a first memory area of a host machine;
accessing a second memory area of the virtual machine according to the block address;
acquiring at least one memory mapping address from the second memory area; the second memory area is created in advance by the driver, which stores the at least one memory mapping address acquired from the virtual machine in it and feeds back its block address to the host.
FIG. 3 is a schematic diagram illustrating an embodiment of determining at least one memory mapped address obtained by a driver loaded into a virtual machine in response to a kernel dump instruction.
In fig. 3, 301 may represent a physical memory of a VM, 302 may represent a VMM, and the VMM302 may be configured with a first memory region 3021 therein, as well as a management module 3022 of a VCPU. The first memory area 3021 may be a shared memory space for the VMM302 and the management module 3022 of the VCPU to read.
For the active dump process, a kernel dump instruction initiated by cli is sent to the VMM302, so the VMM302 can read the first memory region 3021 to obtain the block address at this time.
For the passive dumping process, a kernel dumping instruction is initiated by the guest os and sent to the management module 3022 of the VCPU, so that the management module 3022 of the VCPU can read the first memory area 3021 to obtain the block address at this time.
As shown in fig. 3, the physical memory 301 of the VM may include a plurality of memory mapped spaces 304 and a second memory region 303, and each memory mapped space 304 may store a plurality of page descriptors 3041. The second memory region 303 includes a plurality of physical mapping spaces 3031, where each physical mapping space 3031 may correspond to one memory mapping space 304 and is used to store a memory mapping address of the corresponding memory mapping space 304.
After the block address is obtained, the second memory region 303 in the physical memory 301 of the VM may be accessed according to the block address, and a plurality of memory mapping addresses may be obtained from the second memory region 303, so that the page descriptor may be obtained from the memory mapping space 304 corresponding to the memory mapping addresses.
In fig. 3, a driver device 305 configured on the VMM302 side and a driver 307 configured on the guest os306 side are also included.
The VMM302 may monitor the running information of the guest os306, and upon detecting that the guest os306 has booted, may load the driver 307 of the driver device 305 into the guest os 306. After the driver 307 is successfully loaded into the guest os306, it may create the second memory region 303 in the physical memory 301 of the VM, capture the memory mapping addresses of the plurality of memory mapping spaces 304, and store those memory mapping addresses correspondingly in the plurality of physical mapping spaces 3031 of the second memory region 303.
After the memory mapping addresses are stored in the second memory region 303, the driver 307 transmits the block address of the second memory region 303 to the driver device 305 through a memory-mapped I/O (MMIO) space, based on its communication connection with the driver device 305. After the driver device 305 receives the block address from the driver 307, it may push the block address to the dump driver management module 308 in the VMM302 using epoll (an I/O event notification facility). The dump driver management module 308, in response to a storage instruction of the driver management module 309, stores the block address into the first memory region 3021, so that after the VMM302 receives the kernel dump instruction, it obtains the block address from the first memory region 3021, accesses the second memory region 303 to obtain the memory mapping addresses, and thereby obtains the page descriptors 3041 stored in the corresponding memory mapping spaces.
According to an embodiment of the present invention, obtaining at least one memory mapped address from the second memory area may specifically be implemented as:
traversing at least one physical mapping space in the second memory area;
aiming at any physical mapping space, one or more memory mapping addresses stored in the physical mapping space are obtained;
according to at least one memory mapping address, acquiring at least one page descriptor stored by a virtual machine comprises:
accessing at least one memory mapped space of the virtual machine based on the at least one memory mapped address to determine at least one page descriptor stored in the at least one memory mapped space.
According to an embodiment of the present invention, referring to fig. 3, after the block address is obtained, the second memory region 303 may be accessed according to the block address, specifically, the physical mapping space 3031 in the second memory region 303 may be traversed, and the memory mapping address corresponding to the memory mapping space may be obtained from the physical mapping space, so that the corresponding memory mapping space may be accessed according to the memory mapping address to obtain the at least one page descriptor 3041 stored in the memory mapping space.
According to an embodiment of the present invention, obtaining at least one target memory page belonging to a target type according to a memory page type indicated by at least one page descriptor may specifically be implemented as:
determining at least one target page descriptor matched with the target type from the at least one page descriptor;
and determining at least one target memory page corresponding to at least one target page descriptor according to the mapping relation between the page descriptor and the memory page.
According to the embodiment of the present invention, after obtaining the at least one page descriptor, the at least one page descriptor may be first parsed, the memory page type characterized by each page descriptor may be obtained, and then the at least one page descriptor may be screened based on the target type to determine the at least one target page descriptor belonging to the target type.
After the target page descriptor is determined, the target memory page corresponding to the target page descriptor can be found based on the mapping relationship between the page descriptor and the memory page, which is created in advance, and the target memory page is stored as a kernel dump file.
According to another embodiment of the present invention, obtaining at least one target memory page belonging to a target type according to a memory page type indicated by at least one page descriptor may further be specifically implemented as:
analyzing at least one page descriptor, and determining a page structure of a memory page corresponding to the at least one page descriptor respectively, wherein the page structure represents the memory page type of the memory page;
and determining at least one target memory page belonging to the target type according to the memory page type characterized by the page structure of the at least one page descriptor.
According to the embodiment of the invention, after the at least one page descriptor is obtained, the page descriptors can be traversed and parsed in turn. After a page descriptor is parsed to determine its page structure, the memory page type of the corresponding memory page can be determined from the page structure. If the memory page type belongs to the target type, the memory page is written into the kernel dump file and parsing continues with the next page descriptor until all page descriptors have been traversed. If the memory page type does not belong to the target type, the memory page is skipped and parsing likewise continues with the next page descriptor until all page descriptors have been traversed.
According to an embodiment of the invention, the kernel dump instruction comprises dump configuration information; the dump configuration information includes current limit information;
storing the at least one target memory page as a kernel dump file comprises:
determining the occupation amount of transmission resources indicated by the current limiting information, wherein the occupation amount of the transmission resources comprises the occupation amount of bandwidth and the input/output rate;
and storing at least one target memory page as a kernel dump file based on the transmission resource occupation amount.
According to the embodiment of the invention, generating the kernel dump file involves data transmission between the VM and the VMM. Because the VMM's data transmission resources are limited and the amount of data needed to generate the kernel dump file is large, this transmission usually occupies a large share of the VMM's data transmission resources.
Therefore, before the target memory page is stored as the kernel dump file, the current limit information in the dump configuration information can be acquired, the memory page data to be transmitted is sliced based on the current limit information, and the huge memory page data is transmitted in a slicing mode, so that the occupation of the data transmission resources of the VMM can be reduced.
According to the embodiment of the invention, the current limit information indicates the maximum VMM bandwidth and the maximum number of input/output operations per second that may be used when dumping memory page data. After the current limit information is obtained, the memory page data can be transmitted according to the indicated maximum bandwidth and maximum input/output operations per second. For example, if the current limit information indicates a maximum bandwidth of 100 MB/s and there are 600 MB of memory page data to transmit, the data can be divided into six equal parts according to the maximum bandwidth, transmitting 100 MB each time.
According to other embodiments of the present invention, a token bucket algorithm may further be used to rate-limit the memory page data to be transmitted, so as to precisely control bps (bits per second) and IOPS (input/output operations per second).
According to an embodiment of the invention, the dump configuration information may further include a target storage path of the kernel dump file.
When the kernel dump file is generated, the kernel dump file may be stored into a physical memory of the physical host based on the indication of the target storage path.
According to an embodiment of the present invention, the data processing method further includes:
receiving a debugging instruction of a debugging tool;
and providing the kernel dump file to a debugging tool, so that the debugging tool can debug the kernel dump file to determine the running condition of the kernel of the virtual machine.
Fig. 4 schematically shows a schematic diagram of a data processing method provided by an embodiment of the present invention.
As shown in fig. 4, after obtaining the page descriptor from the VM402, the VMM401 may filter the memory page according to the page descriptor, and dump the target memory page determined by the filtering, so as to generate a kernel dump file.
After the kernel dump file is generated, a debugging instruction of the debugging tool 403 may be received, and in response to the debugging instruction, the generated kernel dump file is provided to the debugging tool 403, so that the debugging tool debugs the kernel dump file to determine the running condition of the kernel of the virtual machine.
Compared with a guest-side dump scheme in the related art (such as Kdump), the data processing method provided by the embodiment of the invention places the dump mechanism on the host side and therefore has no requirement for reserved memory in the guest, avoiding guest memory reservation and the attendant waste of memory resources. In addition, the data processing method provided by the embodiment of the invention can not only generate a kernel dump file when the kernel of the VM crashes, but can also, during normal operation, actively require the VMM to generate a kernel dump file through a cli command, and can debug the running state of the VM even when a serial port connection is unavailable.
Compared with a host-side dump scheme in the related art (such as Qemu dump), the data processing method, driver device, and driver provided by the embodiment of the invention can acquire page descriptors from the guest, so that memory pages can be screened based on the page descriptors, realizing the clipping of the kernel dump file. By reasonably configuring the dump configuration information, the dump time and the size of the kernel dump file can be greatly reduced under the same compression algorithm. In a specific test, under an 80g4c container specification and using the zlib compression algorithm, with the target type set to filter out zero pages compared against dumping all memory page types: when most of the memory pages are zero pages, the dump time drops from 26 seconds to 10 seconds, an improvement of 61%, and the kernel dump file size drops from 673MB to 173MB, an improvement of 74%; when most of the memory pages are non-zero pages, the dump time drops from 119 seconds to 9 seconds, an improvement of 92%, and the dump file size drops from 1.1GB to 136MB, an improvement of 87%.
Fig. 5 is a flowchart of a data processing method according to another embodiment of the present invention, where the method may be executed by a virtual machine, and includes the following steps:
501, creating a second memory area in the virtual machine;
502, obtaining at least one memory mapping address corresponding to at least one page descriptor from a virtual machine;
503, storing at least one memory mapping address in the second memory area;
and 504, sending the block address of the second memory area to the host, so that the host accesses the memory mapping address according to the block address to obtain the page descriptor when receiving the kernel dump instruction, obtains at least one target memory page belonging to the target type according to the memory page type indicated by the page descriptor, and stores the at least one target memory page as a kernel dump file.
According to an embodiment of the present invention, the data processing method may further include:
detecting a starting instruction;
and loading the driver in response to the starting instruction so as to create a second memory area in the virtual machine by using the driver.
The specific implementation of the data processing method shown in fig. 5 may refer to the data processing method shown in fig. 1, and is not described herein again.
Fig. 6 is a block diagram of a host according to an embodiment of the present invention, and as shown in fig. 6, the host 600 may include an address determination module 601, a page descriptor obtaining module 602, a memory page determination module 603, and a dump module 604.
An address determining module 601, configured to determine, in response to a kernel dump instruction, at least one memory mapping address obtained by using a driver loaded into a virtual machine;
a page descriptor obtaining module 602, configured to obtain at least one page descriptor stored in the virtual machine according to at least one memory mapping address;
a memory page determining module 603, configured to obtain, according to the memory page type indicated by the at least one page descriptor, at least one target memory page belonging to a target type;
a dump module 604, configured to store the at least one target memory page as a kernel dump file.
According to an embodiment of the present invention, the address determination module 601 includes:
the block address determination submodule is used for responding to a kernel dump instruction and acquiring a block address from a first memory area of a host machine;
the access submodule is used for accessing a second memory area of the virtual machine according to the block address;
the address acquisition submodule is used for acquiring at least one memory mapping address from the second memory area; the second memory area is created in advance by the driver, which stores the at least one memory mapping address acquired from the virtual machine in it and feeds back its block address to the host.
According to an embodiment of the present invention, the address obtaining submodule includes:
the traversing unit is used for traversing at least one physical mapping space in the second memory area;
a mapping address obtaining unit, configured to obtain, for any physical mapping space, one or more memory mapping addresses stored in the physical mapping space;
the page descriptor retrieving module 602 includes:
and the page descriptor obtaining unit is used for accessing the at least one memory mapping space of the virtual machine based on the at least one memory mapping address so as to determine at least one page descriptor stored in the at least one memory mapping space.
According to an embodiment of the present invention, the memory page determining module 603 includes:
a descriptor determining unit for determining at least one target page descriptor matching the target type from among the at least one page descriptor;
the first memory page determining unit is configured to determine, according to a mapping relationship between the page descriptor and the memory page, at least one target memory page corresponding to the at least one target page descriptor.
According to an embodiment of the present invention, the memory page determining module 603 includes:
the analysis unit is used for analyzing the at least one page descriptor and determining a page structure of the memory page corresponding to the at least one page descriptor respectively, wherein the page structure represents the memory page type of the memory page;
a second memory page determining unit, configured to determine, according to the memory page type that is characterized by the page structure of the at least one page descriptor, at least one target memory page that belongs to the target type.
According to an embodiment of the present invention, host 600 further includes:
the virtual machine control device comprises a first instruction receiving unit, a second instruction receiving unit and a control unit, wherein the first instruction receiving unit is used for receiving a kernel dump instruction generated by the virtual machine under the condition of being in an abnormal state;
or,
and the second instruction receiving unit is used for receiving a kernel dump instruction initiated by using the command line interface.
According to the embodiment of the invention, the kernel dump instruction comprises dump configuration information; the dump configuration information includes current limit information;
dump module 604 includes:
the information determining unit is used for determining the occupation amount of the transmission resources indicated by the current limiting information, wherein the occupation amount of the transmission resources comprises the occupation amount of the bandwidth and the input/output rate;
and the memory transfer unit is used for storing at least one target memory page as a kernel memory transfer file based on the transmission resource occupation amount.
According to an embodiment of the present invention, host 600 further includes:
the instruction receiving unit is used for receiving a debugging instruction of a debugging tool;
and the file providing unit is used for providing the kernel dump file to a debugging tool so that the debugging tool can debug the kernel dump file and determine the running condition of the kernel of the virtual machine.
The host in fig. 6 may execute the data processing method in the embodiment shown in fig. 1; the implementation principle and technical effect are not described again. The specific manner in which each module, unit, and sub-unit of the device in fig. 6 performs its operations has been described in detail in the method embodiments and will not be repeated here.
Fig. 7 is a block diagram of a driver according to an embodiment of the present invention, and as shown in fig. 7, the driver 700 may include a creating module 701, an address obtaining module 702, a memory area determining module 703, and an address sending module 704.
A creating module 701, configured to create a second memory area in the virtual machine;
an address obtaining module 702, configured to obtain at least one memory mapping address corresponding to at least one page descriptor from a virtual machine;
a memory area determining module 703, configured to store the at least one memory mapping address into the second memory area;
the address sending module 704 is configured to send the block address of the second memory area to the host, so that the host accesses the memory mapping address according to the block address to obtain the page descriptor when receiving the kernel dump instruction, obtains at least one target memory page belonging to the target type according to the memory page type indicated by the page descriptor, and stores the at least one target memory page as a kernel dump file.
The driver in fig. 7 may execute the data processing method in the embodiment shown in fig. 5; the implementation principle and technical effect are not described again. The specific manner in which each module, unit, and sub-unit of the device in fig. 7 performs its operations has been described in detail in the method embodiments and will not be repeated here.
In one possible design, the data processing apparatus provided in the embodiment of the present invention may be implemented as a computing device, as shown in fig. 8, which may include a storage component 801 and a processing component 802;
the storage component 801 stores one or more computer instructions, wherein the one or more computer instructions are invoked by the processing component 802 for execution, so as to implement the data processing method provided by the embodiment of the present invention.
Of course, a computing device may also necessarily include other components, such as input/output interfaces, communication components, and so forth. The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
The computing device may be a physical device or an elastic computing host provided by a cloud computing platform, and the computing device may be a cloud server, and the processing component, the storage component, and the like may be a basic server resource leased or purchased from the cloud computing platform.
When the computing device is a physical device, the computing device may be implemented as a distributed cluster consisting of a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device.
In practical application, the computing device may be deployed as a node in a message queue system and implemented as a producer, a consumer, a transit server, a naming server, or the like in the message queue system.
The embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a computer, can implement the data processing method provided by the embodiment of the present invention.
The embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a computer, the data processing method provided by the embodiment of the present invention can be implemented.
The processing components in the respective embodiments above may include one or more processors executing computer instructions to perform all or part of the steps of the above methods. Of course, the processing elements may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the above-described methods.
The storage component is configured to store various types of data to support operation on the device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The apparatus embodiments described above are merely illustrative. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general-purpose hardware platform, or by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or parts thereof.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (14)

1. A data processing method, comprising:
in response to a kernel dump instruction, determining at least one memory mapping address obtained by a driver loaded into a virtual machine;
acquiring, according to the at least one memory mapping address, at least one page descriptor stored by the virtual machine;
acquiring, according to the memory page type indicated by the at least one page descriptor, at least one target memory page belonging to a target type; and
storing the at least one target memory page as a kernel dump file.
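The host-side flow of claim 1 can be sketched in a few lines. This is a minimal, hypothetical illustration: the names (`PageDescriptor`, `descriptor_table`, `guest_pages`) are stand-ins invented for the sketch, not structures from the patent or any real hypervisor.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PageDescriptor:
    page_type: str   # memory page type, e.g. "kernel", "user", "free"
    page_frame: int  # frame number of the memory page it describes

def collect_kernel_dump(mmap_addresses: List[int],
                        descriptor_table: Dict[int, PageDescriptor],
                        guest_pages: Dict[int, bytes],
                        target_type: str = "kernel") -> bytes:
    """Filter guest pages by descriptor type and concatenate them."""
    dump = bytearray()
    for addr in mmap_addresses:            # the memory mapping addresses
        desc = descriptor_table[addr]      # page descriptor at that address
        if desc.page_type == target_type:  # keep only target-type pages
            dump += guest_pages[desc.page_frame]
    return bytes(dump)
```

The point of the filter is that only pages of the target type (for instance, kernel pages) are copied into the dump file, which keeps the dump much smaller than a full memory image.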
2. The method of claim 1, wherein the determining, in response to the kernel dump instruction, the at least one memory mapping address obtained by the driver loaded into the virtual machine comprises:
in response to the kernel dump instruction, acquiring a block address from a first memory area of a host machine;
accessing a second memory area of the virtual machine according to the block address; and
acquiring the at least one memory mapping address from the second memory area, wherein the second memory area is created in advance by the driver, the driver stores in it the at least one memory mapping address acquired from the virtual machine, and the driver feeds back the block address of the second memory area to the host machine.
3. The method of claim 2, wherein the obtaining at least one memory mapped address from the second memory region comprises:
traversing at least one physical mapping space in the second memory area;
for any one of the at least one physical mapping space, acquiring one or more memory mapping addresses stored in the physical mapping space;
wherein the acquiring, according to the at least one memory mapping address, the at least one page descriptor stored by the virtual machine comprises:
accessing at least one memory mapping space of the virtual machine based on the at least one memory mapping address, to determine at least one page descriptor stored in the at least one memory mapping space.
4. The method according to claim 1, wherein the acquiring, according to the memory page type indicated by the at least one page descriptor, the at least one target memory page belonging to the target type comprises:
determining, from the at least one page descriptor, at least one target page descriptor matching the target type; and
determining, according to a mapping relationship between page descriptors and memory pages, at least one target memory page corresponding to the at least one target page descriptor.
5. The method according to claim 1, wherein the acquiring, according to the memory page type indicated by the at least one page descriptor, the at least one target memory page belonging to the target type comprises:
parsing the at least one page descriptor to determine a page structure of the memory page corresponding to each page descriptor, wherein the page structure characterizes the memory page type of that memory page; and
determining, according to the memory page type characterized by the page structure of the at least one page descriptor, at least one target memory page belonging to the target type.
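The page-structure parsing in claim 5 amounts to decoding a descriptor's flag word into a coarse page type. The sketch below is illustrative only: the flag bits are invented for the example and do not correspond to any real kernel's page structure layout.

```python
# Invented flag bits for the sketch (not a real struct page layout).
PG_SLAB = 1 << 0   # page backs a kernel slab allocation
PG_ANON = 1 << 1   # page backs anonymous user memory

def page_type_from_flags(flags: int) -> str:
    """Map a page descriptor's flag word to a coarse memory page type."""
    if flags & PG_SLAB:
        return "kernel"   # kernel data: include in the kernel dump
    if flags & PG_ANON:
        return "user"     # user data: may be skipped to shrink the dump
    return "other"
```

In a real implementation the flag layout would come from the guest kernel's own page structure definition, which is why the claims have the driver inside the virtual machine expose the descriptors rather than having the host guess their format.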
6. The method of claim 1, further comprising:
receiving the kernel dump instruction generated by the virtual machine in an abnormal state;
or,
receiving the kernel dump instruction initiated through a command line interface.
7. The method of claim 1, wherein the kernel dump instruction comprises dump configuration information, the dump configuration information comprising current limiting information;
the storing the at least one target memory page as a kernel dump file comprises:
determining a transmission resource occupancy indicated by the current limiting information, wherein the transmission resource occupancy comprises a bandwidth occupancy and an input/output rate; and
storing the at least one target memory page as a kernel dump file based on the transmission resource occupancy.
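One plausible reading of the current limiting in claim 7 is to pace dump writes so the average rate stays under a byte-per-second cap. The pacing scheme below is an assumption for illustration; the claim does not prescribe a specific algorithm, and the function names are invented.

```python
import io
import time

def write_throttled(pages, out, max_bytes_per_sec, now=time.monotonic,
                    sleep=time.sleep):
    """Write pages to `out`, sleeping whenever writes run ahead of budget."""
    start = now()
    sent = 0
    for page in pages:
        out.write(page)
        sent += len(page)
        budget = sent / max_bytes_per_sec  # seconds the bytes "should" take
        elapsed = now() - start
        if budget > elapsed:
            sleep(budget - elapsed)        # stall until back under the cap
    return sent
```

Injecting `now` and `sleep` keeps the sketch testable; a production version would also cap the I/O rate per device, matching the claim's distinction between bandwidth occupancy and input/output rate.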
8. The method of claim 1, further comprising:
receiving a debugging instruction of a debugging tool;
and providing the kernel dump file to the debugging tool, so that the debugging tool debugs the kernel dump file and determines the running condition of the kernel of the virtual machine.
9. The method according to claim 1, wherein a plurality of virtual machines are running on a host machine, and the kernel dump instruction carries a target virtual machine identifier;
the method further comprises the following steps:
and in response to the kernel dump instruction, acquiring the page descriptors stored by the virtual machine corresponding to the target virtual machine identifier.
10. The method of claim 1, further comprising:
and in response to the kernel dump instruction, acquiring at least one memory mapping address from the virtual machine by using a driver loaded into the virtual machine.
11. A data processing method, comprising:
creating a second memory area in the virtual machine;
acquiring at least one memory mapping address corresponding to at least one page descriptor from the virtual machine;
storing the at least one memory mapped address to the second memory region;
and sending the block address of the second memory area to a host machine, so that, upon receiving a kernel dump instruction, the host machine accesses the at least one memory mapping address according to the block address to acquire the at least one page descriptor, acquires at least one target memory page belonging to a target type according to the memory page type indicated by the at least one page descriptor, and stores the at least one target memory page as a kernel dump file.
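The guest-side steps of claim 11 can be sketched as follows. `SharedRegion`, `notify_host`, and the example block address `0x8000` are hypothetical stand-ins for real shared-memory plumbing between guest and host, invented for this sketch.

```python
from typing import Callable, List

class SharedRegion:
    """Stand-in for the second memory area the driver creates."""
    def __init__(self, block_address: int):
        self.block_address = block_address
        self.mmap_addresses: List[int] = []

def driver_setup(descriptor_addresses: List[int],
                 notify_host: Callable[[int], None],
                 block_address: int = 0x8000) -> SharedRegion:
    region = SharedRegion(block_address)                # create the region
    region.mmap_addresses.extend(descriptor_addresses)  # store the addresses
    notify_host(region.block_address)                   # send block address
    return region
```

Because the region is published before any crash occurs, the host can later walk it and read the page descriptors even when the guest kernel is no longer able to cooperate.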
12. The method of claim 11, further comprising:
detecting a start instruction;
and in response to the start instruction, loading a driver, so as to create the second memory area in the virtual machine by using the driver.
13. A computing device comprising a processing component and a storage component;
the storage component stores one or more computer instructions, and the one or more computer instructions are executed by the processing component to implement the data processing method according to any one of claims 1 to 10, or the data processing method according to any one of claims 11 to 12.
14. A computer storage medium storing a computer program which, when executed by a computer, implements the data processing method according to any one of claims 1 to 10, or the data processing method according to any one of claims 11 to 12.
CN202210455256.6A 2022-04-28 2022-04-28 Data processing method, computing device and computer storage medium Pending CN114595038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455256.6A CN114595038A (en) 2022-04-28 2022-04-28 Data processing method, computing device and computer storage medium


Publications (1)

Publication Number Publication Date
CN114595038A true CN114595038A (en) 2022-06-07

Family

ID=81821174


Country Status (1)

Country Link
CN (1) CN114595038A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130290789A1 (en) * 2012-04-27 2013-10-31 Marvell World Trade Ltd. Memory Dump And Analysis In A Computer System
CN103593298A (en) * 2013-10-16 2014-02-19 北京航空航天大学 Memory recovery method and device
JP2014032498A (en) * 2012-08-02 2014-02-20 Mitsubishi Electric Corp Fault reproduction system for computer
CN106997315A (en) * 2016-01-25 2017-08-01 阿里巴巴集团控股有限公司 A kind of method and apparatus of core dump for virtual machine


Non-Patent Citations (5)

Title
KAN ZHONG et al.: "FLIC: Fast, lightweight checkpointing for mobile virtualization using NVRAM", 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE) *
刘秀波: "Research and Implementation of a Rootkit Detection Method Based on Physical Memory Analysis", China Master's Theses Full-text Database *
李占魁: "Code Injection Attack Detection Method Based on Memory Dump Analysis", China Master's Theses Full-text Database *
胡明玉: "Computer Application Model Machine Development Technology", 31 January 2006, Liaoning University Press *
赵炯: "Implementation Principles of the Linux Operating System", 30 September 2018, Tongji University Press *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN115794313A (en) * 2022-12-26 2023-03-14 科东(广州)软件科技有限公司 Virtual machine debugging method, system, electronic equipment and storage medium
CN115794313B (en) * 2022-12-26 2024-04-09 科东(广州)软件科技有限公司 Virtual machine debugging method, system, electronic device and storage medium
CN116991543A (en) * 2023-09-26 2023-11-03 阿里云计算有限公司 Host, virtualized instance introspection method and storage medium
CN116991543B (en) * 2023-09-26 2024-02-02 阿里云计算有限公司 Host, virtualized instance introspection method and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination