WO2011127865A2 - Memory dump processing method and apparatus, and memory dump system - Google Patents

Memory dump processing method and apparatus, and memory dump system

Info

Publication number
WO2011127865A2
Authority
WO
WIPO (PCT)
Prior art keywords
processing
memory
link
unit group
memory block
Prior art date
Application number
PCT/CN2011/074721
Other languages
English (en)
French (fr)
Other versions
WO2011127865A3 (zh)
Inventor
李俊
张超
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2011/074721 priority Critical patent/WO2011127865A2/zh
Priority to CN2011800005796A priority patent/CN102203718B/zh
Priority to EP11768476.1A priority patent/EP2437178B1/en
Publication of WO2011127865A2 publication Critical patent/WO2011127865A2/zh
Priority to US13/340,342 priority patent/US8627148B2/en
Publication of WO2011127865A3 publication Critical patent/WO2011127865A3/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0766 Error or fault reporting or storing
    • G06F11/0778 Dumping, i.e. gathering error/state information after a fault for later diagnosis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3037 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a memory, e.g. virtual memory, cache

Definitions

  • Embodiments of the present invention relate to data processing technologies, and in particular to a memory dump processing method and apparatus, and a memory dump system. Background art
  • An operating system generally performs a memory dump after the system crashes, dumping the data in memory to a storage device to facilitate later problem diagnosis.
  • With the development of hardware technology, memory capacity has expanded greatly, so the system dump takes too long.
  • With the development of modern processor technology, processors have evolved from single-core to multi-core, multi-threaded designs that can process multiple threads in parallel; that is, a processor has multiple logical CPUs, and one logical CPU is one processing unit.
  • A traditional dump uses a single logical CPU, which greatly wastes the system's computing power.
  • Some existing systems, such as AIX (Advanced Interactive eXecutive) and HP-UX (Hewlett Packard UniX), provide a parallel dump function that divides the memory into blocks; one or more logical CPUs are responsible for each stage of processing a memory block (such as preprocessing, filtering, compression, and writing to disk), which speeds up processing.
  • However, because the logical CPUs that process the individual memory blocks work independently of each other, when multiple logical CPUs perform disk I/O transfers at the same time they cause an instantaneous peak in disk I/O traffic, degrading dump performance.
  • Moreover, the storage device driver is required to be reentrant so that multiple logical CPUs can use the storage device at the same time. Summary of the invention
  • Embodiments of the present invention provide a memory dump processing method and apparatus, and a memory dump system, to avoid instantaneous peak disk I/O traffic and improve memory dump performance.
  • An embodiment of the invention provides a memory dump processing method, including: invoking a first processing unit group to process the first stage of the memory dump processing of each memory block; and, for the subsequent processing stages after the first stage is completed, invoking the processing unit groups other than the first processing unit group to respectively process them, so as to write the memory blocks to a storage device, where a subsequent processing stage is the stage following a processed stage; each processing unit group correspondingly processes one stage of the memory dump processing.
  • An embodiment of the invention provides a memory dump processing apparatus, including:
  • a startup processing module, configured to invoke the first processing unit group to process the first stage of the memory dump processing of each memory block;
  • a subsequent processing module, configured to invoke the processing unit groups other than the first processing unit group to respectively process the subsequent processing stages after the first stage is completed, so as to write the memory blocks to a storage device, where a subsequent processing stage is the stage following a processed stage;
  • each processing unit group correspondingly processes one stage of the memory dump processing.
  • An embodiment of the present invention provides a memory dump system, including a memory and at least two processing units, and further including the memory dump processing apparatus provided by an embodiment of the present invention.
  • The embodiments of the present invention provide a memory dump processing method and apparatus, and a memory dump system. By invoking the first processing unit group to process the first stage of each memory block, the dump processing of the memory blocks can be triggered one by one.
  • Each memory block is thus written serially to the storage device through disk I/O, avoiding instantaneous peak disk I/O traffic; the storage device driver also does not need to be reentrant, improving memory dump performance.
  • In addition, the subsequent processing stages of each memory block are processed by the processing units of the corresponding groups, so that the processing unit groups can process the stages of the memory blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
  • FIG. 1 is a flowchart of a memory dump processing method according to Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart of a memory dump processing method according to Embodiment 2 of the present invention.
  • FIG. 3 is a flowchart of a memory dump processing method according to Embodiment 3 of the present invention.
  • FIG. 4 is a schematic structural diagram of a memory dump processing apparatus according to Embodiment 4 of the present invention.
  • FIG. 5 is a schematic structural diagram of a memory dump processing apparatus according to Embodiment 5 of the present invention.
  • FIG. 6 is a schematic structural diagram of a memory dump system according to Embodiment 6 of the present invention. Detailed description
  • FIG. 1 is a flowchart of a memory dump processing method according to Embodiment 1 of the present invention.
  • The memory dump processing method provided in this embodiment can be applied to a system having multiple processing units (i.e., multiple logical CPUs). When the operating system of the system crashes, the operating system's memory dump processing apparatus controls the processing units and performs the memory dump using this method.
  • The memory dump processing apparatus can be implemented in hardware or software, and can be integrated into the control apparatus of the operating system or set up as an independent apparatus.
  • Step 10: The memory dump processing apparatus invokes the first processing unit group to process the first stage of the memory dump processing of each memory block.
  • Specifically, the memory has already been divided into multiple memory blocks, which can be divided according to various preset rules.
  • The dump processing of each memory block involves at least two stages.
  • Typical stages may include preprocessing, filtering, compression, and writing to disk.
  • The stages of a memory block are processed in a fixed order: only after one stage is completed can the next stage be processed.
  • A corresponding processing unit group is allocated to each stage, and the number of processing units in each group, which may be one or more, can be allocated according to the actual processing speed of the processing units.
  • Step 20: For the subsequent processing stages after the first stage is completed, the processing unit groups other than the first processing unit group are invoked to respectively process them, so as to write the memory block to the storage device; a subsequent processing stage is the stage following a processed stage.
  • When the dump involves four stages, the corresponding processing unit groups are the first processing unit group, a second processing unit group, a third processing unit group, and a fourth processing unit group.
  • The memory dump performed by the method of this embodiment may proceed as follows: first, the first processing unit group is invoked to process the preprocessing stage of the first memory block; after preprocessing is completed, that stage becomes a processed stage, and the filtering stage of the first memory block becomes the subsequent processing stage.
  • Then, following the order of the memory blocks, the first processing unit group can be invoked to preprocess the second memory block while the second processing unit group filters the first memory block.
  • After the filtering stage of the first memory block is completed, its compression stage becomes the subsequent processing stage,
  • and after the preprocessing of the second memory block is completed, its filtering stage becomes the subsequent processing stage.
  • Next, following the order of the memory blocks, the first processing unit group preprocesses the third memory block, while the third processing unit group compresses the first memory block
  • and the second processing unit group filters the second memory block.
  • In this way, the dump processing of the memory blocks is triggered one by one, so that each memory block is written serially to the storage device through disk I/O, avoiding instantaneous peak disk I/O traffic.
  • The storage device driver also does not need to be reentrant, which improves memory dump performance.
  • The subsequent processing stages of each memory block are processed by the processing units of the corresponding groups, so that each processing unit group can process the stages of the memory blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
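The pipelined schedule described above can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: block N enters the pipeline one step after block N-1, each stage has its own processing unit group, and the write-disk stage therefore handles at most one block at any time, so writes reach the storage device serially. The names `STAGES` and `pipeline_schedule` are our own.

```python
# Hypothetical sketch of the pipelined dump schedule: at time step t,
# stage s works on block t - s, so the stages of different blocks
# overlap while each individual stage handles one block at a time.

STAGES = ["preprocess", "filter", "compress", "write_disk"]

def pipeline_schedule(num_blocks, stages=STAGES):
    """Return, per time step, the (stage, block) pairs that run concurrently."""
    schedule = []
    for t in range(num_blocks + len(stages) - 1):
        step = []
        for s, stage in enumerate(stages):
            b = t - s  # block currently occupying stage s
            if 0 <= b < num_blocks:
                step.append((stage, b))
        schedule.append(step)
    return schedule

if __name__ == "__main__":
    for t, step in enumerate(pipeline_schedule(4)):
        print(t, step)
```

Because only one (stage, block) pair per step involves `write_disk`, the disk never sees more than one concurrent writer, which is the property the embodiment relies on.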
  • FIG. 2 is a flowchart of a memory dump processing method according to Embodiment 2 of the present invention.
  • This embodiment differs from Embodiment 1 in that, in step 20 of the foregoing embodiment, invoking the processing unit groups other than the first processing unit group to process the subsequent stages after the first stage is completed
  • may specifically include: when the memory dump processing apparatus recognizes that a memory block has a subsequent processing stage, invoking the corresponding processing unit group to process it, and after the processing is completed, updating the next stage to be the subsequent processing stage.
  • The stages may specifically include a preprocessing stage, a filtering stage, a compression stage, and a write-disk stage.
  • The preprocessing, filtering, compression, and write-disk stages are processed in that order.
  • The first stage is the preprocessing stage,
  • and a subsequent processing stage is the filtering stage, the compression stage, or the write-disk stage.
  • After the preprocessing stage is completed, the filtering stage is the subsequent processing stage.
  • After the filtering stage is completed, the compression stage is the subsequent processing stage, and so on.
  • The stages can be set according to actual memory dump needs and are not limited to this embodiment.
  • Each stage of each memory block is provided with a corresponding storage bit for storing a processing status identifier. When the memory dump processing apparatus recognizes that a memory block has a subsequent processing stage, the corresponding processing unit group is invoked,
  • and the operation of updating the next stage to be the subsequent processing stage after the processing is completed may specifically include the following steps:
  • Step 301: When the memory dump processing apparatus identifies a subsequent processing stage according to the processing status identifiers in the storage bits of a memory block, it invokes the corresponding processing unit group for processing;
  • Step 302: After the current processing stage of a memory block is completed, the memory dump processing apparatus updates the processing status identifier corresponding to the memory block, so that the next stage becomes the subsequent processing stage.
  • The processing status identifier indicates whether a given stage of a given memory block has been processed.
  • The storage bits may be set in a reserved storage area of the memory; the reserved storage area is not used when the system works normally, and the storage bits in the reserved storage area store the processing status identifiers only when the system crashes and a memory dump is needed.
  • The storage bits can also be set in other storage units that can store the processing status identifiers when the system crashes, and are not limited to this embodiment.
  • The storage bits may be set as follows:
  • Each memory block needs three processing status identifiers, corresponding to the processing status of its first three stages. If a stage is unprocessed, its processing status identifier is "0"; after the stage is completed, its identifier is updated to "1". On this basis the next stage of that memory block can be processed, synchronizing the stages.
  • For example, with 1024 memory blocks, 1024 × 3 = 3072 storage bits are required; one storage bit may be 1 bit, so at least 384 bytes of reserved storage area are required.
  • The processing status identifiers stored in all the storage bits in the reserved storage area may be initialized to "0", so that updating them during the memory dump keeps the stages synchronized.
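The reserved-area layout above (1024 blocks, three 1-bit completion flags each, 384 bytes total, initialized to zero) can be modeled directly. This is an illustrative sketch under the stated assumptions; the function names and the bit ordering within a byte are our own choices, not part of the patent.

```python
# Illustrative sketch of the reserved-area status bits: 1024 blocks x 3
# stage-completion flags = 3072 bits = 384 bytes, all initialized to 0.
# Completing a stage sets its flag to 1, which makes the following stage
# the block's subsequent processing stage.

NUM_BLOCKS = 1024
FLAGS_PER_BLOCK = 3  # completion flags for the first three stages

reserved = bytearray(NUM_BLOCKS * FLAGS_PER_BLOCK // 8)  # 384 bytes, zeroed

def _pos(block, stage):
    bit = block * FLAGS_PER_BLOCK + stage
    return bit // 8, bit % 8

def mark_done(block, stage):
    byte, off = _pos(block, stage)
    reserved[byte] |= 1 << off

def is_done(block, stage):
    byte, off = _pos(block, stage)
    return bool(reserved[byte] >> off & 1)

def subsequent_stage(block):
    """Index of the first unfinished stage (0..3); 3 means write-disk."""
    for stage in range(FLAGS_PER_BLOCK):
        if not is_done(block, stage):
            return stage
    return FLAGS_PER_BLOCK
```

A dispatcher in the apparatus could poll `subsequent_stage` for each block and hand the block to the matching processing unit group, calling `mark_done` when that group finishes.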
  • The stage synchronization in the pipelined processing can also be implemented with a data structure such as a queue or a stack, and is not limited to this embodiment.
  • Invoking the first processing unit group in step 10 to process the first stage of the memory dump processing of each memory block may specifically include:
  • Step 101: The memory dump processing apparatus numbers the memory blocks according to their order;
  • Step 102: The memory dump processing apparatus invokes the first processing unit group to process the first stage of the memory dump processing of each memory block in numbered order.
  • Because the first processing unit group processes the first stage of each memory block in numbered order, the subsequent processing stages of the memory blocks are also performed in the order of the memory blocks.
  • After the memory blocks are written to the storage device, the file data in memory can therefore be restored without special processing.
  • FIG. 3 is a flowchart of a memory dump processing method according to Embodiment 3 of the present invention.
  • The memory dump processing method provided in this embodiment may further include the following steps:
  • Step 40: The memory dump processing apparatus detects the load condition of each processing unit group and generates a detection result;
  • Step 50: The memory dump processing apparatus dynamically adjusts the number of processing units in each processing unit group according to the detection result.
  • For example, the processing unit group corresponding to the preprocessing stage may include one processing unit, the group corresponding to the filtering stage one processing unit, the group corresponding to the compression stage three processing units, and the group corresponding to the write-disk stage one processing unit.
  • The number of processing units in each group is then dynamically adjusted by detecting the load condition of each group. When the compression stage is found to be processing too fast relative to the filtering stage,
  • one of the three processing units of the compression group can be stopped, its resources released, and the unit reassigned to the filtering stage. It is also possible to preset one processing unit in the group corresponding to each stage, and during the memory dump dynamically adjust the number of processing units in each group by detecting the load condition of each group. It is worth noting that the steps in which the memory dump processing apparatus checks the load condition of each processing unit group and adjusts the number of processing units may be performed at any point during the memory dump, and have no fixed timing relationship with the dump steps.
  • Detecting the load condition of each processing unit group and generating a detection result may specifically include:
  • the memory dump processing apparatus measures, as the detection result, the average time each processing unit group takes to process a given number of memory blocks.
  • Using this average time as the detection result reflects the load condition of each processing unit group, and the operation is simple and easy to implement.
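The load-balancing idea above can be sketched as follows. This is a hedged illustration: the patent only says to adjust unit counts based on the detected average times, so the "move one unit from the fastest group to the slowest" policy, and the function name `rebalance`, are our simplifying assumptions.

```python
# Hypothetical sketch of load adjustment: compare the average time each
# group needs per memory block, then move one processing unit from the
# fastest group to the slowest one, provided the fast group keeps at
# least one unit. Dicts are keyed by stage name.

def rebalance(avg_time_per_block, units):
    """Return an adjusted copy of `units` based on measured load."""
    adjusted = dict(units)
    slowest = max(avg_time_per_block, key=avg_time_per_block.get)
    fastest = min(avg_time_per_block, key=avg_time_per_block.get)
    if slowest != fastest and adjusted[fastest] > 1:
        adjusted[fastest] -= 1   # release a unit from the fast stage
        adjusted[slowest] += 1   # reassign it to the bottleneck stage
    return adjusted

# Mirroring the example in the text: compression (3 units) is fast,
# filtering (1 unit) is the bottleneck, so one unit is reassigned.
units = {"preprocess": 1, "filter": 1, "compress": 3, "write_disk": 1}
avg = {"preprocess": 2.0, "filter": 5.0, "compress": 1.0, "write_disk": 2.5}
adjusted = rebalance(avg, units)
```

The total number of units is conserved, matching the text's description of releasing a unit from one stage and reusing it in another.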
  • When the subsequent processing stage is the write-disk stage,
  • the operation of writing to the storage device may specifically include the following steps:
  • the memory dump processing apparatus invokes the processing unit group corresponding to the write-disk stage to perform write-disk processing according to the number of disk I/O channels and the number of the memory block, writing the memory block to the storage device through disk I/O.
  • There may be multiple disk I/O channels.
  • Performing write-disk processing according to the number of disk I/O channels and the memory block number ensures the storage order of the memory blocks on the storage device.
  • Performing write-disk processing according to the number of disk I/O channels and the memory block number, and writing the memory block to the storage device through disk I/O, may specifically include the following step:
  • the memory block number is taken modulo the number of disk I/O channels, and the memory block is written to the storage device through the corresponding disk I/O channel according to the modulo result.
  • For example, suppose the memory blocks are numbered 1 to 1024 and there are two disk I/O channels, numbered 0 and 1. For memory block No. 1, 1 modulo 2 is 1, so the block is written to the storage device through disk I/O channel No. 1; for memory block No. 2, 2 modulo 2 is 0, so the block is written to the storage device through disk I/O channel No. 0.
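The modulo rule in the example above is a one-liner; the sketch below just makes its consequence explicit: consecutive blocks alternate between the channels, and each channel still receives its blocks in increasing order, which preserves the storage order. The function name is our own.

```python
# Minimal sketch of the modulo rule: block number mod (number of disk
# I/O channels) selects the channel used to write that block.

def disk_channel(block_number, num_channels):
    return block_number % num_channels

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        print("block", n, "-> channel", disk_channel(n, 2))
```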
  • Before the memory dump processing apparatus invokes the first processing unit group in step 10 to process the first stage of the memory dump processing of each memory block, the method may further include a step of dividing the memory into blocks:
  • Step 60: The memory dump processing apparatus divides the memory into at least two memory blocks according to the bandwidth of the disk I/O to be written.
  • Dividing the memory according to the disk I/O write bandwidth can avoid the memory dump performance degradation caused by disk I/O congestion.
  • Dividing the memory into at least two memory blocks according to the disk I/O write bandwidth may specifically include:
  • Step 601: The memory dump processing apparatus calculates the capacity of a memory block according to the disk I/O write bandwidth, where the capacity of a memory block before write-disk processing is not greater than the disk I/O bandwidth;
  • dividing the memory according to this memory block capacity makes effective use of the disk I/O bandwidth.
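Step 601 can be sketched as a small capacity calculation. This is an assumption-laden illustration: the patent only requires the block capacity to be "not greater than" the disk I/O bandwidth, so setting the capacity equal to the bandwidth, and the `divide_memory` name, are our simplifications (the "at least two blocks" requirement of step 60 is also not enforced here).

```python
# Hypothetical sketch of step 601: pick a block capacity no larger than
# the per-transfer disk I/O bandwidth, then split the memory into
# ceil(memory_size / capacity) blocks. Sizes are in the same unit
# (e.g. bytes per transfer window).

def divide_memory(memory_size, disk_io_bandwidth):
    capacity = min(memory_size, disk_io_bandwidth)
    num_blocks = -(-memory_size // capacity)  # ceiling division
    return capacity, num_blocks
```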
  • FIG. 4 is a schematic structural diagram of a memory dump processing apparatus according to Embodiment 4 of the present invention. As shown in FIG. 4, the memory dump processing apparatus provided in this embodiment may implement the memory dump processing method provided by any embodiment of the present invention, but is not limited thereto.
  • By means of the startup processing module 11, the memory dump processing apparatus invokes the first processing unit group to process the first stage of each memory block and can trigger the dump processing of the memory blocks one by one, so that
  • the memory blocks are written serially to the storage device through disk I/O, which avoids instantaneous peak disk I/O traffic; the storage device driver also does not need to be reentrant, improving memory dump performance.
  • The subsequent processing stages of each memory block are handled through the subsequent processing module 12, which invokes the processing units of the corresponding groups, so that each processing unit group 13 can process the stages of the memory blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
  • FIG. 5 is a schematic structural diagram of a memory dump processing apparatus according to Embodiment 5 of the present invention.
  • Each stage of each memory block is provided with a corresponding storage bit for storing a processing status identifier, which indicates whether a given stage of a given memory block has been processed.
  • The storage bits may be set in a reserved storage area of the memory 15; the reserved storage area is not used when the system works normally, and the storage bits in the reserved storage area store the processing status identifiers only when the system crashes and a memory dump is needed.
  • The storage bits can also be set in other storage units that can store the processing status identifiers when the system crashes, and are not limited to this embodiment.
  • The subsequent processing module 12 in the memory dump processing apparatus of this embodiment may specifically include a subsequent stage processing unit 121 and a storage bit updating unit 122.
  • The subsequent stage processing unit 121 is configured to invoke the corresponding processing unit group for processing when a subsequent processing stage is identified according to the processing status identifiers in the storage bits of a memory block.
  • The storage bit updating unit 122 is configured to update the processing status identifier corresponding to a memory block after the current processing stage of the memory block is completed, so that the next stage becomes the subsequent processing stage.
  • The stage synchronization in the pipelined processing can also be implemented with a data structure such as a queue or a stack, and is not limited to this embodiment.
  • The startup processing module 11 may specifically include a numbering unit 111 and a startup unit 112. The numbering unit 111 is configured to number the memory blocks in the order of the memory blocks.
  • The startup unit 112 is configured to invoke the first processing unit group to process the first stage of the memory dump processing of each memory block in numbered order.
  • Because the memory blocks are numbered in order by the numbering unit 111 and the first processing unit group is invoked by the startup unit 112 to process the first stage of each memory block in numbered order, the subsequent processing stages of the memory blocks are also performed in the order of the memory blocks. After the memory blocks are written to the storage device 14, the file data in memory can be restored without special processing.
  • The memory dump processing apparatus may further include a load detecting module 17 and a load adjustment module 18. The load detecting module 17 is configured to detect the load condition of each processing unit group and generate a detection result.
  • The load adjustment module 18 is configured to dynamically adjust the number of processing units in each processing unit group according to the detection result.
  • The load adjustment module 18 can dynamically allocate processing units to each stage, maintaining the balance of the processing pipeline, avoiding bottlenecks, and making effective use of processing resources.
  • The load detecting module 17 can measure, as the detection result, the average time each processing unit group takes to process a given number of memory blocks.
  • Using this average time as the detection result reflects the load condition of each processing unit group, and the operation is simple and easy to implement.
  • The subsequent stage processing unit 121 includes at least a write-disk processing sub-unit 1211.
  • The write-disk processing sub-unit 1211 is configured to, when a subsequent processing stage is identified from the processing status identifiers in the storage bits of a memory block and that stage is the write-disk stage, invoke the processing unit group corresponding to the write-disk stage and instruct it to perform write-disk processing according to the number of disk I/O channels and the number of the memory block, writing the memory block to the storage device 14 through disk I/O.
  • There may be multiple disk I/O channels.
  • The write-disk processing sub-unit 1211 can instruct the processing unit group 13 corresponding to the write-disk stage to perform write-disk processing according to the number of disk I/O channels and the memory block numbers, ensuring the storage order of the memory blocks on the storage device 14.
  • The memory dump processing apparatus may further include a memory partitioning module 16, configured to divide the memory 15 into at least two memory blocks according to the bandwidth of the disk I/O to be written. Dividing the memory 15 according to the disk I/O write bandwidth can avoid the memory dump performance degradation caused by disk I/O congestion.
  • FIG. 6 is a schematic structural diagram of a memory dump system according to Embodiment 6 of the present invention.
  • The memory dump system provided in this embodiment includes a memory 15 and at least two processing units 23, and further includes
  • the memory dump processing apparatus 21 provided by any embodiment of the present invention.
  • For the working process in which the memory dump processing apparatus in the memory dump system invokes the processing units 23 to dump the memory 15, reference may be made to the above embodiments; details are not repeated here.
  • The memory dump system provided in this embodiment can trigger the dump processing of the memory blocks one by one, so that each memory block is written serially to the storage device 14 through disk I/O, avoiding instantaneous peak disk I/O traffic; the drivers of the storage device 14 are also not required to be reentrant, improving memory dump performance. The processing unit groups 13 can also process the stages of the memory blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
  • In summary, the memory dump processing method and apparatus and the memory dump system realize a serial dump of the memory blocks through pipelined cooperation of multiple processing units; peak disk I/O traffic is spread across the stages, avoiding instantaneous peak disk I/O traffic, and the storage device driver does not need to be reentrant, improving memory dump performance.
  • A sequential dump of the memory files is also achieved, providing a simple handling principle for multi-channel disk I/O. The number of processing units in each group is dynamically adjusted according to the load of each stage, avoiding bottlenecks and making effective use of system resources.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention provides a memory dump processing method, apparatus, and system. The method includes: invoking a first processing unit group to process the first stage of the memory dump processing of each memory block; and invoking the processing unit groups other than the first processing unit group to respectively process the subsequent processing stages after the first stage is completed, so as to write the memory blocks to a storage device. The technical solution provided by the present invention can process the stages of each memory block in a pipelined manner, avoid instantaneous peak disk I/O traffic, and improve memory dump performance.

Description

Memory dump processing method and apparatus, and memory dump system
Technical field
Embodiments of the present invention relate to data processing technologies, and in particular to a memory dump processing method and apparatus, and a memory dump system. Background art
An operating system generally performs a memory dump after the system crashes, dumping the data in memory to a storage device to facilitate later problem diagnosis. With the development of hardware technology, memory capacity has expanded greatly, so the system dump takes too long. Meanwhile, with the development of modern processor technology, processors have evolved from single-core to multi-core, multi-threaded designs that can process multiple threads in parallel; that is, a processor has multiple logical CPUs, and one logical CPU is one processing unit. A traditional dump, however, uses a single logical CPU, which greatly wastes the system's computing power.
Some existing systems, such as AIX (Advanced Interactive executive) and HP-UX
(Hewlett Packard UniX), provide a parallel dump function. It divides the memory into blocks, and one or more logical CPUs are responsible for processing the stages of one memory block (such as preprocessing, filtering, compression, and writing to disk), which speeds up processing. However, because the logical CPUs that process the individual memory blocks work independently and in parallel, when multiple logical CPUs perform disk I/O transfers at the same time they cause an instantaneous peak in disk I/O traffic, degrading dump performance. Moreover, the storage device driver is required to be reentrant so that multiple logical CPUs can use the storage device simultaneously. Summary of the invention
Embodiments of the present invention provide a memory dump processing method and apparatus, and a memory dump system, to avoid instantaneous peak disk I/O traffic and improve memory dump performance. An embodiment of the present invention provides a memory dump processing method, including:
invoking a first processing unit group to process the first stage of the memory dump processing of each memory block; and, for the subsequent processing stages after the first stage is completed, invoking the processing unit groups other than the first processing unit group to respectively process them, so as to write the memory blocks to a storage device, where a subsequent processing stage is the stage following a processed stage;
where each processing unit group correspondingly processes one stage of the memory dump processing.
An embodiment of the present invention provides a memory dump processing apparatus, including:
a startup processing module, configured to invoke the first processing unit group to process the first stage of the memory dump processing of each memory block; and
a subsequent processing module, configured to invoke the processing unit groups other than the first processing unit group to respectively process the subsequent processing stages after the first stage is completed, so as to write the memory blocks to a storage device, where a subsequent processing stage is the stage following a processed stage;
where each processing unit group correspondingly processes one stage of the memory dump processing.
An embodiment of the present invention provides a memory dump system, including a memory and at least two processing units, and further including the memory dump processing apparatus provided by an embodiment of the present invention.
As can be seen from the above technical solutions, the embodiments of the present invention provide a memory dump processing method and apparatus, and a memory dump system. By invoking the first processing unit group to process the first stage of each memory block, the dump processing of the memory blocks can be triggered one by one, so the memory blocks are written serially to the storage device through disk I/O, avoiding instantaneous peak disk I/O traffic; the storage device driver also does not need to be reentrant, improving memory dump performance. In addition, the subsequent processing stages of each memory block are processed by the processing units of the corresponding groups, so the processing unit groups can process the stages of the memory blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources. Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a memory dump processing method according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a memory dump processing method according to Embodiment 2 of the present invention;
Fig. 3 is a flowchart of a memory dump processing method according to Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a memory dump processing apparatus according to Embodiment 4 of the present invention;
Fig. 5 is a schematic structural diagram of a memory dump processing apparatus according to Embodiment 5 of the present invention;
Fig. 6 is a schematic structural diagram of a memory dump system according to Embodiment 6 of the present invention. Detailed description
下面将结合本发明实施例中的附图, 对本发明实施例中的技术方案进行 清楚、 完整地描述, 显然, 所描述的实施例仅仅是本发明一部分实施例, 而 不是全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没有做 出创造性劳动前提下所获得的所有其他实施例, 都属于本发明保护的范围。
实施例一
图 1为本发明实施例一提供的内存转储处理方法流程图。 如图 1所示, 本 实施例提供的内存转储处理方法具体可以应用于具有多个处理单元(即多个 逻辑 CPU ) 的系统, 当该系统的操作系统崩溃后, 操作系统的内存转储处理 装置即可控制各处理单元, 采用该内存转储处理方法进行内存转储。 内存转 储处理装置可以通过硬件和软件的形式来实现, 可以集成在操作系统的控制 装置中, 也可以独立设置内存转储处理装置。
本实施例的内存转储处理方法包括:
步骤 10、 内存转储处理装置调用第一处理单元组对各内存块的内存转储 处理的第一环节进行处理;
Specifically, the memory has already been divided into multiple memory blocks, which may be divided according to various preset rules. The dump processing of each memory block typically involves at least two phases; for example, typical phases include preprocessing, filtering, compression, and disk writing. The phases of a memory block are processed in a fixed order: a phase can be processed only after the previous phase has been completed. In the technical solution of this embodiment, a corresponding processing unit group is allocated to each phase; the number of processing units in each group, which may be one or more, can be allocated according to the actual processing speed of the processing units. For example, the group for the preprocessing phase may include one processing unit, the group for the filtering phase one processing unit, the group for the compression phase three processing units, and the group for the disk-writing phase one processing unit, so that the phases progress at a roughly uniform pace. Step 10 thus invokes the first processing unit group, which corresponds to the first phase, to process the first phase of each memory block; this may be done in a set order of the memory blocks.
Step 20: For each subsequent processing phase after the first phase is completed, invoke the processing unit groups other than the first processing unit group to perform processing respectively, so as to write the memory blocks into the storage device, where a subsequent processing phase is the phase following an already-processed phase.
Each processing unit group handles one phase of the memory dump processing. The first processing unit group is the group corresponding to the first phase and is invoked to process the first phase of each memory block. Once the first processing unit group finishes the first phase of a memory block, that phase becomes an already-processed phase, and only then can the other processing unit groups be invoked to process the subsequent phases of that block in sequence; meanwhile, the first processing unit group can go on to process the first phase of the next memory block. As the dump progresses, subsequent processing phases arise for the memory blocks one after another. These subsequent processing phases may correspond to different phases, and the memory dump processing apparatus may invoke different processing unit groups to process them concurrently.
The memory dump processing method is described below using an example in which the dump processing of a memory block includes four phases: preprocessing, filtering, compression, and disk writing, handled respectively by a first, second, third, and fourth processing unit group. A dump under the method of this embodiment proceeds as follows. First, the first processing unit group is invoked to preprocess the first memory block; once preprocessing is completed, it becomes an already-processed phase, and the filtering phase of the first block becomes the subsequent processing phase. At this point the first processing unit group is invoked, following the block order, to preprocess the second memory block, while the second processing unit group is invoked to filter the first block. When filtering of the first block finishes, its compression phase becomes the subsequent processing phase; when preprocessing of the second block finishes, its filtering phase becomes the subsequent processing phase. Now, following the block order, the first processing unit group preprocesses the third block, while the third processing unit group compresses the first block and the second processing unit group filters the second block. The subsequent operations proceed in the same way: when compression of the first block finishes, the fourth processing unit group is invoked to perform the disk-writing phase, writing the first block into the storage device and completing its dump. When filtering of the second block finishes, the third processing unit group compresses it; when preprocessing of the third block finishes, the second processing unit group filters it, while the first processing unit group preprocesses the fourth block. This cycle continues, achieving pipelined processing of every phase of every memory block and writing all memory blocks into the storage device sequentially through disk I/O.
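The pipelined flow just described can be sketched in miniature as follows. This is an illustrative model only: the stage functions, the queue-based hand-off, and the single worker thread per stage are simplifying assumptions (the embodiment allows several processing units per group), but it shows how FIFO hand-off between stage groups keeps the blocks reaching the disk-writing stage one at a time and in block order.

```python
import queue
import threading

def run_pipeline(blocks, stages):
    """Pass each block through every stage in order. One worker thread
    per stage with FIFO queues between stages means blocks reach the
    last stage (the 'write disk' stage) strictly serially and in
    block order, so disk I/O never sees concurrent writers."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(i, fn):
        while True:
            blk = qs[i].get()
            if blk is None:
                qs[i + 1].put(None)  # forward the shutdown signal downstream
                return
            qs[i + 1].put(fn(blk))   # process, then hand to the next stage

    threads = [threading.Thread(target=worker, args=(i, fn))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for blk in blocks:
        qs[0].put(blk)
    qs[0].put(None)                  # no more blocks

    out = []
    while (blk := qs[-1].get()) is not None:
        out.append(blk)              # "written to disk" in block order
    for t in threads:
        t.join()
    return out
```

A caller would pass four stage functions (preprocess, filter, compress, write) and receive the blocks back in the order they cleared the final stage.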
With the memory dump processing method of this embodiment, invoking the first processing unit group to process the first phase of each memory block triggers the dump processing of the memory blocks one by one, so the memory blocks are written into the storage device serially through disk I/O. This avoids instantaneous peaks in disk I/O traffic, removes the requirement that the storage device driver be reentrant, and improves memory dump performance. In addition, by invoking the processing unit group of each corresponding phase for the subsequent processing phases of each memory block, the processing unit groups process the phases of the memory blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
Embodiment 2
FIG. 2 is a flowchart of the memory dump processing method according to Embodiment 2 of the present invention. This embodiment differs from Embodiment 1 in that step 20 above, invoking the processing unit groups other than the first processing unit group for each subsequent processing phase after the first phase is completed, may specifically include: when the memory dump processing apparatus identifies that a memory block has a subsequent processing phase, invoking the corresponding processing unit group to process it, and, after the processing is completed, updating the next phase to be the subsequent processing phase.
Identifying and updating the subsequent processing phase of each memory block synchronizes the pipelined processing of the phases of the memory blocks.
In this embodiment, the phases may specifically include a preprocessing phase, a filtering phase, a compression phase, and a disk-writing phase, processed in that order. The first phase is the preprocessing phase, and a subsequent processing phase is the filtering, compression, or disk-writing phase. For example, after the preprocessing phase (the first phase) is completed, the filtering phase is the subsequent processing phase; after the filtering phase is completed, the compression phase is the subsequent processing phase, and so on. The phases may be configured according to actual memory dump requirements and are not limited to this embodiment.
In this embodiment, preferably, a storage bit is provided for each phase of each memory block to store a processing status flag. In that case, the operation of invoking the corresponding processing unit group when a subsequent processing phase is identified, and updating the next phase to be the subsequent processing phase after processing completes, may include the following steps:
Step 301: When the memory dump processing apparatus identifies a subsequent processing phase from the processing status flags in the storage bits of the memory blocks, it invokes the corresponding processing unit group to perform processing.
Step 302: After the current subsequent processing phase of a memory block is completed, the memory dump processing apparatus updates the corresponding processing status flag of the memory block, so that the next phase becomes the subsequent processing phase.
A processing status flag indicates whether a given phase of a given memory block has been completed. The storage bits may be located in a reserved storage area of the memory that is not used during normal system operation; the storage bits in this reserved area are used to store processing status flags only when the system crashes and a memory dump is required. The storage bits may also be located in other storage units, as long as the flags can be stored in them when the system crashes; this embodiment imposes no limitation. The storage bits may be arranged as follows:
Suppose the memory is divided into 1024 memory blocks in total and each block goes through four processing phases. Each block then needs three processing status flags, corresponding to the processing status of the first three phases. If a phase has not been processed, its flag is "0"; when the phase is completed, its flag is updated to "1", after which the next phase of that block may be processed, synchronizing the phases. With one storage bit per flag, 1024 x 3 = 3072 storage bits are needed; at one bit per storage bit, at least 384 bytes of reserved storage are required. Before the dump starts, all flags in the reserved storage area may be initialized to "0", so that updating the flags during the dump synchronizes the phases.
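The flag arrangement just described can be sketched as follows. The helper names are illustrative, but the layout matches the text: 1024 blocks with 3 one-bit flags each gives 3072 bits, i.e. 384 bytes of reserved storage, all initialized to 0.

```python
NUM_BLOCKS = 1024
FLAGS_PER_BLOCK = 3  # one flag per phase except the last (disk-writing)

# 1024 x 3 = 3072 one-bit flags -> 384 bytes of reserved storage,
# all zero ("not yet processed") before the dump starts.
flag_area = bytearray(NUM_BLOCKS * FLAGS_PER_BLOCK // 8)

def _locate(block, phase):
    """Map (block, phase) to a (byte index, bit mask) pair."""
    bit = block * FLAGS_PER_BLOCK + phase
    return bit // 8, 1 << (bit % 8)

def mark_done(block, phase):
    """Record that `phase` of `block` has finished (flag 0 -> 1)."""
    byte, mask = _locate(block, phase)
    flag_area[byte] |= mask

def is_done(block, phase):
    byte, mask = _locate(block, phase)
    return bool(flag_area[byte] & mask)

def ready_for(block, phase):
    """A phase may start on a block once the previous phase's flag is
    set; phase 0 (preprocessing) is always ready."""
    return phase == 0 or is_done(block, phase - 1)
```

Each processing unit group would poll `ready_for` for its own phase and call `mark_done` on completion, which is what lets the pipeline synchronize without semaphores.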
By using processing status flags, errors caused by relying on system semaphores to synchronize the pipeline phases in a crashed-system environment are avoided, improving the correctness of the dump processing and further improving dump performance. Phase synchronization in the pipeline may also be implemented with data structures such as queues or stacks; this embodiment imposes no limitation.
In this embodiment, step 10, in which the memory dump processing apparatus invokes the first processing unit group to perform the first phase of the memory dump processing of each memory block, may specifically include:
Step 101: The memory dump processing apparatus numbers the memory blocks according to their order. Step 102: The memory dump processing apparatus invokes the first processing unit group to perform the first phase of the memory dump processing of each memory block in the numbered order.
By numbering the memory blocks in order and invoking the first processing unit group to process the first phase of each block in numbered order, the subsequent processing phases of the blocks are also performed in block order, so the file data in memory can be restored after the blocks are written into the storage device without any special handling.
Embodiment 3
FIG. 3 is a flowchart of the memory dump processing method according to Embodiment 3 of the present invention. The method provided by this embodiment may further include the following steps:
Step 40: The memory dump processing apparatus detects the load status of each processing unit group and generates a detection result. Step 50: The memory dump processing apparatus dynamically adjusts the number of processing units in each processing unit group according to the detection result.
By detecting the load status of the processing unit groups, processing units can be allocated to the phases dynamically, keeping the processing balanced and avoiding bottlenecks. Specifically, the group for the preprocessing phase may be preset to one processing unit, the filtering group to one, the compression group to three, and the disk-writing group to one. During the dump, the number of processing units in each group is then adjusted dynamically according to the detected load. For example, when the compression phase is running too fast while the filtering phase is comparatively slow, one of the three compression units can be stopped, releasing that unit's resources, and the unit can be assigned to filtering work. Alternatively, each group may be preset to one processing unit, with the numbers then adjusted dynamically during the dump according to the detected load. Note that the steps of checking the load of the processing unit groups and adjusting the number of units in each group may be performed during the dump and have no fixed temporal relationship with the dump steps described above.
In this embodiment, preferably, step 40, in which the memory dump processing apparatus detects the load status of each processing unit group and generates a detection result, may include:
the memory dump processing apparatus measuring, as the detection result, the average time each processing unit group takes to process its corresponding phase for a set number of memory blocks.
Using the average time each group takes to process its phase over a set number of memory blocks as the detection result reflects the load of each processing unit and is simple and easy to implement.
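A minimal sketch of this load-detection and adjustment policy is given below. The rebalancing rule (move one unit from the stage with the lowest average time to the stage with the highest, as in the compression-to-filtering example above) is one plausible reading of the dynamic adjustment; the function names and the single-unit move per round are assumptions for the sketch.

```python
def average_times(samples):
    """Per-stage mean processing time over the sampled blocks.
    `samples` maps stage name -> list of per-block times."""
    return {stage: sum(ts) / len(ts) for stage, ts in samples.items()}

def rebalance(units, samples, min_units=1):
    """Move one processing unit from the fastest (most idle) stage to
    the slowest one, never dropping any stage below `min_units`.
    Returns a new units mapping; the input is left untouched."""
    avg = average_times(samples)
    slowest = max(avg, key=avg.get)
    movable = {s: t for s, t in avg.items()
               if s != slowest and units[s] > min_units}
    if not movable:
        return dict(units)           # nothing can be moved this round
    fastest = min(movable, key=movable.get)
    adjusted = dict(units)
    adjusted[fastest] -= 1
    adjusted[slowest] += 1
    return adjusted
```

Run periodically during the dump, this shifts capacity toward whichever phase has become the bottleneck.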
In this embodiment, when the subsequent processing phase is the disk-writing phase, the operation of invoking the processing unit groups other than the first processing unit group for the subsequent processing phases after the first phase is completed, so as to write the memory blocks into the storage device, may include the following step:
The memory dump processing apparatus invokes the processing unit group corresponding to the disk-writing phase, which performs the disk-writing processing according to the number of disk I/O channels and the numbers of the memory blocks, writing the memory blocks into the storage device through disk I/O.
In practice, there may be multiple disk I/O channels. When memory blocks are written into the storage device through multiple channels, performing the disk-writing processing according to the number of channels and the block numbers preserves the storage order of the blocks in the storage device.
In this embodiment, preferably, performing the disk-writing processing according to the number of disk I/O channels and the block numbers, and writing the blocks into the storage device through disk I/O, may include the following step: taking the number of a memory block modulo the number of disk I/O channels, and writing the block into the storage device through the corresponding channel according to the result.
Specifically, for example, if the memory blocks are numbered 1 to 1024 and there are two disk I/O channels, numbered 0 and 1, then block 1 has number 1, and 1 modulo 2 is 1, so block 1 is written into the storage device through channel 1; block 2 has number 2, and 2 modulo 2 is 0, so block 2 is written into the storage device through channel 0.
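The modulo routing rule above can be captured in a one-line helper; the function name is illustrative.

```python
def route_block(block_number, io_count):
    """Choose which disk I/O channel writes a given block: the block
    number modulo the number of channels. With blocks dispatched in
    numbered order, each channel receives its blocks in order too."""
    return block_number % io_count
```

With two channels, odd-numbered blocks go to channel 1 and even-numbered blocks to channel 0, matching the worked example in the text.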
In this embodiment, before step 10, in which the memory dump processing apparatus invokes the first processing unit group to perform the first phase of each memory block, the method may further include a step of dividing the memory into blocks:
Step 60: The memory dump processing apparatus divides the memory into at least two memory blocks according to the bandwidth of the disk I/O to be written to.
Dividing the memory according to the write bandwidth of the disk I/O avoids the degradation of dump performance caused by disk I/O congestion.
In this embodiment, preferably, step 60, in which the memory dump processing apparatus divides the memory into at least two memory blocks according to the bandwidth of the disk I/O to be written to, may specifically include:
Step 601: The memory dump processing apparatus computes the memory block capacity according to the bandwidth of the disk I/O to be written to, where the capacity of a memory block after being processed by all phases preceding the disk-writing phase is no greater than the bandwidth of the disk I/O.
Step 602: The memory dump processing apparatus divides the memory according to the computed memory block capacity. For example, if the write bandwidth of the disk I/O is 20 MB/s, the ratio of the filtering phase is 50%, and the ratio of the compression phase is 50%, then 20 / 50% / 50% = 80 MB, so the block capacity is 80 MB. Dividing the memory by this block capacity makes effective use of the disk I/O bandwidth.
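The capacity calculation in steps 601 and 602 can be expressed as follows, assuming (as the worked example suggests) that each phase before disk writing is characterized by the fraction of data it retains; the function name is illustrative.

```python
def block_capacity_mb(io_bandwidth_mb_s, retain_ratios):
    """Raw block capacity whose output, after each pre-disk-write phase
    shrinks the data by its retain ratio, just fills one second of
    disk I/O bandwidth: divide the bandwidth by each ratio in turn."""
    capacity = io_bandwidth_mb_s
    for ratio in retain_ratios:
        capacity /= ratio
    return capacity
```

With the figures from the text, a 20 MB/s channel behind 50% filtering and 50% compression yields 80 MB blocks.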
Embodiment 4
FIG. 4 is a schematic structural diagram of the memory dump processing apparatus according to Embodiment 4 of the present invention. As shown in FIG. 4, the apparatus provided by this embodiment can implement the memory dump processing method provided by any embodiment of the present invention, but is not limited thereto.
The memory dump processing apparatus provided by this embodiment includes a start processing module 11 and a subsequent processing module 12. The start processing module 11 is configured to invoke a first processing unit group to perform the first phase of the memory dump processing of each memory block. In the technical solution of this embodiment, the apparatus may allocate a corresponding processing unit group 13 to each phase in advance; the number of processing units in each group, which may be one or more, can be allocated according to the actual processing speed of the units. The subsequent processing module 12 is configured to, for each subsequent processing phase after the first phase is completed, invoke the processing unit groups other than the first processing unit group to perform processing respectively, so as to write the memory blocks into the storage device, where a subsequent processing phase is the phase following an already-processed phase. Each processing unit group handles one phase of the memory dump processing.
With the start processing module 11, the apparatus of this embodiment invokes the first processing unit group to process the first phase of each memory block, triggering the dump processing of the blocks one by one, so the blocks are written into the storage device serially through disk I/O. This avoids instantaneous peaks in disk I/O traffic, removes the requirement that the storage device driver be reentrant, and improves dump performance. In addition, with the subsequent processing module 12, the processing unit group of each corresponding phase is invoked for the subsequent processing phases of each block, so the processing unit groups 13 process the phases of the blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
Embodiment 5
FIG. 5 is a schematic structural diagram of the memory dump processing apparatus according to Embodiment 5 of the present invention. In this embodiment, a storage bit is provided for each phase of each memory block to store a processing status flag, which indicates whether a given phase of a given memory block has been completed. The storage bits may be located in a reserved storage area of the memory 15 that is not used during normal system operation; the storage bits in this reserved area are used to store the flags only when the system crashes and a memory dump is required. The storage bits may also be located in other storage units, as long as the flags can be stored in them when the system crashes; this embodiment imposes no limitation.
The subsequent processing module 12 of the apparatus provided by this embodiment may include a subsequent phase processing unit 121 and a storage bit updating unit 122. The subsequent phase processing unit 121 is configured to invoke the corresponding processing unit group to perform processing when a subsequent processing phase is identified from the processing status flags in the storage bits of the memory blocks. The storage bit updating unit 122 is configured to update the corresponding processing status flag of a memory block after its current subsequent processing phase is completed, so that the next phase becomes the subsequent processing phase.
By using processing status flags, errors caused by relying on system semaphores to synchronize the pipeline phases in a crashed-system environment are avoided, improving the correctness of the dump processing and further improving dump performance. Phase synchronization in the pipeline may also be implemented with data structures such as queues or stacks; this embodiment imposes no limitation.
In this embodiment, the start processing module 11 may include a numbering unit 111 and a start unit 112. The numbering unit 111 is configured to number the memory blocks according to their order. The start unit 112 is configured to invoke the first processing unit group to perform the first phase of the memory dump processing of each memory block in the numbered order.
Because the numbering unit 111 numbers the memory blocks in order and the start unit 112 invokes the first processing unit group to process the first phase of each block in numbered order, the subsequent processing phases of the blocks are also performed in block order, so the file data in memory can be restored after the blocks are written into the storage device 14 without any special handling.
In this embodiment, the memory dump processing apparatus may further include a load detecting module 17 and a load adjusting module 18. The load detecting module 17 is configured to detect the load status of each processing unit group and generate a detection result. The load adjusting module 18 is configured to dynamically adjust the number of processing units in each processing unit group according to the detection result.
With the load detecting module 17 detecting the load of the processing unit groups, the load adjusting module 18 can allocate processing units to the phases dynamically, keeping the processing balanced, avoiding bottlenecks, and making effective use of the processing resources.
In this embodiment, preferably, the load detecting module 17 may measure, as the detection result, the average time each processing unit group takes to process its corresponding phase for a set number of memory blocks. Using this average time as the detection result reflects the load of each processing unit and is simple and easy to implement.
In this embodiment, the subsequent phase processing unit 121 includes at least a disk-writing processing subunit 1211. The disk-writing processing subunit 1211 is configured to, when a subsequent processing phase is identified from the processing status flags in the storage bits of the memory blocks and that phase is the disk-writing phase, invoke the processing unit group corresponding to the disk-writing phase to perform processing, and instruct it to perform the disk-writing processing according to the number of disk I/O channels and the numbers of the memory blocks, writing the blocks into the storage device 14 through disk I/O.
In practice, there may be multiple disk I/O channels. When blocks are written into the storage device 14 through multiple channels, the disk-writing processing subunit 1211 may instruct the processing unit group 13 corresponding to the disk-writing phase to perform the disk-writing processing according to the number of channels and the block numbers, preserving the storage order of the blocks in the storage device 14.
In this embodiment, the memory dump processing apparatus may further include a memory dividing module 16, configured to divide the memory 15 into at least two memory blocks according to the bandwidth of the disk I/O to be written to. Dividing the memory 15 according to the write bandwidth of the disk I/O avoids the degradation of dump performance caused by disk I/O congestion.
Embodiment 6
FIG. 6 is a schematic structural diagram of the memory dump system according to Embodiment 6 of the present invention. As shown in FIG. 6, the system provided by this embodiment includes a memory 15, at least two processing units 23, and the memory dump processing apparatus 21 provided by any embodiment of the present invention. For the working process in which the memory dump processing apparatus 21 invokes the processing units 23 to dump the memory 15, reference may be made to the embodiments above; details are not repeated here.
The memory dump system provided by this embodiment can trigger the dump processing of the memory blocks one by one, so the blocks are written into the storage device 14 serially through disk I/O, avoiding instantaneous peaks in disk I/O traffic and removing the requirement that the driver of the storage device 14 be reentrant, which improves dump performance. It also allows the processing unit groups 13 to process the phases of the blocks in a pipelined manner, improving processing unit utilization and avoiding wasted resources.
The memory dump processing method and apparatus and the memory dump system provided by the embodiments of the present invention dump the memory blocks serially through the cooperation of multiple processing units in a pipelined manner, spreading the peak disk I/O traffic across the phases. This avoids instantaneous peaks in disk I/O traffic, removes the requirement that the storage device driver be reentrant, and improves dump performance. The embodiments achieve a sequential dump of the memory file and provide a simple handling rule for multiple disk I/O channels. The number of processing units in each group is adjusted dynamically according to the load of each phase, avoiding bottlenecks and making effective use of system resources.
Persons of ordinary skill in the art understand that all or part of the steps of the method embodiments above may be implemented by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments. The storage medium includes any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc. Finally, it should be noted that the embodiments above are intended only to describe the technical solutions of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art understand that they may still modify the technical solutions recorded in the foregoing embodiments or replace some of the technical features with equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims

1. A memory dump processing method, comprising:
invoking a first processing unit group to perform a first phase of memory dump processing of each memory block; and for each subsequent processing phase after the first phase is completed, invoking processing unit groups other than the first processing unit group to perform processing respectively, so as to write the memory blocks into a storage device, wherein a subsequent processing phase is the phase following an already-processed phase;
wherein each processing unit group handles one phase of the memory dump processing.
2. The memory dump processing method according to claim 1, wherein, for each subsequent processing phase after the first phase is completed, invoking the processing unit groups other than the first processing unit group to perform processing respectively comprises:
when it is identified that a memory block has a subsequent processing phase, invoking the corresponding processing unit group to perform processing, and, after the processing is completed, updating the next phase to be the subsequent processing phase.
3. The memory dump processing method according to claim 2, wherein a storage bit is provided for each phase of each memory block to store a processing status flag, and invoking the corresponding processing unit group when it is identified that a memory block has a subsequent processing phase, and updating the next phase to be the subsequent processing phase after the processing is completed, comprises:
when a subsequent processing phase is identified from the processing status flags in the storage bits of the memory blocks, invoking the corresponding processing unit group to perform processing; and
after the current subsequent processing phase of a memory block is completed, updating the corresponding processing status flag of the memory block, so that the next phase becomes the subsequent processing phase.
4. The memory dump processing method according to claim 1, 2, or 3, wherein invoking the first processing unit group to perform the first phase of the memory dump processing of each memory block comprises: numbering the memory blocks according to their order; and
invoking the first processing unit group to perform the first phase of the memory dump processing of each memory block in the numbered order.
5. The memory dump processing method according to claim 1, 2, or 3, further comprising:
detecting a load status of each processing unit group and generating a detection result; and
dynamically adjusting the number of processing units in each processing unit group according to the detection result.
6. The memory dump processing method according to claim 5, wherein detecting the load status of each processing unit group and generating the detection result comprises:
measuring, as the detection result, the average time each processing unit group takes to process its corresponding phase for a set number of memory blocks.
7. The memory dump processing method according to claim 4, wherein, when the subsequent processing phase is a disk-writing phase, invoking the processing unit groups other than the first processing unit group to perform processing respectively for each subsequent processing phase after the first phase is completed, so as to write the memory blocks into the storage device, comprises:
invoking the processing unit group corresponding to the disk-writing phase to perform disk-writing processing according to the number of disk I/O channels and the numbers of the memory blocks, and writing the memory blocks into the storage device through disk I/O.
8. The memory dump processing method according to claim 7, wherein performing the disk-writing processing according to the number of disk I/O channels and the numbers of the memory blocks, and writing the memory blocks into the storage device through disk I/O, comprises:
taking the number of a memory block modulo the number of disk I/O channels, and writing the memory block into the storage device through the corresponding disk I/O channel according to the result.
9. The memory dump processing method according to claim 1, further comprising, before invoking the first processing unit group to perform the first phase of the memory dump processing of each memory block: dividing the memory into at least two memory blocks according to the bandwidth of the disk I/O to be written to.
10. The memory dump processing method according to claim 9, wherein dividing the memory into at least two memory blocks according to the bandwidth of the disk I/O to be written to comprises:
computing a memory block capacity according to the bandwidth of the disk I/O to be written to, wherein the capacity of a memory block after being processed by all phases preceding the disk-writing phase is no greater than the bandwidth of the disk I/O; and
dividing the memory according to the computed memory block capacity.
11. The memory dump processing method according to claim 1, wherein:
the phases comprise a preprocessing phase, a filtering phase, a compression phase, and a disk-writing phase, processed in that order; the first phase is the preprocessing phase, and the subsequent processing phase is the filtering phase, the compression phase, or the disk-writing phase.
12. A memory dump processing apparatus, comprising:
a start processing module, configured to invoke a first processing unit group to perform a first phase of memory dump processing of each memory block; and
a subsequent processing module, configured to, for each subsequent processing phase after the first phase is completed, invoke processing unit groups other than the first processing unit group to perform processing respectively, so as to write the memory blocks into a storage device, wherein a subsequent processing phase is the phase following an already-processed phase;
wherein each processing unit group handles one phase of the memory dump processing.
13. The memory dump processing apparatus according to claim 12, wherein a storage bit is provided for each phase of each memory block to store a processing status flag, and the subsequent processing module comprises:
a subsequent phase processing unit, configured to invoke the corresponding processing unit group to perform processing when a subsequent processing phase is identified from the processing status flags in the storage bits of the memory blocks; and
a storage bit updating unit, configured to update the corresponding processing status flag of a memory block after the current subsequent processing phase of the memory block is completed, so that the next phase becomes the subsequent processing phase.
14. The memory dump processing apparatus according to claim 12 or 13, wherein the start processing module comprises:
a numbering unit, configured to number the memory blocks according to their order; and
a start unit, configured to invoke the first processing unit group to perform the first phase of the memory dump processing of each memory block in the numbered order.
15. The memory dump processing apparatus according to claim 12 or 13, further comprising: a load detecting module, configured to detect a load status of each processing unit group and generate a detection result; and a load adjusting module, configured to dynamically adjust the number of processing units in each processing unit group according to the detection result.
16. The memory dump processing apparatus according to claim 14, wherein the subsequent phase processing unit comprises at least a disk-writing processing subunit, configured to, when a subsequent processing phase is identified from the processing status flags in the storage bits of the memory blocks and the subsequent processing phase is a disk-writing phase, invoke the processing unit group corresponding to the disk-writing phase to perform processing, and instruct it to perform disk-writing processing according to the number of disk I/O channels and the numbers of the memory blocks, writing the memory blocks into the storage device through disk I/O.
17. The memory dump processing apparatus according to claim 12, further comprising: a memory dividing module, configured to divide the memory into at least two memory blocks according to the bandwidth of the disk I/O to be written to.
18. A memory dump system, comprising a memory and at least two processing units, and further comprising the memory dump processing apparatus according to any one of claims 12 to 17.