WO2014100996A1 - Disk array flushing method and disk array flushing apparatus - Google Patents

Disk array flushing method and disk array flushing apparatus (磁盘阵列刷盘方法及磁盘阵列刷盘装置)

Info

Publication number
WO2014100996A1
Authority
WO
WIPO (PCT)
Prior art keywords
disk
raid group
logical unit
concurrent
current
Prior art date
Application number
PCT/CN2012/087506
Other languages
English (en)
French (fr)
Inventor
张翔
董浩
李权
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to JP2015549916A priority Critical patent/JP6060277B2/ja
Priority to EP12890699.7A priority patent/EP2927779B1/en
Priority to PCT/CN2012/087506 priority patent/WO2014100996A1/zh
Priority to CN201280002903.2A priority patent/CN103229136B/zh
Publication of WO2014100996A1 publication Critical patent/WO2014100996A1/zh
Priority to US14/752,077 priority patent/US9582433B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0868Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Definitions

  • Embodiments of the present invention relate to computer technology, and in particular to a disk array flushing method and a disk array flushing apparatus. Background Art
  • In the prior art, flushing of the disk array is scheduled at the logical-unit level; that is, each flush input/output (IO) targets a single logical unit, while the disk array contains multiple RAID groups and each RAID group contains multiple logical units.
  • Current flushing methods do not handle this situation well. For example, for multiple logical units in the same RAID group, the flush IOs aimed at a single logical unit are ordered within that logical unit; but because multiple logical units are flushed concurrently and the ordering across those units is discrete, the concurrent flush IOs seen across the whole RAID group are often discrete.
  • Discrete concurrent flush IOs force the disk arm to bounce back and forth while seeking, so a large share of time is consumed by arm seeks rather than by reading and writing data. This severely degrades the overall performance of the disk array and keeps its throughput low.
  • Embodiments of the present invention provide a disk array flushing method and a disk array flushing apparatus for improving the flushing efficiency and increasing the throughput of a disk array.
  • In a first aspect, a disk array flushing method includes: obtaining an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and flushing the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially, in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • In a first possible implementation, before the concurrent flush IOs of the RAID group are flushed sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units, the method further includes: determining the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • In a second possible implementation, the number of concurrent flush IOs of the RAID group is determined as dn = M × (Pn / P) × (1 − U), where M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
  • In a third possible implementation, the flushing the concurrent flush IOs of the RAID group sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units includes: traversing from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, pointing the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flushing the current logical unit.
  • In a fourth possible implementation, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flushing the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stopping the flushing of the current logical unit and pointing the flush pointer of the RAID group to the next logical unit in physical-address order.
  • In a fifth possible implementation, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flushing the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stopping the flushing of the RAID group.
  • In a sixth possible implementation, after the flushing of the RAID group is stopped, the method further includes: keeping the flush pointer of the RAID group unchanged.
  • In a second aspect, a disk array flushing apparatus includes: an obtaining module, configured to obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and a flushing module, configured to flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • In a first possible implementation, the apparatus further includes: a determining module, configured to determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • Further, the flushing module is specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
  • In flushing the current logical unit, the flushing module is specifically configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
  • The flushing module is further configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group.
  • The flushing module is further configured to: keep the flush pointer of the RAID group unchanged.
  • In a third aspect, another disk array flushing apparatus provided by an embodiment of the present invention includes: a memory, configured to store instructions; and
  • a processor coupled to the memory, the processor being configured to execute the instructions stored in the memory, where
  • the processor is configured to: obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • The processor is further configured to: determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • The processor is specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
  • In flushing the current logical unit, the processor is specifically configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
  • The processor is further configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group.
  • The processor is further configured to: keep the flush pointer of the RAID group unchanged.
  • The disk array flushing method and disk array flushing apparatus provided by the embodiments of the present invention schedule the multiple logical units within a single RAID group collectively and flush them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, they avoid interference between RAID groups, which improves the flushing efficiency of the disk array and increases its throughput.
  • FIG. 1 is a flowchart of Embodiment 1 of the disk array flushing method according to the present invention;
  • FIG. 2 is a flowchart of Embodiment 2 of the disk array flushing method according to the present invention;
  • FIG. 3 is a flowchart of Embodiment 3 of the disk array flushing method according to the present invention;
  • FIG. 4 is a schematic structural diagram of Embodiment 1 of the disk array flushing apparatus according to the present invention;
  • FIG. 5 is a schematic structural diagram of Embodiment 2 of the disk array flushing apparatus according to the present invention.
  • FIG. 1 is a flowchart of Embodiment 1 of the disk array flushing method according to the present invention. As shown in FIG. 1, the disk array flushing method provided in this embodiment may include:
  • Step S110: Obtain an ordering, by physical address, of the logical units in a same RAID group in the disk array.
  • Specifically, the disk array may include multiple RAID groups, and each RAID group may include multiple logical units. A logical unit may be identified by a logical unit number (LUN) and is used to divide the storage space of the RAID group.
  • Typically, when the logical units are created, all logical units in the same RAID group are laid out in the order of their physical addresses on the RAID group, and each logical unit occupies a contiguous physical address space in the RAID group.
  • For example, LUNs 1 to 5 might be sorted by physical address as: LUN2, LUN4, LUN1, LUN5, LUN3.
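  • As a minimal sketch of this ordering step, the Python snippet below sorts the logical units of one RAID group by their starting physical address; the LogicalUnit class and its start_address field are our own illustration, not structures defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class LogicalUnit:
    lun: int            # logical unit number (LUN)
    start_address: int  # first physical address the unit occupies in the RAID group

# Hypothetical layout in which LUN numbering does not match physical placement.
units = [LogicalUnit(1, 200), LogicalUnit(2, 0), LogicalUnit(3, 400),
         LogicalUnit(4, 100), LogicalUnit(5, 300)]

# Step S110: order the logical units of one RAID group by physical address.
ordered = sorted(units, key=lambda u: u.start_address)
print([u.lun for u in ordered])  # -> [2, 4, 1, 5, 3], matching the example above
```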
  • Step S120: Flush the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially, in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • One concurrent flush IO of a RAID group may contain one or more dirty pages that need to be flushed to the same logical unit in the RAID group.
  • One concurrent flush IO of the RAID group can be understood as a single flush operation that writes the dirty pages of that flush IO to the RAID group.
  • Typically, the dirty pages to be flushed to the RAID group are packed consecutively in the order of their destination addresses to form the individual concurrent flush IOs, where the destination address of a dirty page corresponds to the physical address of the logical unit to which the dirty page is to be written.
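  • The packing can be sketched as follows, assuming each dirty page is described by a hypothetical (destination LUN, destination address) pair; real flush IOs would of course also carry the page buffers themselves:

```python
from itertools import groupby

# Hypothetical dirty pages as (destination_lun, destination_address) pairs.
dirty_pages = [(1, 200), (2, 0), (1, 208), (4, 100), (2, 8), (1, 216)]

# Pack the pages consecutively in destination-address order; because each
# logical unit occupies a contiguous address range, consecutive pages bound
# for the same unit form one concurrent flush IO.
dirty_pages.sort(key=lambda page: page[1])
flush_ios = [list(pages) for _, pages in groupby(dirty_pages, key=lambda page: page[0])]
print(flush_ios)
# -> [[(2, 0), (2, 8)], [(4, 100)], [(1, 200), (1, 208), (1, 216)]]
```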
  • Each RAID group is flushed separately; during flushing, the RAID groups do not affect one another.
  • The concurrent flush IOs of a RAID group are flushed to the logical units of the RAID group in a given order. If that order is consistent with the physical-address order of the logical units in the RAID group, then, because the physical addresses of the logical units in the RAID group are contiguous, flushing the concurrent flush IOs of the RAID group becomes a process in which the disk arms in the RAID group seek sequentially and write the dirty pages to disk.
  • The disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group collectively and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, it avoids interference between RAID groups, thereby improving the flushing efficiency of the disk array and increasing its throughput.
  • FIG. 2 is a flowchart of Embodiment 2 of the disk array flushing method according to the present invention. As shown in FIG. 2, the disk array flushing method provided in this embodiment may include:
  • Step S210: Obtain an ordering, by physical address, of the logical units in a same RAID group in the disk array.
  • Specifically, the disk array may include multiple RAID groups, and each RAID group may include multiple logical units. A logical unit may be identified by a logical unit number (LUN) and is used to divide the storage space of the RAID group.
  • Typically, when the logical units are created, all logical units in the same RAID group may be laid out in the order of their physical addresses on the RAID group, with each logical unit occupying a contiguous physical address space in the RAID group.
  • Step S220: Determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • This step may also be performed before step S210. Specifically, the dirty pages to be flushed to the disk array are data that has been written into the cache (CACHE) but not yet written to the disk array. If the dirty pages in the cache exceed the high watermark, that is, the cache space for storing dirty pages is about to be exhausted, the concurrent flush upper limit of each RAID group is used as that RAID group's number of concurrent flush IOs. If the dirty pages in the cache have not reached the high watermark, the number of concurrent flush IOs for the logical units in the RAID group in this round may be determined from the proportion of the dirty pages destined for the RAID group among the dirty pages destined for the disk array, the concurrent flush upper limit of the RAID group, and how busy the disk array is.
  • The concurrent flush upper limit of the RAID group is the upper limit on the number of flush IOs the RAID group can execute in each round of concurrent flushing.
  • The concurrent flush upper limit of the RAID group is determined by the member disk type, the number of member disks, and the RAID level used by the RAID group. The faster the member disks and the more member disks serve as data disks, the larger the concurrent flush upper limit of the RAID group.
  • Optionally, if the number of dirty pages in the cache has not reached the high watermark, the number of concurrent flush IOs of the RAID group in this round may be determined according to the following formula:
  • dn = M × (Pn / P) × (1 − U), where dn is the number of concurrent flush IOs of the RAID group, that is, the total number of concurrent flush IOs across all logical units in the RAID group; M is the concurrent flush upper limit of the RAID group; Pn is the total number of dirty pages to be flushed to the RAID group; P is the total number of dirty pages to be flushed to the disk array; and U is the current utilization of the disk array. Thus, the larger the share of the array's dirty pages that is destined for a given RAID group, the more concurrent flush IOs that RAID group receives; and the lower the current utilization of the disk array, the more concurrent flush IOs each RAID group receives.
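  • The formula transcribes directly into code. In the sketch below, the function name and the high-watermark shortcut argument are our own; the shortcut reflects the full-cache behaviour described above:

```python
def concurrent_flush_io_count(m_limit, dirty_for_group, dirty_total, utilization,
                              high_watermark_reached=False):
    """dn: the number of concurrent flush IOs granted to one RAID group.

    m_limit         -- M, the RAID group's concurrent flush upper limit
    dirty_for_group -- Pn, dirty pages destined for this RAID group
    dirty_total     -- P, dirty pages destined for the whole disk array
    utilization     -- U, current utilization of the disk array, in [0.0, 1.0]
    """
    if high_watermark_reached:
        # Cache space for dirty pages is nearly exhausted: flush at the limit.
        return m_limit
    # dn = M x (Pn / P) x (1 - U)
    return int(m_limit * (dirty_for_group / dirty_total) * (1 - utilization))

# Limit of 32 IOs, this group holds a quarter of the dirty pages, array 50% utilized:
print(concurrent_flush_io_count(32, 250, 1000, 0.5))  # -> 4
```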
  • Step S230: Flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially, in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • One concurrent flush IO of a RAID group may contain one or more dirty pages that need to be flushed to the same logical unit in the RAID group, and can be understood as a single flush operation that writes the dirty pages of that flush IO to the RAID group.
  • Typically, the dirty pages to be flushed to the RAID group are packed consecutively in the order of their destination addresses to form the individual concurrent flush IOs, where the destination address of a dirty page corresponds to the physical address of the logical unit to which the dirty page is to be written.
  • Each RAID group is flushed separately.
  • During flushing, the RAID groups do not affect one another.
  • The concurrent flush IOs of a RAID group are flushed to the logical units of the RAID group in a given order. If that order is consistent with the physical-address order of the logical units in the RAID group, then, because the physical addresses of the logical units in the RAID group are contiguous, flushing the concurrent flush IOs of the RAID group becomes a process in which the disk arms in the RAID group seek sequentially and write the dirty pages to disk.
  • The disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group collectively and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks. Controlling each RAID group independently avoids interference between RAID groups, and managing the number of concurrent flush IOs of the RAID groups uniformly at the RAID-group level balances the utilization of the RAID groups to a certain extent. This improves the flushing efficiency of the disk array and increases its throughput.
  • Further, the flushing the concurrent flush IOs of the RAID group sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units includes: traversing from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, pointing the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page corresponding to the current logical unit, meaning that there is a concurrent flush IO corresponding to the current logical unit, flushing the current logical unit.
  • Simply put, within a single RAID group a flush pointer can be set to indicate the logical unit at which the current flush operation starts, that is, the logical unit at which the previous concurrent flush operation stopped. If, during flushing, the dirty pages to be flushed to the RAID group include no dirty page destined for the current logical unit pointed to by the flush pointer, meaning that there is no concurrent flush IO corresponding to that logical unit, the flush pointer is pointed to another logical unit, namely the one that follows the current logical unit in physical-address order, and that logical unit is flushed. If the current logical unit is the last in the RAID group in physical-address order, the flush pointer is pointed to the logical unit that is first in the RAID group in physical-address order.
  • Further, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flushing the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stopping the flushing of the current logical unit and pointing the flush pointer of the RAID group to the next logical unit in physical-address order.
  • That is, before the dirty pages in a flush IO are flushed to the current logical unit, if the number of completed concurrent flush IOs of the current logical unit is found to have reached its concurrent flush upper limit, the flushing of the current logical unit stops, the flush pointer is pointed to the logical unit that follows the current one in physical-address order, and that logical unit is flushed.
  • Further, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flushing the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stopping the flushing of the RAID group. Similarly, before a dirty page is flushed to the current logical unit, if the number of completed concurrent flush IOs across all logical units in the RAID group is found to have reached the number of concurrent flush IOs determined for the RAID group, this round of flushing the RAID group is complete; the flush pointer of the RAID group also stays at the current logical unit, and this round of flushing the RAID group exits.
  • The disk array flushing method provided by this embodiment schedules the logical units within a single RAID group collectively and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; it controls each RAID group independently and determines each RAID group's number of concurrent flush IOs from the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to each RAID group, and each RAID group's concurrent flush upper limit, avoiding the impact of large differences in the number of logical units across RAID groups, improving the flushing efficiency of the disk array, and increasing its throughput.
  • FIG. 3 is a flowchart of Embodiment 3 of the disk array flushing method according to the present invention. As shown in FIG. 3, when applied to flushing a single RAID group, the disk array flushing method provided in this embodiment may include:
  • Step S302: Determine the number of concurrent flush IOs of the RAID group.
  • Specifically, the number of concurrent flush IOs of the RAID group may be determined according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • Step S304: Determine the current logical unit pointed to by the flush pointer of the RAID group.
  • Step S306: Determine whether the number of completed concurrent flush IOs of the RAID group has reached the number of concurrent flush IOs determined for the RAID group; if yes, perform step S320; otherwise perform step S308.
  • Step S308: Determine whether the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit; if yes, perform step S310; otherwise perform step S316.
  • Step S310: Determine whether the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group; if yes, perform step S320; otherwise perform step S312.
  • Step S312: Determine whether the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit; if yes, perform step S316; if no, perform step S314.
  • Further, a concurrent flush upper limit may also be set for each logical unit in the RAID group. Correspondingly, it may be determined whether the number of completed concurrent flush IOs of the current logical unit pointed to by the flush pointer has reached the concurrent flush upper limit of the current logical unit.
  • Step S314: Write the dirty pages contained in one flush IO corresponding to the current logical unit into the current logical unit, and return to step S306.
  • Specifically, before step S314 the method further includes: packing at least one dirty page corresponding to the current logical unit into one concurrent flush IO corresponding to the current logical unit.
  • Step S316: Determine whether the flush pointer has traversed all the logical units; if yes, perform step S320; if no, perform step S318.
  • All the logical units here means all the logical units in the RAID group.
  • Step S318: Point the flush pointer to the next logical unit, and return to step S304.
  • Specifically, the next logical unit is the logical unit that follows, in logical block addressing order, the current logical unit pointed to by the flush pointer. If the current logical unit is the last in the RAID group in physical-address order, the flush pointer is pointed to the logical unit that is first in the RAID group in physical-address order.
  • Step S320: Flushing is complete.
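  • The flow of FIG. 3 (steps S302 to S320) can be condensed into the following sketch. The data structures are assumptions of ours: the units are already in physical-address order, pending flush IOs are queued per logical unit, the dn of step S302 and both concurrency limits are precomputed, and the actual writing of dirty pages is delegated to a caller-supplied write_pages function:

```python
def flush_raid_group(group, write_pages):
    # group.units: logical units in physical-address order
    # group.pointer: index of the flush pointer (S304)
    # group.pending: dict mapping unit -> queued flush IOs (packed dirty pages)
    # group.io_target: dn, number of concurrent flush IOs for this round (S302)
    # group.group_limit / group.unit_limit: concurrent flush upper limits
    done_group = 0
    done_unit = {unit: 0 for unit in group.units}
    visited = 0

    while done_group < group.io_target:                      # S306
        unit = group.units[group.pointer]                    # S304
        has_io = bool(group.pending.get(unit))               # S308
        if has_io and done_group >= group.group_limit:       # S310
            return                                           # pointer stays put
        if has_io and done_unit[unit] < group.unit_limit:    # S312
            io = group.pending[unit].pop(0)                  # S314: one flush IO
            write_pages(unit, io)                            # write its dirty pages
            done_unit[unit] += 1
            done_group += 1
            continue                                         # back to S306
        visited += 1                                         # S316
        if visited >= len(group.units):
            return                                           # S320: round complete
        group.pointer = (group.pointer + 1) % len(group.units)  # S318 (wraps)
```

  • Note that when a limit is hit (S306, S310) the function returns without advancing the flush pointer, matching the requirement above that the pointer stay unchanged so the next round resumes at the logical unit where this one stopped.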
  • Through the above steps, the disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group uniformly and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently and determining each RAID group's number of concurrent flush IOs separately, it avoids the impact that arises when the number of logical units varies greatly from one RAID group to another.
  • FIG. 4 is a schematic structural diagram of Embodiment 1 of the disk array flushing apparatus according to the present invention. As shown in FIG. 4, the disk array flushing apparatus 400 provided in this embodiment may include:
  • an obtaining module 410, which may be configured to obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and
  • a flushing module 420, which may be configured to flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • Further, the apparatus may also include a determining module 430, configured to determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • Further, the flushing module 420 may be specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
  • Further, in flushing the current logical unit, the flushing module 420 is specifically configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
  • Further, the flushing module 420 may also be configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group. Further, the flushing module 420 may also be configured to: keep the flush pointer of the RAID group unchanged.
  • The disk array flushing apparatus 400 provided in this embodiment may be used to execute the technical solutions of any of the method embodiments shown in FIG. 1 to FIG. 3; its implementation principles and technical effects are similar and are not described again here.
  • FIG. 5 is a schematic structural diagram of Embodiment 2 of the disk array flushing apparatus according to the present invention. As shown in FIG. 5,
  • the disk array flushing apparatus 500 includes: a memory 501, configured to store instructions; and a processor 502 coupled to the memory, the processor 502 being configured to execute the instructions stored in the memory 501. The disk array flushing apparatus 500 may also include a network interface 503, other user interfaces 504, and a communication bus 505. The communication bus 505 implements connection and communication among these components.
  • The memory 501 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory.
  • The memory 501 may optionally include at least one storage device located remotely from the aforementioned processor 502.
  • The processor 502 is configured to:
  • obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
  • Further, the processor 502 is configured to: determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
  • Further, the processor 502 is configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
  • Further, in flushing the current logical unit, the processor 502 is configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
  • Further, the processor 502 is configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group.
  • Further, the processor 502 is configured to: keep the flush pointer of the RAID group unchanged.
  • The disk array flushing apparatus 500 provided in this embodiment may be used to execute the technical solutions of any of the methods shown in FIG. 1 to FIG. 3; its implementation principles and technical effects are similar and are not described again here.
  • FIG. 5 is only one schematic diagram of the structure of the disk array flushing apparatus provided by the present invention; the specific structure may be adjusted according to actual requirements.
  • The disk array flushing method and disk array flushing apparatus provided by the embodiments of the present invention schedule the logical units within a single RAID group uniformly and flush them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, they avoid interference between RAID groups, and by balancing the number of flush IOs of the RAID groups at the RAID-group level, they balance the utilization of the RAID groups to a certain extent, which improves the flushing efficiency of the disk array and increases its throughput.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Manufacturing Of Magnetic Record Carriers (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Brushes (AREA)

Abstract

Embodiments of the present invention provide a disk array flushing method and a disk array flushing apparatus. The method includes: obtaining an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in a disk array; and flushing the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group. The disk array flushing method and apparatus provided by the embodiments of the present invention schedule the logical units within a single RAID group uniformly and flush them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, they avoid interference between RAID groups, thereby improving the flushing efficiency of the disk array and increasing its throughput.

Description

Disk array flushing method and disk array flushing apparatus
Technical Field
Embodiments of the present invention relate to computer technology, and in particular to a disk array flushing method and a disk array flushing apparatus.
Background Art
With the rapid development of computer application technology, the large volumes of data produced place ever-growing demands on storage capacity and performance. Because mainstream disk operation still involves a large number of mechanical operations, disk performance lags far behind that of processors and memory. Applying cache (CACHE) technology to storage not only hides host latency but also consolidates data: the cache writes data to disk in a disk-friendly manner, that is, by flushing the disks, so that the storage system achieves its best throughput.
After decades of research, existing cache algorithms have matured. However, the applications that computers provide are increasingly diverse, and both the capacity and the performance of a disk array call for more flexible scheduling methods. A single disk array often contains disks of different types; even for redundant array of independent disks (RAID) groups built from disks of the same type, the number of member disks in each RAID group often differs. At the same time, the number of logical units, identified by logical unit numbers (LUN), in a single RAID group is also gradually increasing.
In the prior art, flushing of a disk array is scheduled at the logical-unit level: each flush input/output (IO) targets a single logical unit, while the disk array contains multiple RAID groups and each RAID group contains multiple logical units, a situation that current disk array flushing methods do not handle well. For example, for multiple logical units within the same RAID group, the flush IOs aimed at a single logical unit are ordered within that unit; but because multiple logical units must be flushed concurrently and the ordering across those units is discrete, the concurrent flush IOs seen across the whole RAID group are often discrete. Discrete concurrent flush IOs force the disk arm to bounce back and forth while seeking, so a large share of time is consumed by arm seeks rather than by reading and writing data, which severely degrades the overall performance of the disk array and keeps its throughput low.
Summary of the Invention
Embodiments of the present invention provide a disk array flushing method and a disk array flushing apparatus for improving the flushing efficiency of a disk array and increasing its throughput.
In a first aspect, an embodiment of the present invention provides a disk array flushing method, including: obtaining an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in a disk array; and flushing the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
In a first possible implementation of the first aspect, before the concurrent flush IOs of the RAID group are flushed sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units, the method further includes: determining the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
According to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the determining the number of concurrent flush IOs of the RAID group includes: determining the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U), where M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
According to the first aspect or either of the first two possible implementations of the first aspect, in a third possible implementation of the first aspect, the flushing the concurrent flush IOs of the RAID group sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units includes: traversing from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, pointing the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flushing the current logical unit.
According to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flushing the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stopping the flushing of the current logical unit and pointing the flush pointer of the RAID group to the next logical unit in physical-address order.
According to the third or the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flushing the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stopping the flushing of the RAID group.
According to the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, after the stopping of the flushing of the RAID group, the method further includes: keeping the flush pointer of the RAID group unchanged.
In a second aspect, an embodiment of the present invention provides a disk array flushing apparatus, including: an obtaining module, configured to obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in a disk array; and a flushing module, configured to flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
In a first possible implementation of the second aspect, the apparatus further includes: a determining module, configured to determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
According to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the determining module is specifically configured to: determine the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U), where M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
According to the second aspect or either of the first two possible implementations of the second aspect, in a third possible implementation of the second aspect, the flushing module is specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
According to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, in flushing the current logical unit the flushing module is specifically configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
According to the third or the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the flushing module is further configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group.
According to the fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the flushing module is further configured to: keep the flush pointer of the RAID group unchanged.
In a third aspect, an embodiment of the present invention provides another disk array flushing apparatus, including:
a memory, configured to store instructions; and
a processor coupled to the memory, the processor being configured to execute the instructions stored in the memory, where
the processor is configured to: obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in a disk array; and flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group. In a first possible implementation of the third aspect, the processor is configured to: determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
According to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the processor is specifically configured to: determine the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U), where M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
According to the third aspect or either of the first two possible implementations of the third aspect, in a third possible implementation of the third aspect, the processor is specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
According to the third possible implementation of the third aspect, in a fourth possible implementation of the third aspect, in flushing the current logical unit the processor is configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
According to the third or the fourth possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the processor is further configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group.
According to the fifth possible implementation of the third aspect, in a sixth possible implementation of the third aspect, the processor is further configured to: keep the flush pointer of the RAID group unchanged.
The disk array flushing method and disk array flushing apparatus provided by the embodiments of the present invention schedule the multiple logical units within a single RAID group uniformly and flush them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, they avoid interference between RAID groups, thereby improving the flushing efficiency of the disk array and increasing its throughput.
Brief Description of the Drawings
The following briefly introduces the accompanying drawings needed in describing the embodiments or the prior art. Clearly, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from them without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of the disk array flushing method according to the present invention;
FIG. 2 is a flowchart of Embodiment 2 of the disk array flushing method according to the present invention; FIG. 3 is a flowchart of Embodiment 3 of the disk array flushing method according to the present invention; FIG. 4 is a schematic structural diagram of Embodiment 1 of the disk array flushing apparatus according to the present invention; FIG. 5 is a schematic structural diagram of Embodiment 2 of the disk array flushing apparatus according to the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Clearly, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of Embodiment 1 of the disk array flushing method according to the present invention. As shown in FIG. 1, the disk array flushing method provided in this embodiment may include:
Step S110: Obtain an ordering, by physical address, of the logical units in a same RAID group in the disk array.
Specifically, the disk array may include multiple RAID groups, and each RAID group may include multiple logical units. A logical unit may be identified by a logical unit number (LUN) and is used to divide the storage space of the RAID group. Typically, when the logical units are created, all logical units in the same RAID group may be laid out in the order of their physical addresses on the RAID group, with each logical unit occupying a contiguous physical address space in the RAID group. For example, LUNs 1 to 5 might be sorted by physical address as: LUN2, LUN4, LUN1, LUN5, LUN3.
Step S120: Flush the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
One concurrent flush IO of a RAID group may contain one or more dirty pages that need to be flushed to the same logical unit in the RAID group, and can be understood as a single flush operation that writes the dirty pages of that flush IO to the RAID group. Typically, the dirty pages to be flushed to the RAID group are packed consecutively in the order of their destination addresses to form the individual concurrent flush IOs, where the destination address of a dirty page corresponds to the physical address of the logical unit to which the dirty page is to be written. Each RAID group is flushed separately, and during flushing the RAID groups do not affect one another. The concurrent flush IOs of a RAID group are flushed to the logical units of the RAID group in a given order; if that order is consistent with the physical-address order of the logical units in the RAID group, then, because the physical addresses of the logical units in the RAID group are contiguous, flushing the concurrent flush IOs of the RAID group becomes a process in which the disk arms in the RAID group seek sequentially and write the dirty pages to disk.
The disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group uniformly and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, it avoids interference between RAID groups, thereby improving the flushing efficiency of the disk array and increasing its throughput.
FIG. 2 is a flowchart of Embodiment 2 of the disk array flushing method according to the present invention. As shown in FIG. 2, the disk array flushing method provided in this embodiment may include:
Step S210: Obtain an ordering, by physical address, of the logical units in a same RAID group in the disk array.
Specifically, the disk array may include multiple RAID groups, and each RAID group may include multiple logical units. A logical unit may be identified by a logical unit number (LUN) and is used to divide the storage space of the RAID group. Typically, when the logical units are created, all logical units in the same RAID group may be laid out in the order of their physical addresses on the RAID group, with each logical unit occupying a contiguous physical address space in the RAID group.
Step S220: Determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
This step may also be performed before step S210. Specifically, the dirty pages to be flushed to the disk array are data that has been written into the cache (CACHE) but not yet written to the disk array. If the dirty pages in the cache exceed the high watermark, that is, the cache space for storing dirty pages is about to be exhausted, the concurrent flush upper limit of each RAID group is used as that RAID group's number of concurrent flush IOs. If the dirty pages in the cache have not reached the high watermark, the number of concurrent flush IOs for the logical units in the RAID group in this round may be determined from the proportion of the dirty pages destined for the RAID group among the dirty pages destined for the disk array, the concurrent flush upper limit of the RAID group, and how busy the disk array is. The concurrent flush upper limit of a RAID group is the upper limit on the number of flush IOs the RAID group can execute in each round of concurrent flushing; it is determined by the member disk type, the number of member disks, and the RAID level used by the RAID group. The faster the member disks and the more member disks serve as data disks, the larger the concurrent flush upper limit of the RAID group.
Optionally, if the number of dirty pages in the cache has not reached the high watermark, the number of concurrent flush IOs of the RAID group in this round may be determined according to the following formula:
dn = M × (Pn / P) × (1 − U)
where dn is the number of concurrent flush IOs of the RAID group, that is, the total number of concurrent flush IOs across all logical units in the RAID group; M is the concurrent flush upper limit of the RAID group; Pn is the total number of dirty pages to be flushed to the RAID group; P is the total number of dirty pages to be flushed to the disk array; and U is the current utilization of the disk array.
It can be seen that the larger the proportion of the dirty pages destined for a given RAID group among the dirty pages destined for the disk array, the more concurrent flush IOs that RAID group receives; and the lower the current utilization of the disk array, the more concurrent flush IOs each RAID group receives.
Step S230: Flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
One concurrent flush IO of a RAID group may contain one or more dirty pages that need to be flushed to the same logical unit in the RAID group, and can be understood as a single flush operation that writes the dirty pages of that flush IO to the RAID group. Typically, the dirty pages to be flushed to the RAID group are packed consecutively in the order of their destination addresses to form the individual concurrent flush IOs, where the destination address of a dirty page corresponds to the physical address of the logical unit to which the dirty page is to be written. Each RAID group is flushed separately, and during flushing the RAID groups do not affect one another. The concurrent flush IOs of a RAID group are flushed to the logical units of the RAID group in a given order; if that order is consistent with the physical-address order of the logical units in the RAID group, then, because the physical addresses of the logical units in the RAID group are contiguous, flushing the concurrent flush IOs of the RAID group becomes a process in which the disk arms in the RAID group seek sequentially and write the dirty pages to disk.
The disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group uniformly and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, it avoids interference between RAID groups, and by managing the number of concurrent flush IOs of the RAID groups uniformly at the RAID-group level, it balances the utilization of the RAID groups to a certain extent, thereby improving the flushing efficiency of the disk array and increasing its throughput.
Further, the flushing the concurrent flush IOs of the RAID group sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units includes: traversing from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, pointing the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page corresponding to the current logical unit, meaning that there is a concurrent flush IO corresponding to the current logical unit, flushing the current logical unit. Simply put, within a single RAID group a flush pointer may be set to indicate the logical unit at which the current flush operation starts, that is, the logical unit at which the previous concurrent flush operation stopped. If, during flushing, the dirty pages to be flushed to the RAID group include no dirty page destined for the current logical unit pointed to by the flush pointer, meaning that there is no concurrent flush IO corresponding to the current logical unit, the flush pointer is pointed to another logical unit, namely the one that follows the current logical unit in physical-address order, and that logical unit is flushed. If the current logical unit is the last in the RAID group in physical-address order, the flush pointer is pointed to the logical unit that is first in the RAID group in physical-address order.
Further, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flushing the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stopping the flushing of the current logical unit and pointing the flush pointer of the RAID group to the next logical unit in physical-address order. Simply put, before the dirty pages in a flush IO are flushed to the current logical unit, if the number of completed concurrent flush IOs of the current logical unit is found to have reached the concurrent flush upper limit of the current logical unit, the flushing of the current logical unit stops, the flush pointer is pointed to another logical unit, namely the one that follows the current logical unit in physical-address order, and that logical unit is flushed.
Further, the flushing the current logical unit includes: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flushing the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stopping the flushing of the RAID group. Similarly, before a dirty page is flushed to the current logical unit, if the number of completed concurrent flush IOs of all logical units in the RAID group is found to have reached the number of concurrent flush IOs determined for the RAID group, this round of flushing the RAID group is complete; the flush pointer of the RAID group also stays at the current logical unit, and this round of flushing the RAID group exits.
The disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group uniformly and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; it controls each RAID group independently and determines each RAID group's number of concurrent flush IOs from the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to each RAID group, and each RAID group's concurrent flush upper limit, avoiding the impact of large differences in the number of logical units across RAID groups, improving the flushing efficiency of the disk array, and increasing its throughput.
FIG. 3 is a flowchart of Embodiment 3 of the disk array flushing method according to the present invention. As shown in FIG. 3, when applied to flushing a single RAID group, the disk array flushing method provided in this embodiment may include:
Step S302: Determine the number of concurrent flush IOs of the RAID group.
Specifically, the number of concurrent flush IOs of the RAID group may be determined according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
Step S304: Determine the current logical unit pointed to by the flush pointer of the RAID group.
Within a single RAID group, a flush pointer may be set to point to the logical unit at which the current flush operation starts, that is, the logical unit the pointer was pointing to when the previous flush operation ended.
Step S306: Determine whether the number of completed concurrent flush IOs of the RAID group has reached the number of concurrent flush IOs determined for the RAID group; if yes, perform step S320; otherwise perform step S308.
Step S308: Determine whether the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit; if yes, perform step S310; otherwise perform step S316.
Step S310: Determine whether the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group; if yes, perform step S320; otherwise perform step S312.
Step S312: Determine whether the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit; if yes, perform step S316; if no, perform step S314.
Further, a concurrent flush upper limit may also be set for each logical unit in the RAID group. Correspondingly, it may be determined whether the number of completed concurrent flush IOs of the current logical unit pointed to by the flush pointer has reached the concurrent flush upper limit of the current logical unit.
Step S314: Write the dirty pages contained in one flush IO corresponding to the current logical unit into the current logical unit, and return to step S306.
Specifically, before step S314 the method further includes: packing at least one dirty page corresponding to the current logical unit into one concurrent flush IO corresponding to the current logical unit.
Step S316: Determine whether the flush pointer has traversed all the logical units; if yes, perform step S320; if no, perform step S318.
Specifically, all the logical units here means all the logical units in the RAID group.
Step S318: Point the flush pointer to the next logical unit, and return to step S304.
Specifically, the next logical unit is the logical unit that follows, in logical block addressing order, the current logical unit pointed to by the flush pointer. If the current logical unit is the last in the RAID group in physical-address order, the flush pointer is pointed to the logical unit that is first in the RAID group in physical-address order.
Step S320: Flushing is complete.
Through the above steps, the disk array flushing method provided by this embodiment of the present invention schedules the logical units within a single RAID group uniformly and flushes them in physical-address order, reducing the time consumed by back-and-forth arm seeks; it controls each RAID group independently and determines each RAID group's number of concurrent flush IOs from the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to each RAID group, and each RAID group's concurrent flush upper limit, avoiding the impact of large differences in the number of logical units across RAID groups.
FIG. 4 is a schematic structural diagram of Embodiment 1 of the disk array flushing apparatus according to the present invention. As shown in FIG. 4, the disk array flushing apparatus 400 provided in this embodiment may include:
an obtaining module 410, which may be configured to obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and
a flushing module 420, which may be configured to flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
Further, the apparatus may also include: a determining module 430, configured to determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
Further, the determining module 430 may be specifically configured to: determine the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U), where M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
Further, the flushing module 420 may be specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
Further, in flushing the current logical unit, the flushing module 420 may be specifically configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
Further, the flushing module 420 may also be configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group. Further, the flushing module 420 may also be configured to: keep the flush pointer of the RAID group unchanged. The disk array flushing apparatus 400 provided in this embodiment may be used to execute the technical solutions of any of the method embodiments shown in FIG. 1 to FIG. 3; its implementation principles and technical effects are similar and are not described again here.
FIG. 5 is a schematic structural diagram of Embodiment 2 of the disk array flushing apparatus according to the present invention. As shown in FIG. 5, the disk array flushing apparatus 500 includes: a memory 501, configured to store instructions; and a processor 502 coupled to the memory, the processor 502 being configured to execute the instructions stored in the memory 501. The disk array flushing apparatus 500 may also include a network interface 503, other user interfaces 504, and a communication bus 505. The communication bus 505 implements connection and communication among these components. The memory 501 may include a high-speed random access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory. The memory 501 may optionally include at least one storage device located remotely from the aforementioned processor 502.
The processor 502 is configured to:
obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in the disk array; and flush the concurrent flush IOs of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, where each concurrent flush IO includes at least one dirty page to be flushed to one logical unit in the RAID group.
Further, the processor 502 is configured to: determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
Further, the processor 502 is specifically configured to: determine the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U), where M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
Further, the processor 502 is specifically configured to: traverse from the current logical unit pointed to by the flush pointer of the RAID group; if the dirty pages to be flushed to the RAID group do not include a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to the next logical unit in physical-address order; and if the dirty pages to be flushed to the RAID group include a dirty page to be flushed to the current logical unit, flush the current logical unit.
Further, in flushing the current logical unit, the processor 502 is configured to: if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flush the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stop flushing the current logical unit and point the flush pointer of the RAID group to the next logical unit in physical-address order.
Further, the processor 502 is configured to: if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flush the current logical unit; if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stop the flushing of the RAID group.
Further, the processor 502 is configured to: keep the flush pointer of the RAID group unchanged.
The disk array flushing apparatus 500 provided in this embodiment may be used to execute the technical solutions of any of the methods shown in FIG. 1 to FIG. 3; its implementation principles and technical effects are similar and are not described again here. FIG. 5 is only one schematic diagram of the structure of the disk array flushing apparatus provided by the present invention; the specific structure may be adjusted according to actual requirements.
In summary, the disk array flushing method and disk array flushing apparatus provided by the embodiments of the present invention schedule the logical units within a single RAID group uniformly and flush them in physical-address order, reducing the time consumed by back-and-forth arm seeks; by controlling each RAID group independently, they avoid interference between RAID groups, and by balancing the number of flush IOs of the RAID groups at the RAID-group level, they balance the utilization of the RAID groups to a certain extent, improving the flushing efficiency of the disk array and increasing its throughput.
A person of ordinary skill in the art may understand that all or some of the steps of the above method embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of their technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims

CLAIMS
1. A disk array flushing method, comprising:
obtaining an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in a disk array; and
flushing the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, wherein each concurrent flush IO comprises at least one dirty page to be flushed to one logical unit in the RAID group.
2. The method according to claim 1, wherein before the flushing the concurrent flush IOs of the RAID group sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units, the method further comprises:
determining the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
3. The method according to claim 2, wherein the determining the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group comprises:
determining the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U),
wherein M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
4. The method according to any one of claims 1 to 3, wherein the flushing the concurrent flush IOs of the RAID group sequentially to the logical units in the RAID group in the order of the physical addresses of the logical units comprises:
traversing from the current logical unit pointed to by the flush pointer of the RAID group;
if the dirty pages to be flushed to the RAID group do not comprise a dirty page to be flushed to the current logical unit, pointing the flush pointer of the RAID group to the next logical unit in physical-address order; and
if the dirty pages to be flushed to the RAID group comprise a dirty page to be flushed to the current logical unit, flushing the current logical unit.
5. The method according to claim 4, wherein the flushing the current logical unit comprises:
if the number of completed concurrent flush IOs of the current logical unit has not reached the concurrent flush upper limit of the current logical unit, flushing the dirty pages in the flush IO corresponding to the current logical unit to the current logical unit; and
if the number of completed concurrent flush IOs of the current logical unit has reached the concurrent flush upper limit of the current logical unit, stopping the flushing of the current logical unit and pointing the flush pointer of the RAID group to the next logical unit in physical-address order.
6. The method according to claim 4 or 5, wherein the flushing the current logical unit comprises:
if the number of completed concurrent flush IOs of the RAID group has not reached the concurrent flush upper limit of the RAID group, flushing the current logical unit; and
if the number of completed concurrent flush IOs of the RAID group has reached the concurrent flush upper limit of the RAID group, stopping the flushing of the RAID group.
7. The method according to claim 6, wherein after the stopping of the flushing of the RAID group, the method further comprises: keeping the flush pointer of the RAID group unchanged.
8. A disk array flushing apparatus, comprising:
an obtaining module, configured to obtain an ordering, by physical address, of the logical units in a same redundant array of independent disks (RAID) group in a disk array; and
a flushing module, configured to flush the concurrent flush input/output (IO) operations of the RAID group to the logical units in the RAID group sequentially in the order of the physical addresses of the logical units, wherein each concurrent flush IO comprises at least one dirty page to be flushed to one logical unit in the RAID group.
9. The apparatus according to claim 8, further comprising:
a determining module, configured to determine the number of concurrent flush IOs of the RAID group according to the total number of dirty pages to be flushed to the disk array, the total number of dirty pages to be flushed to the RAID group, and the concurrent flush upper limit of the RAID group.
10. The apparatus according to claim 9, wherein the determining module is specifically configured to:
determine the number of concurrent flush IOs of the RAID group as dn = M × (Pn / P) × (1 − U),
wherein M is the concurrent flush upper limit of the RAID group, Pn is the total number of dirty pages to be flushed to the RAID group, P is the total number of dirty pages to be flushed to the disk array, and U is the current utilization of the disk array.
11. The apparatus according to any one of claims 8 to 10, wherein the flushing module is specifically configured to:
start traversal from the current logical unit pointed to by the flush pointer of the RAID group;
if the dirty pages to be flushed to the RAID group do not comprise a dirty page to be flushed to the current logical unit, point the flush pointer of the RAID group to another logical unit that is next to the current logical unit in the order of the physical addresses; and
if the dirty pages to be flushed to the RAID group comprise a dirty page to be flushed to the current logical unit, flush the current logical unit.
12. The apparatus according to claim 11, wherein, in flushing the current logical unit, the flushing module is specifically configured to:
if the number of completed concurrent flushing IOs of the current logical unit has not reached the concurrent flushing upper limit of the current logical unit, flush the dirty pages in the concurrent flushing IOs corresponding to the current logical unit to the current logical unit; and
if the number of completed concurrent flushing IOs of the current logical unit has reached the concurrent flushing upper limit of the current logical unit, stop flushing the current logical unit, and point the flush pointer of the RAID group to another logical unit that is next to the current logical unit in the order of the physical addresses.
13. The apparatus according to claim 11 or 12, wherein the flushing module is further configured to:
if the number of completed concurrent flushing IOs of the RAID group has not reached the concurrent flushing upper limit of the RAID group, flush the current logical unit; and
if the number of completed concurrent flushing IOs of the RAID group has reached the concurrent flushing upper limit of the RAID group, stop flushing the RAID group.
14. The apparatus according to claim 13, wherein the flushing module is further configured to:
keep the flush pointer of the RAID group unchanged.
PCT/CN2012/087506 2012-12-26 2012-12-26 Disk array flushing method and disk array flushing apparatus WO2014100996A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2015549916A JP6060277B2 (ja) 2012-12-26 2012-12-26 Disk array flushing method and disk array flushing apparatus
EP12890699.7A EP2927779B1 (en) 2012-12-26 2012-12-26 Disk writing method for disk arrays and disk writing device for disk arrays
PCT/CN2012/087506 WO2014100996A1 (zh) 2012-12-26 2012-12-26 Disk array flushing method and disk array flushing apparatus
CN201280002903.2A CN103229136B (zh) 2012-12-26 2012-12-26 Disk array flushing method and disk array flushing apparatus
US14/752,077 US9582433B2 (en) 2012-12-26 2015-06-26 Disk array flushing method and disk array flushing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/087506 WO2014100996A1 (zh) 2012-12-26 2012-12-26 Disk array flushing method and disk array flushing apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/752,077 Continuation US9582433B2 (en) 2012-12-26 2015-06-26 Disk array flushing method and disk array flushing apparatus

Publications (1)

Publication Number Publication Date
WO2014100996A1 true WO2014100996A1 (zh) 2014-07-03

Family

ID=48838328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/087506 WO2014100996A1 (zh) 2012-12-26 2012-12-26 Disk array flushing method and disk array flushing apparatus

Country Status (5)

Country Link
US (1) US9582433B2 (zh)
EP (1) EP2927779B1 (zh)
JP (1) JP6060277B2 (zh)
CN (1) CN103229136B (zh)
WO (1) WO2014100996A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10558569B2 (en) * 2013-10-31 2020-02-11 Hewlett Packard Enterprise Development Lp Cache controller for non-volatile memory
CN103577349B * 2013-11-06 2016-11-23 Huawei Technologies Co., Ltd. Method and apparatus for selecting data in a cache for flushing to disk
CN104461936B * 2014-11-28 2017-10-17 Huawei Technologies Co., Ltd. Method and apparatus for flushing cached data to disk
US10891264B2 (en) * 2015-04-30 2021-01-12 Vmware, Inc. Distributed, scalable key-value store
CN105095112B * 2015-07-20 2019-01-11 Huawei Technologies Co., Ltd. Method and apparatus for controlling cache flushing, and non-volatile computer-readable storage medium
CN106557430B * 2015-09-19 2019-06-21 Chengdu Huawei Technologies Co., Ltd. Cache data flushing method and apparatus
CN107870732B * 2016-09-23 2020-12-25 EMC IP Holding Company LLC Method and device for flushing pages from a solid-state storage device
CN107577439B * 2017-09-28 2020-10-20 Suzhou Inspur Intelligent Technology Co., Ltd. Method, apparatus, device, and computer-readable storage medium for allocating processing resources
CN107844436B * 2017-11-02 2021-07-16 Zhengzhou Yunhai Information Technology Co., Ltd. Method, system, and storage system for organizing and managing dirty data in a cache
CN110413545B * 2018-04-28 2023-06-20 EMC IP Holding Company LLC Storage management method, electronic device, and computer program product
US11138122B2 (en) * 2019-07-29 2021-10-05 EMC IP Holding Company, LLC System and method for dual node parallel flush

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101458668A * 2008-12-19 2009-06-17 Chengdu Huawei Symantec Technologies Co., Ltd. Method for processing cached data blocks, and hard disk
CN101526882A * 2008-03-03 2009-09-09 ZTE Corporation Method and apparatus for rebuilding a logical unit in a redundant array of independent disks subsystem

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05303528A * 1992-04-27 1993-11-16 Oki Electric Ind Co Ltd Write-back disk cache device
JP4322068B2 * 2003-03-07 2009-08-26 Fujitsu Ltd. Storage system and disk load balance control method therefor
US7539815B2 (en) * 2004-12-29 2009-05-26 International Business Machines Corporation Method, system and circuit for managing task queues in a disk device controller
US7574556B2 (en) * 2006-03-20 2009-08-11 International Business Machines Corporation Wise ordering for writes—combining spatial and temporal locality in write caches
JP2010160544A * 2009-01-06 2010-07-22 Core Micro Systems Inc Cache memory system and cache memory control method
US9665442B2 (en) * 2010-03-29 2017-05-30 Kaminario Technologies Ltd. Smart flushing of data to backup storage
US8862819B2 (en) * 2010-03-31 2014-10-14 Kaminario Technologies Ltd. Log structure array


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2927779A4 *

Also Published As

Publication number Publication date
JP2016503209A (ja) 2016-02-01
US9582433B2 (en) 2017-02-28
EP2927779A4 (en) 2015-12-23
US20150293856A1 (en) 2015-10-15
EP2927779B1 (en) 2018-06-13
JP6060277B2 (ja) 2017-01-11
CN103229136B (zh) 2016-03-02
EP2927779A1 (en) 2015-10-07
CN103229136A (zh) 2013-07-31

Similar Documents

Publication Publication Date Title
WO2014100996A1 (zh) Disk array flushing method and disk array flushing apparatus
JP5759623B2 (ja) Apparatus including a memory system controller, and related methods
JP6320318B2 (ja) Storage device and information processing system including the storage device
US11762569B2 (en) Workload based relief valve activation for hybrid controller architectures
TWI536268B (zh) Method, system and apparatus for memory scheduling
US8719520B1 (en) System and method for data migration between high-performance computing architectures and data storage devices with increased data reliability and integrity
US9317436B2 (en) Cache node processing
TWI421680B (zh) Parallel flash memory controller
CN110114758A (zh) Targeted purging of memory
JP2022538587A (ja) Block mode toggling in a data storage system
US20210133110A1 (en) Migrating data between block pools in a storage system
TWI432965B (zh) Memory system having a plurality of structures and operating method thereof
US10235288B2 (en) Cache flushing and interrupted write handling in storage systems
CN104461936A (zh) Method and apparatus for flushing cached data to disk
US20160179412A1 (en) Endurance enhancement scheme using memory re-evaluation
JP2010102695A (ja) Fast data recovery from HDD failure
JP2013513881A (ja) Method, program, and system for reducing access contention in a flash-based memory system
CN109164981A (zh) Disk management method, apparatus, storage medium, and device
US11797448B2 (en) Using multi-tiered cache to satisfy input/output requests
US20120102242A1 (en) Controlling data destaging within a multi-tiered storage system
JP2022539788A (ja) Data placement in a write cache architecture supporting read-heat data separation
CN108153582A (zh) IO command processing method and media interface controller
US10628045B2 (en) Internal data transfer management in a hybrid data storage device
TW201608469A (zh) Method and apparatus for efficiently releasing sequential input/output streams
CN112346658A (zh) Improving data heat tracking resolution in a storage device having a cache architecture

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12890699

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015549916

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2012890699

Country of ref document: EP