WO2016129101A1 - Storage system and storage control method - Google Patents

Storage system and storage control method

Info

Publication number
WO2016129101A1
Authority
WO
WIPO (PCT)
Prior art keywords
difference
storage
data
difference information
storage controller
Prior art date
Application number
PCT/JP2015/053964
Other languages
English (en)
Japanese (ja)
Inventor
直人 柳
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Priority to PCT/JP2015/053964
Publication of WO2016129101A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures

Definitions

  • the present invention generally relates to a technology of a storage system and a storage control method.
  • a technology is known in which only data newly written in a first storage device is copied to a second storage device through a network, thereby backing up data in the first storage device to a second storage device.
  • Patent Document 1 discloses the following.
  • the first storage device stores the write data received from the host computer in a transfer buffer, rearranges the plurality of write data in the transfer buffer in units of tracks, and transfers the rearranged plurality of write data to the second storage device.
  • the second storage device receives the rearranged write data and writes the write data to the HDD (Hard Disk Drive) in the rearranged order.
  • the disk head in the second storage device can sequentially access the HDD.
  • the data expansion process to the HDD in the second storage device is shortened.
  • the first storage device backs up data newly written in its own storage devices (HDD, SSD (Solid State Drive), etc.) (including overwritten data; hereinafter referred to as “difference data”) to the second storage device.
  • the storage system includes one or more storage devices that store data stored in the first volume, and a storage controller that controls the one or more storage devices.
  • the first volume is divided into a plurality of slots which are a plurality of storage areas.
  • the storage controller has difference information indicating, for each of the plurality of slots, whether the slot is a used slot in which difference data including data stored after a predetermined timing is stored, or an empty slot in which no difference data is stored.
  • the storage controller executes a division process for dividing the difference information into a plurality of difference information parts, a selection process for selecting a first difference information part and a second difference information part from the plurality of difference information parts, and an exchange process for exchanging the address of a used slot belonging to one of the selected difference information parts with the address of an empty slot belonging to the other.
  • FIG. 1 is a schematic diagram showing an overview of the storage system according to the embodiment. FIG. 2 is a diagram for explaining an example of the difference data rearrangement process. FIG. 3 is a schematic diagram showing an example of the configuration of the storage system. FIG. 4 shows a configuration example of the difference map. FIG. 5 shows a configuration example of the submap management table. FIG. 6 shows a configuration example of the synchronization setting table. FIG. 7 shows a configuration example of the synchronization time analysis table. FIG. 8 shows a configuration example of the synchronization time management table. FIG. 9 shows a configuration example of the operation rate threshold setting table. FIG. 10 shows a configuration example of the adoption determination setting table.
  • in the following description, information may be described using the expression “xxx table”, but the information may be expressed in any data structure. That is, “xxx table” can be referred to as “xxx information” to indicate that the information does not depend on the data structure.
  • the configuration of each table is an example, and one table may be divided into two or more tables, or all or part of two or more tables may be combined into one table.
  • an ID is used as element identification information, but other types of identification information may be used instead of or in addition thereto.
  • when elements of the same type are described without being distinguished, a common part of the reference numeral is used, and when elements of the same type are described while being distinguished, the full reference numeral of each element may be used.
  • for example, when the VOLs are not particularly distinguished, they are described as “VOL 80”, and when each VOL is distinguished, they may be described as “VOL 80a” or “VOL 80b”.
  • an I / O (Input / Output) request is a write request or a read request, and may be referred to as an access request.
  • the “storage unit” may be one or more storage devices including a memory.
  • the storage unit may be at least the main storage device out of a main storage device (typically a volatile memory) and an auxiliary storage device (typically a nonvolatile storage device).
  • the storage unit may include at least one of a cache area (for example, a cache memory or a partial area thereof) and a buffer area (for example, a buffer memory or a partial area thereof).
  • the cache area and the buffer area are common in that data input / output to / from the PDEV is temporarily stored, and may differ depending on whether or not the read data remains. Specifically, data read from the cache area remains in the cache area, and data once read from the buffer area may not remain in the buffer area.
  • “PDEV” means a physical storage device, and typically a non-volatile storage device (for example, an auxiliary storage device).
  • the PDEV may be, for example, an HDD or an SSD.
  • RAID is an abbreviation for Redundant Array of Independent (or Inexpensive) Disks.
  • the RAID group is composed of a plurality of PDEVs, and stores data according to the RAID level associated with the RAID group.
  • the RAID group may be referred to as a parity group.
  • the parity group may be, for example, a RAID group that stores parity.
  • in the following description, processing may be described with a program as the subject; since a program is executed by a processor (for example, a CPU (Central Processing Unit)) so that predetermined processing is performed while appropriately using a storage resource (for example, a memory) and/or an interface device (for example, a communication port), the subject of the processing may also be the processor.
  • the processing described with the program as the subject may be processing performed by a processor or an apparatus or system having the processor.
  • the processor may include a hardware circuit that performs a part or all of the processing.
  • the program may be installed in a computer-like device from a program source.
  • the program source may be, for example, a storage medium that can be read by a program distribution server or a computer.
  • the program distribution server may include a processor (for example, a CPU) and a storage resource, and the storage resource may further store a distribution program and a program to be distributed. Then, the processor of the program distribution server executes the distribution program, so that the processor of the program distribution server may distribute the distribution target program to other computers.
  • two or more programs may be realized as one program, or one program may be realized as two or more programs.
  • the management system may be composed of one or more computers.
  • when the management computer displays information (specifically, for example, when the management computer displays information on its own display device, or when the management computer transmits display information to a remote display computer), the management computer is the management system.
  • when a function equivalent to that of the management computer is realized by a plurality of computers, the plurality of computers (which may include a display computer when the display computer performs the display) are the management system.
  • the display system may be a display device included in the management computer or a display computer connected to the management computer.
  • the I / O system may be an I / O device (for example, a keyboard and a pointing device or a touch panel) included in the management computer, a display computer connected to the management computer, or another computer.
  • “Displaying display information” by the management computer means displaying the display information on the display system, which may be displaying the display information on a display device included in the management computer.
  • the management computer may transmit display information to the display computer (in the latter case, the display information is displayed by the display computer).
  • the management computer inputting/outputting information may be inputting/outputting information to/from an I/O device of the management computer, or inputting/outputting information to/from a remote computer (for example, a display computer) connected to the management computer.
  • the information output may be a display of information.
  • the “host system” is a system that transmits an I/O request to the storage system, and may include an interface device, a storage resource (for example, a memory), and a processor connected to them.
  • the host system may be composed of one or more host computers.
  • the at least one host computer may be a physical computer, and the host system may include a virtual host computer in addition to the physical host computer.
  • the “storage system” may be one or more storage apparatuses, and may include a plurality of PDEVs (for example, one or more RAID groups) and a storage controller that controls I/O for the plurality of PDEVs.
  • the storage controller may include a back-end interface device connected to the plurality of PDEVs, a front-end interface device connected to at least one of the host system and the management system, a storage resource, and a processor connected to them.
  • the storage controller may be made redundant.
  • VOL is an abbreviation for logical volume and may be a logical storage device.
  • the VOL may be a substantial VOL (RVOL) or a virtual VOL (VVOL).
  • the VOL may be an online VOL that is provided to a host system connected to the storage system providing the VOL, or an offline VOL that is not provided to the host system (not recognized by the host system).
  • the “RVOL” may be a VOL based on a physical storage resource (for example, one or more RAID groups) possessed by the storage system having the RVOL.
  • VVOL may be at least one of an external connection VOL (EVOL), a capacity expansion VOL (TPVOL), and a snapshot VOL.
  • the EVOL is based on a storage space (for example, VOL) of an external storage system, and may be a VOL according to a storage virtualization technology.
  • the TPVOL is composed of a plurality of virtual areas (virtual storage areas), and may be a VOL according to a capacity virtualization technology (typically Thin Provisioning).
  • the snapshot VOL may be a snapshot VOL that is provided as a snapshot of the original VOL.
  • the TPVOL may typically be an online VOL.
  • the snapshot VOL may be an RVOL.
  • a “pool” is a logical storage area (for example, a set of a plurality of pool VOLs), and may be prepared for each use.
  • the TP pool may be a storage area composed of a plurality of real areas (substantial storage areas). A real area may be allocated from the TP pool to the virtual area of TPVOL.
  • the snapshot pool may be a storage area in which data saved from the original VOL is stored.
  • One pool may be used as both a TP pool and a snapshot pool.
  • the “pool VOL” may be a VOL that is a component of the pool.
  • the pool VOL may be an RVOL or an EVOL.
  • the pool VOL may typically be an offline VOL.
  • FIG. 1 is a schematic diagram showing an outline of a storage system according to this embodiment.
  • the storage system 1 includes a storage device 10a and a storage device 10b.
  • the storage device 10a and the storage device 10b can transmit and receive data through a predetermined network or communication line.
  • the storage device 10a includes a PDEV and a storage controller 20a that controls the PDEV.
  • the storage controller 20a controls the PDEV to configure the first VOL 80a.
  • the storage apparatus 10b includes a PDEV and a storage controller 20b that controls the PDEV.
  • the storage controller 20b constitutes the second VOL 80b.
  • when backing up the data of the first VOL 80a to the second VOL 80b, the storage controller 20a performs the following processing.
  • the storage controller 20a backs up all data stored in the first VOL 80a to the second VOL 80b. As a result, the first VOL 80a and the second VOL 80b have the same data (synchronized state).
  • after the first VOL 80a and the second VOL 80b enter the synchronized state, the storage controller 20a stores data newly written in the first VOL 80a (including overwritten data, that is, difference data) in a predetermined area (referred to as the “difference data storage area”) 102a.
  • the storage controller 20a acquires the difference data from the difference data storage area 102a at a predetermined timing (after a certain period has elapsed, or when a predetermined amount or more of difference data has accumulated), and transmits the acquired difference data to the storage device 10b.
  • the storage controller 20b receives the difference data and stores it in the difference data storage area 102b of the second VOL 80b.
  • the difference data storage area consists of multiple slots, and the difference data is managed in slot units.
  • when data is written to an entire slot, the data stored in that slot itself becomes the difference data.
  • when data is written to only part of a slot, the slot-unit data including the written data becomes the difference data.
  • a VOL address indicating the order is given to a plurality of slots constituting the difference data storage area.
  • the storage controller 20a has information indicating whether or not differential data is stored in each slot in the differential data storage area 102a of the first VOL 80a. This information is referred to as a difference map 90a.
  • the difference map 90a is composed of a plurality of difference bits, and the difference bits correspond to the slots (VOL address) of the difference data storage area on a one-to-one basis.
  • the difference bit corresponding to a slot in which difference data is stored is on (for example, “1”).
  • the difference bit corresponding to an empty slot in which no difference data is stored is off (for example, “0”).
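As a concrete illustration of the difference map described above, the following is a minimal Python sketch of a per-slot bitmap. The class name and its methods are illustrative assumptions, not the patent's actual data structures.

```python
class DifferenceMap:
    """Minimal sketch of a difference map: one difference bit per slot of the VOL."""

    def __init__(self, num_slots: int):
        self.bits = [0] * num_slots  # 0 = difference bit off, 1 = difference bit on

    def mark_write(self, slot_index: int) -> None:
        # A host write to any part of a slot turns that slot's difference bit on.
        self.bits[slot_index] = 1

    def clear(self, slot_index: int) -> None:
        # After the difference data has been copied to the secondary VOL,
        # the bit is turned off again.
        self.bits[slot_index] = 0


if __name__ == "__main__":
    dmap = DifferenceMap(num_slots=8)
    dmap.mark_write(0)
    dmap.mark_write(5)
    print(dmap.bits)  # [1, 0, 0, 0, 0, 1, 0, 0]
```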
  • the method by which the storage controller 20a obtains differential data from the differential data storage area 102a includes sequential access and random access.
  • Sequential access can be used when differential data is acquired at a time from consecutive slots of VOL addresses.
  • the storage controller 20a issues a sequential access command designating a head VOL address and a data length, and can thereby sequentially acquire the plurality of pieces of difference data stored in the plurality of slots covered by the head VOL address and the data length.
  • Random access can be used when obtaining differential data from each slot.
  • the storage controller 20a can obtain the difference data stored in the slot corresponding to the VOL address by issuing a random access command designating the VOL address.
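The two access patterns can be illustrated with a short sketch: runs of consecutive on-bits are fetched with one sequential access, isolated on-bits with random access. The callbacks read_slots and read_slot are hypothetical stand-ins for the actual sequential and random access commands.

```python
def acquire_difference_data(bits, read_slots, read_slot):
    """Sketch: fetch difference data using one sequential read per run of
    consecutive on-bits, and a random read for isolated on-bits.

    'read_slots(start, length)' and 'read_slot(addr)' are hypothetical
    callbacks standing in for the sequential and random access commands.
    """
    data = []
    i = 0
    while i < len(bits):
        if bits[i] == 0:
            i += 1
            continue
        run_start = i
        while i < len(bits) and bits[i] == 1:
            i += 1
        run_len = i - run_start
        if run_len > 1:
            data.extend(read_slots(run_start, run_len))   # sequential access
        else:
            data.append(read_slot(run_start))             # random access
    return data
```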
  • the storage controller 20a performs differential data relocation processing on the differential data storage area 102a.
  • the difference data rearrangement process is a process of rearranging the difference data so that the difference data is continuous.
  • the storage controller 20a can acquire a large amount of difference data from the difference data storage area 102a after rearrangement by one sequential access. Next, the difference data rearrangement process will be described.
  • FIG. 2 is a diagram for explaining an example of the difference data rearrangement process.
  • the slot in the difference data storage area 102a and the difference bit in the difference map 90 have a one-to-one correspondence.
  • the rearrangement process of the difference map 90 is performed by the following steps, for example.
  • (B1) the storage controller 20a divides the difference map 90-1, which is an example of the difference information, into a plurality of submaps, which are an example of the difference information parts.
  • the difference map 90-1 is divided into four submaps R1, R2, R3, and R4.
  • (B2) the storage controller 20a counts the number of difference bit-ons in each submap.
  • the number of difference bit on (black block) in the submap R1 is “9”
  • the number of difference bit on in the submap R2 is “5”
  • the number of differential bit-ons in the submap R4 is “7”.
  • (B3) the storage controller 20a selects, from among the plurality of submaps, a submap to be the relocation destination (movement destination) of the difference data (referred to as the “relocation destination submap”).
  • the storage controller 20a may preferentially select a submap having a large number of differential bit-ons as a relocation destination.
  • for example, in the difference map 90-1 of FIG. 2, since the number of difference bit-ons in the submap R1 is the largest, the storage controller 20a may select this submap R1 as the relocation destination.
  • (B4) the storage controller 20a selects, from among the plurality of submaps, a submap to be the relocation source (movement source) of the difference data (referred to as the “relocation source submap”).
  • the storage controller 20a may preferentially select a submap with a small number of differential bit-ons as a relocation source. For example, in the difference map 90-1 of FIG. 2, since the number of difference bit-ons in the submap R2 is the smallest, the storage controller 20a selects this submap R2 as the relocation source.
  • (B5) the storage controller 20a selects, from among the difference bit-offs of the relocation destination submap, the difference bit-off to be the exchange destination of a difference bit-on of the relocation source submap.
  • the storage controller 20a may preferentially select, as the exchange destination, the difference bit-off with the earliest VOL address among the difference bit-offs of the relocation destination submap.
  • thereby, the difference bit-ons can be rearranged contiguously in the relocation destination submap.
  • the difference bit off 1002 of the rearrangement destination submap R1 is selected as the replacement destination of the difference bit on 2002 of the rearrangement source submap R2.
  • the difference bit off 1003 of the rearrangement destination submap R1 is selected as the replacement destination of the difference bit on 2004 of the rearrangement source submap R2.
  • (B6) the storage controller 20a exchanges the exchange-source difference bit-on of the relocation source submap with the exchange-destination difference bit-off of the relocation destination submap (S11). That is, the storage controller 20a exchanges the physical address in which the difference data corresponding to the exchange-source VOL address is stored with the physical address, corresponding to the exchange-destination VOL address, in which no difference data is stored. As a result, the physical address in which no difference data is stored is associated with the exchange-source VOL address, and the physical address in which the difference data is stored is associated with the exchange-destination VOL address.
  • the physical address may be an address corresponding to a storage area constituting the VOL.
  • the physical address may be an address corresponding to a storage area on the PDEV 31 constituting the VOL.
  • the physical address may be an address corresponding to a lower logical storage area constituting the VOL.
  • the storage controller 20a executes the above (B4) to (B6) for the other submaps.
  • that is, the storage controller 20a selects the submap R4, which has the smallest number of difference bit-ons after the submap R2, as the relocation source and executes the above (B4) to (B6) (S12), and finally selects the submap R3 as the relocation source and executes the above (B4) to (B6) (S13).
  • as a result, the difference map 90-2 in FIG. 2 becomes the state of the difference map 90-3 in FIG. 2.
  • the storage controller 20a acquires the difference data relating to the difference bit-on of the rearrangement destination submap. At this time, the storage controller 20a obtains differential bit-on that is continuously arranged, that is, differential data that is continuously stored, using a sequential access command in the rearrangement destination submap. In the difference map 90-3 of FIG. 2, difference bit-on 1001 to 1029 are continuously arranged. Therefore, the storage controller 20a issues a sequential access command once, and acquires continuous differential data corresponding to differential bit-on 1001 to 1029 at a time. As a result, the storage controller 20a can reduce the number of times the command for acquiring the difference data is issued.
  • the storage controller 20a transmits the acquired plurality of difference data to the second storage device 10b.
  • the storage controller 20b stores the transmitted plurality of difference data in the second VOL 80b. As a result, the first VOL 80a and the second VOL 80b are synchronized again.
  • the storage controller 20a may divide the difference map 90 in any way.
  • the storage controller 20a may divide the difference map 90 of FIG. 2 into a cross shape.
  • the number of divisions of the difference map 90-1 may be determined based on the number of difference bits of the difference map 90-1.
  • the number of divisions may be determined so that the number of difference bits belonging to each submap is a predetermined number.
  • the storage controller 20a may divide the difference map 90 so that the number of difference bits in each submap is as equal as possible. This is to facilitate comparison of the number of differential bit-ons in (B3) and (B4).
  • in (B3) and (B4), the storage controller 20a may use, instead of the number of difference bit-ons in the submap, the ratio of the number of difference bit-ons to the total number of difference bits in the submap. For example, in (B3), the storage controller 20a may preferentially select a submap having a large difference bit-on ratio as the relocation destination, and in (B4), may preferentially select a submap having a small difference bit-on ratio as the relocation source.
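Putting steps (B1) to (B6) together, here is a minimal sketch of the relocation over a difference bitmap. It only swaps bits and records the (source, destination) slot pairs; in the patent each swap corresponds to exchanging the physical addresses mapped to the two VOL addresses. The greedy swap into the earliest off-bits follows the description above, but the function and variable names are illustrative assumptions.

```python
def relocate_difference_bits(bits, num_submaps):
    """Sketch of (B1)-(B6): divide the difference map into submaps, pick the
    submap with the most on-bits as the relocation destination, and swap
    on-bits from the other submaps (fewest on-bits first) into the earliest
    off-bit positions of the destination.
    """
    size = len(bits) // num_submaps                              # (B1) division
    submaps = [range(i * size, (i + 1) * size) for i in range(num_submaps)]
    on_count = [sum(bits[s] for s in sm) for sm in submaps]      # (B2) count on-bits

    dest = max(range(num_submaps), key=lambda i: on_count[i])    # (B3) destination
    dest_offs = [s for s in submaps[dest] if bits[s] == 0]       # earliest off-bits first

    swaps = []
    sources = sorted((i for i in range(num_submaps) if i != dest),
                     key=lambda i: on_count[i])                  # (B4) fewest on-bits first
    for src in sources:
        for slot in submaps[src]:
            if bits[slot] == 1 and dest_offs:
                target = dest_offs.pop(0)                        # (B5) earliest off-bit
                bits[slot], bits[target] = bits[target], bits[slot]  # (B6) exchange
                swaps.append((slot, target))
    return swaps
```

Called with num_submaps=4 on a bitmap like the one in FIG. 2, the on-bits of the other submaps end up packed into the leading off-bit positions of the destination submap, which is the state from which a single sequential access command can fetch them.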
  • FIG. 3 is a schematic diagram showing an example of a configuration in the storage system 1.
  • the storage system 1 is composed of two or more storage devices 10a and 10b.
  • the storage apparatuses 10a and 10b may be configured as one apparatus.
  • the storage device 10 is a device for storing data.
  • the storage apparatus 10 can send and receive data to and from the host system 8 and the management system 9 through the communication network 7.
  • the communication network 7 may be, for example, a SAN (Storage Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), or a combination thereof.
  • the management system 9 is a system for managing the storage system 1.
  • the management system 9 may be composed of one or more management computers.
  • the host system 8 is a system that uses the storage system 1.
  • the host system 8 may be composed of one or more host computers.
  • the host computer may be able to store the data in the storage device 10 by sending a write request to the storage device 10.
  • the host computer may be able to acquire data from the storage apparatus 10 by sending a read request to the storage apparatus 10.
  • the storage apparatus 10 includes one or more storage controllers 20 and one or more PDEVs 31.
  • the PDEV 31 is an example of a physical storage device.
  • the PDEV 31 may be, for example, an HDD, SSD, or FM package.
  • the FM package includes an FM (flash memory) and an FM controller that controls input / output of data to / from the FM.
  • the FM is composed of one or more FM chips.
  • the FM chip is composed of a plurality of physical areas. Specifically, for example, the FM chip is a NAND flash, and is configured by a plurality of “physical blocks”, and each physical block is configured by a plurality of “physical pages”.
  • a physical block or a physical page is an example of a physical area. Data is accessed (read and written) in physical page units, and data is erased in physical block units.
  • the logical space (for example, all or a part of the VOL) managed by the storage controller 20 is based on a plurality of sub-logical spaces provided by a plurality of FM packages constituting a RAID group.
  • One sub logical space may be divided into a plurality of logical areas.
  • the sub logical space may be composed of a plurality of “logical blocks”, and each logical block may be composed of a plurality of “logical pages”.
  • a logical block or logical page may be an example of a logical area.
  • a logical area may be rephrased as a logical address range.
  • the FM controller manages the correspondence between logical addresses and physical addresses (for example, holds address conversion information indicating the correspondence between logical addresses and physical addresses), specifies the logical area to which the logical address designated by an I/O command from the host device belongs, specifies the physical area corresponding to the specified logical area, and performs I/O on the specified physical area.
  • the logical address is, for example, LBA (Logical Block Address), and a logical area ID (for example, a logical block number or a logical page number) or the like may be employed instead of or in addition to the LBA.
  • the physical address is, for example, PBA (Physical Block Address), and a physical area ID (for example, a physical block number or a physical page number) or the like may be employed instead of or in addition to PBA.
  • the FM is of a write-once type. Specifically, when a logical area to which a physical page (hereinafter referred to as a “page”) has already been assigned is the write destination, a new empty page is allocated in place of the assigned page, and the data is written to the newly allocated page. For each logical area, the data written to the newly allocated page is “valid data”, the page to which the valid data is written is a “valid page”, the data stored in the previously assigned page is “invalid data”, and the page in which the invalid data is written is an “invalid page”.
  • a physical page that is neither a valid page nor an invalid page and can store new data is an “empty page”.
  • a non-volatile semiconductor memory other than FM, for example, PRAM (Phase-change Random Access Memory), MRAM (Magnetoresistive Random Access Memory), ReRAM (Resistive Random Access Memory), or FeRAM (Ferroelectric Random Access Memory), may be employed.
  • the storage controller 20 controls the PDEV 31 to configure a VOL that is a logical storage device.
  • the VOL may be a substantial VOL (RVOL) or a virtual VOL (VVOL).
  • the storage controller 20 processes I / O for the VOL. For example, when the storage controller 20 receives a write request transmitted from the host system 8, the storage controller 20 stores the received write data in the VOL. That is, the storage controller 20 writes the write data to the PDEV 31 that constitutes the VOL. When the storage controller 20 receives a read request from the host system 8, the storage controller 20 acquires the requested read data from the VOL and returns it to the host system 8. Other processing in the storage controller 20 will be described later.
  • the storage controller 20 may be composed of a processor 21, a local memory 22, a shared memory 23, a cache memory 24, a host I/F 26, a disk I/F 25, a remote I/F 27, and an internal bus 28 that enables bidirectional data communication between these elements. There may be a plurality of each of these elements. The processing in the storage controller 20 may be realized by these elements operating in cooperation.
  • the local memory 22 is a memory for storing various data and programs used by the processor 21. Examples of data and programs stored in the local memory 22 will be described later.
  • the shared memory 23 is a memory for storing data shared by a plurality of elements. An example of data stored in the shared memory 23 will be described later.
  • the cache memory 24 is a memory for storing temporary data and the like.
  • the read data or the write data is temporarily stored in the cache memory 24 in order to shorten the response time of the storage controller 20 to the read request or write request.
  • the host I / F 26 is an I / F for the storage controller 20 (the processor 21) to transmit / receive data to / from the host system 8 through the communication network 7.
  • the host I / F 26 is, for example, a SAS adapter or a LAN adapter.
  • the disk I / F 25 is an I / F for the storage controller 20 (the processor 21) to transmit / receive data to / from the PDEV 31.
  • the disk I / F 25 is, for example, a PCIe adapter, a SATA adapter, a SAS adapter, or the like.
  • the remote I / F 27 is an I / F for transmitting and receiving data between the storage apparatus 10a and the storage apparatus 10b that is the synchronization target. That is, the difference data in the storage apparatus 10a is transmitted to the storage apparatus 10b through this remote I / F 27.
  • the remote I / F 27 may be connected to the communication network 7 or may be connected to a predetermined communication line or communication network connected to the storage device 10b.
  • the storage controller 20 can execute the I / O program 41, the differential data relocation program 43, and the VOL synchronization program 42. These programs may be stored in the local memory 22. Then, the contents of these processes may be realized by executing these programs by the processor 21. Alternatively, these processes may be configured as a predetermined logic operation circuit (for example, ASIC (Application Specific Integrated Circuit)). The contents of these processes may be realized by these logical operation circuits operating independently or in cooperation with the processor 21.
  • the storage controller 20 manages a submap management table 52, a synchronization setting table 54, a synchronization time analysis table 56, a synchronization time management table 58, an operation rate threshold setting table 60, and an adoption determination setting table 62.
  • the storage controller 20 may manage only one of these tables. These tables may be stored in the local memory 22. Alternatively, these tables may be stored in either the shared memory 23 or the PDEV 31. Alternatively, these tables may be stored in another device connected by the communication network 7.
  • the storage controller 20 manages the difference map 90.
  • the difference map 90 may be stored in any of the local memory 22, the shared memory 23, or the PDEV 31.
  • the difference data rearrangement program 43 is a process for rearranging difference data in the difference data storage area.
  • the difference data rearrangement program 43 may include a division process for dividing the difference map 90 into two or more submaps, a selection process for selecting a relocation destination submap from the plurality of submaps, and an exchange process for exchanging a difference bit-on of a relocation source submap with a difference bit-off of the relocation destination submap.
  • the division number of the difference map 90 may be determined based on the total number of difference bits included in the difference map 90.
  • the difference map 90 may be divided so that the total number of difference bits included in each submap is as equal as possible.
  • the difference map 90 may be divided so that each submap has a predetermined total number of difference bits. This is to facilitate comparison between submaps.
  • the difference data rearrangement program 43 is executed when necessary, and need not be executed when unnecessary. This is because the time allowed for the resynchronization processing varies depending on the policy of the user who uses the storage system, and it is not necessary to execute the differential data relocation processing, which imposes a processing load, when the user policy is already satisfied.
  • the storage controller 20 calculates the number of difference data (predicted number) when the VOL synchronization program 42 is executed next time, and determines whether or not the predicted number of difference data is larger than a predetermined threshold.
  • the storage controller 20 does not need to execute the replacement process when the determination result is affirmative and performs the replacement process when the determination result is negative.
  • the predicted number of difference data may be calculated based on the past VOL synchronization program 42.
  • the VOL synchronization program 42 is a process for synchronizing the data of the first VOL 80a and the second VOL 80b.
  • the VOL synchronization program 42 may include a process of transmitting difference data newly stored in the first VOL 80a to the second VOL 80b after the synchronization state.
  • the VOL synchronization program 42 may be executed at a constant cycle. Alternatively, the VOL synchronization program 42 may be executed when the difference data has accumulated a predetermined amount or more.
  • in executing the VOL synchronization program 42, the storage controller 20 first acquires the relocated differential data from the differential data storage area of the first VOL 80a. Then, the storage controller 20 transmits the acquired difference data to the storage apparatus 10b having the second VOL 80b. The transmitted difference data is stored in (that is, synchronized with) the second VOL 80b of the storage apparatus 10b.
  • the VOL synchronization program 42 may include a process of acquiring a plurality of differential data stored continuously at a time using a sequential access command.
  • the VOL synchronization program 42 may include a process of acquiring difference data stored discontinuously with a random access command.
  • the I / O program 41 is a process for controlling a write request and a read request received from the host system 8.
  • when the I/O program 41 receives a write request from the host system 8, the I/O program 41 includes processing for storing the write data (difference data) in the first VOL 80a and transmitting a response (success or failure) to the write request to the host system 8.
  • when the I/O program 41 receives a read request from the host system 8, the I/O program 41 includes processing for acquiring the requested data from the first VOL 80a and transmitting the acquired data to the host system 8.
  • when the I/O program 41 receives a write request during execution of the differential data rearrangement program 43, the I/O program 41 may include processing for storing the write data (difference data) preferentially in an empty slot belonging to the relocation destination submap. Thereby, the continuity of difference bit-ons in the relocation destination submap can be enhanced. In addition, it is possible to prevent a difference bit-on from newly occurring in a relocation source submap that has already been relocated. Details of this processing will be described later (see FIG. 15); a sketch of the slot selection follows below.
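A minimal sketch of the slot selection just described, under the assumption that difference data may be placed in any empty slot of the difference data storage area; the function and argument names are hypothetical.

```python
def choose_difference_slot(dest_submap_slots, all_slots, bits):
    """Sketch: when a write arrives during relocation, prefer an empty slot
    belonging to the relocation destination submap; otherwise fall back to
    any empty slot of the difference data storage area.
    """
    for slot in dest_submap_slots:
        if bits[slot] == 0:          # empty slot in the destination submap
            return slot
    for slot in all_slots:
        if bits[slot] == 0:          # any other empty slot
            return slot
    return None                      # no empty slot available
```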
  • the difference map 90 is as described above. Details will be described later (see FIG. 4).
  • the submap management table 52 is a table for managing information related to each submap. Details will be described later (see FIG. 5).
  • the synchronization setting table 54 is a table in which setting values related to the VOL synchronization program 42 are stored. Details will be described later (see FIG. 6).
  • the synchronization time analysis table 56 is a table in which values for analyzing the time required for the VOL synchronization program 42 are stored. Details will be described later (see FIG. 7).
  • the synchronization time management table 58 is a table for managing information related to the time required for the VOL synchronization program 42. Details will be described later (see FIG. 8).
  • the operating rate threshold setting table 60 is a table in which thresholds used for determining whether or not the differential data rearrangement program 43 is to be executed are stored. Details will be described later (see FIG. 9).
  • the adoption determination setting table 62 is a table that stores whether or not each determination regarding the necessity of executing the differential data rearrangement program 43 is adopted. Details will be described later (see FIG. 10).
  • FIG. 4 shows a configuration example of the difference map 90.
  • the difference map 90 is information for managing an address assigned to a slot in the VOL (referred to as a “VOL address”) and a difference bit indicating whether or not difference data is stored in the slot. is there.
  • the difference map 90 may have a VOL address 101 and a difference bit 102 as field values.
  • the VOL address 101 is an address assigned to a slot included in the VOL difference data storage area.
  • the difference bit 102 is a flag indicating whether or not difference data is stored in the slot indicated by the VOL address 101.
  • the difference bit corresponding to a VOL address 101 in which difference data is stored is on (“1”), and the difference bit corresponding to a VOL address 101 in which no difference data is stored is off (“0”).
  • the VOL address 101 “0x0000 to 0x0030” belongs to the rearrangement destination submap
  • the VOL address 101 “0x0270” belongs to the rearrangement source submap.
  • the differential bit on of the VOL address 101 “0x0270” is exchanged with any differential bit off belonging to the rearrangement destination submap.
  • among the difference bit-offs belonging to the relocation destination submap, the one with the earliest VOL address 101 is “0x0010”.
  • in this case, the difference bit-on 102 of the VOL address 101 “0x0270” and the difference bit-off 102 of the VOL address 101 “0x0010” may be exchanged. That is, the physical address in which the difference data corresponding to the exchange-source VOL address 101 “0x0270” has been stored and the physical address, corresponding to the exchange-destination VOL address 101 “0x0010”, in which no difference data is stored may be exchanged. As a result, the two difference bits of the VOL addresses 101 “0x0000” and “0x0010” are on in succession. By repeating this, the continuity of difference bit-ons in the relocation destination submap can be enhanced.
  • FIG. 5 shows a configuration example of the submap management table 52.
  • the submap management table 52 is a table for managing information on each submap divided by the division processing on the difference map 90.
  • the submap management table 52 may include, as field values, a submap ID 111, a VOL address section 112, a difference bit-on number 113, a total number of difference bits 114, a difference bit-on ratio 115, a rank 116, and a difference bit-on tail 117.
  • the submap ID 111 is a value for identifying the submap.
  • the submap ID 111 may be the ID of the difference map 90.
  • the submap ID 111 “R” in FIG. 5 may be an ID of the difference map 90
  • the submap IDs 111 “R1” to “R4” may be IDs of submaps divided from the difference map 90 “R”.
  • the VOL address section 112 is a section of a VOL address belonging to the submap of the submap ID 111.
  • the difference bit-on number 113 is the number of difference bit-ons belonging to the submap of the submap ID 111 (that is, the number of difference data).
  • the total number of difference bits 114 is the total number of difference bits belonging to the submap of the submap ID 111 (that is, the total number of slots).
  • the difference bit-on ratio 115 is the ratio of the difference bit-on number 113 to the total number 114 of difference bits in the submap of the submap ID 111.
  • the rank 116 is a value indicating how large the difference bit-on ratio 115 of the submap of the submap ID 111 is among all the submaps divided from the same difference map 90 (that is, the rank of the ratio).
  • the difference bit-on tail 117 is information regarding the difference bit-on at the end of the VOL address range in the submap of the submap ID 111.
  • when there is no difference bit-on in the submap, the difference bit-on tail 117 may be “0”.
  • the difference bit-on tail 117 may be “-1”.
  • the difference bit-on tail 117 may be the VOL address of the last difference bit-on, when it is known.
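For illustration, one row of the submap management table 52 could be represented as follows; the field names follow FIG. 5, while the types, the derived-ratio property, and the ranking helper are assumptions.

```python
from dataclasses import dataclass


@dataclass
class SubmapRecord:
    """Sketch of one row of the submap management table 52."""
    submap_id: str            # submap ID 111, e.g. "R1"
    vol_address_range: range  # VOL address section 112
    bit_on_count: int         # difference bit-on number 113
    bit_total: int            # total number of difference bits 114
    bit_on_tail: int          # difference bit-on tail 117 (0 when no on-bit)

    @property
    def bit_on_ratio(self) -> float:
        # Difference bit-on ratio 115 = on-count / total.
        return self.bit_on_count / self.bit_total if self.bit_total else 0.0


def ranked(records):
    """Assign rank 116: rank 1 to the submap with the largest on-ratio."""
    ordered = sorted(records, key=lambda r: r.bit_on_ratio, reverse=True)
    return {r.submap_id: i + 1 for i, r in enumerate(ordered)}
```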
  • FIG. 6 shows a configuration example of the synchronization setting table 54.
  • the synchronization setting table 54 is a table in which setting values related to the VOL synchronization program 42 are stored.
  • the synchronization setting table 54 may include, as field values, a pair ID 121, an upper limit time 122, a previous resynchronization time 123, a safety factor 124, and a next relocation necessity 125.
  • the pair ID 121 is a value for identifying a VOL pair to be synchronized (backup target).
  • the pair ID 121 may be, for example, a combination of two VOL IDs to be synchronized.
  • the upper limit time 122 is a value indicating the upper limit of the time that can be applied to the VOL synchronization program 42 between the VOLs of the pair ID 121.
  • the upper limit time 122 may be set by the user, or may be automatically set by the storage controller 20.
  • the previous resynchronization time 123 is the time taken by the previous VOL synchronization program 42 between the VOLs of the pair ID 121.
  • the safety factor 124 is a value indicating how much margin is provided for the upper limit time 122.
  • the upper limit time 122 of the pair ID 121 “first VOL: second VOL” in FIG. 6 is “200 minutes”, and the safety factor 124 is “10%”.
  • the safety factor 124 may be set by the user or automatically set by the storage controller 20.
  • the next relocation necessity 125 is information indicating whether or not the differential data relocation program 43 needs to be executed before the next VOL synchronization program 42 between the VOLs of the pair ID 121.
  • when the differential data relocation program 43 needs to be executed before the next VOL synchronization program 42, the next relocation necessity 125 may be “necessary”, and when the differential data relocation program 43 is not necessary, the next relocation necessity 125 may be “unnecessary”.
  • the next relocation necessity 125 may be automatically determined by the storage controller 20 based on the upper limit time 122, the safety factor 124, and the previous resynchronization time 123.
  • for example, when the previous resynchronization time 123 exceeds the upper limit time 122 reduced by the margin of the safety factor 124, the storage controller 20 may determine that the next relocation necessity 125 is “necessary”; if not, the next relocation necessity 125 may be determined as “unnecessary”. This is because the time required for the VOL synchronization program 42 can be shortened by executing the differential data relocation program 43.
  • the storage controller 20 may generate a GUI (Graphical User Interface) that can input the upper limit time 122 and the safety factor 124 for each pair ID 121.
  • a result of whether or not the next rearrangement is necessary may be displayed in conjunction with the input upper limit time 122 and safety factor 124.
  • This GUI may be output to the display device of the host system 8 used by the user.
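The exact rule for deriving the next relocation necessity 125 is not spelled out in this excerpt; the following sketch assumes the plausible rule suggested above, comparing the previous resynchronization time 123 with the upper limit time 122 reduced by the safety-factor margin.

```python
def next_relocation_needed(upper_limit_min, safety_factor, previous_resync_min):
    """Sketch (assumed rule): relocation is needed when the previous
    resynchronization time exceeds the upper limit reduced by the
    safety-factor margin, e.g. 200 min * (1 - 0.10) = 180 min.
    """
    allowed = upper_limit_min * (1.0 - safety_factor)
    return "necessary" if previous_resync_min > allowed else "unnecessary"


# Example with the values of FIG. 6: upper limit 200 minutes, safety factor 10%.
print(next_relocation_needed(200, 0.10, 190))  # -> "necessary"
print(next_relocation_needed(200, 0.10, 150))  # -> "unnecessary"
```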
  • FIG. 7 shows a configuration example of the synchronization time analysis table 56.
  • the synchronization time analysis table 56 is a table in which values for analyzing the time required for the VOL synchronization program 42 are stored.
  • the synchronization time analysis table 56 may store both records including actual measured values in the past VOL synchronization program 42 and records including predicted values in the future VOL synchronization program 42.
  • the synchronization time analysis table 56 may have, as field values, a synchronization ID 131, a Q arrival time 132, a difference bit-on number 133, a difference bit-on increase rate 134, and a time 135 required for resynchronization. .
  • the synchronization ID 131 is information for identifying which VOL synchronization program 42 the record relates to.
  • the synchronization ID 131 may include information for identifying whether the record is an actual measurement value or a predicted value. For example, the synchronization ID 131 “1 (actual measurement)” in FIG. 7 indicates that the record includes an actual measurement value in the first VOL synchronization program 42 that has been executed.
  • the synchronization ID 131 “2 (prediction)” indicates that the record includes a predicted value in the second VOL synchronization program 42 scheduled to be executed.
  • the synchronization ID 131 may be the time when the VOL synchronization program 42 is executed or the time when it is scheduled to be executed.
  • the Q arrival time 132 is an actual time taken until the number of differential bit-on 133 reaches the threshold value Q in the VOL synchronization program 42 of the synchronization ID 131.
  • the Q arrival time 132 may be an actual measurement value in any record of actual measurement and prediction.
  • the difference bit-on number 133 related to actual measurement is the actual number of difference bit-ons when the VOL synchronization program 42 of the synchronization ID 131 is executed.
  • the difference bit-on number 133 related to the prediction is the number of difference bit-on predicted when the VOL synchronization program 42 of the synchronization ID is executed.
  • the difference bit-on increase rate 134 related to the actual measurement is a value indicating a rate of increase in the number of differential bit-on per unit time until the VOL synchronization program 42 with the synchronization ID 131 is actually executed.
  • the difference bit-on increase rate 134 relating to the prediction is a value indicating a rate of increase in the number of difference bit-on per unit time predicted until the time when the VOL synchronization program 42 of the synchronization ID 131 is executed. .
  • the time 135 required for the resynchronization related to the actual measurement is the time actually taken from the start to the completion of the VOL synchronization program 42 with the synchronization ID 131.
  • the time 135 required for the resynchronization related to the prediction is a time estimated to be taken from the start to the completion of the VOL synchronization program 42 of the synchronization ID 131.
  • for example, in the VOL synchronization program 42 with the synchronization ID 131 “1 (actual measurement)” (the VOL synchronization program 42 of the previous cycle), assume that the Q arrival time 132 is “tp (minutes)”, the difference bit-on number 133 is “Np (pieces)”, the difference bit-on increase rate 134 is “np (pieces/minute)”, and the time 135 required for resynchronization is “Tp (minutes)”. Then, assume that the Q arrival time 132 is “tc (minutes)” in the VOL synchronization program 42 with the synchronization ID 131 “2 (prediction)” (the VOL synchronization program 42 of the current cycle).
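The prediction formulas themselves are not reproduced in this excerpt. The sketch below shows one plausible proportional model over the quantities just defined (tp, Np, np, Tp, tc); it is an assumption for illustration, not the patent's stated calculation.

```python
def predict_current_cycle(tp, Np, np_rate, Tp, tc):
    """Sketch (assumed model): scale the previous cycle's measurements by the
    ratio of Q arrival times.  A shorter Q arrival time tc means difference
    bit-ons are accumulating faster, so the predicted count and the predicted
    resynchronization time grow proportionally.
    """
    scale = tp / tc                 # how much faster on-bits accumulate now
    nc = np_rate * scale            # predicted increase rate 134 (pieces/minute)
    Nc = Np * scale                 # predicted difference bit-on number 133
    Tc = Tp * scale                 # predicted time 135 required for resync
    return nc, Nc, Tc


# Example: the previous cycle reached Q in 60 minutes with 10,000 on-bits and a
# 180-minute resynchronization; this cycle reached Q in only 30 minutes.
print(predict_current_cycle(tp=60, Np=10_000, np_rate=100, Tp=180, tc=30))
```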
  • FIG. 8 shows a configuration example of the synchronization time management table 58.
  • the synchronization time management table 58 is a table for managing information related to the time required for the VOL synchronization program 42.
  • the synchronization time management table 58 may include, as field values, a pair ID 141, an upper limit time 142, a safety factor 143, an allowable time 144, a predicted time 145, a predicted excess time 146, a measured sequential ratio 147, and a target sequential ratio 148.
  • the pair ID 141, the upper limit time 142, and the safety factor 143 are as described above (see FIG. 6).
  • Allowable time 144 is a time allowed for the VOL synchronization program 42 between the VOLs of the pair ID 141.
  • the predicted time 145 is a time predicted from the start to the completion of the VOL synchronization program 42 between the VOLs of the pair ID 141.
  • the storage controller 20 may automatically calculate the predicted time 145 based on the time (actually measured value) required for the past VOL synchronization program 42.
  • the sequential ratio may be a value indicating the ratio of the number of difference bit-ons having continuity to the sum of the number of difference bit-ons having continuity and the number of difference bit-ons having no continuity (random) in the difference map 90. Whether or not a difference bit-on has continuity may be determined based on, for example, whether or not the difference bit-on continues for a predetermined number or more.
  • the measured sequential ratio 147 is a sequential ratio measured at a certain time (for example, the current time) in the synchronization source VOL of the pair ID 141.
  • the target sequential ratio 148 is a sequential ratio targeted by the synchronization source VOL of the pair ID 141.
  • the target sequential ratio 148 may be a sequential ratio necessary for completing the VOL synchronization program 42 within the allowable time 144. That is, in order to complete the VOL synchronization program 42 within the allowable time 144, the differential data relocation program 43 may be executed until the measured sequential ratio 147 becomes equal to or greater than the target sequential ratio 148.
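A minimal sketch of how the measured sequential ratio 147 could be computed from the difference map; the run-length threshold of 2 stands in for the “predetermined number” mentioned above and is an assumption.

```python
def measured_sequential_ratio(bits, min_run=2):
    """Sketch of the measured sequential ratio 147: the fraction of difference
    bit-ons that belong to a run of at least 'min_run' consecutive on-bits.
    """
    total_on = sequential_on = 0
    i = 0
    while i < len(bits):
        if bits[i] == 0:
            i += 1
            continue
        run_start = i
        while i < len(bits) and bits[i] == 1:
            i += 1
        run_len = i - run_start
        total_on += run_len
        if run_len >= min_run:
            sequential_on += run_len
    return sequential_on / total_on if total_on else 0.0


print(measured_sequential_ratio([1, 1, 1, 0, 1, 0, 0, 1, 1]))  # 5 of 6 -> ~0.83
```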
  • FIG. 9 shows a configuration example of the operation rate threshold value setting table 60.
  • the operating rate threshold value setting table 60 is a table in which threshold values of resource operating rates used for determining whether or not the differential data rearrangement program 43 is to be executed are stored.
  • the operating rate threshold setting table 60 may have a resource ID 151 and an operating rate threshold 152 as field values.
  • the resource ID 151 is information for identifying a resource in the storage apparatus 10.
  • the resource may be, for example, a processor resource, a memory resource, or a network resource.
  • the operation rate threshold 152 indicates a threshold for the operation rate of the resource with the resource ID 151.
  • when the operation rate of the resource with the resource ID 151 exceeds the operation rate threshold 152 corresponding to the resource ID 151, the storage controller 20 temporarily stops the execution of the differential data relocation program 43 until the operation rate becomes equal to or less than the operation rate threshold 152. This is because if the differential data relocation program 43 is executed when the operation rate is high, the operation rate increases further, and the performance of other processing (for example, the performance of the I/O program 41) may be degraded.
  • the storage controller 20 may generate a GUI that can input the operation rate threshold 152 for each resource ID 151. This GUI may be output to the display device of the host system 8 used by the user.
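The throttling just described amounts to pausing the relocation work while any monitored resource exceeds its operation rate threshold 152; a minimal sketch, with get_utilization as a hypothetical callback returning the current operation rate of a resource.

```python
import time


def wait_until_idle(get_utilization, thresholds, poll_seconds=60):
    """Sketch: pause the relocation work while any resource exceeds its
    operation rate threshold 152.  'thresholds' maps resource IDs to their
    thresholds (0.0 to 1.0); 'get_utilization(resource_id)' is hypothetical.
    """
    while any(get_utilization(rid) > limit for rid, limit in thresholds.items()):
        time.sleep(poll_seconds)   # temporarily stop the relocation processing
```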
  • FIG. 10 shows a configuration example of the adoption determination setting table 62.
  • the adoption determination setting table 62 is a table that stores whether or not each determination regarding the necessity of executing the differential data rearrangement program 43 is adopted.
  • the adoption determination setting table 62 may have a determination content 161 and adoption approval / disapproval 162 as field values.
  • the determination content 161 is information indicating the content of determination regarding whether or not the differential data relocation program 43 can be executed.
  • Adoptability 162 is information indicating whether or not to adopt the determination content.
  • the adoption approval/disapproval 162 may be set by the user, or may be automatically set by the storage controller 20.
  • FIG. 11 is a flowchart showing an example of processing of the difference data relocation program 43.
  • the storage controller 20 executes necessity determination processing for determining whether or not it is necessary to execute the differential data relocation program 43 (S101). Details of the necessity determination process will be described later (see FIG. 12).
  • if the storage controller 20 determines that it is not necessary to execute the differential data rearrangement program 43 (S101: rearrangement is not necessary), the process ends. If the storage controller 20 determines that it is necessary to execute the differential data relocation program 43 (S101: relocation required), the storage controller 20 proceeds to the next processing of S102.
  • the storage controller 20 executes the dividing process of the difference map 90 (S102). Details of the dividing process of the difference map 90 will be described later.
  • the storage controller 20 executes processing related to the submap divided by the division processing (S102) of the difference map 90 (S103). Details of processing related to the submap will be described later (see FIG. 13).
  • if the storage controller 20 determines that the rearrangement of the difference data has been completed (S103: rearrangement complete), the present processing is terminated.
  • if the storage controller 20 determines that the relocation of the difference data is incomplete (S103: relocation incomplete), the storage controller 20 proceeds to the next processing of S104.
  • the storage controller 20 performs differential bit exchange processing (S104). Details of the exchange process will be described later.
  • Storage controller 20 executes post-processing (S105). Details of the post-processing will be described later. Then, the storage controller 20 returns to the process of S103.
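The FIG. 11 flow can be summarized as the loop below; ctrl and its method names are hypothetical stand-ins for the steps S101 to S105.

```python
def differential_data_relocation(ctrl):
    """Sketch of the FIG. 11 flow of the differential data relocation program 43."""
    if not ctrl.relocation_needed():        # S101 necessity determination
        return
    ctrl.divide_difference_map()            # S102 division into submaps
    while not ctrl.process_submaps():       # S103 returns True when relocation is complete
        ctrl.exchange_difference_bits()     # S104 difference bit exchange processing
        ctrl.post_process()                 # S105 post-processing
```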
  • FIG. 12 is a flowchart showing details of the necessity determination process (S101).
  • the storage controller 20 newly creates the difference map 90 and initializes it (for example, turns off all the difference bits) (S201).
  • the storage controller 20 starts measuring the Q arrival time 132 “tc” until the number of differential bit-ons becomes equal to or greater than the threshold Q in the difference map 90 (S202).
  • when the storage controller 20 receives a write command for the synchronization source VOL (S210: YES), the storage controller 20 executes a difference map update process (S211). Otherwise (S210: NO), the process returns to the process of S210. Details of the difference map update processing will be described later (see FIG. 14).
  • the storage controller 20 determines whether or not the number of difference bit-ons in the difference map 90 is equal to or greater than the threshold value Q (S220). If the determination result is negative (S220: NO), the process returns to S210.
  • if the determination result is affirmative (S220: YES), the storage controller 20 ends the measurement of the Q arrival time “tc” and stores the measured time “tc” in the synchronization time analysis table 56 (S221).
  • the storage controller 20 determines whether or not the time 135 “Tc” required for the resynchronization related to the prediction exceeds the allowable time 144 (S230).
  • If the determination result is affirmative (S230: YES), the storage controller 20 determines that relocation of the difference data is necessary (S231). In this case, the processes of S102 and thereafter in FIG. 11 are executed.
  • If the determination result is negative (S230: NO), the storage controller 20 determines that relocation of the difference data is not necessary (S232). In this case, the processes of S102 and thereafter in FIG. 11 are not executed.
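  • A minimal sketch of this necessity determination (S201 to S230) is given below, assuming a simple diff_map object with clear_all_bits, on_count and set_bit methods; next_write and predict_resync_time are caller-supplied, hypothetical helpers rather than parts of the embodiment.

```python
import time

def relocation_required(diff_map, threshold_q, allowable_time,
                        next_write, predict_resync_time):
    """Sketch of the necessity determination process of FIG. 12."""
    diff_map.clear_all_bits()                     # S201: create/initialize the difference map 90
    start = time.monotonic()                      # S202: start measuring the Q arrival time "tc"
    while diff_map.on_count() < threshold_q:      # S220: compare with the threshold Q
        write = next_write()                      # S210: wait for a write command to the sync source VOL
        diff_map.set_bit(write.slot)              # S211: difference map update (simplified)
    tc = time.monotonic() - start                 # S221: measured Q arrival time "tc"
    predicted = predict_resync_time(tc, diff_map.on_count())   # predicted resync time "Tc"
    return predicted > allowable_time             # S230: relocation is required if "Tc" exceeds the allowable time
```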
  • The storage controller 20 determines the division number of the difference map 90 based on the predicted difference bit-on number 133 "Nc" for the difference map 90 (S301).
  • The storage controller 20 divides the difference map 90 by the determined division number to generate a plurality of submaps (S302).
  • The storage controller 20 stores the submap ID 111, the VOL address section 112, and the total number of difference bits 114 of each submap in the submap management table 52.
  • The storage controller 20 also stores the difference bit-on number 113 of each submap in the submap management table 52 (S303).
  • The storage controller 20 selects, as the relocation destination submap, the submap having the largest difference bit-on number 113 from among the plurality of submaps (S304).
  • The storage controller 20 calculates the difference bit-on ratio 115 and the rank 116 based on the difference bit-on number 113 of each submap, and stores them in the submap management table 52.
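  • A minimal sketch of this division process (S301 to S304) follows, representing the difference map as a flat list of 0/1 bits; the function name and its arguments are hypothetical.

```python
def divide_difference_map(diff_bits, division_count):
    """Sketch of S301-S304: split the difference map into submaps and pick
    the submap with the most ON bits as the relocation destination."""
    size = -(-len(diff_bits) // division_count)                   # ceiling division: bits per submap
    submaps = [diff_bits[i:i + size] for i in range(0, len(diff_bits), size)]
    on_counts = [sum(sm) for sm in submaps]                       # difference bit-on number 113
    destination = on_counts.index(max(on_counts))                 # S304: largest ON count wins
    return submaps, destination

# Example: 12 slots split into 3 submaps; the first submap becomes the relocation destination.
submaps, dest = divide_difference_map([1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0], 3)
```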
  • FIG. 13 is a flowchart showing details of the process (S103) related to the submap.
  • The storage controller 20 determines whether or not an unselected submap exists in this process (S401). If there is no unselected submap (S401: NO), the storage controller 20 determines "relocation complete" (S412) and returns to the process shown in FIG. 11 (RETURN). In this case, the determination result of the process related to the submaps shown in FIG. 11 (S103) is "relocation complete", and the differential data relocation program 43 ends.
  • If an unselected submap exists (S401: YES), the storage controller 20 selects, as the relocation source submap, the submap having the smallest difference bit-on number 113 from among the unselected submaps (S402).
  • The storage controller 20 determines whether or not the relocation source submap is a target of the relocation process (S403). For example, in the submap management table 52, when the difference bit-on tail 117 of the record related to the relocation source submap is "0" (that is, when there is no difference bit-on), or when the relocation source submap and the relocation destination submap are the same, the storage controller 20 may make the determination result of S403 negative. If the determination result of S403 is negative (S403: NO), the storage controller 20 returns to the process of S401. If the determination result of S403 is affirmative (S403: YES), the storage controller 20 proceeds to the next process of S410.
  • The storage controller 20 determines whether or not the most recently measured sequential ratio 147 of the difference map 90 is equal to or greater than the target sequential ratio 148 (S410). If the determination result of S410 is affirmative (S410: YES), the storage controller 20 determines "relocation complete" (S411) and returns to the process shown in FIG. 11 (RETURN). This is because, even without relocating the difference data any further, the VOL synchronization program 42 can be completed within the allowable time 144 with the current difference map 90. In this case, the determination result of the process related to the submaps shown in FIG. 11 (S103) is "relocation complete", and the differential data relocation program 43 ends. The reason why the relocation is regarded as complete when the condition of S410 is satisfied is that the relocation process imposes a certain processing load. Further, by selecting submaps having a large difference bit-on number as the relocation destination, the sequential ratio can be increased with a smaller number of relocations.
  • If the determination result of S410 is negative (S410: NO), the storage controller 20 proceeds to the process of S413. If the adoption determination setting table 62 indicates that the determination content 161 of S410 is not to be adopted (adoption propriety 162 is "NO"), the storage controller 20 may skip the determination of S410 and proceed to the process of S413.
  • The storage controller 20 selects, as the exchange source difference bit-on, one of the difference bit-ons belonging to the relocation source submap (S413).
  • The storage controller 20 determines whether or not the most recently measured operation rate of a resource is equal to or greater than the operation rate threshold 152 corresponding to the ID 151 of that resource in the operation rate threshold setting table 60 (S420).
  • If the determination result of S420 is affirmative (S420: YES), the storage controller 20 registers the exchange source difference bit-on in a predetermined queue (S431), waits for a fixed time (S432), and then returns to the process of S420.
  • If the determination result of S420 is negative (S420: NO), the storage controller 20 determines "relocation incomplete" (S421) and returns to the process shown in FIG. 11 (RETURN). In this case, the determination result of the process related to the submaps shown in FIG. 11 (S103) is "relocation incomplete", and the next exchange process (S104) is executed.
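  • A minimal sketch of the submap selection part of FIG. 13 (S401 to S412) is given below; submaps are again flat bit lists and the function returns the relocation source index, or None when relocation is complete. All names are hypothetical, and the resource operation rate check of S420 is omitted for brevity.

```python
def select_relocation_source(submaps, dest_idx, unselected,
                             measured_seq_ratio, target_seq_ratio):
    """Sketch of S401-S412: pick the relocation source submap, or None when done."""
    while unselected:                                            # S401: unselected submaps remain
        src = min(unselected, key=lambda i: sum(submaps[i]))     # S402: smallest difference bit-on number
        unselected.remove(src)
        if src == dest_idx or sum(submaps[src]) == 0:            # S403: not a target of the relocation process
            continue
        if measured_seq_ratio() >= target_seq_ratio:             # S410: target sequential ratio already reached
            return None                                          # relocation complete
        return src                                               # proceed to S413 with this source submap
    return None                                                  # S412: relocation complete
```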
  • The storage controller 20 selects the exchange destination difference bit-off from among the difference bit-offs in the relocation destination submap (S501).
  • In S501, the storage controller 20 may preferentially select the difference bit-off with the earliest VOL address 101 in the relocation destination submap. This is so that difference bit-ons are arranged as continuously as possible in the relocation destination submap.
  • The storage controller 20 exchanges the difference bit-on selected as the exchange source in the relocation source submap with the difference bit-off selected as the exchange destination in the relocation destination submap (S502). That is, the storage controller 20 exchanges the slot in which the difference data corresponding to the exchange source difference bit-on is stored with the empty slot corresponding to the exchange destination difference bit-off. The storage controller 20 then returns to the process shown in FIG. 11 (RETURN).
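  • A minimal sketch of this exchange process (S501 to S502) follows; slots is a hypothetical per-submap list of slot contents aligned with the bit lists, and the empty slot with the earliest address is chosen as in S501.

```python
def exchange_difference_bit(submaps, slots, src, src_off, dest):
    """Sketch of S501-S502: move one piece of difference data from the relocation
    source submap into the earliest empty slot of the relocation destination
    submap, and exchange the two difference bits."""
    # S501: earliest difference bit-off (empty slot) in the relocation destination submap
    dest_off = next(i for i, bit in enumerate(submaps[dest]) if bit == 0)
    # S502: exchange the used slot and the empty slot ...
    slots[dest][dest_off], slots[src][src_off] = slots[src][src_off], None
    # ... and exchange the corresponding difference bits
    submaps[src][src_off], submaps[dest][dest_off] = 0, 1
    return dest_off
```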
  • The storage controller 20 updates the difference bit-on number 113, the difference bit-on ratio 115, and the difference bit-on tail 117 of the record related to the relocation destination submap (S601).
  • The storage controller 20 likewise updates the difference bit-on number 113, the difference bit-on ratio 115, and the difference bit-on tail 117 of the record related to the relocation source submap (S602).
  • The storage controller 20 updates the measured sequential ratio 147 in the synchronization time management table 58 (S603). The storage controller 20 then returns to the process shown in FIG. 11 (RETURN), that is, to the process of S103 in FIG. 11.
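  • A minimal sketch of this post-processing (S601 to S603) is shown below; table is a hypothetical dictionary standing in for the submap management table 52 and the synchronization time management table 58, and sequential_ratio_of is a caller-supplied function whose exact definition lies outside this excerpt.

```python
def post_process(table, submaps, src, dest, sequential_ratio_of):
    """Sketch of S601-S603: refresh the bookkeeping after one exchange."""
    for idx in (dest, src):                                     # S601 (destination) / S602 (source)
        bits = submaps[idx]
        on_positions = [i for i, b in enumerate(bits) if b]
        table[idx]["on_count"] = len(on_positions)              # difference bit-on number 113
        table[idx]["on_ratio"] = len(on_positions) / len(bits)  # difference bit-on ratio 115
        table[idx]["tail"] = on_positions[-1] if on_positions else -1   # difference bit-on tail 117
    table["measured_sequential_ratio"] = sequential_ratio_of(submaps)   # S603: measured sequential ratio 147
```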
  • FIG. 14 is a flowchart showing an example of the difference map update process (S211).
  • The difference map update process (S211) is executed when a write command for the synchronization source VOL is received, as shown in FIG. 12.
  • The storage controller 20 determines whether or not the relocation destination submap has already been determined (S701). If the relocation destination submap has not yet been determined (S701: NO), the storage controller 20 proceeds to the process of S710.
  • If the relocation destination submap has already been determined (S701: YES), the storage controller 20 next determines whether or not the VOL address 101 designated by the write command belongs to the relocation destination submap (S702). If the determination result of S702 is affirmative (S702: YES), the storage controller 20 proceeds to the process of S710.
  • If the determination result of S702 is negative (S702: NO), the storage controller 20 exchanges the slot of the VOL address 101 designated by the write command with an empty slot belonging to the relocation destination submap (S703). In S703, the storage controller 20 may preferentially select, as the exchange destination, the difference bit-off with the earliest VOL address 101 in the relocation destination submap. The storage controller 20 then proceeds to the process of S710.
  • The storage controller 20 stores the write data (that is, the difference data) in the write destination slot (S710). If the write destination slot has been exchanged in S703, the difference data is stored in the exchange destination slot belonging to the relocation destination submap.
  • The storage controller 20 updates the difference map 90 (S711). That is, the storage controller 20 turns ON the difference bit corresponding to the slot in which the difference data has been stored in the difference map 90.
  • The storage controller 20 updates the submap management table 52 (S712). That is, in the submap management table 52, the storage controller 20 adds the number of difference bits turned ON in S711 to the difference bit-on number 113 of the relocation destination submap.
  • The storage controller 20 refers, in the submap management table 52, to the difference bit-on tail 117 of the record related to the submap in which the difference data has been stored, and determines whether or not the VOL address 101 of that difference bit-on tail 117 is known (S720).
  • If the VOL address 101 of the difference bit-on tail 117 is known (S720: YES), the storage controller 20 next determines whether or not the write destination VOL address 101 of the difference data is behind the VOL address 101 of the difference bit-on tail 117 (S721).
  • The storage controller 20 then sets "-1" (unknown) to the difference bit-on tail 117 of the record related to that submap in the submap management table 52 (S722), and returns to the process shown in FIG. 12 (RETURN).
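  • A minimal sketch of this difference map update process (S701 to S722) follows; addr_to_pos and store are hypothetical, caller-supplied helpers, and the branch direction of S721/S722 (a write beyond the known tail invalidating the tail) is an assumption made only for illustration.

```python
def update_on_write(submaps, dest_idx, table, addr_to_pos, store, write_addr, data):
    """Sketch of FIG. 14: store difference data and update the difference map."""
    sm, off = addr_to_pos(write_addr)                 # submap index / offset of the write destination
    if dest_idx is not None and sm != dest_idx:       # S701: destination decided, S702: write is outside it
        sm = dest_idx                                 # S703: exchange with an empty slot (earliest OFF bit)
        off = next(i for i, b in enumerate(submaps[dest_idx]) if b == 0)
    store(sm, off, data)                              # S710: store the difference data
    if submaps[sm][off] == 0:                         # S711/S712: count only newly turned-on bits
        submaps[sm][off] = 1
        table[sm]["on_count"] += 1
    # S720-S722 (assumed branch): a write beyond the known tail makes the tail unknown
    if table[sm]["tail"] != -1 and off > table[sm]["tail"]:
        table[sm]["tail"] = -1
```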
  • FIG. 15 is a flowchart showing an example of processing of the VOL synchronization program 42.
  • The storage controller 20 determines whether or not the differential data relocation program 43 has been executed (S801). If the differential data relocation program 43 has not been executed (S801: NO), the storage controller 20 executes a normal resynchronization process between the VOLs (S810) and proceeds to the process of S830.
  • If the differential data relocation program 43 has been executed (S801: YES), the storage controller 20 determines whether or not an unselected submap exists in the VOL synchronization program 42 (S802). If there is no unselected submap (S802: NO), the storage controller 20 proceeds to the process of S830.
  • If an unselected submap exists (S802: YES), the storage controller 20 selects the submap having the largest difference bit-on number 113 from among the unselected submaps (S803).
  • The storage controller 20 determines whether or not the VOL address 101 of the difference bit-on tail 117 is known in the selected submap (S820).
  • If the VOL address 101 of the difference bit-on tail 117 is unknown in the selected submap (S820: NO), the storage controller 20 performs the following process. That is, the storage controller 20 searches the entire submap and extracts the difference bit-ons. The storage controller 20 then acquires the difference data of each of the plurality of VOL addresses 101 corresponding to the extracted difference bit-ons by random access commands (S821). The storage controller 20 then returns to the process of S802.
  • If the VOL address 101 of the difference bit-on tail 117 is known in the selected submap (S820: YES), the storage controller 20 performs the following process. That is, the storage controller 20 acquires all the difference data from the first VOL address 101 of the submap to the VOL address 101 of the difference bit-on tail 117 by a sequential access command (S822). The storage controller 20 then returns to the process of S802.
  • The storage controller 20 transmits the difference data acquired in S821 and S822 to the synchronization destination storage apparatus (S830).
  • The storage controller 20 updates the synchronization time management table 58 (S831), and ends this process.
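  • A minimal sketch of this VOL synchronization flow (S802 to S830) is given below; read_sequential, read_random and send are hypothetical stand-ins for the sequential access command, the random access commands, and the transfer to the synchronization destination storage apparatus.

```python
def synchronize_vol(submaps, table, read_sequential, read_random, send):
    """Sketch of FIG. 15: collect and transfer the difference data per submap."""
    order = sorted(range(len(submaps)),
                   key=lambda i: sum(submaps[i]), reverse=True)  # S803: largest ON count first
    difference_data = []
    for i in order:                                              # S802: until no submap is left
        if table[i]["tail"] != -1:                               # S820: tail VOL address is known
            difference_data.append(read_sequential(i, table[i]["tail"]))   # S822: one sequential access
        else:                                                    # S820: tail is unknown
            on_offsets = [off for off, b in enumerate(submaps[i]) if b]
            difference_data.append(read_random(i, on_offsets))             # S821: random access per slot
    send(difference_data)                                        # S830: send to the synchronization destination
```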
  • In the above, the embodiment has been described by taking, as an example, the copy process (remote copy process) between the first VOL 80a of one storage apparatus 10a and the second VOL 80b of another storage apparatus 10b.
  • The contents of the above-described embodiment are also applicable to other configurations.
  • For example, the contents of the above-described embodiment can be applied to a copy process (local copy process) within a single storage apparatus.
  • In that case, one storage apparatus 10 has one storage controller 20, a first VOL 80a, and a second VOL 80b.
  • In that case, a difference map 90a related to the first VOL 80a and a difference map 90b related to the second VOL 80b may be managed.
  • That is, the contents of the above-described embodiment may be applied to the local copy process between the first VOL 80a and the second VOL 80b in one storage apparatus 10.
  • Storage system, 10: Storage device, 20: Storage controller, 31: PDEV, 80: Volume

Abstract

The present invention provides a storage controller having difference information that indicates, for each of a plurality of slots, whether the slot is a used slot in which difference data in a first volume is stored or an empty slot in which no difference data is stored. The storage controller divides the difference information into a plurality of difference information blocks, selects a first difference information block and a second difference information block from among the plurality of difference information blocks, and exchanges the address of a used slot belonging to the second difference information block with the address of an empty slot belonging to the first difference information block.