WO2017022002A1 - Storage device, storage system, and storage system control method - Google Patents
Storage device, storage system, and storage system control method
- Publication number
- WO2017022002A1 (PCT/JP2015/071736)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage
- storage device
- management information
- memory
- cache
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
- G06F11/1662—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1666—Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/084—Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
Definitions
- the present invention relates to a storage apparatus, a storage system, and a storage system control method.
- Patent Document 1: JP-A-2006-221526
- This publication describes a technique in which, in a storage apparatus having a plurality of memories, management information is made redundant between the different memories, and when a failure occurs in a memory storing the management information, the management information is copied to another memory so that redundancy is secured.
- The management information necessary for operation of the storage system is made redundant between memories of different storage devices, which improves availability. In addition, when use of a storage device is stopped, for example because a failure has occurred in some of the storage devices making up the storage system or because a device is removed, redundancy can be maintained by copying the management information stored in that storage device to another storage device.
- In Patent Document 1, in order to copy the management information when a failure occurs, free capacity capable of storing the management information must be secured in the copy destination memory.
- If free capacity capable of storing the management information is constantly reserved in the copy destination storage device, the memory usage rate during normal operation falls and cost rises.
- If, on the other hand, the free capacity capable of storing the management information is secured dynamically in the copy destination storage device, the memory usage rate during normal operation improves, but the impact on the system at the moment the free capacity is dynamically secured must be reduced. For this reason, it is important to appropriately determine the copy destination storage apparatus based on the usage state of the memory that each storage apparatus manages individually.
- It is thus possible to provide a storage system that, in preparation for a storage device being taken out of use, does not need to constantly reserve free space for that device's management information in the memory of another storage device; instead, it can dynamically secure free capacity when use of the storage device is stopped, while reducing the impact on the system when the free capacity is dynamically secured.
- the storage system of the present invention includes a plurality of storage devices including the first storage device.
- Each of the plurality of storage apparatuses includes a memory having a management information storage area for storing management information and a cache area for storing cache information, and a processor for managing the state of the cache area.
- When use of the memory of the first storage device is stopped, the processors of the plurality of storage devices determine, based on the state of the cache areas managed by the storage devices other than the first storage device, a copy destination storage apparatus: the copy destination of the copy target management information, that is, the management information stored in the memory whose use is being stopped.
- the processor of the copy destination storage apparatus releases at least a part of the cache area of the memory of the copy destination storage apparatus, and stores the copy target management information in the released cache area.
- FIG. 1 is a diagram showing the concept of management information saving processing in the present embodiment.
- each storage device 2 has a management information storage area 11 for storing management information and a cache area 12 for storing cache information. Further, the processor 8 of each storage device 2 manages the access frequency of the cache area 12 of the storage device 2.
- The management information is information that is stored in at least one of the memories 9 of the plurality of storage apparatuses 2 while the plurality of storage apparatuses 2 are operating, and that can be referenced from the memory 9 by at least one of the processors 8 of the plurality of storage apparatuses 2. In particular, while a storage device 2 is operating, the processor 8 of that storage device 2 can refer to the management information stored in the memory 9 of that storage device 2.
- The cache information is information held in the memory 9 of any of the plurality of storage apparatuses 2, and it may be deleted from the memory 9. For example, cache information that is the same data as data stored in a storage device (drive 4) included in the storage apparatus 2 may be deleted from the cache area 12 of the memory 9.
- the deletion may be any of overwriting with other cache information, erasing the cache information, or making the cache information inaccessible.
- storage devices A, B, C, and D are shown as examples of the plurality of storage devices 2.
- The number of storage apparatuses 2 constituting the storage system is not limited to this example; it is sufficient that there are at least two storage apparatuses 2.
- the management information storage area 11 of the memory 9 of each of the storage devices A and B stores the same shared management information X, and the shared management information X is made redundant.
- Even if one of the storage apparatuses A and B fails, the management information stored in the memory 9 of the other can be used to continue operation, which increases availability.
- the shared management information X is copied to at least one of the storage apparatuses C and D other than the storage apparatuses A and B to make it redundant, and the reliability is restored.
- Use of the memory 9 of a storage apparatus 2 is typically stopped when use of the storage apparatus 2 itself is stopped, for example when a failure occurs in the storage apparatus 2 or when the storage apparatus 2 is removed. For example, operation management cost can be reduced by removing some storage apparatuses 2 from the plurality of storage apparatuses 2 constituting the storage system.
- the shared management information X stored in the memory 9 whose use is stopped is also called copy target management information.
- Storage apparatus A, which holds the shared management information X redundantly with storage apparatus B, detects that use of the memory 9 of storage apparatus B has stopped. This may be detected by storage apparatus A accessing storage apparatus B regularly or irregularly, or based on an instruction from a management terminal connected to the storage system.
- Storage devices C and D, other than the storage devices A and B, are copy destination candidate storage devices, that is, candidates for the copy destination of the shared management information X.
- The memories 9 of the copy destination candidate storage devices store management information such as shared management information Y, and cache information such as cache information C1, C2, D1, and D2.
- In the copy destination candidate storage devices, free capacity for storing the management information (shared management information X) of another storage device 2 is not constantly reserved; instead, for example, a large cache area 12 is secured, which improves the use efficiency of the memory 9.
- Therefore, information in one of the memories 9 of a copy destination candidate storage device is deleted, and the shared management information X is stored in its place. Since management information is necessary for operation of the storage system and cannot be deleted, cache information is deleted.
- The cache area 12 of the memory 9 of a storage apparatus 2 stores cache information (cache data) in order to speed up response performance for data access requests from the host computer 1, and often does not have enough free space to store the management information that needs to be copied. In the cache area 12, regions with high access frequency and regions with low access frequency are mixed. For this reason, if cache information is deleted at random to release cache space, the cache hit rate falls and access performance drops significantly.
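The access-frequency-aware release described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names (`CacheSegment`, `free_low_frequency_segments`) and the per-segment access counter are assumptions.

```python
# Hypothetical sketch: freeing cache segments in ascending order of access
# frequency, rather than at random, to limit the cache hit-rate drop.

from dataclasses import dataclass

@dataclass
class CacheSegment:
    segment_id: int
    size: int            # bytes
    access_count: int    # accesses observed in the current period

def free_low_frequency_segments(segments, required_capacity):
    """Pick segments to release, least-accessed first, until the
    requested capacity is covered. Returns the chosen segments."""
    chosen, freed = [], 0
    for seg in sorted(segments, key=lambda s: s.access_count):
        if freed >= required_capacity:
            break
        chosen.append(seg)
        freed += seg.size
    if freed < required_capacity:
        raise RuntimeError("cache area too small for the management information")
    return chosen

segments = [
    CacheSegment(0, 64, access_count=900),   # hot: keep if possible
    CacheSegment(1, 64, access_count=3),     # cold: release early
    CacheSegment(2, 64, access_count=50),
    CacheSegment(3, 64, access_count=1),     # coldest: release first
]
victims = free_low_frequency_segments(segments, required_capacity=128)
print([s.segment_id for s in victims])  # the two coldest segments: [3, 1]
```

Releasing the coldest segments first keeps the hot segment (900 accesses) in the cache, so the hit-rate impact stays small.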
- In this embodiment, access performance is described in terms of the cache hit rate, and the amount of decrease in access performance as the amount of decrease in the cache hit rate.
- the access performance and the decrease in access performance are not limited to this example.
- In this way, the management information of a storage device 2 can be stored in the memory 9 of another storage device 2 upon failure or removal of the storage device 2, even if memory capacity has not been reserved in advance.
- This example describes the shared management information X, a single piece of copy target management information. When there are a plurality of pieces of copy target management information, the copy destination storage apparatus may be determined giving priority to the copy target management information with the largest capacity.
- Storage device A notifies each of the copy destination candidate storage devices C and D of the capacity of the shared management information X.
- The copy destination candidate storage apparatuses C and D each predict the amount of decrease in access performance that copying the shared management information X would cause, based on the access frequency of the cache area 12 that each manages individually, and send the predicted decrease amount to storage apparatus A.
- Based on the predicted decrease amounts from the copy destination candidate storage apparatuses C and D, storage apparatus A determines the copy destination candidate with the smallest predicted decrease in access performance as the copy destination storage apparatus (S100). This reduces the impact on the access performance of the entire system when the cache area is released.
- In this example, the copy destination storage device is storage apparatus C. Details of the access performance decrease prediction processing in each of the copy destination candidate storage apparatuses C and D will be described later with reference to FIG. 10.
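Step S100 can be sketched as follows. The prediction model (lost cache hits divided by total accesses) and all names are illustrative assumptions; the patent only specifies that each candidate predicts its own decrease and the source picks the smallest.

```python
# Illustrative sketch of step S100: each copy destination candidate predicts
# the cache hit-rate decrease it would suffer if it released enough cache to
# hold the copy target management information, and the source storage
# apparatus picks the candidate with the smallest predicted decrease.

def predict_hit_rate_drop(segments, required_capacity):
    """segments: list of (size, access_count). Release the coldest segments
    until required_capacity is covered; the predicted hit-rate drop is the
    fraction of total accesses that those segments served."""
    total = sum(count for _, count in segments)
    freed, lost = 0, 0
    for size, count in sorted(segments, key=lambda s: s[1]):
        if freed >= required_capacity:
            break
        freed += size
        lost += count
    return lost / total if total else 0.0

def choose_copy_destination(candidates, required_capacity):
    """candidates: {name: segment list}. Returns the candidate with the
    lowest predicted access-performance decrease, plus all predictions."""
    drops = {name: predict_hit_rate_drop(segs, required_capacity)
             for name, segs in candidates.items()}
    return min(drops, key=drops.get), drops

candidates = {
    "C": [(64, 2), (64, 5), (64, 400)],    # plenty of cold cache
    "D": [(64, 120), (64, 150), (64, 90)], # uniformly warm cache
}
dest, drops = choose_copy_destination(candidates, required_capacity=128)
print(dest)  # "C": releasing its two coldest segments loses few hits
```

Apparatus C can cover the required capacity with segments that served only 7 of its 407 accesses, while D would lose most of its working set, so C is chosen.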
- The copy destination storage device is not limited to the copy destination candidate with the smallest access performance decrease; for example, a priority order may be set among the copy destination candidate storage devices, and the copy destination storage apparatus may be determined based on that priority order.
- In determining the copy destination, necessary information may be acquired from each copy destination candidate storage device. For example, if a storage device 2 whose processor 8 is under high load performs the copy processing, access performance may fall; therefore, the load on the processor 8 of each storage device 2 may be acquired, and a storage device 2 whose processor 8 is under low load may be determined as the copy destination storage device.
- the copy destination storage device may be determined based on the state of the cache area 12 managed by each copy destination candidate storage device.
- the state of the cache area 12 may include information on the access frequency of the cache area 12.
- The state of the cache area 12 may include the last access time 185 of the cache area 12, information on whether the cache area 12 is usable, and information on whether the cache area 12 is in use (in-use flag 183).
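One entry of such cache-state management might look like the sketch below. The patent names an in-use flag (183) and a last access time (185); the remaining fields and all identifiers are assumptions for illustration.

```python
# Hypothetical layout of one cache-state entry. Only the in-use flag (183)
# and last access time (185) come from the description; the rest is assumed.

from dataclasses import dataclass
import time

@dataclass
class CacheEntry:
    cache_address: int             # location of the slot in the cache area 12
    usable: bool = True            # whether the slot may hold cache information
    in_use: bool = False           # in-use flag 183
    last_access_time: float = 0.0  # last access time 185
    access_count: int = 0          # basis for access-frequency management

    def touch(self):
        """Record an access to this slot."""
        self.in_use = True
        self.access_count += 1
        self.last_access_time = time.time()

entry = CacheEntry(cache_address=0x1000)
entry.touch()
print(entry.in_use, entry.access_count)  # True 1
```

Tracking both a recency field and a frequency counter lets the release decision use either signal when choosing which slots to free.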
- the storage apparatus A instructs the copy destination storage apparatus C to secure a capacity for copying the shared management information X.
- The processor of the copy destination storage device C rearranges the cache information (C1, C2, etc.) of the copy destination storage device C based on access frequency so that the cache region for storing the shared management information X becomes a contiguous region (S101). Details of the cache information rearrangement processing (S101) will be described later with reference to FIG. 11.
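The rearrangement of step S101 can be sketched as a simple compaction: retained slots are packed toward the head of the cache area so that the released slots form one contiguous region at the tail. Slot granularity and all names are illustrative assumptions.

```python
# Sketch of step S101: compact the retained (hot) cache slots so that the
# released (cold) slots form one contiguous region, into which the copy
# target management information can then be stored.

def rearrange(slots, slots_to_free):
    """slots: slot ids in address order; slots_to_free: set of ids chosen
    for release. Returns (new layout, contiguous free range)."""
    kept = [s for s in slots if s not in slots_to_free]
    layout = kept + [None] * len(slots_to_free)   # None = released slot
    free_range = (len(kept), len(slots) - 1)      # contiguous tail region
    return layout, free_range

slots = ["C1", "X1", "C2", "X2", "C3"]      # X1/X2 are cold slots to free
layout, free_range = rearrange(slots, {"X1", "X2"})
print(layout)       # ['C1', 'C2', 'C3', None, None]
print(free_range)   # (3, 4): one contiguous area for shared management info
```

Without this compaction the freed slots would be scattered among hot slots, and the management information, which must occupy a contiguous area, could not be placed.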
- the copy destination storage apparatus C secures a free capacity for storing the shared management information X by releasing at least a part of the cache area 12 of the memory 9 of the copy destination storage apparatus C (S102).
- The copy destination storage apparatus C copies the shared management information X from storage apparatus A and stores it in the released cache area 12 (S103). As a result, the shared management information X is made redundant between the memories 9 of storage apparatus A and the copy destination storage apparatus C.
- The copy destination storage apparatus C notifies at least some or all of the plurality of storage apparatuses 2 of the storage destination of the shared management information X. For example, the copy destination storage apparatus C notifies storage apparatus A and storage apparatus D of the storage destination of the shared management information X. Because the storage destination is announced to the other storage apparatuses 2 in this way, the notified storage apparatuses 2 can access the shared management information X stored in the copy destination storage apparatus C.
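Steps S100 through S103, plus the final notification, can be sketched end to end as follows. All class and method names are illustrative assumptions, not the patent's implementation; the point is only the ordering of the steps.

```python
# End-to-end sketch of the save processing: determine the copy destination
# (S100), release that destination's coldest cache slots (S101/S102), copy
# the shared management information into the freed area (S103), then notify
# the other storage apparatuses of the new storage destination.

class StorageApparatus:
    def __init__(self, name, cache):
        self.name = name
        self.cache = cache              # {slot: access_count}
        self.management_info = {}
        self.known_locations = {}       # info name -> apparatus name

    def predicted_drop(self, n_slots):
        """Predicted hit-rate decrease if the n coldest slots are freed."""
        counts = sorted(self.cache.values())
        total = sum(counts) or 1
        return sum(counts[:n_slots]) / total

    def release_and_store(self, info_name, info, n_slots):
        for slot in sorted(self.cache, key=self.cache.get)[:n_slots]:
            del self.cache[slot]                     # S101/S102
        self.management_info[info_name] = info       # S103

def save_shared_info(source, candidates, others, info_name, info, n_slots=2):
    dest = min(candidates, key=lambda c: c.predicted_drop(n_slots))  # S100
    dest.release_and_store(info_name, info, n_slots)                 # S101-S103
    for apparatus in [source, dest] + others:                        # notify
        apparatus.known_locations[info_name] = dest.name
    return dest

a = StorageApparatus("A", {})
c = StorageApparatus("C", {"c1": 1, "c2": 2, "c3": 500})
d = StorageApparatus("D", {"d1": 100, "d2": 120})
dest = save_shared_info(a, [c, d], [d], "X", b"shared-management-info-X")
print(dest.name, a.known_locations["X"])  # C C
```

After the call, apparatus C holds the copy of X, its hot slot survives, and every notified apparatus can resolve where X now lives.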
- FIG. 2 is a diagram showing the configuration of the storage device 2.
- the storage device 2 includes a drive 4 that is a data storage device and a storage controller 5 that executes processing according to the command.
- the drive 4 is a non-volatile storage device.
- The drive 4 may be, for example, a magnetic disk, or a semiconductor storage device such as an SSD (Solid State Drive). There may be one or more drives 4 and one or more storage controllers 5.
- the storage controller 5 includes a server I / F (interface) 6, a drive I / F 7, an inter-device coupling I / F 40, a processor 8, and a memory 9, which are connected to each other via an internal network.
- the server I / F 6 is connected to the host computer 1 via the network 3 and executes command and data transmission / reception processing with respect to the host computer 1.
- the drive I / F 7 executes command / data transmission / reception processing with respect to the drive 4.
- the inter-device coupling I / F 40 is connected to another storage device 2 via the network 41 and executes command and data transmission / reception processing with respect to the other storage device 2.
- One storage controller 5 may have one or more memories 9.
- the network 3 and the network 41 are communication paths for exchanging commands and data between devices (the host computer 1 and the storage device 2) connected to each other, and are, for example, a SAN (Storage Area Network).
- the network 3 and the network 41 may be the same network or may be networks independent of each other.
- When the network 3 and the network 41 are independent networks, there is an advantage that communication performed between the plurality of storage apparatuses 2 via the network 41 does not affect communication performed between the host computer 1 and the storage apparatuses 2 via the network 3.
- the processor 8 executes a program in the memory 9 and executes various processes according to the command.
- the processor 8 may be an arithmetic unit or a control unit that executes a program.
- the processing performed by the processor 8 executing the program on the memory 9 may be described with the storage device 2 or the storage controller 5 as the subject of the processing.
- the memory 9 is a volatile storage device.
- the memory 9 may be a storage unit that stores data.
- the plurality of storage apparatuses 2 are combined and share the memory 9 with each other.
- Each memory 9 shared among the plurality of storage apparatuses 2 is assigned a logical memory ID, an identifier unique across the plurality of storage apparatuses 2, and a physical memory ID, an identifier unique within the storage apparatus 2 in which the memory 9 exists.
- the logical volume 10 is composed of physical storage areas of a plurality of drives 4 by the storage controller 5.
- This configuration method includes, for example, RAID (Redundant Arrays of Inexpensive Disks).
- the storage controller 5 provides the logical volume 10 to the host computer 1.
- the storage apparatus 2 has one or more logical volumes 10.
- the host computer 1 may be a physical computer having a processor and a memory, or a virtual computer running on the physical computer.
- the host computer 1 is also called a server.
- the host computer 1 is a workstation that provides an online mail order service, for example.
- the host computer 1 stores at least a part of data necessary for the service provided by the host computer 1 in the storage device 2.
- the host computer 1 reads / writes data stored in the storage apparatus 2 by transmitting a data access request to the storage apparatus 2.
- As the data access request, for example, a SCSI (Small Computer System Interface) standard read command (Read command) or write command (Write command) is used.
- the data access request is also called an I / O (Input / output) request.
- FIG. 3 is a diagram showing the configuration of the memory 9.
- the processor 8 manages the storage area of the memory 9 separately into a management information storage area 11 and a cache area 12 by using a management information arrangement table 13 and a cache management table 18 described later.
- Management information is stored in the management information storage area 11.
- the management information includes integrated management information, shared management information, and individual management information.
- the management information is also called configuration information or control information depending on the contents of the management information.
- Integrated management information is information that each of the plurality of storage apparatuses 2 constituting the storage system has.
- Examples of the integrated management information include a management information arrangement table 13 and a logical / physical conversion table 35.
- the shared management information is management information that can be referred to not only from the own storage apparatus 2 but also from the other storage apparatus 2.
- the shared management information will be mainly described as management information that is made redundant among some different storage apparatuses 2 among the plurality of storage apparatuses 2 constituting the storage system.
- some shared management information may be redundant between different memories 9 in the same storage device 2.
- The location (address) of the shared management information is managed by the management information arrangement table 13, which is integrated management information. Therefore, the shared management information stored in the memory 9 of a given storage device 2 can be shared with the other storage devices 2.
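The role of the management information arrangement table 13 can be sketched as a location registry: integrated management information, held by every apparatus, that records the logical memory ID and address of each piece of shared management information. The field names below are assumptions for illustration.

```python
# Illustrative sketch of the management information arrangement table 13:
# a registry mapping each piece of shared management information to the
# logical memory ID and address where it currently lives.

from dataclasses import dataclass

@dataclass
class ArrangementEntry:
    info_name: str          # e.g. "X" for shared management information X
    logical_memory_id: int
    address: int            # offset within the memory
    size: int

class ArrangementTable:
    def __init__(self):
        self.entries = {}

    def place(self, entry):
        """Record (or update) where a piece of management information lives."""
        self.entries[entry.info_name] = entry

    def locate(self, info_name):
        e = self.entries[info_name]
        return e.logical_memory_id, e.address

table = ArrangementTable()
table.place(ArrangementEntry("X", logical_memory_id=0, address=0x0, size=4096))
# After the save processing, the copy of X moves to another memory:
table.place(ArrangementEntry("X", logical_memory_id=2, address=0x2000, size=4096))
print(table.locate("X"))  # (2, 8192): the most recently recorded location
```

Because every apparatus holds this table, any processor 8 can resolve the current location of shared management information even after it has been relocated by the save processing.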
- the shared management information includes at least one of a program executed by any processor of the plurality of storage apparatuses 2 and management information used for executing any program.
- An example of a program executed by the processor 8 is a backup function program that realizes a data backup function.
- management information used for program execution include backup function control information used for executing the backup function program, logical volume management table 26, security setting information, billing information, and the like.
- Individual management information is management information managed individually by each storage device 2.
- An example of the individual management information is the cache management table 18, which contains information on the access frequency of the cache area 12 of the storage apparatus 2.
- the content of the individual management information may be different between different storage apparatuses 2.
- Since the cache management table 18 of each storage device 2 is management information for managing the access frequency of that storage device's own cache area 12, the contents of the cache management table 18 differ between different storage devices 2.
- The cache management table 18 is information whose contents change continually in response to data access requests from the host computer 1. If the cache management table 18 were made redundant between different storage apparatuses 2, synchronization would be required every time its contents change, which could increase the load on the storage system and degrade access response performance. Since the cache management table 18 is information used only within its own storage apparatus 2 during normal operation, it does not have to be made redundant between different storage apparatuses 2.
- Which storage devices 2 access a given piece of management information differs depending on its use and contents. Management information accessed from a plurality of storage devices 2 needs to be redundant between those storage devices 2, and the amount of management information to be copied may be reduced by limiting the copy target management information to such management information.
- the copy target management information may be limited to shared management information.
- the cache information includes write data and read data.
- the cache information is information for improving the performance (speeding up) of access to data in the drive 4 having a lower performance (lower speed) than the memory 9. That is, the cache information of each storage device 2 includes at least one of write data based on a write request from the host computer 1 connected to the storage device 2 and read data used for a response to the read request from the host computer 1. Including one.
- When the storage apparatus 2 receives a write request from the host computer 1, it temporarily stores the write data of the write request in the memory 9 and returns a write completion response to the host computer 1. Since the storage apparatus 2 can asynchronously execute the processing that writes the write data from the memory 9 to the lower-performance drive 4 (hereinafter also called destage processing), it can return the write completion response to the host computer 1's write request in a short time.
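The write path just described can be sketched as a write-back cache: the request is acknowledged once the data is in memory, and destaging to the slower drive happens later. The queue-based model and all names are illustrative assumptions.

```python
# Sketch of the write path: acknowledge as soon as the data is in the cache
# area of memory 9; destage to the slower drive 4 asynchronously afterwards.

from collections import deque

class WriteCache:
    def __init__(self):
        self.cache = {}            # address -> data (dirty write data)
        self.drive = {}            # address -> data (persistent copy)
        self.destage_queue = deque()

    def write(self, address, data):
        """Store in cache and acknowledge immediately (short response)."""
        self.cache[address] = data
        self.destage_queue.append(address)
        return "write-complete"    # returned before any drive I/O

    def destage(self):
        """Asynchronous destage processing: flush dirty data to the drive."""
        while self.destage_queue:
            address = self.destage_queue.popleft()
            self.drive[address] = self.cache[address]

wc = WriteCache()
ack = wc.write(0x10, b"payload")
print(ack, 0x10 in wc.drive)   # write-complete False  (not yet destaged)
wc.destage()
print(0x10 in wc.drive)        # True
```

The acknowledgement is returned while the drive copy does not yet exist, which is why dirty write data must be kept redundant until it has been destaged.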
- When the storage apparatus 2 receives a read request from the host computer 1 and the data of the read request is stored in the memory 9, the storage apparatus 2 acquires the data from the memory 9 and responds to the host computer 1.
- Otherwise, the storage apparatus 2 acquires the data from the drive 4, stores it in the memory 9, and responds to the host computer 1.
- Data related to the read request is called read data.
- When the read data is stored in the cache area 12 of the memory 9, the storage device 2 does not need to read it from the drive 4, which has lower performance than the memory 9, and can respond to the read request of the host computer 1 in a short time.
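The read path above can be sketched as follows: a hit is served from memory, a miss is fetched from the drive and staged into the cache so a repeat read becomes a hit. Names are illustrative assumptions.

```python
# Sketch of the read path: cache hit -> respond from memory 9;
# cache miss -> read from drive 4, stage into the cache, then respond.

class ReadCache:
    def __init__(self, drive):
        self.drive = drive   # address -> data, the slower backing store
        self.cache = {}      # cache area 12 in memory 9
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: short response
            self.hits += 1
            return self.cache[address]
        self.misses += 1                 # cache miss: go to the drive
        data = self.drive[address]
        self.cache[address] = data       # stage for future reads
        return data

rc = ReadCache(drive={0x20: b"record"})
rc.read(0x20)   # miss: fetched from the drive and cached
rc.read(0x20)   # hit: served from memory
print(rc.hits, rc.misses)  # 1 1
```

This also shows why read data in the cache can be deleted safely: losing a cached copy only turns the next hit back into a miss served from the drive.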
- Each storage device 2 copies and multiplexes the individual management information and the cache information between different memories 9 of the same storage device 2 in order to prevent their loss when a failure occurs in a memory 9.
- Even if read data stored in the cache of the memory 9 is lost, the read data can be read again from the drive 4 to respond to the host computer 1; therefore, the read data need not be multiplexed between the plurality of memories 9.
- Redundancy may be maintained or recovered as described later, or may be made unnecessary by executing destage processing.
- When the storage apparatus 2 includes a plurality of storage controllers 5 each having a memory 9, the individual management information and the cache information may be made redundant by storing them in the memories 9 of different storage controllers 5 in the storage apparatus 2.
- Redundancy may be restored by copying the management information stored in the memory 9 whose use is stopped to another memory 9 in the same storage device 2. In this case, the management information (copy target management information) may include at least one of the individual management information, the shared management information, and the integrated management information.
- Further, the storage device 2 determines the cache information to be deleted through the access performance decrease amount prediction process (FIG. 10) over its own plurality of memories 9. This reduces the impact on the access performance of the storage apparatus 2 when the cache area 12 is released. The storage device 2 then executes the cache information relocation process (S101, FIG. 11) based on the access frequency so that the cache area 12 to be released becomes a continuous area for storing the copy target management information. When selecting the cache areas 12 to be released, the selection may be based on a predetermined policy or priority: for example, making the management information redundant between the memories 9 of different storage controllers 5 in the storage apparatus 2, or between different memories 9 in the storage apparatus 2. In accordance with such a policy, the memory 9 whose cache area 12 is to be released may be selected appropriately.
- The storage apparatus 2 secures free capacity for storing the copy target management information by releasing at least part of the cache area 12 of its memory 9.
- the storage apparatus 2 restores redundancy by copying the copy target management information from the memory 9 having the copy target management information and storing it in the released cache area 12.
- FIG. 4 is a diagram showing the configuration of the management information arrangement table 13.
- the storage device 2 manages the management information storage area 11 of the memory 9 in which the shared management information is stored, using the management information arrangement table 13.
- The management information arrangement table 13 is management information that associates, with one another, a management information name 130, a logical memory ID (A) 131, a logical address (A) 132, a logical memory ID (B) 133, a logical address (B) 134, and a size 135.
- Each entry indicates that an area of size 135 starting from the logical address (A) 132 of the memory 9 identified by the logical memory ID (A) 131, and an area of size 135 starting from the logical address (B) 134 of the memory 9 identified by the logical memory ID (B) 133, are secured.
- For example, as storage areas for the management information named "security setting information", storage areas of size 0x1000 are secured from the logical address 0x1000 of the logical memory ID 1 and from the logical address 0x1000 of the logical memory ID 2, respectively.
- So that the management information is not lost even if one of the storage destination memories 9 becomes unusable, different IDs corresponding to different physical memories 9 are set in the logical memory ID (A) 131 and the logical memory ID (B) 133.
- Since the logical volume management table 26 is shared management information as described above, it is managed by this management information arrangement table 13, although this is not shown in FIG. 4. In each storage apparatus 2, a logical volume management table 26 is created for each logical volume 10 of that storage apparatus 2; therefore, the logical volume management table 26 of each storage device 2 may be managed by its respective management information arrangement table 13.
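As a rough illustration, one entry of the management information arrangement table 13 could be modeled as follows (the field names are paraphrases of the reference numerals in FIG. 4; the class itself is a hypothetical sketch):

```python
from dataclasses import dataclass

@dataclass
class ArrangementEntry:
    """One row of the management information arrangement table 13 (FIG. 4)."""
    name: str            # management information name 130
    memory_id_a: int     # logical memory ID (A) 131
    address_a: int       # logical address (A) 132
    memory_id_b: int     # logical memory ID (B) 133
    address_b: int       # logical address (B) 134
    size: int            # size 135

    def is_redundant(self):
        # The two copies must reside on different memories 9 so that
        # the information survives the loss of either memory.
        return self.memory_id_a != self.memory_id_b

# The "security setting information" example from the text:
entry = ArrangementEntry("security setting information",
                         memory_id_a=1, address_a=0x1000,
                         memory_id_b=2, address_b=0x1000, size=0x1000)
```

The `is_redundant` check corresponds to the rule that different IDs are set in the two logical memory ID fields.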
- FIG. 5 is a diagram showing the configuration of the cache management table 18.
- The processor 8 of the storage device 2 uses the cache management table 18 to manage the state of the cache area 12 of the memory 9 in which the cache information is stored.
- Since the cache information stored in the cache area 12 changes dynamically according to data accesses from the host computer 1, the small areas of the cache area 12 indicated by the physical memory ID 181 and the physical address 182 are managed in units of a predetermined size.
- the predetermined size is represented as “0x1000”.
- The cache management table 18 is management information that associates, with one another, a cache ID 180 identifying each entry, a physical memory ID 181, a physical address 182, an in-use flag 183, a use start time 184, a last access time 185, a dirty data flag 186, and a cache hit count 187.
- the cache ID 180 is an identifier of each entry.
- the physical memory ID 181 and the physical address 182 indicate that in the memory 9 identified by the physical memory ID 181, the cache area 12 having a predetermined size from the physical address 182 is a storage area that is a target of the entry.
- the in-use flag 183 indicates whether or not there is cache information stored in the cache area 12 that is the target of the entry.
- the use start time 184 indicates the time when the current cache information is stored in the cache area 12.
- The last access time 185 indicates the time when the host computer 1 last accessed the cache area 12 of the entry. Although the use start time 184 and the last access time 185 are illustrated in hour-minute-second format, they may also include a date.
- the dirty data flag 186 indicates whether or not the cache information of the entry is dirty data. If the dirty data flag 186 is ON, that is, if the cache information of the entry is dirty data, it indicates that the cache information is the latest write data and the drive 4 has old data before update. When the dirty data flag is OFF, that is, when the cache information of the entry is not dirty data, it indicates that the cache information is the latest data and the drive 4 also has the latest data.
- The cache hit count 187 is the number of times, within a predetermined unit time, that data related to a read request from the host computer 1 was found in the cache area 12.
- the cache hit count 187 may be measured by a counter included in each storage device 2.
- the cache hit count 187 in a predetermined unit time may be calculated from the counter count and the counter measurement time.
- In this embodiment, the access frequency is described as the cache hit count 187 in a predetermined unit time. Accordingly, a low access frequency means that the cache hit count 187 in the predetermined unit time is small.
- the comparison target is the other storage device 2 or the other memory 9 of the own storage device 2.
- the access frequency is not limited to this example.
- the access frequency may be the last access time 185. In this case, the low access frequency means that the last access time 185 is old.
- FIG. 6 is a diagram showing the configuration of the logical volume management table 26.
- The logical volume management table 26 is management information that associates, with one another, a logical volume ID 260 identifying each logical volume 10, a cache guarantee amount 261, a logical volume address 262, a cache ID (A) 263, a cache ID (B) 264, a drive ID 265, and a drive address 266.
- the storage apparatus 2 manages the storage area of the logical volume 10 indicated by the logical volume ID 260 and the logical volume address 262 for each predetermined size.
- the predetermined size is represented as “0x1000”.
- the logical volume ID 260 is an identifier of the logical volume 10.
- the cache guarantee amount 261 indicates the cache guarantee amount set for the logical volume 10.
- the logical volume address 262 indicates an address in the logical volume 10.
- the cache ID (A) 263 and the cache ID (B) 264 are the cache area 12 corresponding to the logical volume address 262. When both the cache ID (A) 263 and the cache ID (B) 264 are set, it indicates that there is write data in these cache areas 12.
- The cache ID (B) 264 indicates the cache area 12 to which the write data is duplicated, and is not set in the case of read data. If neither the cache ID (A) 263 nor the cache ID (B) 264 is set, it indicates that there is no data in the cache area 12.
- the drive ID 265 is an identifier for identifying the drive 4.
- the drive address 266 is a physical address in the drive 4, and indicates a physical storage area on the drive 4 in which data specified by the logical volume ID 260 and the address 262 is stored.
- the storage apparatus 2 receives a data access request designating the ID and address of the logical volume 10 from the host computer 1.
- the storage apparatus 2 specifies the address of the cache and the address of the drive 4 that are associated with the address of the designated logical volume 10.
- That is, the cache ID (A) 263 and the cache ID (B) 264, as well as the drive ID 265 and the drive address 266, associated with the logical volume ID 260 and the logical volume address 262 specified in the data access request are acquired.
- the acquired cache ID (A) 263 and cache ID (B) 264 respectively correspond to the cache ID 180 in the cache management table 18.
- The higher the probability that data related to a read request exists in the cache area 12 (hereinafter referred to as the cache hit rate), the faster the response to the read request.
- The cache area 12 is a resource shared among the plurality of logical volumes 10; when the in-use flag 183 is ON (in use) for all entries of the cache ID 180, the cache information of some entry is deleted and new cache information is stored in its place.
- the cache hit rate of other logical volumes 10 may decrease due to data access to a certain logical volume 10.
- a user may use a different logical volume 10 for each service, and there is a demand for suppressing performance interference between services.
- Therefore, the storage apparatus 2 provides a function for guaranteeing the minimum capacity of the cache area 12 that each logical volume 10 can use. For example, as shown in FIG. 6, the cache guarantee amount 261 set by the user is stored for each logical volume 10, and when cache information to be deleted is selected, it is determined whether the cache guarantee amount 261 can still be satisfied after the deletion. If it can be satisfied, the cache information is deleted; if it cannot, other cache information is targeted for deletion.
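The guarantee check described above can be sketched as follows (a simplified model; the function and its input layout are assumptions): a fixed-size cache area is selected for deletion only if its logical volume 10 would still retain at least its cache guarantee amount 261 afterwards.

```python
def select_deletable(candidates, allocated, guarantee, unit=0x1000):
    """Pick cache areas whose deletion keeps every logical volume at or
    above its cache guarantee amount 261.

    'candidates' is a list of volume IDs, one per fixed-size cache area,
    ordered least valuable first; 'allocated' and 'guarantee' map each
    volume ID to its currently allocated cache and its guarantee."""
    freed, chosen = {}, []
    for vol in candidates:
        used = allocated[vol] - freed.get(vol, 0)
        if used - unit >= guarantee[vol]:   # guarantee still satisfied?
            chosen.append(vol)
            freed[vol] = freed.get(vol, 0) + unit
    return chosen
```

A volume sitting exactly at its guarantee contributes no further deletion candidates, so deletions concentrate on volumes holding cache above their guaranteed minimum.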
- FIG. 7 is a diagram showing the configuration of the logical / physical conversion table 35.
- The logical-physical conversion table 35 is management information that associates a logical memory ID 350 and a logical address 351, which are the storage destinations of data as recognized by various programs, with a storage ID 352 identifying the storage device 2 in which the data is actually stored, and with the physical memory ID 353 and the physical address 354 within that storage apparatus 2.
- Using the logical-physical conversion table 35, the processor 8 of each storage device 2 converts the logical memory ID 350 and the logical address 351 into the storage ID 352, the physical memory ID 353, and the physical address 354, or performs the reverse conversion, as appropriate.
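A minimal, dictionary-based sketch of the conversion performed with the logical-physical conversion table 35 (the class and method names are illustrative): a (logical memory ID 350, logical address 351) pair resolves to a (storage ID 352, physical memory ID 353, physical address 354) triple, and the reverse conversion scans the same table.

```python
class LogicalPhysicalTable:
    """Sketch of the logical-physical conversion table 35 (FIG. 7)."""

    def __init__(self):
        # (logical memory ID 350, logical address 351) ->
        #   (storage ID 352, physical memory ID 353, physical address 354)
        self.forward = {}

    def map(self, logical, physical):
        self.forward[logical] = physical

    def to_physical(self, logical_memory_id, logical_address):
        return self.forward[(logical_memory_id, logical_address)]

    def to_logical(self, storage_id, physical_memory_id, physical_address):
        # Reverse conversion by scanning the table.
        target = (storage_id, physical_memory_id, physical_address)
        for logical, physical in self.forward.items():
            if physical == target:
                return logical
        raise KeyError(target)
```

Because programs address shared management information only through the logical side, relocating the physical copy requires updating only this table, as described later.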
- First, the copy destination storage apparatus 2 sets the cache ID (A) 263 and the cache ID (B) 264 of the cache areas 12 storing the cache information to be deleted to "None", stopping the use of those cache areas 12. Furthermore, the copy destination storage apparatus 2 refers to the cache management table 18, records the area indicated by the physical memory ID 181 and the physical address 182 of each cache ID 180 to be deleted as the management information copy destination, and deletes the entry.
- Next, the copy destination storage apparatus 2 copies the management information to the copy destination recorded in step S11.
- Specifically, the storage device 2 refers to the management information arrangement table 13 and searches for entries in which the value of the logical memory ID (A) 131 or the logical memory ID (B) 133 is the ID of the memory 9 that has become unusable.
- Then, the storage device 2 reads the management information using the still-accessible memory address of each entry and writes it to the copy destination of the management information recorded in step S11. Thereafter, the storage device 2 overwrites the unusable logical memory ID and logical memory address of the entry with the copy destination memory ID and memory address.
- The storage apparatus 2 that is the subject of these processes may be any storage apparatus 2 in the storage system. However, it is desirable that these processes be performed by a storage apparatus 2 other than the one to be removed or the one in which the failure occurred. For example, these processes may be executed by the storage apparatus 2 that holds a copy of the management information held by the storage apparatus 2 to be removed or the storage apparatus 2 in which the failure occurred. If there is a management computer that is connected to each storage device 2 and manages the storage system, the management computer may execute these processes in cooperation with each storage device 2.
- The storage apparatus 2 that executes these processes does so when it receives a reduction notification or a failure notification from the storage apparatus 2 to be removed or from the storage apparatus 2 in which the failure occurred, or when the user's management terminal instructs it to execute the reduction process or the failure process. The failure process may also be executed when a storage device 2 that communicates with the other storage devices 2 detects a failure of another storage device 2.
- FIG. 8 is a flowchart showing the reduction process of the storage apparatus 2. Processing when one or more storage apparatuses 2 are removed from the plurality of storage apparatuses 2 will be described.
- First, the storage device 2 calculates the total capacity of the management information (hereinafter referred to as the management information amount) and the cache guarantee amount of each storage device 2, and the total memory amount after the reduction (hereinafter referred to as the reduced memory amount).
- the storage device 2 determines whether or not the reduced memory amount exceeds the total of the management information amount and the cache guarantee amount (hereinafter referred to as a necessary memory amount) (S1).
- The storage device 2 may calculate the necessary memory amount and the reduced memory amount based on the management information arrangement table 13, the cache management table 18, and the logical volume management table 26 of each storage device 2.
- Alternatively, the storage device 2 may request each storage device 2 to return its memory amount, management information amount, and cache guarantee amount, and may calculate the necessary memory amount and the reduced memory amount based on the returned values.
- If the reduced memory amount exceeds the necessary memory amount (S1 Yes), a saving process is executed to save the management information stored in the memory 9 of the storage device 2 to be removed to the memories 9 of the other storage devices 2 (S2).
- This management information saving process (S2) is as described above with reference to FIG. Thereafter, information indicating that the storage device 2 can be removed may be displayed on the user management terminal (S3). The user can remove the storage apparatus 2 after confirming the display contents.
- Otherwise (S1 No), the user's management terminal displays that, if the reduction were performed, the necessary information could not be stored in the memories 9 of the remaining storage devices 2, and that the reduction therefore cannot be performed (S4).
- the storage apparatus 2 may display information indicating how much memory is insufficient after the reduction. By referring to the displayed content, the user can review the cache guarantee amount 28 and reduce the necessary memory amount, and then cause the storage apparatus 2 to execute the reduction process again.
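The capacity check of steps S1 to S4 amounts to comparing the memory remaining after the reduction with the necessary memory amount, which can be sketched as follows (the input layout is an assumption):

```python
def can_reduce(devices, removed_ids):
    """Decide whether the storage devices in 'removed_ids' can be removed.

    'devices' maps device ID -> dict with 'memory', 'mgmt_info' and
    'cache_guarantee' amounts (step S1 of FIG. 8)."""
    # Necessary memory amount: management information plus cache
    # guarantee amounts over all devices.
    required = sum(d["mgmt_info"] + d["cache_guarantee"]
                   for d in devices.values())
    # Reduced memory amount: memory of the devices that remain.
    remaining = sum(d["memory"] for i, d in devices.items()
                    if i not in removed_ids)
    return remaining >= required
```

When this returns false, the user can lower a cache guarantee amount and retry, as the text suggests.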
- FIG. 9 is a flowchart showing the failure process of the storage apparatus 2. Processing when a failure occurs in one or more storage apparatuses 2 among the plurality of storage apparatuses 2 will be described.
- When a storage device 2 detects a failure in another storage device 2, it executes the same processing as step S1 of FIG. 8. If it is determined in S1 that the necessary memory amount is exceeded, the management information saving process of S2 is executed.
- the policy at the time of failure is information that determines whether to give priority to reliability or performance when duplication of management information and cache guarantee amount cannot be achieved, and is set by the user.
- In this case, the management information remains in a non-redundant state and the reliability is not restored (S8).
- the storage apparatus 2 may display to the user management terminal that the reliability cannot be recovered due to a shortage of memory. Further, the storage device 2 may display information prompting the user to review the cache guarantee amount or promptly replace the faulty part on the management terminal of the user.
- When the policy is a reliability priority policy (S5 Yes), only the cache amount actually in use among the cache guarantee amounts is regarded as necessary, and the necessary memory amount is recalculated (S6).
- Specifically, the in-use cache amount is calculated by totaling the entries for which the cache ID (A) 263 or the cache ID (B) 264 is set.
- The calculated in-use cache amount is compared with the cache guarantee amount 261, and the smaller value is set as the memory amount to be guaranteed for the logical volume 10.
- the sum of twice the size 135 of the management information arrangement table 13 and the amount of memory to be guaranteed for each logical volume 10 is taken as the required memory amount, and compared with the reduced memory amount.
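The recalculation of S6 under the reliability priority policy can be sketched as follows (a simplified model with assumed inputs): the management information is counted twice for duplication, and for each logical volume 10 only the smaller of its in-use cache amount and its cache guarantee amount 261 is guaranteed.

```python
def required_memory_reliability_priority(mgmt_sizes, volumes):
    """Recalculate the necessary memory amount under the reliability
    priority policy (S6).

    'mgmt_sizes' lists the size 135 values of the management information
    arrangement table 13; 'volumes' maps volume ID -> (in-use cache
    amount, cache guarantee amount 261)."""
    # Management information is duplicated, hence counted twice.
    mgmt = 2 * sum(mgmt_sizes)
    # Only the smaller of in-use cache and guarantee must be kept.
    cache = sum(min(in_use, guarantee)
                for in_use, guarantee in volumes.values())
    return mgmt + cache
```

The result is then compared with the reduced memory amount, as in step S1.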
- The recovery of the reliability of the management information is then completed (S7).
- the storage apparatus 2 may display information indicating that the failure has occurred and the reliability has been recovered on the user management terminal. Further, the storage apparatus 2 may display on the user management terminal that the storage apparatus 2 in which no failure has occurred may be operated as it is. In addition, the storage apparatus 2 may display information prompting replacement work of the faulty part on the user management terminal.
- FIG. 10 is a flowchart of the access performance degradation amount prediction process.
- the amount of decrease in access performance is the amount of decrease in the cache hit rate.
- This process is executed in each copy destination candidate storage device. Therefore, the copy destination storage apparatus and the cache areas 12 to be released can be determined so as to suppress the decrease in the cache hit rate, based on the usage state of the cache area 12 of each memory 9 of the copy destination candidate storage apparatuses.
- the copy destination candidate storage device 2 acquires the capacity of the copy target management information (S14).
- For example, this may be executed by the storage apparatus A notifying the copy destination candidate storage apparatuses C and D of the capacity of the copy target management information (shared management information X).
- Specifically, the copy destination candidate storage device refers to the management information arrangement table 13 and searches for entries in which the value of the logical memory ID (A) 131 or the logical memory ID (B) 133 is the ID of an unusable memory.
- the total value of the sizes 135 of the entries may be used as the copy target management information capacity.
- the capacity of the copy target management information is defined as “amount of cache information to be deleted” at this point.
- Next, the copy destination candidate storage device 2 uses the cache management table 18, which is individual management information, to sort the cache areas 12 of all its memories 9 in order of lowest use frequency (S15).
- the purpose of this is to rearrange the cache information in the order in which the cache hit rate is unlikely to decrease.
- the cache information is rearranged in the order of the last access time 185 in the cache management table 18.
- When the last access times 185 are equal, there are methods such as giving priority to the older use start time 184 or to the smaller cache hit count 187. As long as the use frequency of the cache is used, the method is not limited to the description of step S15.
- The copy destination candidate storage device 2 takes out the cache areas 12 in ascending order of the use frequency sorted in step S15 (S16) and determines whether each cache area 12 needs to be kept reserved (S17). Specifically, the copy destination candidate storage device 2 refers to the cache management table 18, identifies the entry matching the extracted cache area 12, and identifies the entry of the logical volume management table 26 that has that cache ID 180. In the identified entry, the copy destination candidate storage device 2 compares the allocated amount of the cache areas 12 indicated by the cache ID (A) 263 and the cache ID (B) 264 with the cache guarantee amount 261.
- When the allocated amount of the cache areas 12 of the logical volume 10 exceeds the cache guarantee amount 261 (No in S17), the copy destination candidate storage device 2 determines that the guarantee is unnecessary and that the cache area 12 can be released, and executes the step of S18. On the other hand, when the allocated amount of the cache areas 12 of the logical volume 10 is not more than the cache guarantee amount 261 (Yes in S17), the copy destination candidate storage device 2 determines that the guarantee is necessary and executes the step of S16.
- When it determines in step S17 that the guarantee is unnecessary, the copy destination candidate storage apparatus 2 sets the cache area 12 as a deletion target (S18) and updates the "amount of cache information to be deleted" by subtracting the size of the cache area 12 from it (S19). If the updated "amount of cache information to be deleted" is greater than 0, it determines that further cache information needs to be deleted and executes the step of S16 (Yes in S20).
- Otherwise (No in S20), the copy destination candidate storage device 2 refers to the cache management table 18 and calculates a predicted value of the decrease in access performance by dividing the total cache hit count 187 per predetermined unit time of the deletion target cache areas 12 by the number of read requests received by the storage device per unit time (S200). However, the predicted value of the decrease in access performance may be calculated by other methods and is not limited to the description of S200.
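Steps S15 to S200 can be sketched as follows (an illustrative model; the entry layout is an assumption): the cache areas are sorted least recently accessed first, marked for deletion until the needed capacity is covered while respecting each volume's cache guarantee amount 261, and the predicted performance decrease is the deleted areas' total cache hit count 187 divided by the read requests per unit time.

```python
def predict_performance_drop(entries, needed, reads_per_unit,
                             allocated, guarantee, unit=0x1000):
    """Sketch of the access performance decrease prediction (FIG. 10).

    'entries' is a list of dicts with 'last_access', 'hits' and 'volume'
    for each fixed-size cache area (cache management table 18)."""
    # S15: sort by lowest use frequency (oldest last access time first).
    entries = sorted(entries, key=lambda e: e["last_access"])
    freed = {}
    to_delete, remaining = [], needed
    for e in entries:                       # S16
        if remaining <= 0:
            break
        vol = e["volume"]
        used = allocated[vol] - freed.get(vol, 0)
        if used - unit < guarantee[vol]:    # S17: guarantee still needed
            continue
        to_delete.append(e)                 # S18
        freed[vol] = freed.get(vol, 0) + unit
        remaining -= unit                   # S19/S20
    # S200: predicted drop in the cache hit rate.
    drop = sum(e["hits"] for e in to_delete) / reads_per_unit
    return to_delete, drop
```

Because the least recently accessed areas are consumed first, the deleted areas tend to carry the fewest hits, keeping the predicted drop small.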
- FIG. 11 is a flowchart of cache information relocation processing. This process is executed in the copy destination storage apparatus. With this processing, it is possible to secure a continuous memory area necessary for storing the copy target management information, and to suppress a decrease in access performance after the cache area 12 is released.
- For simplicity, the process is described as relocating the cache areas 12 to be deleted into a single place, but it is not limited to a single place. If the copy target management information can be divided and managed in a plurality of memory areas, it may be relocated into a plurality of memory areas; and if the copy target management information can already be allocated in a continuous free area of the cache area 12, the relocation itself need not be executed.
- When the copy target management information is divided into a plurality of memory areas and managed, the memory consumed to record the storage locations of the subdivided management information increases; for example, the number of entries in the management information arrangement table 13 increases. For this reason, storing the management information in a single large continuous area improves the use efficiency of the memory 9.
- the copy destination storage apparatus 2 selects an available memory 9 (S21), and enumerates cache areas 12 to be deleted (S22).
- Next, the copy destination storage device 2 sets an area where the density of cache areas 12 to be deleted is high as the relocation destination memory area (S23). This is intended to reduce the amount of cache information to be relocated.
- Next, the copy destination storage apparatus 2 takes out the head of the listed cache areas 12 (S24) and determines whether it is outside the relocation destination memory area (S25). If it is outside the relocation destination memory area (Yes in S25), the copy destination storage apparatus 2 exchanges data between cache information that is within the area and is not subject to deletion and the extracted cache information (S26). Specifically, the copy destination storage apparatus 2 moves the cache information in the areas indicated by the physical addresses 182 between the entries of the cache management table 18, and also exchanges the physical address 182, the use start time 184, the last access time 185, the dirty data flag 186, and the cache hit count 187 between the entries. Note that because this processing only changes the arrangement of the cache information and is not a data access to the cache area 12, the use start time 184 and the last access time 185 are not updated.
- When it is determined in step S25 that the extracted cache area 12 is within the relocation destination memory area (No in S25), or after step S26 has been executed, the copy destination storage apparatus 2 refers to the cache management table 18 and checks the dirty data flag 186 of the entry corresponding to the extracted cache area 12 (S27).
- When the dirty data flag 186 is ON (S27 Yes), the drive 4 has old data and the cache information in the cache area 12 is the latest data, so the cache information is stored in the drive 4 (S28).
- the copy destination storage apparatus 2 refers to the logical volume management table 26 and identifies an entry in which the values of the cache IDs (A) 263 and (B) 264 match the ID 180 of the cache.
- the cache information is stored in the area indicated by the drive ID 265 and drive address 266 of the identified entry. Thereafter, the dirty data flag 186 is set to OFF. Thereby, the cache information of the cache area 12 can be deleted.
- When the dirty data flag 186 is OFF (S27 No), the copy destination storage device 2 can delete the cache information of the cache area 12 as it is, because the data in the drive 4 is the latest data.
- In the cache information relocation process above, the processing has been described in the order of exchanging cache information between the inside and outside of the relocation destination memory area (S26), storing the cache information that is the latest data in the drive (S28), and deleting the cache information (S102). However, if steps S28 and S102 are executed first, the cache information within the relocation destination memory area that is not subject to deletion can be moved to a free area outside the area in step S26, which reduces the number of data copies. As long as the cache areas 12 to be deleted are selected so that the cache hit rate is unlikely to decrease, and the copy target management information is made redundant in the released areas, the processing order is not limited to this description.
- The copy destination storage apparatus 2 performs the following processing by executing the access performance decrease amount prediction process described in FIG. 10 and the cache information relocation process of FIG. 11.
- the processor 8 of the copy destination storage apparatus 2 uses the cache management table 18 to divide the cache area 12 of the memory 9 of the copy destination storage apparatus into a plurality of small areas, and manages the access frequency of each small area.
- the small area is the cache area 12 managed by each entry of the cache ID 180 of the cache management table.
- the copy destination storage apparatus 2 releases at least a part of the cache area 12 by stopping the use of the cache information stored in the small area with low access frequency based on the access frequency of each small area.
- the processor 8 of the copy destination storage apparatus 2 manages the cache guarantee amount 261 set in the logical volume 10 using the logical volume management table 26, and releases the small area of the cache area 12 with low access frequency. Even when the cache guarantee amount is satisfied, the use of the cache information stored in the small area is stopped. Thereby, in the copy destination storage apparatus, it is possible to suppress a decrease in access performance after the cache area 12 is released.
- Note that the management information saving process at a future failure can be avoided by not using a storage device 2 in which a memory failure has occurred in the past as the copy destination storage device.
- When storage apparatuses 2 are gradually added as the service grows, the storage system comes to contain a mixture of old and new memories 9.
- the management information may be copied by preferentially releasing the cache area 12 of the memory 9 of another relatively new storage device 2.
- the copy destination storage apparatus stores the copy target management information in the released cache area 12.
- At this time, the logical memory ID 131 and the logical address 132 of the management information arrangement table 13 could be changed. However, the various programs executed in the storage apparatus 2 either statically hold the logical memory ID 131 and the logical address 132 that are the storage destinations of the shared management information, or read them from the management information arrangement table 13 at initial program startup and reuse the read values; if the storage location of the copy target management information were changed in step S103, the copy target management information could no longer be accessed. Therefore, the logical-physical conversion table 35 is updated without changing the logical memory ID 131 and the logical address 132 of the management information arrangement table 13.
- Specifically, the copy destination storage device 2 uses the management information arrangement table 13 and the logical-physical conversion table 35 to identify the logical memory ID 131 and the logical address 132 at which the copy target management information was stored in the memory 9 of the storage device 2 in which the failure occurred or the storage device 2 to be removed.
- The copy destination storage apparatus 2 then updates the entry corresponding to the identified logical memory ID 131 and logical address 132 with its own storage ID 352 and with the physical memory ID 353 and the physical address 354 of its own memory 9 that is the new storage destination of the copy target management information.
- Because the logical-physical conversion table 35 is part of the integrated management information held by all the storage apparatuses 2, the logical-physical conversion table 35 of each storage apparatus 2 is updated.
- The copy-destination storage apparatus 2 notifies at least some or all of the plurality of storage apparatuses 2 of the storage destination (storage ID 352, physical memory ID 353, physical address 354) of the copy-target management information, so that each storage apparatus 2 updates its logical-physical conversion table 35.
- Alternatively, a storage apparatus 2 that receives the notification forwards the storage destination of the copy-target management information to the storage apparatuses 2 other than the storage apparatus 2 whose use is being stopped and itself, and each notified storage apparatus 2 updates its own logical-physical conversion table 35.
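The notification-driven table update above can be sketched like this. The data layout (a per-apparatus dict keyed by logical memory ID and logical address) is an assumption for illustration only.

```python
# Illustrative sketch: the copy destination announces the new physical storage
# destination, and every surviving apparatus updates its own logical-physical
# conversion table 35 so the same logical key now resolves to the new place.

def notify_new_location(tables, stopped_id, logical_key, physical_loc):
    """tables: device_id -> logical-physical table (logical_key -> location).
    physical_loc: (storage ID 352, physical memory ID 353, physical address 354).
    """
    for device_id, table in tables.items():
        if device_id == stopped_id:
            continue                       # the stopped apparatus is skipped
        table[logical_key] = physical_loc  # all others resolve to the new place

tables = {
    "dev-A": {("mem-1", 0x100): ("dev-A", "pm-0", 0x100)},
    "dev-B": {("mem-1", 0x100): ("dev-A", "pm-0", 0x100)},
    "dev-C": {("mem-1", 0x100): ("dev-A", "pm-0", 0x100)},
}
# dev-A's memory is being stopped; dev-C became the copy destination.
notify_new_location(tables, stopped_id="dev-A",
                    logical_key=("mem-1", 0x100),
                    physical_loc=("dev-C", "pm-2", 0x4000))
print(tables["dev-B"][("mem-1", 0x100)])  # now points at dev-C
```

Note that the logical key itself never changes; only the physical side of each table entry does, which is exactly why programs holding the logical values keep working.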
- FIG. 12 is a flowchart of the access processing to the shared management information using the logical-physical conversion table 35. This processing is executed in each storage apparatus 2. For example, when a program executed by the processor 8 of a storage apparatus 2 accesses the shared management information by specifying a logical memory ID and a logical address, the following access processing is performed.
- The storage apparatus 2 searches the management information arrangement table 13 for an entry that matches the specified logical memory ID and logical address (S30). If a matching entry exists (Yes in S30), the storage apparatus 2 determines that the specified logical memory ID 131 and logical address 132 designate a management information storage area and proceeds to step S31. If no matching entry exists (No in S30), the storage apparatus 2 determines that the area is not a management information storage area and that the data access has failed (S34).
- In step S31, the storage apparatus 2 refers to the logical-physical conversion table 35 and searches for an entry whose logical memory ID 350 and logical address 351 match the specified logical memory ID and logical address.
- The storage apparatus 2 accesses the data in the memory area indicated by the storage ID 352, physical memory ID 353, and physical address 354 of the entry found in step S31 (S32), and the data access succeeds (S33).
- In this way, even if the storage location of the shared management information is changed in S103, the program executed by the processor 8 of the storage apparatus 2 can access the shared management information using the same logical memory ID 131 and logical address 132.
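The FIG. 12 flow (S30 through S34) can be sketched as a simple two-stage lookup. The dict-based table layouts are assumptions for illustration; the step labels in the comments follow the flowchart.

```python
# Minimal sketch of the shared-management-information access flow: check the
# management information arrangement table (S30), resolve through the
# logical-physical conversion table (S31), then access the data (S32/S33).

def access_shared_info(logical_memory_id, logical_address,
                       arrangement_table, logical_physical_table, physical_memory):
    # S30: is the specified address a management information storage area?
    if (logical_memory_id, logical_address) not in arrangement_table:
        return None                                   # S34: data access fails
    # S31: find the entry whose logical memory ID 350 / logical address 351 match
    storage_id, phys_mem_id, phys_addr = \
        logical_physical_table[(logical_memory_id, logical_address)]
    # S32/S33: access the memory area the entry points to; access succeeds
    return physical_memory[(storage_id, phys_mem_id, phys_addr)]

arrangement = {("mem-1", 0x100): "shared"}
log_phys = {("mem-1", 0x100): ("dev-C", "pm-2", 0x4000)}
memory = {("dev-C", "pm-2", 0x4000): "shared-management-info"}

print(access_shared_info("mem-1", 0x100, arrangement, log_phys, memory))
# → shared-management-info
print(access_shared_info("mem-1", 0x999, arrangement, log_phys, memory))  # → None
```

Because the caller only ever supplies the logical pair, relocating the data (as in S103) is invisible to it as long as `log_phys` is kept up to date.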
- The management information arrangement table 13 and the logical-physical conversion table 35 of the storage system are kept in a normal state before a storage apparatus failure or removal. After a storage apparatus failure or removal, when one or more storage apparatuses 2 are added to the storage system, the normal state may be restored.
- That is, the integrated management information and the copy-target management information are stored in the memory 9 of the storage apparatus 2 to be added, and in the memory 9 of the copy-destination storage apparatus 2, the area in which the copy-target management information was stored may be released and used to store cache information again.
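The restore step above can be sketched as follows; the dict layout and the idea that reclaimed space simply becomes cache capacity again are illustrative assumptions.

```python
# Hedged sketch: when a replacement apparatus is added, the copy-target
# management information moves to the new apparatus's memory, and the copy
# destination reclaims the released area for cache information again.

def restore_on_addition(added_device, copy_destination, copy_target_keys):
    # Store the copy-target management information on the newly added device.
    for key in copy_target_keys:
        added_device["management"][key] = copy_destination["management"].pop(key)
    # The copy destination's freed area goes back to holding cache information.
    copy_destination["cache_capacity"] += len(copy_target_keys)

dest = {"management": {"shared-table": "..."}, "cache_capacity": 60}
new = {"management": {}, "cache_capacity": 128}
restore_on_addition(new, dest, ["shared-table"])
print(new["management"], dest["cache_capacity"])  # → {'shared-table': '...'} 61
```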
- Each piece of information that the storage system has may be described above in the form of a table or the like, but the data structure of each piece of information is not limited to a table; other data structures may be used. Because each piece of information does not depend on a particular data structure, for example, the "kkk table" can also be called "kkk information".
- The processor 8 performs processing by executing programs and using storage resources (for example, the memory 9) and/or communication interface devices (for example, a communication port). Processing described with the processor 8 as the subject may be interpreted as being performed by executing one or more programs, and processing described with the storage apparatus 2 as the subject may be interpreted as being executed by the processor 8 of that storage apparatus 2.
- The processor 8 is typically a microprocessor such as a CPU (Central Processing Unit), but may include a hardware circuit that executes part of the processing (for example, encryption/decryption or compression/decompression).
Abstract
Description
Claims (15)
- A storage system comprising a plurality of storage apparatuses including a first storage apparatus, wherein:
each of the plurality of storage apparatuses comprises a memory having a management information storage area that stores management information and a cache area that stores cache information, and a processor that manages a state of the cache area;
when use of the memory of the first storage apparatus is stopped, at least some of the processors of the plurality of storage apparatuses determine, based on the states of the cache areas managed by the respective storage apparatuses other than the first storage apparatus, a copy-destination storage apparatus that is to be the copy destination of copy-target management information, which is the management information stored in the memory whose use is stopped; and
the processor of the copy-destination storage apparatus releases at least part of the cache area of the memory of the copy-destination storage apparatus and stores the copy-target management information in the released cache area.
- The storage system according to claim 1, wherein:
the management information is information that, while the plurality of storage apparatuses are operating, is stored in at least one of the memories of the plurality of storage apparatuses and can be referenced from that memory by a processor of at least one of the plurality of storage apparatuses;
the cache information is information that can be deleted from any of the memories of the plurality of storage apparatuses when the same data is stored in a storage device included in at least one of the plurality of storage apparatuses;
the state of the cache area includes at least information on the access frequency of the cache area;
when the plurality of storage apparatuses include a plurality of copy-destination candidate storage apparatuses that are candidates for the copy destination of the management information stored in the memory whose use is stopped, the at least some processors predict, based on the access frequencies of the cache areas managed by the respective copy-destination candidate storage apparatuses, the amount of decrease in access performance in a case where the copy-target management information is copied to each of the copy-destination candidate storage apparatuses; and
among the plurality of copy-destination candidate storage apparatuses, the storage apparatus with the lower predicted decrease in access performance is determined to be the copy-destination storage apparatus.
- The storage system according to claim 2, wherein:
the plurality of storage apparatuses include at least the first storage apparatus, a second storage apparatus, and the copy-destination storage apparatus;
the copy-target management information is made redundant by storing it in both the memory of the first storage apparatus and the memory of the second storage apparatus;
when use of the memory of the first storage apparatus is stopped, the copy-target management information is stored in the memory of the copy-destination storage apparatus, whereby the copy-target management information is made redundant between the memory of the second storage apparatus and the memory of the copy-destination storage apparatus; and
the copy-destination storage apparatus notifies at least some of the plurality of storage apparatuses of the storage destination of the copy-target management information.
- The storage system according to claim 3, wherein:
the copy-destination candidate storage apparatuses are at least two of the plurality of storage apparatuses other than the first storage apparatus and the second storage apparatus;
each of the at least two copy-destination candidate storage apparatuses predicts the amount of decrease in access performance and transmits the predicted decrease to the second storage apparatus;
the second storage apparatus determines, based on the decreases predicted by the respective copy-destination candidate storage apparatuses, the copy-destination candidate storage apparatus with the lower predicted decrease in access performance to be the copy-destination storage apparatus; and
the copy-destination storage apparatus copies the copy-target management information from the second storage apparatus.
- The storage system according to claim 4, wherein:
the management information stored in the memory of each of the plurality of storage apparatuses includes integrated management information held by each of the plurality of storage apparatuses, shared management information that can be referenced from other storage apparatuses, and individual management information that is managed individually by each storage apparatus and includes information on the access frequency of the cache area of that storage apparatus; and
the copy-target management information is the shared management information stored in the memories of the first storage apparatus and the second storage apparatus.
- The storage system according to claim 5, wherein the processor of the copy-destination storage apparatus divides the cache area of the memory of the copy-destination storage apparatus into a plurality of small areas, manages the access frequency of each small area, and releases at least part of the cache area by stopping use of the cache information stored in small areas with low access frequency, based on the access frequency of each small area.
- The storage system according to claim 6, wherein the processor of the copy-destination storage apparatus relocates the cache information of the copy-destination storage apparatus, based on the access frequencies, so that the cache area that stores the copy-target management information becomes a contiguous area.
- The storage system according to claim 6, wherein:
the copy-destination storage apparatus further has one or more storage devices constituting a logical volume provided to a host computer; and
the processor of the copy-destination storage apparatus manages a guaranteed cache amount set for the logical volume, and stops use of the cache information stored in a small area of the cache area of the memory of the copy-destination storage apparatus when the guaranteed cache amount is still satisfied even if that small area is released.
- The storage system according to claim 5, wherein each of the plurality of storage apparatuses has a plurality of memories, and each storage apparatus makes the individual management information it manages individually redundant by storing it in different memories within that storage apparatus.
- The storage system according to claim 9, wherein each of the plurality of storage apparatuses comprises a plurality of controllers each having a memory, and each storage apparatus makes the individual management information it manages individually redundant by storing it in the memories of different controllers within that storage apparatus.
- The storage system according to claim 5, wherein use of the memory of the first storage apparatus is stopped when a failure occurs in the first storage apparatus or when the first storage apparatus is removed.
- The storage system according to claim 5, wherein:
the management information of each storage apparatus includes at least one of a program executed by one of the processors and information used for execution of that program; and
the cache information of each storage apparatus includes at least one of write data based on a write request from a host computer connected to that storage apparatus and read data used to respond to a read request from that host computer.
- The storage system according to claim 5, wherein, when one or more storage apparatuses are added to the storage system, the integrated management information and the copy-target management information are stored in the memory of the storage apparatus to be added, and the area of the memory of the copy-destination storage apparatus in which the copy-target management information is stored is released and used to store the cache information.
- A storage apparatus comprising:
a connection unit that connects to a plurality of storage apparatuses, each of which comprises a memory having a management information storage area that stores management information and a cache area that stores cache information, and a processor that manages a state of the cache area; and
a control unit that, when use of the memory of one of the plurality of storage apparatuses is stopped, determines, based on the states of the cache areas managed by the respective storage apparatuses other than the one storage apparatus, a storage apparatus other than the one storage apparatus that is to release at least part of the cache area of its memory and copy the management information stored in the memory whose use is stopped.
- A control method for a storage system comprising a plurality of storage apparatuses and a network connecting the plurality of storage apparatuses, wherein:
the plurality of storage apparatuses comprise at least four storage apparatuses including a first storage apparatus and a second storage apparatus;
each of the plurality of storage apparatuses comprises a memory having a management information storage area that stores management information and a cache area that stores cache information, and a processor that manages the access frequency of the cache area;
identical management information is made redundant by storing it in both the memory of the first storage apparatus and the memory of the second storage apparatus;
when use of the memory of the first storage apparatus is stopped, at least two storage apparatuses other than the first storage apparatus and the second storage apparatus serve as copy-destination candidate storage apparatuses that are candidates for the copy destination of the identical management information;
each of the at least two copy-destination candidate storage apparatuses predicts, based on the access frequency of the cache area managed by that copy-destination candidate storage apparatus, the amount of decrease in access performance in a case where the identical management information is copied to that copy-destination candidate storage apparatus, and transmits the predicted decrease to the second storage apparatus;
the second storage apparatus determines, based on the decreases predicted by the respective copy-destination candidate storage apparatuses, the copy-destination candidate storage apparatus with the lower predicted decrease in access performance to be the copy-destination storage apparatus;
the copy-destination storage apparatus releases at least part of the cache area of its memory, copies the identical management information from the second storage apparatus, and stores it in the released cache area; and
the identical management information is made redundant between the memory of the second storage apparatus and the memory of the copy-destination storage apparatus.
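The candidate-selection step of the control method claimed above can be sketched as follows. The penalty model (sum of the access frequencies of the coldest cache slots that would be evicted) is an assumption for illustration; the claims only require that each candidate predict a decrease in access performance and that the candidate with the lower predicted decrease be chosen.

```python
# Illustrative sketch: each copy-destination candidate predicts the
# access-performance penalty of hosting the copy, and the candidate with the
# lowest predicted penalty becomes the copy destination.

def predict_penalty(cache_access_freqs, slots_needed):
    """Penalty = total access frequency of the coldest cache slots evicted."""
    return sum(sorted(cache_access_freqs)[:slots_needed])

def choose_copy_destination(candidates, slots_needed):
    """candidates: device_id -> list of per-slot cache access frequencies."""
    predictions = {dev: predict_penalty(freqs, slots_needed)
                   for dev, freqs in candidates.items()}
    return min(predictions, key=predictions.get)

candidates = {
    "dev-C": [5, 40, 90],   # evicting its coldest slot costs 5 accesses
    "dev-D": [30, 35, 80],  # evicting its coldest slot costs 30 accesses
}
print(choose_copy_destination(candidates, slots_needed=1))  # → dev-C
```

In the claimed method, each candidate computes its own prediction locally and transmits it to the second storage apparatus, which performs the final selection; the sketch centralizes both steps only for brevity.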
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017532241A JP6437656B2 (ja) | 2015-07-31 | 2015-07-31 | Storage apparatus, storage system, and storage system control method |
US15/749,014 US10725878B2 (en) | 2015-07-31 | 2015-07-31 | Storage apparatus, storage system, and control method of storage system for dynamically securing free space when a storage apparatus is disused |
PCT/JP2015/071736 WO2017022002A1 (ja) | 2015-07-31 | 2015-07-31 | Storage apparatus, storage system, and storage system control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/071736 WO2017022002A1 (ja) | 2015-07-31 | 2015-07-31 | Storage apparatus, storage system, and storage system control method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017022002A1 true WO2017022002A1 (ja) | 2017-02-09 |
Family
ID=57942542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/071736 WO2017022002A1 (ja) | 2015-07-31 | 2015-07-31 | ストレージ装置、ストレージシステム、ストレージシステムの制御方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10725878B2 (ja) |
JP (1) | JP6437656B2 (ja) |
WO (1) | WO2017022002A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11341251B2 (en) | 2017-04-19 | 2022-05-24 | Quintessencelabs Pty Ltd. | Encryption enabling storage systems |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3054902B1 (fr) * | 2016-08-04 | 2019-06-21 | Thales | Method and device for distributing partitions on a multi-core processor |
US11379155B2 (en) | 2018-05-24 | 2022-07-05 | Alibaba Group Holding Limited | System and method for flash storage management using multiple open page stripes |
WO2020000136A1 (en) | 2018-06-25 | 2020-01-02 | Alibaba Group Holding Limited | System and method for managing resources of a storage device and quantifying the cost of i/o requests |
US11327929B2 (en) | 2018-09-17 | 2022-05-10 | Alibaba Group Holding Limited | Method and system for reduced data movement compression using in-storage computing and a customized file system |
KR102648790B1 (ko) * | 2018-12-19 | 2024-03-19 | SK hynix Inc. | Data storage device and operating method thereof |
US11061735B2 (en) | 2019-01-02 | 2021-07-13 | Alibaba Group Holding Limited | System and method for offloading computation to storage nodes in distributed system |
US10860223B1 (en) | 2019-07-18 | 2020-12-08 | Alibaba Group Holding Limited | Method and system for enhancing a distributed storage system by decoupling computation and network tasks |
JP7226557B2 (ja) * | 2019-07-30 | 2023-02-21 | Nippon Telegraph and Telephone Corporation | Cache usage index calculation device, cache usage index calculation method, and cache usage index calculation program |
US11617282B2 (en) | 2019-10-01 | 2023-03-28 | Alibaba Group Holding Limited | System and method for reshaping power budget of cabinet to facilitate improved deployment density of servers |
US11055190B1 (en) * | 2020-01-03 | 2021-07-06 | Alibaba Group Holding Limited | System and method for facilitating storage system operation with global mapping to provide maintenance without a service interrupt |
US11449455B2 (en) | 2020-01-15 | 2022-09-20 | Alibaba Group Holding Limited | Method and system for facilitating a high-capacity object storage system with configuration agility and mixed deployment flexibility |
US11379447B2 (en) | 2020-02-06 | 2022-07-05 | Alibaba Group Holding Limited | Method and system for enhancing IOPS of a hard disk drive system based on storing metadata in host volatile memory and data in non-volatile memory using a shared controller |
US11449386B2 (en) | 2020-03-20 | 2022-09-20 | Alibaba Group Holding Limited | Method and system for optimizing persistent memory on data retention, endurance, and performance for host memory |
US11301173B2 (en) | 2020-04-20 | 2022-04-12 | Alibaba Group Holding Limited | Method and system for facilitating evaluation of data access frequency and allocation of storage device resources |
US11385833B2 (en) | 2020-04-20 | 2022-07-12 | Alibaba Group Holding Limited | Method and system for facilitating a light-weight garbage collection with a reduced utilization of resources |
US11281575B2 (en) | 2020-05-11 | 2022-03-22 | Alibaba Group Holding Limited | Method and system for facilitating data placement and control of physical addresses with multi-queue I/O blocks |
US11494115B2 (en) | 2020-05-13 | 2022-11-08 | Alibaba Group Holding Limited | System method for facilitating memory media as file storage device based on real-time hashing by performing integrity check with a cyclical redundancy check (CRC) |
US11461262B2 (en) | 2020-05-13 | 2022-10-04 | Alibaba Group Holding Limited | Method and system for facilitating a converged computation and storage node in a distributed storage system |
US11556277B2 (en) | 2020-05-19 | 2023-01-17 | Alibaba Group Holding Limited | System and method for facilitating improved performance in ordering key-value storage with input/output stack simplification |
US11507499B2 (en) | 2020-05-19 | 2022-11-22 | Alibaba Group Holding Limited | System and method for facilitating mitigation of read/write amplification in data compression |
US11263132B2 (en) | 2020-06-11 | 2022-03-01 | Alibaba Group Holding Limited | Method and system for facilitating log-structure data organization |
US11422931B2 (en) | 2020-06-17 | 2022-08-23 | Alibaba Group Holding Limited | Method and system for facilitating a physically isolated storage unit for multi-tenancy virtualization |
US11354200B2 (en) | 2020-06-17 | 2022-06-07 | Alibaba Group Holding Limited | Method and system for facilitating data recovery and version rollback in a storage device |
US11354233B2 (en) | 2020-07-27 | 2022-06-07 | Alibaba Group Holding Limited | Method and system for facilitating fast crash recovery in a storage device |
US11372774B2 (en) | 2020-08-24 | 2022-06-28 | Alibaba Group Holding Limited | Method and system for a solid state drive with on-chip memory integration |
US11487465B2 (en) | 2020-12-11 | 2022-11-01 | Alibaba Group Holding Limited | Method and system for a local storage engine collaborating with a solid state drive controller |
US11734115B2 (en) | 2020-12-28 | 2023-08-22 | Alibaba Group Holding Limited | Method and system for facilitating write latency reduction in a queue depth of one scenario |
US11416365B2 (en) | 2020-12-30 | 2022-08-16 | Alibaba Group Holding Limited | Method and system for open NAND block detection and correction in an open-channel SSD |
US11726699B2 (en) | 2021-03-30 | 2023-08-15 | Alibaba Singapore Holding Private Limited | Method and system for facilitating multi-stream sequential read performance improvement with reduced read amplification |
US11461173B1 (en) | 2021-04-21 | 2022-10-04 | Alibaba Singapore Holding Private Limited | Method and system for facilitating efficient data compression based on error correction code and reorganization of data placement |
US11476874B1 (en) | 2021-05-14 | 2022-10-18 | Alibaba Singapore Holding Private Limited | Method and system for facilitating a storage server with hybrid memory for journaling and data storage |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005285058A (ja) * | 2004-03-31 | 2005-10-13 | Hitachi Ltd | Cache management method for storage device |
JP2006221526A (ja) * | 2005-02-14 | 2006-08-24 | Hitachi Ltd | Storage control device |
JP2015517697A (ja) * | 2012-05-23 | 2015-06-22 | Hitachi, Ltd. | Storage system that uses a storage area based on a secondary storage device as a cache area, and storage control method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4060552B2 (ja) * | 2001-08-06 | 2008-03-12 | Hitachi, Ltd. | Storage device system and storage device system configuration method |
US7162587B2 (en) * | 2002-05-08 | 2007-01-09 | Hiken Michael S | Method and apparatus for recovering redundant cache data of a failed controller and reestablishing redundancy |
JP4454299B2 (ja) * | 2003-12-15 | 2010-04-21 | Hitachi, Ltd. | Disk array device and disk array device maintenance method |
US7441081B2 (en) * | 2004-12-29 | 2008-10-21 | Lsi Corporation | Write-back caching for disk drives |
US20150312337A1 (en) * | 2014-04-25 | 2015-10-29 | Netapp Inc. | Mirroring log data |
US9824041B2 (en) * | 2014-12-08 | 2017-11-21 | Datadirect Networks, Inc. | Dual access memory mapped data structure memory |
2015
- 2015-07-31 US US15/749,014 patent/US10725878B2/en active Active
- 2015-07-31 JP JP2017532241A patent/JP6437656B2/ja active Active
- 2015-07-31 WO PCT/JP2015/071736 patent/WO2017022002A1/ja active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11341251B2 (en) | 2017-04-19 | 2022-05-24 | Quintessencelabs Pty Ltd. | Encryption enabling storage systems |
AU2018255501B2 (en) * | 2017-04-19 | 2022-08-04 | Quintessencelabs Pty Ltd. | Encryption enabling storage systems |
Also Published As
Publication number | Publication date |
---|---|
US20180322024A1 (en) | 2018-11-08 |
JP6437656B2 (ja) | 2018-12-12 |
JPWO2017022002A1 (ja) | 2018-04-26 |
US10725878B2 (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6437656B2 (ja) | Storage apparatus, storage system, and storage system control method | |
US8069191B2 (en) | Method, an apparatus and a system for managing a snapshot storage pool | |
JP4890033B2 (ja) | Storage device system and storage control method | |
US9501231B2 (en) | Storage system and storage control method | |
JP5124551B2 (ja) | Computer system for managing volume allocation and volume allocation management method | |
JP4961319B2 (ja) | Storage system that dynamically allocates real areas to virtual areas in a virtual volume | |
WO2012049711A1 (en) | Data migration system and data migration method | |
US11068367B2 (en) | Storage system and storage system control method | |
US8250284B2 (en) | Adaptive memory allocation of a second data storage volume based on an updated history of capacity of a first data volume | |
JP5531091B2 (ja) | Computer system and load balancing control method therefor | |
JP5317807B2 (ja) | File control system and file control computer used therefor | |
JP4884041B2 (ja) | Storage system that issues optimal I/O commands to automatically expandable volumes, and control method therefor | |
JP2008065525A (ja) | Computer system, data management method, and management computer | |
JP2007115019A (ja) | Computer system that distributes storage access load, and control method therefor | |
JP2009146228A (ja) | Backup device, backup method, and backup program | |
US7849264B2 (en) | Storage area management method for a storage system | |
US20200097204A1 (en) | Storage system and storage control method | |
US9400723B2 (en) | Storage system and data management method | |
US20160259571A1 (en) | Storage subsystem | |
JP2005092308A (ja) | Disk management method and computer system | |
US11789613B2 (en) | Storage system and data processing method | |
WO2014115184A1 (en) | Storage system and control method for storage system | |
WO2016006072A1 (ja) | Management computer and storage system | |
JP2011209874A (ja) | Storage apparatus and data transfer method using the same | |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15900316, Country of ref document: EP, Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2017532241, Country of ref document: JP, Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 15749014, Country of ref document: US
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15900316, Country of ref document: EP, Kind code of ref document: A1