WO2014163620A1 - System for increasing storage media performance - Google Patents
- Publication number
- WO2014163620A1 (PCT/US2013/034938)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- write
- media devices
- read
- operations
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/1642—Handling requests for interconnection or transfer for access to memory bus based on arbitration with request queuing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/161—Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement
- G06F13/1626—Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement by reordering requests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/50—Control mechanisms for virtual memory, cache or TLB
- G06F2212/502—Control mechanisms for virtual memory, cache or TLB using adaptive policy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7203—Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7208—Multiple device management, e.g. distributing data over multiple flash devices
Definitions
- FIG. 2 illustrates the operation of the write aggregation mechanism 108 in more detail.
- the write aggregation mechanism 108 receives multiple different write operations 102 from clients 106.
- the write operations 102 include client addresses and associated data D1, D2, and D3.
- the client addresses provided by the clients 106 in the write operations 102 may be random or sequential addresses.
- the write aggregation mechanism 108 aggregates the write data D1, D2, and D3 into an aggregation buffer 152.
- the data for the write operations 102 may be aggregated until, for example, a particular amount of data resides in buffer 152.
- the write aggregation mechanism 108 may aggregate the write data into a 4 megabyte (MB) buffer.
- the indirection mechanism 112 then identifies multiple different media devices 120 within the storage media 114 for storing the data in the 4 MB aggregation buffer 152.
- aggregation occurs until either a specific size has been accumulated in buffer 152 or a specified time from the first client write has elapsed, whichever comes first.
- Other aggregation management techniques will be apparent to persons of skill in the art having the benefit of this discussion.
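- As a minimal illustration of the aggregation policy just described (flush on a size threshold or on a timeout measured from the first buffered client write, whichever comes first), the following Python sketch models the behavior of an aggregation buffer such as buffer 152. The class name, the 4 MB and 100 ms parameters, and the flush callback are illustrative assumptions, not details taken from this document.

```python
import time

class AggregationBuffer:
    """Sketch of a write aggregation buffer: client writes are held until a
    size threshold or a timeout from the first buffered write is reached."""

    def __init__(self, flush_fn, max_bytes=4 * 1024 * 1024, max_wait_s=0.1):
        self.flush_fn = flush_fn        # called with the aggregated (address, data) list
        self.max_bytes = max_bytes      # e.g. a 4 MB aggregation buffer
        self.max_wait_s = max_wait_s    # timeout measured from the first client write
        self.pending = []               # list of (client_address, data)
        self.size = 0
        self.first_write_time = None

    def write(self, client_address, data):
        if self.first_write_time is None:
            self.first_write_time = time.monotonic()
        self.pending.append((client_address, data))
        self.size += len(data)
        self._maybe_flush()

    def _maybe_flush(self):
        # A real system would also need a timer so the timeout fires even when
        # no further writes arrive; this sketch only checks on each write.
        timed_out = (self.first_write_time is not None and
                     time.monotonic() - self.first_write_time >= self.max_wait_s)
        if self.size >= self.max_bytes or timed_out:
            self.flush_fn(self.pending)   # one large sequential write of the whole block
            self.pending, self.size, self.first_write_time = [], 0, None
```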
- Aggregating data for multiple write operations into sequential write operations can reduce the overall latency for each individual write operation.
- flash SSDs can typically write a sequential set of data faster than random writes of the same amount of data. Therefore, aggregating multiple write operations into a sequential write set can reduce the overall access time required for completing the write operations to storage media 114.
- the data associated with write operations 102 may not necessarily be aggregated.
- the write aggregation mechanism 108 may not be used and random individual write operations may be individually written into multiple different media devices 120 without first being aggregated in aggregation buffer 152.
- the indirection mechanism 112 maps the addresses for data D1, D2, and D3 to physical addresses in different media devices 120.
- the data D1, D2, and D3 in the aggregation buffer 152 is then written into the identified media devices 120 in the storage media 114.
- the clients 106 use an indirection table in the indirection mechanism 112 to identify the locations in particular media devices 120 where the read data is located.
- FIG. 3 illustrates in more detail one of the iterative write schemes used by the indirection mechanism 112 for writing data into different media devices 120.
- the indirection mechanism 112 had previously received write operations identifying three client addresses A1, A2, and A3 associated with data D1, D2, and D3, respectively.
- the iterative write mechanism 110 writes data D1 for the first address A1 sequentially one-at-a-time into physical address P1 of three media devices 1, 2, and 3.
- the iterative write mechanism 110 then writes the data D2 associated with address A2 sequentially one-at-a-time into physical address P2 of media devices 1, 2, and 3, and then writes the data D3 associated with client address A3 sequentially one-at-a-time into physical address P3 of media devices 1, 2, and 3.
- the same data D1, D2, and D3 now resides in each of the three media devices 1, 2, and 3.
- the indirection mechanism 112 can now selectively read data D1, D2, and D3 from any of the three media devices 1, 2, or 3.
- the indirection mechanism 112 may currently be writing data into one of the media devices 120 and may also receive a read operation for data that is contained in the same media device. Because the writes are iterative, only one of the media devices 1, 2, or 3 is used at any one time for performing write operations. Since the data for the read operation was previously stored in three different media devices 1, 2, and 3, the indirection mechanism 112 can access one of the other two media devices, not currently being used in a write operation, to concurrently service the read operation. Thus, the write to the media device 120 need not create any bottlenecks for read operations.
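- A minimal sketch of the FIG. 3 write pattern, assuming a toy in-memory model in which each media device is a dictionary keyed by physical address (the model and names are illustrative, not from this document). The point is that the copies are written strictly one device at a time, so every previously written block stays readable on the devices not currently being written.

```python
def iterative_write(group, physical_address, data):
    """Write the same data to the same physical address on every device in the
    group, strictly one device at a time (never in parallel)."""
    for device in group:
        # While this one device is being written, the other devices in the
        # group remain free to service reads of previously written data.
        device[physical_address] = data

# Layout after the FIG. 3 example: D1, D2, and D3 each end up in all three devices.
devices = [dict() for _ in range(3)]          # toy stand-ins for media devices 1, 2, 3
for address, payload in [("P1", "D1"), ("P2", "D2"), ("P3", "D3")]:
    iterative_write(devices, address, payload)
assert all(device == {"P1": "D1", "P2": "D2", "P3": "D3"} for device in devices)
```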
- FIG. 4 shows another write scheme where at least one read operation is guaranteed not to be blocked by any write operations.
- the iterative write mechanism 110 writes the data D1, D2, and D3 into two different media devices 120.
- the same data D1 associated with client address A1 is written into physical address P1 in media devices 3 and 6.
- the same data D2 associated with address A2 is written into physical address P1 in media devices 2 and 5.
- the same data D3 associated with address A3 is written into physical address P1 in media devices 1 and 4.
- FIG. 5 shows another iterative write scheme where two concurrent reads are arranged so as not to be blocked by the iterative write operations.
- the iterative write mechanism 110 writes the data D1 associated with address A1 into physical address P1 in media devices 2, 4, and 6.
- the same data D2 associated with address A2 is written into physical address location P1 in media devices 1, 3, and 5, and the data D3 associated with address A3 is written into physical address location P2 in media devices 2, 4, and 6.
- Each block of data D1, D2, and D3 is written into three different media devices 120 and only one of the media devices will be used at any one time for writing data. Three different media devices 120 will have data that can service any read operation. Therefore, the iterative write scheme in FIG. 5 allows a minimum of two read operations to be performed at the same time.
- FIG. 6 shows another iterative write scheme that allows a minimum of five concurrent reads without blocking by write operations.
- the iterative write mechanism 110 writes the data D1 associated with address A1 into physical address locations P1 in all of the six media devices 1-6.
- the data D2 associated with address A2 is written into physical address locations P2 in all media devices 1-6, and the data D3 associated with address A3 is written into physical address locations P3 in all media devices 1-6.
- the same data is written into each of the six media devices 120, and only one of the media devices 120 will be used at any one time for write operations. Therefore, five concurrent reads are possible from the media devices 120 as configured in FIG. 6.
- the sequential iterative write schemes described above are different from data mirroring, where data is written into different devices at the same time, blocking all other memory accesses during the mirroring operation. Striping spreads data over different disks, but the data is not duplicated on different memory devices and is therefore not separately accessible from multiple different memory devices.
- the media devices are written using large sequential blocks of data (the size of the aggregation buffer) such that the random and variable-sized user write stream is converted into a sequential and uniformly-sized media write stream.
- FIGS. 7 and 8 show how the different write schemes in FIGS. 4-6 can be dynamically selected according to a particular performance index assigned to the write operations.
- FIG. 7 shows a performance index table 200 that contains different performance indexes 1, 2, and 3 in column 202.
- the performance indexes 1, 2, and 3 are associated with the write schemes described in FIGS. 4, 5, and 6, respectively.
- Performance index 1 has an associated number of 2 write iterations in column 204. This means that the data for each associated write operation will be written into 2 different media devices 120.
- Column 206 shows which media devices will be written with the same data. For example, as described above in FIG. 4, media devices 1 and 4 will both be written with the same data D3, media devices 2 and 5 will both be written with the same data D2, and media devices 3 and 6 will both be written with the same data D1.
- Performance index 2 in column 202 is associated with three write iterations as indicated in column 204. As described above in FIG. 5, media devices 1, 3, and 5 will all be written with the same data or media devices 2, 4, and 6 will all be written with the same data. Performance index 3 in column 202 is associated with six write iterations as described in FIG. 6, with the same data written into all six of the media devices.
- Selecting performance index 1 allows at least one unblocked read from the storage media.
- Selecting performance index 2 allows at least two concurrent unblocked reads from the storage media and selecting performance index 3 allows at least five concurrent unblocked reads from the storage media.
- a client 106 that needs the highest storage access performance may select performance index 3. For example, a client that needs to read database indexes may need to read a large amount of data all at the same time from many disjoint locations in storage media 114.
- a client 106 that needs to maximize storage capacity or that does not need maximum read performance might select performance index 1.
- the client 106 may only need to read a relatively small amount of data at any one time, or may only need to read blocks of sequential data typically stored in the same media device 120.
- the client 106 may be aware of the importance of the data or what type of data is being written.
- the client accordingly assigns a performance index 1, 2, or 3 to the data by sending a message with a particular performance index to storage access system 100.
- the indirection mechanism 112 will then start using the particular iterative write scheme associated with the selected performance index. For example, if the storage access system 100 receives a performance index of 3 from the client 106, the indirection mechanism 112 will start writing the same data into three different media devices 120.
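- The FIG. 7 mapping can be pictured as a small lookup table; the Python sketch below is a hypothetical encoding for six media devices, with the group assignments taken from the FIGS. 4-6 examples above (the table layout and names are assumptions, not a required implementation).

```python
# Performance index -> number of write iterations and the device groups that
# each receive the same data (media devices numbered 1-6 as in FIGS. 4-6).
PERFORMANCE_INDEX_TABLE = {
    1: {"iterations": 2, "groups": [(1, 4), (2, 5), (3, 6)]},   # at least 1 unblocked read
    2: {"iterations": 3, "groups": [(1, 3, 5), (2, 4, 6)]},     # at least 2 unblocked reads
    3: {"iterations": 6, "groups": [(1, 2, 3, 4, 5, 6)]},       # at least 5 unblocked reads
}

def groups_for_index(performance_index):
    """Return the candidate device groups for a client-selected performance index."""
    return PERFORMANCE_INDEX_TABLE[performance_index]["groups"]
```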
- FIG. 8 shows another table 220 that associates the performance indexes in table 200 with performance targets 224.
- the performance targets 224 can be derived from empirical data that measures and averages read access times for each of the different write iteration schemes used by the storage access system 100. Alternatively, the performance targets 224 can be estimated by dividing a typical read access time for the media devices 120 by the number of unblocked reads that can be performed at the same time.
- a single read access may be around 200 microseconds (μs),
- the performance target for the single unblocked read provided by performance index 1 would therefore be something less than about 200 μs.
- Because two concurrent unblocked reads are provided by performance index 2, the performance target for performance index 2 would be something less than about 100 μs. Because five concurrent unblocked reads are provided by performance index 3, the performance target for performance index 3 would be something less than about 40 μs.
- a client 106 can select a particular performance target 224 and the storage access system 100 will select the particular performance index 202 and iterative write scheme necessary to provide that particular level of read performance. It is also possible, using the described method, to implement a number of media regions with different QoS levels within the same group of physical media devices by allocating or reserving physical address space for each specific QoS level. As physical media space is consumed, it is also possible to reallocate address space to a different QoS level based on current utilization or other metric.
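- As a rough sketch of the FIG. 8 mapping, each index's target can be estimated by dividing a nominal single-device read time (about 200 μs in the example above) by the number of unblocked concurrent reads that index provides, giving roughly 200, 100, and 40 μs. The selection rule below simply picks the bucket that covers a requested target, which reproduces the 75 μs to index 2 example described below for FIG. 9; the rule, numbers, and names are illustrative assumptions rather than an algorithm stated in this document.

```python
SINGLE_READ_US = 200                     # nominal single-device read access time (example value)
UNBLOCKED_READS = {1: 1, 2: 2, 3: 5}     # concurrent unblocked reads per performance index

def index_for_target(target_us):
    """Map a requested read-latency target to a performance index using the
    estimated targets (~200 us, ~100 us, ~40 us for indexes 1, 2, 3)."""
    bounds = sorted((SINGLE_READ_US / reads, index)
                    for index, reads in UNBLOCKED_READS.items())
    for bound, index in bounds:          # 40 us, 100 us, 200 us in ascending order
        if target_us <= bound:
            return index                 # smallest bucket that still covers the target
    return bounds[-1][1]                 # target laxer than every bucket: index 1 suffices

assert index_for_target(75) == 2         # matches the 75 us example described below
```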
- FIG. 9 is a flow diagram showing one example of how the storage access system 100 in FIG. 1 performs write operations.
- the storage access system 100 receives some indication that write data is associated with performance index 2. This could be a message sent from the client 106, a preconfigured parameter loaded into the storage access system 100, or the storage access system 100 could determine the performance index based on the particular client or a particular type of identified data.
- the client 106 could send a message along with the write data or the storage access system 100 could be configured to use performance index 2 based on different programmed criteria such as time of day, client identifier, type of data, or the like.
- a performance target value 224 could be identified by the storage access system 100 in operation 304.
- the client 106 could send a message to the storage access system 100 in operation 304 requesting a performance target of 75 μs.
- the performance target could also be preconfigured in the storage access system 100 or could be identified dynamically by the storage access system 100 based on programmed criteria.
- the storage access system 100 uses table 220 in FIG. 8 to identify the performance index associated with the identified performance target of 75 μs. In this example, the system 100 selects performance index 2 since 75 μs is less than the 100 μs value in column 224 of table 220.
- the next free media device group is identified.
- the first write group includes media devices 1, 3, and 5, and the second group includes media devices 2, 4, and 6 (see FIGS. 5 and 7).
- media devices 2, 4, and 6 were the last group of media devices that were written to by the storage access system 100. Accordingly, the least recently used media device group is identified as media devices 1, 3, and 5 in operation 306.
- write data received from the one or more clients 106 is placed into the aggregation buffer 152 (FIG. 2) in operation 308 until the aggregation buffer is full in operation 310.
- the aggregation buffer 152 may be 4 MBs.
- the write aggregation mechanism 108 in FIG. 1 continues to place write data associated with performance index 2 into the aggregation buffer 152 until the aggregation buffer 152 reaches some threshold close to 4 MBs.
- the storage access system 100 then writes the aggregated block of write data into the media device as previously described in FIGS. 3-6.
- the same data is written into media device 1 in operation 312, media device 3 in a next sequential write operation 314, and media device 5 in a third sequential write operation 316.
- the physical address locations in media devices 1, 3, and 5 used for storing the data are then added to an indirection table in the indirection mechanism 112 in operation 318.
- the aggregation buffer 152 is refilled and the next group of media devices 2, 4, and 6 are used in the next iterative write to storage media 114.
- a different aggregation buffer, which may have a different size or management criteria, can be used for other write data associated with other performance indexes.
- the data is iteratively written to the least recently used group of media devices 120 associated with that particular performance index (in this case, the 2, 4, and 6 group).
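- Putting the FIG. 9 steps together, the sketch below outlines the flush path: pick the least recently used device group for the data's performance index, write the aggregated block to each device of that group in turn, and record the placement in the indirection table. The class, helper names, and data structures are hypothetical and reuse the PERFORMANCE_INDEX_TABLE shape from the earlier sketch.

```python
import collections

class IterativeWriter:
    """Hypothetical sketch of the FIG. 9 flush path."""

    def __init__(self, devices, index_table):
        self.devices = devices                         # device id -> {physical location: data}
        self.index_table = index_table                 # performance index -> {"groups": [...]}
        self.last_used = collections.defaultdict(int)  # device group -> logical timestamp
        self.indirection = {}                          # client address -> (group, location)
        self.clock = 0

    def choose_lru_group(self, performance_index):
        groups = self.index_table[performance_index]["groups"]
        return min(groups, key=lambda g: self.last_used[g])   # least recently written group

    def flush(self, performance_index, base_address, aggregated):
        """aggregated: list of (client_address, data) taken from the aggregation buffer."""
        group = self.choose_lru_group(performance_index)       # e.g. media devices 1, 3, 5
        for device_id in group:                                # one device at a time
            for offset, (_, data) in enumerate(aggregated):
                self.devices[device_id][(base_address, offset)] = data
        for offset, (client_address, _) in enumerate(aggregated):
            self.indirection[client_address] = (group, (base_address, offset))
        self.clock += 1
        self.last_used[group] = self.clock                     # rotates groups between flushes
```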
- FIG. 10 shows how a first read operation 340 to address Al is handled by the storage access system 100.
- the iterative write scheme previously shown in FIG. 5 was used to store data into multiple different media devices in storage media 114.
- the indirection mechanism 112 previously stored the same data D1 sequentially into media devices 2, 4, and 6 at physical address P1.
- the next data D2 was stored sequentially into media devices 1, 3, and 5 at physical address P1.
- indirection table 344 in indirection mechanism 112 maps the address A1 in read operation 340 to a physical address P1 in media devices 2, 4, and 6. It should be noted that as long as the data is stored at the same physical address in each of the media devices, the indirection table 344 only needs to identify one physical address P1 and the associated group number for the media devices 2, 4, and 6 where the data associated with address A1 is stored. This reduces the number of entries in table 344.
- the indirection mechanism 112 identifies the physical address associated with the client address A1 and selects one of the three media devices 2, 4, or 6 that is currently not being used. The indirection mechanism 112 reads the data D1 from the selected media device and forwards the data back to the client 106.
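- Because the same physical address is used on every device of a group, one table entry per client address is enough; a minimal sketch of such an entry and lookup, with hypothetical names, is shown below.

```python
# One entry per client address: (device group, shared physical address).
indirection_table = {"A1": ("group-2", "P1"),   # D1 stored at P1 on media devices 2, 4, 6
                     "A2": ("group-1", "P1")}   # D2 stored at P1 on media devices 1, 3, 5
device_groups = {"group-1": (1, 3, 5), "group-2": (2, 4, 6)}

def read_candidates(client_address):
    """Return the shared physical address and the devices holding a copy of the data."""
    group, physical = indirection_table[client_address]
    return physical, device_groups[group]

assert read_candidates("A1") == ("P1", (2, 4, 6))
```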
- FIG. 11 shows how the storage access system 100 handles a read operation 342 to address A2.
- the data D2 associated with address A2 was previously stored in physical address P1 of media devices 1, 3, and 5.
- the indirection mechanism 112 mapped address A2 to physical address P1 in media devices 1, 3, and 5.
- the indirection mechanism 112 identifies the physical address P1 associated with the read address A2 and selects one of the three media devices 1, 3, or 5 that is currently not being used.
- the indirection mechanism 112 reads the data D2 from the selected one of media devices 1, 3, or 5 and forwards the data D2 back to the client 106.
- FIG. 12 is a flow diagram illustrating in more detail how the indirection mechanism 112 determines what data to read from which of the media devices 120 in the storage media 114.
- data D1 has been previously written into the storage media 114 as described above in FIG. 5 and the indirection table 344 in FIG. 10 has been updated by the indirection mechanism 112.
- the indirection mechanism receives a read operation for address A1 from one of the clients 106 (FIG. 1). If the indirection table 344 does not include an entry for address A1 in operation 382, a read failure is reported in operation 396 and the read request is completed in operation 394.
- otherwise, three candidate media addresses on media devices 2, 4, and 6 are identified by the indirection mechanism in operation 382.
- the indirection mechanism 112 selects one of the identified media devices in operation 384. If the selected media device is currently being used in a write operation in operation 386, the next one of the three identified media devices is selected in operation 384.
- the indirection mechanism 112 selects the next media device from the group in operation 384. This process is repeated until a free media device is identified or the last media device in indirection table 344 of FIG. 10 is identified in operation 390.
- the data D1 in the available media device 2, 4, or 6 is read by the indirection mechanism and returned to the client 106 in operation 392.
- the read and write status of all three media devices 2, 4, and 6 can be determined by the indirection mechanism 112 at the same time by monitoring the individual read and write status lines for all of the media devices.
- the indirection mechanism 112 could then simultaneously eliminate the unavailable media devices from consideration and then choose the least recently used one of the remaining available media devices. For example, media device 4 may currently be in use and media devices 2 and 6 may currently be available.
- the indirection mechanism 112 reads the data D1 at physical address location P1 from the least recently used one of media devices 2 and 6 in operation 392.
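- A sketch of the FIG. 12 selection loop, assuming each media device exposes a busy flag and a last-use timestamp; the attribute and function names are illustrative, and the fallback when every replica is busy is a simplification of the flow described above.

```python
from dataclasses import dataclass, field

@dataclass
class MediaDevice:
    storage: dict = field(default_factory=dict)
    busy: bool = False          # True while the device services a write
    last_used: int = 0          # logical timestamp of the last access

def read(client_address, indirection_table, device_groups, devices):
    """Return the data for client_address from an idle replica, preferring the
    least recently used available device."""
    entry = indirection_table.get(client_address)
    if entry is None:
        raise KeyError(f"read failure: no mapping for {client_address}")
    group, physical = entry
    candidates = [devices[d] for d in device_groups[group]]    # e.g. media devices 2, 4, 6
    available = [d for d in candidates if not d.busy]          # skip devices being written
    if not available:
        available = candidates      # all replicas busy: fall back and wait on one
    chosen = min(available, key=lambda d: d.last_used)         # least recently used replica
    return chosen.storage[physical]
```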
- any combination of performance indexes and number of media devices can be used for storing different data.
- the client 106 may select performance index 1 for a first group of data and select performance index 3 for a more performance-critical second group of data.
- the indirection mechanism 112 can write the data to the necessary number of media devices using tables 200 and 220 in FIGS. 7 and 8.
- the indirection mechanism 112 uses the indirection table 344 in FIGS. 10 and 11 to map the client addresses to particular physical addresses in the identified group of media devices 120.
- the different performance levels for the different performance indexed data is then
- the system described above can use dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware. [0088] For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules or features of the flexible interface can be implemented by themselves, or in combination with other operations in either hardware or software.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A storage access system provides consistent memory access times for storage media with inconsistent access latency and reduces bottlenecks caused by the variable time delays during memory write operations. Data is written iteratively into multiple different media devices to prevent write operations from blocking all other memory access operations. The multiple copies of the same data then allow subsequent read operations to avoid the media devices currently servicing the write operations. Write operations can be aggregated together to improve the overall write performance to a storage media. A performance index determines how many media devices store the same data. The number of possible concurrent reads varies according to the number of media devices storing the data. Therefore, the performance index provides different selectable Quality of Service (QoS) for data in the storage media.
Description
SYSTEM FOR INCREASING STORAGE MEDIA PERFORMANCE
BACKGROUND
[0001] Flash Solid State Devices (SSDs) differ from traditional rotating disk drives in a number of aspects. Flash SSD devices have certain undesirable aspects. In particular, flash SSD devices suffer from poor random write performance that commonly degrades over time. Because flash media has a limited number of write cycles (a physical limitation of the storage material that eventually causes the device to "wear out"), write performance is also unpredictable.
[0002] Internally, the flash SSD periodically rebalances the written sections of the media in a process called "wear leveling". This process assures that the storage material is used evenly thus extending the viable life of the device.
However, the wear leveling prevents a user of the storage system from
anticipating, or definitively knowing, when and for how long such background operations may occur (lack of transparency). Another example of a rebalancing operation is the periodic defragmentation caused by the random nature of the user writes over the flash media address space.
[0003] For example, the user cannot access data in the flash SSD while these wear leveling or defragmentation operations are being performed and the flash SSD devices do not provide prior notification of when these background operations are going to occur. This prevents applications from anticipating the storage non-availability and scheduling other tasks during the flash SSD rebalancing operations. As a result, the relatively slow and inconsistent write times of the flash devices create bottlenecks for the relatively faster read operations. Vendors typically refer to all background operations as "garbage collection" without specifying the type, duration or frequency of the underlying events.
SUMMARY
[0004] A system is described herein, having a plurality of storage media devices, and a processor configured to receive data for a write operation, to identify a group of three or more of the media devices for writing the data and to sequentially write the data into each of the three or more media devices in the identified group.
[0005] The processor is further configured to receive a read operation and to identify one of the media devices currently being written with the data; and to concurrently read data from address locations associated with the read operation from two or more of the media devices in the group not currently being written with the data.
[0006] In an aspect, the media devices may have variable write latencies; and the processor is further configured to normalize read latencies for the media devices by concurrently reading the data from multiple ones of the media devices in the group that are not being used for writing data. The media devices may be, for example flash memory devices, hard disk devices or the like.
[0007] In a further aspect, the processor may be configured to aggregate together a first set of the data for a first write operation, to identify a first performance index associated with the first set of the data and to write the aggregated first set of data into sequential physical address locations, so a first number of the media devices in the group of media devices associated with the first performance index can be read without being blocked by the writing of the aggregated first set of data;
[0008] Further, the processor may be configured to aggregate together a second set of the data for a second write operation, to identify a second performance index associated with the second set of the data; and, to write the aggregated second set of data into sequential physical address locations so that a second number of the media devices in an additional group of the media devices associated with the second performance index can be read without being blocked by the writing of the
aggregated second set of data. A same physical address may be used to store the data in each of the media devices.
[0009] In an aspect, a size of the aggregated first set and the aggregated second set of data is variable and based on when the write operations are identified.
[0010] Moreover, the system may identify a performance index for the write operations; and identify a number of two or more of the media devices in the group of media devices for providing concurrent read operations based on the performance index. The processor may be further configured to write the data into one additional media device in addition to the identified number of the two or more media devices for providing concurrent read operations.
[0011] The processor may also be configured to identify a performance target for the particular write operation and map the performance target to the particular performance index such as a read access time of the media devices or the number of media devices in the identified group.
[0012] A memory may be provided to store an indirection table that maps write addresses used in the write operations to separate independently accessible locations in each one of the media devices in the identified group.
[0013] In yet another aspect, an apparatus is disclosed having a plurality of storage elements and a storage access system configured to write the same data into the storage elements sequentially one at a time so a number of the storage elements remain available for read operations while the other storage elements are being written with the data. The number of storage elements available for the read operations is associated with a selectable performance index;
[0014] Read addresses for the read operations may be mapped to multiple different ones of the storage elements so that data may be concurrently read during the read operations from the number of the storage elements associated with the performance index and not currently being used by the write operations. The storage elements may be flash solid state devices.
[0015] In a further aspect, the storage elements may be independently read and write accessible; and, the storage access system may be configured to iteratively write a same independently accessible copy of the same data into each of the multiple different storage elements to avoid blocking access of the read operations to the number of the storage elements associated with the performance index during the write operations.
[0016] The storage access system may normalize read access times for variable-latency storage elements by writing the data to three or more different storage elements and then reading back the data from two or more of the storage elements that are not currently being used for the write operations.
[0017] In another aspect, the storage access system may also be configured to aggregate together a first set of the data for a first set of the write operations and to write the first set of the data into sequential physical address locations for each one of a first group of the storage elements. The storage access system may be configured to perform concurrent read operations from the first group of storage elements not currently being written with the first set of data, to aggregate together a second set of the data for a second set of the write operations and to write the second set of the data into sequential physical address locations for each of a second group of the storage elements different from the first group of storage elements. The storage access system may also be configured to perform concurrent read operations from the second group of storage elements not currently being written with the second set of data.
[0018] An indirection table may be used to map the read addresses to physical addresses in the storage elements. The performance index may map to different numbers of groups of the storage elements and different numbers of storage elements within groups.
[0019] In a further aspect, a method is disclosed for receiving data for write operations, for aggregating together a set of the data for a set of the write operations; identifying a performance index for the set of the data and for performing sequential write operations for the aggregated set of the data into
sequential physical address locations for each one of a group of media devices so a number of the media devices can be accessed by read operations during the sequential write operations. The number of the media devices that can be accessed by the read operations during the write operations may be based on a performance index.
[0020] In a further aspect, an additional set of data may be aggregated for an additional set of the write operations including identifying an additional performance index for the additional set of the data;
[0021] Additional sequential write operations for the aggregated additional set of the data into sequential physical address locations for each one of an additional group of media devices may be performed so a number of the media devices can be accessed by additional read operations during the additional sequential write operations. The number of the media devices that can be accessed by the additional read operations during the additional sequential write operations may be based on the additional performance index.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] FIG. 1 is a block diagram of a storage access system;
[0023] FIG. 2 is a block diagram showing the storage access system of FIG. 1 in more detail;
[0024] FIG. 3 is a block diagram showing how data is iteratively stored in different media devices;
[0025] FIGS. 4-6 are block diagrams showing other schemes for iteratively storing data into different media devices;
[0026] FIG. 7 shows how the storage schemes in FIGS. 4-6 are mapped to different performance indexes;
[0027] FIG. 8 shows how the storage schemes in FIGS. 4-6 are mapped to different performance targets;
[0028] FIG. 9 is a flow diagram showing how iterative write operations are performed by the storage access system in FIG. 1;
[0029] FIGS. 10 and 11 show how the storage access system maps read operations to locations in different media devices; and
[0030] FIG. 12 is a flow diagram showing how the storage access system selects one of the media devices for a read operation.
DETAILED DESCRIPTION
[0031] FIG. 1 shows a storage access system 100 that provides more consistent access times for storage media with inconsistent access latency and reduces bottlenecks caused by the slow and variable delays for write operations. Data for client write operations are aggregated to improve the overall performance of write operations to a storage media. The aggregated data is then written iteratively into multiple different media devices to prevent write operations from blocking access to the storage media during read operations. The single aggregated write operation is lower latency than if the client writes had been individually written.
[0032] The storage access system 100 includes a write aggregation mechanism 108, iterative write mechanism 110, and an indirection mechanism 112. In one embodiment, the operations performed by the write aggregation mechanism 108, iterative write mechanism 110, and an indirection mechanism 112 are carried out by one or more programmable processors 105 executing software modules located in a memory 107. In other embodiments, some operations in the storage access system 100 may be implemented in hardware and other elements implemented in software.
[0033] In one embodiment, a storage media 1 14 includes multiple different media devices 120 that are each separately read and write accessible by the storage access system 100. In one embodiment, the media devices 120 are flash Solid State Devices (SSDs) but could be or include any other type of storage device that
may benefit from the aggregation and/or iterative storage schemes described below.
[0034] Clients 106 comprise any application that needs to access data in the storage media 114. For example, clients 106 could comprise software applications in a database system that need to read and write data to and from storage media 114 responsive to communications with users via a Wide Area Network or Local Area Network (not shown). The clients 106 may also consist of a number of actual user applications or a single user application presenting virtual storage to other users indirectly. In another example, the clients 106 could include software applications that present storage to a web application operating on a web server. It should also be understood that the term "clients" simply refers to a software application and/or hardware that uses the storage media 114 or an abstraction of this media by means of a volume manager or other intermediate device.
[0035] In one embodiment, the clients 106, storage access system 100, and storage media 114 may all be part of the same appliance that is located on a server computer. In another example, any combination of the clients 106, storage access system 100, and storage media 114 may operate in different computing devices or servers. In other embodiments, the storage access system 100 may be operated in conjunction with a personal computer, work station, portable video or audio device, or some other type of consumer product. Of course these are just examples, and the storage access system 100 can operate in any computing environment and with any application that needs to write and read data to and from storage media 114.
[0036] The storage access system 100 receives write operations 102 from the clients 106.
[0037] The write aggregation mechanism 108 aggregates data for the multiple different write operations 102. For example, the write aggregation mechanism 108 may aggregate four megabytes (MBs) of data from multiple different write operations 102 together into a data block.
[0038] The indirection mechanism 112 then uses a performance indexing scheme described below to determine which of the different media devices 120 will store the data in the data block. Physical addresses in the selected media devices 120 are then mapped by the indirection mechanism 112 to the client write addresses in the write operations 102. This mapping is necessary because a single aggregated write occurs to a single address while the client writes can consist of multiple noncontiguous addresses. Each client write address can thus be mapped to a physical address that is, in turn, a subrange of the address range of the aggregated write.
[0039] The iterative write mechanism 110 iteratively (and serially, one at a time) writes the aggregated data into each of the different selected media devices 120. This iterative write process uses only one media device at any one time and stores the same data into multiple different media devices 120. Because the same data is located in multiple different media devices 120 and only one media device 120 is written to at any one time, read operations 104 always have access to at least one of the media devices 120 for any data in storage media 114. In other words, the iterative write scheme prevents or reduces the likelihood of write operations creating bottlenecks and preventing read operations 104 from accessing the storage media 114. As an example, consider some initial data that was written as part of an aggregated write operation over three devices. If at most one of these devices is being written (with future data to other locations) at a time, there will always be at least two devices from which the original data can be read without stalling on a pending write operation. This assurance may be provided irrespective of the duration of any particular write operation.
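By way of illustration only, the following minimal Python sketch shows the general idea of an iterative, serial write: the same aggregated block is written to a group of devices one device at a time, so at most one device in the group is ever unavailable to readers. The MediaDevice class, its write() method, and the busy_writing flag are hypothetical names chosen for the example, not elements of the disclosed system.

```python
# Illustrative sketch only; device and method names are hypothetical.

class MediaDevice:
    def __init__(self, device_id):
        self.device_id = device_id
        self.busy_writing = False

    def write(self, physical_address, block):
        self.busy_writing = True
        try:
            # Device-specific write of `block` at `physical_address` would go here.
            pass
        finally:
            self.busy_writing = False


def iterative_write(devices, physical_address, block):
    """Write the same block serially into every device of the group."""
    for device in devices:
        # Only this one device is busy with a write right now; every other
        # device in the group can still service read operations.
        device.write(physical_address, block)
```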
[0040] A read operation 104 may be received by the storage access system 100 while the iterative write mechanism 110 is iteratively writing data (serially) to multiple different media devices 120. The indirection mechanism 112 reads an address associated with the read operation 104 and then uses an indirection table to determine where the data associated with the read operation is located in a plurality of the media devices 120.
[0041] If one of the identified media devices 120 is busy (currently being written to), the indirection mechanism can access the data from a different one of the media devices 120 that also stores the same data. Thus, the read operation 104 can continue while other media devices 120 are concurrently being used for write operations and even other read operations. The access times for read operations are normalized since the variable latencies associated with write operations no longer create bottlenecks for read operations.
[0042] FIG. 2 describes the operation of the write aggregation mechanism 108 in more detail. The write aggregation mechanism 108 receives multiple different write operations 102 from clients 106. The write operations 102 include client addresses and associated data D1, D2, and D3. The client addresses provided by the clients 106 in the write operations 102 may be random or sequential addresses.
[0043] The write aggregation mechanism 108 aggregates the write data D1, D2, and D3 into an aggregation buffer 152. The data for the write operations 102 may be aggregated until a particular amount of data resides in buffer 152. For example, the write aggregation mechanism 108 may aggregate the write data into a 4 Megabyte (MB) buffer. The indirection mechanism 112 then identifies multiple different media devices 120 within the storage media 114 for storing the data in the 4 MB aggregation buffer 152. In another embodiment, aggregation occurs until either a specific size has been accumulated in buffer 152 or a specified time from the first client write has elapsed, whichever comes first.
[0044] Some examples of how the indirection mechanism 112 aggregates data for random write operations into a single data block and writes the data into media devices 120 are described in co-pending patent application Ser. No. US 12/759604, which claims priority to co-pending application Ser. No. 61/170,472, entitled STORAGE SYSTEM FOR INCREASING PERFORMANCE OF STORAGE MEDIA, filed April 17, 2009, both of which are herein incorporated by reference in their entirety.
[0045] FIG. 2 illustrates the operation of the write aggregation mechanism 108 in more detail. The write aggregation mechanism 108 receives multiple different write operations 102 from clients 106. The write operations 102 include client addresses and associated data D1, D2, and D3. The client addresses provided by the clients 106 in the write operations 102 may be random or sequential addresses.
[0046] The write aggregation mechanism 108 aggregates the write data D1, D2, and D3 into an aggregation buffer 152. The data for the write operations 102 may be aggregated until, for example, a particular amount of data resides in buffer 152. For example, the write aggregation mechanism 108 may aggregate the write data into a 4 Megabyte (MB) buffer. The indirection mechanism 112 then identifies multiple different media devices 120 within the storage media 114 for storing the data in the 4 MB aggregation buffer 152. In another example, aggregation occurs until either a specific size has been accumulated in buffer 152 or a specified time from the first client write has elapsed, whichever comes first. Other aggregation management techniques will be apparent to persons of skill in the art having the benefit of this discussion.
[0047] Aggregating data for multiple write operations into sequential write operations can reduce the overall latency for each individual write operation. For example, flash SSDs can typically write a sequential set of data faster than random writes of the same amount of data. Therefore, aggregating multiple write operations into a sequential write set can reduce the overall access time required for completing the write operations to storage media 114.
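As an illustration of the size-or-timeout aggregation just described, the sketch below accumulates client writes until an assumed 4 MB threshold is reached or an assumed delay has elapsed since the first buffered write, and then hands one sequential block to a caller-supplied flush callback. The class name, the threshold, the delay value, and the callback signature are all assumptions made for this example.

```python
# Illustrative sketch only; the 4 MB threshold, the delay, and all names are
# assumed values for the example, not parameters of the disclosed system.
import time

class WriteAggregator:
    def __init__(self, flush_callback, max_bytes=4 * 1024 * 1024, max_delay_s=0.01):
        self.flush_callback = flush_callback   # called with (client_writes, block)
        self.max_bytes = max_bytes
        self.max_delay_s = max_delay_s
        self.pending = []                      # list of (client_address, data)
        self.first_write_time = None

    def add_write(self, client_address, data):
        if not self.pending:
            self.first_write_time = time.monotonic()
        self.pending.append((client_address, data))
        # The timeout is only checked when a new write arrives; a real system
        # would also flush from a timer.
        if self._should_flush():
            self.flush()

    def _should_flush(self):
        size = sum(len(data) for _, data in self.pending)
        elapsed = time.monotonic() - self.first_write_time
        return size >= self.max_bytes or elapsed >= self.max_delay_s

    def flush(self):
        if not self.pending:
            return
        block = b"".join(data for _, data in self.pending)   # one sequential block
        self.flush_callback(list(self.pending), block)
        self.pending.clear()
        self.first_write_time = None
```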
[0048] In another embodiment, the data associated with write operations 102 may not necessarily be aggregated. For example, the write aggregation mechanism 108 may not be used and random individual write operations may be individually written into multiple different media devices 120 without first being aggregated in aggregation buffer 152.
[0049] The indirection mechanism 112 maps the addresses for data D1, D2, and D3 to physical addresses in different media devices 120. The data D1, D2, and D3 in the aggregation buffer 152 is then written into the identified media devices 120 in the storage media 114. In subsequent read operations 104, the clients 106 use an indirection table in the indirection mechanism 112 to identify the locations in particular media devices 120 where the read data is located.
[0050] FIG. 3 illustrates in more detail one of the iterative write schemes used by the indirection mechanism 112 for writing data into different media devices 120. The indirection mechanism 112 had previously received write operations identifying three client addresses A1, A2, and A3 associated with data D1, D2, and D3, respectively.
[0051] The iterative write mechanism 110 writes the data D1 for the first address A1 one-at-a-time into physical address P1 of three media devices 1, 2, and 3. The iterative write mechanism 110 then writes the data D2 associated with address A2 one-at-a-time into physical address P2 of media devices 1, 2, and 3, and then writes the data D3 associated with client address A3 one-at-a-time into physical address P3 of media devices 1, 2, and 3. There is now a copy of D1, D2, and D3 in each of the three media devices 1, 2, and 3. In most cases, the writes to media devices 1, 2, and 3 would each have been single writes containing the aggregated data D1, D2, and D3 written at physical address P1, while addresses P2 and P3 are the subsequent sequential addresses. In either case, the result is that the user data for potentially random addresses A1, A2, and A3 is now written sequentially at the same addresses (P1, P2, and P3) on all three devices.
[0052] The indirection mechanism 112 can now selectively read data D1, D2, and D3 from any of the three media devices 1, 2, or 3. The indirection mechanism 112 may currently be writing data into one of the media devices 120 and may also receive a read operation for data that is contained in the same media devices. Because the writes are iterative, only one of the media devices 1, 2, or 3 is used at any one time for performing write operations. Since the data for the read operation was previously stored in three different media devices 1, 2, and 3, the indirection mechanism 112 can access one of the other two media devices, not currently being used in a write operation, to concurrently service the read operation. Thus, the writes to the media devices 120 may not create any bottlenecks for read operations.
[0053] FIG. 4 shows another write scheme where at least one read operation is guaranteed not to be blocked by any write operations. In this scheme, the iterative write mechanism 110 writes each of the data D1, D2, and D3 into two different media devices 120. For example, the same data D1 associated with client address A1 is written into physical address P1 in media devices 3 and 6. The same data D2 associated with address A2 is written into physical address P1 in media devices 2 and 5, and the same data D3 associated with address A3 is written into physical address P1 in media devices 1 and 4.
[0054] FIG. 5 shows another iterative write scheme where two concurrent reads are arranged so as not to be blocked by the iterative write operations. The iterative write mechanism 110 writes the data D1 associated with address A1 into physical address P1 in media devices 2, 4, and 6. The same data D2 associated with address A2 is written into physical address location P1 in media devices 1, 3, and 5, and the data D3 associated with address A3 is written into physical address location P2 in media devices 2, 4, and 6.
[0055] Each block of data D1, D2, and D3 is written into three different media devices 120, and only one of the media devices will be used at any one time for writing data. Three different media devices 120 therefore hold a copy of the data for any given address, so at least two of them are always available to service a read operation. The iterative write scheme in FIG. 5 therefore allows a minimum of two read operations to be performed at the same time.
[0056] FIG. 6 shows another iterative write scheme that allows a minimum of five concurrent reads without blocking by write operations. The iterative write mechanism 110 writes the data D1 associated with address A1 into physical address locations P1 in all of the six media devices 1-6. The data D2 associated with address A2 is written into physical address locations P2 in all media devices 1-6, and the data D3 associated with address A3 is written into physical address locations P3 in all media devices 1-6.
[0057] The same data is written into each of the six media devices 120, and only one of the media devices 120 will be used at any one time for write operations. Therefore, five concurrent reads are possible from the media devices 120 as configured in FIG. 6.
[0058] The sequential iterative write schemes described above are different from data mirroring, where data is written into different devices at the same time and all other memory accesses are blocked during the mirroring operation. Striping spreads data over different disks, but the data is not duplicated on different memory devices and is therefore not separately accessible from multiple different memory devices. Here, the media devices are written using large sequential blocks of data (the size of the aggregation buffer) such that the random and variable-sized user write stream is converted into a sequential and uniformly-sized media write stream.
[0059] FIGS. 7 and 8 show how the different write schemes in FIGS. 4-6 can be dynamically selected according to a particular performance index assigned to the write operations. FIG. 7 shows a performance index table 200 that contains different performance indexes 1, 2, and 3 in column 202. The performance indexes 1, 2, and 3 are associated with the write schemes described in FIGS. 4, 5, and 6, respectively.
[0060] Performance index 1 has an associated number of 2 write iterations in column 204. This means that the data for each associated write operation will be written into 2 different media devices 120. Column 206 shows which media devices will be written with the same data. For example, as described above in FIG. 4, media devices 1 and 4 will both be written with the same data D3, media devices 2 and 5 will both be written with the same data D2, and media devices 3 and 6 will both be written with the same data D1.
[0061] Performance index 2 in column 202 is associated with three write iterations as indicated in column 204. As described above in FIG. 5, media devices 1, 3, and 5 will all be written with the same data, or media devices 2, 4, and 6 will all be written with the same data. Performance index 3 in column 202 is associated with six write iterations as described in FIG. 6, with the same data written into all six of the media devices.
[0062] Selecting performance index 1 allows at least one unblocked read from the storage media. Selecting performance index 2 allows at least two concurrent unblocked reads from the storage media and selecting performance index 3 allows at least five concurrent unblocked reads from the storage media.
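A minimal sketch of how table 200 might be represented follows; the dictionary layout and names are assumptions for illustration, but the numbers mirror FIGS. 4-7: two, three, or six write iterations, with the minimum number of unblocked concurrent reads being one less than the group size.

```python
# Hypothetical encoding of table 200 (FIG. 7); names and layout are assumed.
PERFORMANCE_INDEX_TABLE = {
    1: {"write_iterations": 2, "device_groups": [(1, 4), (2, 5), (3, 6)]},
    2: {"write_iterations": 3, "device_groups": [(1, 3, 5), (2, 4, 6)]},
    3: {"write_iterations": 6, "device_groups": [(1, 2, 3, 4, 5, 6)]},
}

def min_unblocked_reads(performance_index):
    """At most one device in a group is written at a time, so the rest of the
    group remains readable."""
    return PERFORMANCE_INDEX_TABLE[performance_index]["write_iterations"] - 1
```

With this encoding, min_unblocked_reads returns 1, 2, and 5 for indexes 1, 2, and 3, matching the description above.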
[0063] A client 106 that needs the highest storage access performance may select performance index 3. For example, a client that needs to read database indexes may need to read a large amount of data all at the same time from many disjoint locations in storage media 114.
[0064] A client 106 that needs to maximize storage capacity or that does not need maximum read performance might select performance index 1. For example, the client 106 may only need to read a relatively small amount of data at any one time, or may only need to read blocks of sequential data typically stored in the same media device 120.
[0065] The client 106 may be aware of the importance of the data or what type of data is being written. The client accordingly assigns a performance index 1, 2, or 3 to the data by sending a message with a particular performance index to the storage access system 100. The indirection mechanism 112 will then start using the particular iterative write scheme associated with the selected performance index. For example, if the storage access system 100 receives a performance index of 2 from the client 106, the indirection mechanism 112 will start writing the same data into three different media devices 120.
[0066] Accordingly, when a read operation reads the data back from the storage media 114, the amount of time required to read that particular data will correspond to the selected performance index. For example, since two concurrent unblocked reads are provided with performance index 2, data associated with performance index 2 can generally be read back faster than data associated with performance index 1. Thus, the performance indexes provide a user selectable Quality of Service (QoS) for different data.
[0067] FIG. 8 shows another table 220 that associates the performance indexes in table 200 with performance targets 224. The performance targets 224 can be derived from empirical data that measures and averages read access times for each of the different write iteration schemes used by the storage access system 100. Alternatively, the performance targets 224 can be estimated by dividing a typical read access time for the media devices 120 by the number of unblocked reads that can be performed at the same time.
[0068] For example, a single read access may take around 200 microseconds (μs). The performance target for the single unblocked read provided by performance index 1 would therefore be something less than about 200 μs. Because two concurrent unblocked reads are provided by performance index 2, the performance target for performance index 2 would be something less than about 100 μs. Because five concurrent unblocked reads are provided by performance index 3, the performance target for performance index 3 would be something less than about 40 μs.
[0069] Thus, a client 106 can select a particular performance target 224 and the storage access system 100 will select the particular performance index 202 and iterative write scheme necessary to provide that particular level of read performance. It is also possible, using the described method, to implement a number of media regions with different QoS levels within the same group of physical media devices by allocating or reserving physical address space for each specific QoS level. As physical media space is consumed, it is also possible to reallocate address space to a different QoS level based on current utilization or other metric.
[0070] FIG. 9 is a flow diagram showing one example of how the storage access system 100 in FIG. 1 performs write operations. In operation 300, the storage access system 100 receives some indication that write data is associated with performance index 2. This could be a message sent from the client 106, a preconfigured parameter loaded into the storage access system 100, or the storage access system 100 could determine the performance index based on the particular client or a particular type of identified data. For example, the client 106 could send a message along with the write data, or the storage access system 100 could be configured to use performance index 2 based on different programmed criteria such as time of day, client identifier, type of data, or the like.
[0071] Alternatively, a performance target value 224 (FIG. 8) could be identified by the storage access system 100 in operation 304. For example, the client 106 could send a message to the storage access system 100 in operation 304 requesting a performance target of 75 μs. The performance target could also be preconfigured in the storage access system 100 or could be identified dynamically by the storage access system 100 based on programmed criteria. In operation 306 the storage access system 100 uses table 220 in FIG. 8 to identify the performance index associated with the identified performance target of 75 μs. In this example, the system 100 selects performance index 2 since 75 μs is less than the 100 μs value in column 224 of table 220.
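The following sketch reproduces this target-to-index mapping under stated assumptions: a typical 200 μs single-device read, targets estimated by dividing that time by the number of guaranteed unblocked reads, and a bucketing rule inferred from the 75 μs example (the request falls under index 2's roughly 100 μs bound but not under index 3's roughly 40 μs bound). The function and variable names are hypothetical.

```python
# Illustrative sketch only; the 200 us figure, the derived targets, and the
# selection rule are assumptions inferred from the example above.
SINGLE_READ_US = 200  # assumed typical single-device read access time

PERFORMANCE_TARGETS_US = {
    1: SINGLE_READ_US / 1,   # < ~200 us (one unblocked read)
    2: SINGLE_READ_US / 2,   # < ~100 us (two concurrent unblocked reads)
    3: SINGLE_READ_US / 5,   # < ~40 us  (five concurrent unblocked reads)
}

def index_for_target(requested_us, targets=PERFORMANCE_TARGETS_US):
    """Pick the index whose target bound most tightly covers the request."""
    chosen = min(targets)                    # default to the loosest index
    for index in sorted(targets):
        if requested_us < targets[index]:
            chosen = index                   # tightest bound the request is under
    return chosen

assert index_for_target(75) == 2             # matches the 75 us example
```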
[0072] In operation 302, the next free media device group is identified. For example, for performance index 2 there are two write groups. The first write group includes media devices 1, 3, and 5, and the second group includes media devices 2, 4, and 6 (see FIGS. 5 and 7). In this example, media devices 2, 4, and 6 were the last group of media devices that were written to by the storage access system 100. Accordingly, the least recently used media device group is identified in operation 302 as media devices 1, 3, and 5.
[0073] In an example, write data received from the one or more clients 106 is placed into the aggregation buffer 152 (FIG. 2) in operation 308 until the aggregation buffer is full in operation 310. For example, the aggregation buffer 152 may be 4 MBs. The write aggregation mechanism 108 in FIG. 1 continues to place write data associated with performance index 2 into the aggregation buffer 152 until the aggregation buffer 152 reaches some threshold close to 4 MBs.
[0074] The storage access system 100 then writes the aggregated block of write data into the media devices as previously described in FIGS. 3-6. In this example, the same data is written into media device 1 in operation 312, media device 3 in a next sequential write operation 314, and media device 5 in a third sequential write operation 316. The physical address locations in media devices 1, 3, and 5 used for storing the data are then added to an indirection table in the indirection mechanism 112 in operation 318.
[0075] If more write data is received associated with performance index 2, the aggregation buffer 152 is refilled and the next group of media devices 2, 4, and 6 is used in the next iterative write to storage media 114. A different aggregation buffer, which may have a different size or management criteria, can be used for other write data associated with other performance indexes. When the other aggregation buffers are filled, the data is iteratively written to the least recently used group of media devices 120 associated with that particular performance index (in this case, the 2, 4, and 6 group).
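A compact sketch of this write path is shown below, assuming the MediaDevice objects from the earlier sketch, a caller-supplied physical-address allocator, and a plain dictionary standing in for the indirection table; group rotation approximates the least-recently-used group selection. All of these names are assumptions for illustration.

```python
# Illustrative sketch only; allocator, table layout, and rotation policy are assumed.
from collections import deque

class GroupRotation:
    """Rotate through the device groups of one performance index, least recently used first."""
    def __init__(self, device_groups):
        self.groups = deque(device_groups)

    def next_group(self):
        group = self.groups.popleft()   # least recently used group
        self.groups.append(group)       # now the most recently used
        return group

def write_aggregated_block(client_writes, block, rotation, devices_by_id,
                           allocate_physical_address, indirection_table):
    group = rotation.next_group()
    base_address = allocate_physical_address(group, len(block))
    for device_id in group:             # serial: one device written at a time
        devices_by_id[device_id].write(base_address, block)
    # Map each client address to the group and its subrange of the block.
    offset = 0
    for client_address, data in client_writes:
        indirection_table[client_address] = (group, base_address + offset)
        offset += len(data)
```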
[0076] FIG. 10 shows how a first read operation 340 to address A1 is handled by the storage access system 100. In this example, the iterative write scheme previously shown in FIG. 5 was used to store data into multiple different media devices in storage media 114. Referring to FIG. 5, the indirection mechanism 112 previously stored the same data D1 sequentially into media devices 2, 4, and 6 at physical address P1. The next data D2 was stored sequentially into media devices 1, 3, and 5 at physical address P1.
[0077] Referring again to FIG. 10, indirection table 344 in the indirection mechanism 112 maps the address A1 in read operation 340 to a physical address P1 in media devices 2, 4, and 6. It should be noted that, as long as the data is stored at the same physical address in each of the media devices, the indirection table 344 only needs to identify one physical address P1 and the associated group number for the media devices 2, 4, and 6 where the data associated with address A1 is stored. This reduces the number of entries in table 344.
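Illustratively, and matching the tuple shape used in the write-path sketch above, the entries of such a table could look like the following; the literal values simply restate the FIG. 10 example, and the representation itself is an assumption.

```python
# Hypothetical in-memory form of indirection table 344 for the FIG. 10 example.
indirection_table = {
    "A1": ((2, 4, 6), "P1"),   # data D1: group 2, 4, 6 at physical address P1
    "A2": ((1, 3, 5), "P1"),   # data D2: group 1, 3, 5 at physical address P1
}
```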
[0078] The indirection mechanism 112 identifies the physical address associated with the client address A1 and selects one of the three media devices 2, 4, or 6 that is currently not being used. The indirection mechanism 112 reads the data D1 from the selected media device and forwards the data back to the client 106.
[0079] In an example, FIG. 11 shows how the storage access system 100 handles a read operation 342 to address A2. Recall that in FIG. 5 the data D2 associated with address A2 was previously stored in physical address P1 of media devices 1, 3, and 5. Accordingly, the indirection mechanism 112 mapped address A2 to physical address P1 in media devices 1, 3, and 5.
[0080] Responsive to the read operation 342, the indirection mechanism 112 identifies the physical address P1 associated with the read address A2 and selects one of the three media devices 1, 3, or 5 that is currently not being used. The indirection mechanism 112 reads the data D2 from the selected one of media devices 1, 3, or 5 and forwards the data D2 back to the client 106.
[0081] FIG. 12 is a flow diagram illustrating in more detail how the indirection mechanism 112 determines what data to read from which of the media devices 120 in the storage media 114. In this example, data D1 has been previously written into the storage media 114 as described above in FIG. 5, and the indirection table 344 in FIG. 10 has been updated by the indirection mechanism 112.
[0082] In operation 380, the indirection mechanism receives a read operation for address A1 from one of the clients 106 (FIG. 1). If the indirection table 344 does not include an entry for address A1 in operation 382, a read failure is reported in operation 396 and the read request is completed in operation 394.
[0083] In this example, three candidate media addresses on media devices 2, 4, and 6 are identified by the indirection mechanism in operation 382. The indirection mechanism 112 selects one of the identified media devices in operation 384. If the selected media device is currently being used in a write operation in operation 386, the next one of the three identified media devices is selected in operation 384.
[0084] If the selected media device is currently being used in a read operation in operation 388, the indirection mechanism 112 selects the next media device from the group in operation 384. This process is repeated until a free media device is identified or the last media device in indirection table 344 of FIG. 10 is identified in operation 390. The data D1 in the available media device 2, 4, or 6 is read by the indirection mechanism and returned to the client 106 in operation 392.
[0085] The read and write status of all three media devices 2, 4, and 6 can be determined by the indirection mechanism 112 at the same time by monitoring the individual read and write status lines for all of the media devices. The indirection mechanism 112 could then simultaneously eliminate the unavailable media devices from consideration and then choose the least recently used one of the remaining available media devices. For example, media device 4 may currently be in use and media devices 2 and 6 may currently be available. The indirection mechanism 112 reads the data D1 at physical address location P1 from the least recently used one of media devices 2 and 6 in operation 392.
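For illustration only, the sketch below combines the selection logic of FIG. 12 with the availability check just described; the busy_reading flag, the read() method, and the last-used timestamps are assumed attributes that the disclosed system does not necessarily expose in this form.

```python
# Illustrative sketch only; device attributes and table layout are assumed.
import time

def read(client_address, indirection_table, devices_by_id, last_used_time):
    entry = indirection_table.get(client_address)
    if entry is None:
        raise KeyError("read failure: address not found in indirection table")
    group, physical_address = entry
    # Eliminate devices that are busy with a write or another read.
    candidates = [devices_by_id[d] for d in group
                  if not devices_by_id[d].busy_writing
                  and not devices_by_id[d].busy_reading]
    if not candidates:
        # Every copy is busy; fall back to any device and simply wait.
        candidates = [devices_by_id[d] for d in group]
    # Choose the least recently used of the remaining available devices.
    device = min(candidates, key=lambda dev: last_used_time[dev.device_id])
    last_used_time[device.device_id] = time.monotonic()
    return device.read(physical_address)
```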
[0086] As previously mentioned, any combination of performance indexes and numbers of media devices can be used for storing different data. For example, the client 106 (FIG. 1) may select performance index 1 for a first group of data and select performance index 3 for a more performance-critical second group of data. As long as the associated performance index is known, the indirection mechanism 112 can write the data to the necessary number of media devices using tables 200 and 220 in FIGS. 7 and 8. The indirection mechanism 112 uses the indirection table 344 in FIGS. 10 and 11 to map the client addresses to particular physical addresses in the identified group of media devices 120. The different performance levels for the different performance-indexed data are then automatically provided since the number of possible concurrent reads for particular data corresponds directly with the number of media devices storing that particular data.
[0087] The system described above can use dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware.
[0088] For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules or features of the flexible interface can be implemented by themselves, or in combination with other operations in either hardware or software.
[0089] Although only a few examples of this invention have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible to the examples without materially departing from the novel teachings and advantages of the invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims.
Claims
1. A system, comprising:
multiple media devices; and
a processor configured to:
receive data for write operations;
identify a group of three or more of the media devices to which the data is to be written;
sequentially write the data into each of the media devices in the identified group;
receive a read operation;
identify one of the media devices currently being written with the data for write operations; and
concurrently read data from address locations associated with the read operation from one or more of the media devices in the group not currently being written with the data for the write operation.
2. The system according to claim 1, wherein:
the media devices have variable write latencies; and the processor is further configured to normalize read latencies for the media devices by concurrently reading the data associated with read operations from multiple ones of the media devices in the group that are not being concurrently used for writing data.
3. The system according to claim 1, wherein the media devices comprise flash solid state devices.
4. The system according to claim 1, wherein the processor is further configured to:
aggregate together a first set of the data for a write operation;
identify a first performance index associated with the first set of the data;
write the aggregated first set of data into sequential physical address locations so a first number of the media devices in the group of media devices associated with the first performance index can be read without being blocked by the writing of the aggregated first set of data;
aggregate together a second set of the data for a second write operation;
identify a second performance index associated with the second set of the data; and
write the aggregated second set of data into sequential physical address locations so a second number of the media devices in an additional group of the media devices associated with the second performance index can be read without being blocked by the writing of the aggregated second set of data.
5. The system according to claim 4, wherein a size of the aggregated first set and the aggregated second set of data is variable and based on when the write operations are identified.
6. The system according to claim 1, wherein the processor is configured to:
identify a performance index for the write operation; and identify a number of two or more of the media devices in the group of media devices for providing concurrent read operations based on the performance index.
7. The system according to claim 6, wherein the processor is further configured to write the data into one additional media device in addition to the identified number of the two or more media devices for providing concurrent read operations.
8. The system according to claim 6, wherein the processor is configured to identify a performance target for the particular write operation and map the performance target to the particular performance index.
9. The system according to claim 8, wherein the performance target corresponds with a read access time of the media devices.
10. The system according to claim 8, wherein the performance target corresponds with how many of the media devices are in the identified group.
11. The system according to claim 1, further comprising a memory storing an indirection table that maps write addresses used in the write operations to separate independently accessible locations in each one of the media devices in the identified group.
12. The system of claim 1, wherein the processor is configured to use a same physical address to store the data in each of the media devices.
13. An apparatus, comprising:
storage elements; and
a storage access system configured to:
perform write operations configured to write same data into the storage elements sequentially one at a time so a number of the storage elements remain available for read operations while the other storage elements are being written with the data, wherein the number of storage elements available for the read operations is associated with a selectable performance index;
map read addresses for the read operations to multiple different ones of the storage elements not currently being used for the write operations; and
concurrently read data during the read operations from the number of the storage elements associated with the performance index and not currently being used by the write operations.
14. The apparatus according to claim 13, wherein the storage elements comprise flash solid state devices.
15. The apparatus according to claim 13, wherein:
the storage elements are independently read and write accessible;
the storage access system is configured to iteratively write a same independently accessible copy of the same data into each of the multiple different storage elements to avoid blocking access of the read operations to the number of the storage elements associated with the performance index during the write operations.
16. The apparatus according to claim 13, wherein the storage access system normalizes read access times for variable latency storage elements by writing the data to three or more different storage elements and then, responsive to a subsequent read operation, reading back the data from one of the storage elements that is not currently being used for concurrent write operations.
17. The apparatus according to claim 13, wherein the storage access system is further configured to:
aggregate together a first set of the data for a first set of the write operations;
write the first set of the data into sequential physical address locations for each one of a first group of the storage elements, wherein the storage access system is configured to perform concurrent read operations from the first group of storage elements not currently being written with the first set of data;
aggregate together a second set of the data for a second set of the write operations; and
write the second set of the data into sequential physical address locations for each of a second group of the storage elements different from the first group of the storage elements, wherein the storage access system is configured to perform concurrent read operations from the second group of storage elements not currently being written with the second set of data.
18. The apparatus according to claim 13, further comprising an indirection table configured to map the read addresses to physical addresses in the storage elements.
19. The apparatus according to claim 13, wherein the performance index maps to different numbers of groups of the storage elements and different numbers of storage elements within groups.
20. A method, comprising:
receiving data for write operations;
aggregating together a set of the data for a set of the write operations;
identifying a performance index for the set of the data;
performing sequential write operations for the aggregated set of the data into sequential physical address locations for each one of a group of media devices so a number of the media devices can be accessed by read operations during the sequential write operations, wherein the number of the media devices that can be accessed by the read operations during the write operations is based on the performance index.
21. The method of claim 20, further comprising:
aggregating together an additional set of the data for an additional set of the write operations;
identifying an additional performance index for the additional set of the data;
performing additional sequential write operations for the aggregated additional set of the data into sequential physical address locations for each one of an additional group of media devices so a number of the media devices can be accessed by additional read operations during the additional sequential write operations, wherein the number of the media devices that can be accessed by the additional read operations during the additional sequential write operations is based on the additional performance index.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020157030789A KR20160018471A (en) | 2013-04-02 | 2013-04-02 | System for increasing storage media performance |
PCT/US2013/034938 WO2014163620A1 (en) | 2013-04-02 | 2013-04-02 | System for increasing storage media performance |
EP13881037.9A EP2981965A4 (en) | 2013-04-02 | 2013-04-02 | System for increasing storage media performance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2013/034938 WO2014163620A1 (en) | 2013-04-02 | 2013-04-02 | System for increasing storage media performance |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014163620A1 true WO2014163620A1 (en) | 2014-10-09 |
Family
ID=51658750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/034938 WO2014163620A1 (en) | 2013-04-02 | 2013-04-02 | System for increasing storage media performance |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP2981965A4 (en) |
KR (1) | KR20160018471A (en) |
WO (1) | WO2014163620A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017112283A1 (en) * | 2015-12-24 | 2017-06-29 | Intel Corporation | Non-uniform memory access latency adaptations to achieve bandwidth quality of service |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5890202A (en) * | 1994-11-28 | 1999-03-30 | Fujitsu Limited | Method of accessing storage units using a schedule table having free periods corresponding to data blocks for each storage portion |
US20040186945A1 (en) * | 2003-03-21 | 2004-09-23 | Jeter Robert E. | System and method for dynamic mirror-bank addressing |
US20060075191A1 (en) * | 2001-09-28 | 2006-04-06 | Emc Corporation | Pooling and provisioning storage resources in a storage network |
US20090006725A1 (en) * | 2006-12-15 | 2009-01-01 | Takafumi Ito | Memory device |
US20110258362A1 (en) * | 2008-12-19 | 2011-10-20 | Mclaren Moray | Redundant data storage for uniform read latency |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8375187B1 (en) * | 2007-09-27 | 2013-02-12 | Emc Corporation | I/O scheduling for flash drives |
-
2013
- 2013-04-02 WO PCT/US2013/034938 patent/WO2014163620A1/en active Application Filing
- 2013-04-02 EP EP13881037.9A patent/EP2981965A4/en not_active Withdrawn
- 2013-04-02 KR KR1020157030789A patent/KR20160018471A/en not_active Application Discontinuation
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5890202A (en) * | 1994-11-28 | 1999-03-30 | Fujitsu Limited | Method of accessing storage units using a schedule table having free periods corresponding to data blocks for each storage portion |
US20060075191A1 (en) * | 2001-09-28 | 2006-04-06 | Emc Corporation | Pooling and provisioning storage resources in a storage network |
US20040186945A1 (en) * | 2003-03-21 | 2004-09-23 | Jeter Robert E. | System and method for dynamic mirror-bank addressing |
US20090006725A1 (en) * | 2006-12-15 | 2009-01-01 | Takafumi Ito | Memory device |
US20110258362A1 (en) * | 2008-12-19 | 2011-10-20 | Mclaren Moray | Redundant data storage for uniform read latency |
Non-Patent Citations (1)
Title |
---|
See also references of EP2981965A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017112283A1 (en) * | 2015-12-24 | 2017-06-29 | Intel Corporation | Non-uniform memory access latency adaptations to achieve bandwidth quality of service |
US10146681B2 (en) | 2015-12-24 | 2018-12-04 | Intel Corporation | Non-uniform memory access latency adaptations to achieve bandwidth quality of service |
US11138101B2 (en) | 2015-12-24 | 2021-10-05 | Intel Corporation | Non-uniform memory access latency adaptations to achieve bandwidth quality of service |
Also Published As
Publication number | Publication date |
---|---|
EP2981965A1 (en) | 2016-02-10 |
EP2981965A4 (en) | 2017-03-01 |
KR20160018471A (en) | 2016-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8417871B1 (en) | System for increasing storage media performance | |
US20140304452A1 (en) | Method for increasing storage media performance | |
US11354235B1 (en) | Memory controller for nonvolatile memory that tracks data write age and fulfills maintenance requests targeted to host-selected memory space subset | |
US9619149B1 (en) | Weighted-value consistent hashing for balancing device wear | |
US9575668B1 (en) | Techniques for selecting write endurance classification of flash storage based on read-write mixture of I/O workload | |
US10095425B1 (en) | Techniques for storing data | |
US9477431B1 (en) | Managing storage space of storage tiers | |
US9395937B1 (en) | Managing storage space in storage systems | |
US9542125B1 (en) | Managing data relocation in storage systems | |
US8959286B2 (en) | Hybrid storage subsystem with mixed placement of file contents | |
US9244618B1 (en) | Techniques for storing data on disk drives partitioned into two regions | |
CN111587423B (en) | Hierarchical data policies for distributed storage systems | |
US9619169B1 (en) | Managing data activity information for data migration in data storage systems | |
EP2302500A2 (en) | Application and tier configuration management in dynamic page realloction storage system | |
KR20150105323A (en) | Method and system for data storage | |
JP5531091B2 (en) | Computer system and load equalization control method thereof | |
US9330009B1 (en) | Managing data storage | |
US10372372B2 (en) | Storage system | |
US20120011314A1 (en) | Storage system with reduced energy consumption and method of operating thereof | |
US11436138B2 (en) | Adaptive endurance tuning of solid-state storage system | |
WO2014163620A1 (en) | System for increasing storage media performance | |
US20240012580A1 (en) | Systems, methods, and devices for reclaim unit formation and selection in a storage device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13881037 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013881037 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20157030789 Country of ref document: KR Kind code of ref document: A |