US11023154B2 - Asymmetric data striping for uneven NAND defect distribution - Google Patents
- Publication number: US11023154B2 (Application US16/156,929)
- Authority
- US
- United States
- Prior art keywords
- slice
- storage media
- storage
- nand
- operations
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
All classifications fall under G06F3/06 (Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers):
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/064—Management of blocks
- G06F3/061—Improving I/O performance
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
- G06F3/0688—Non-volatile semiconductor memory arrays
Definitions
- This invention relates to systems and methods for performing data striping in a NAND flash storage device.
- In NAND storage devices, target performance requirements have been getting higher and higher.
- One of the easiest ways to meet performance requirements is through parallel processing.
- When a NAND storage device receives a read or write command, it segments the data and distributes it to several slices in a round-robin fashion, an approach called data striping. Each slice works completely independently. The performance of the NAND storage device is therefore the cumulative performance of the number of slices employed.
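- To make the round-robin distribution concrete, the following is a minimal sketch (illustrative only; the four-slice count and all names are assumptions, not taken from the patent) of how host LBAs would be mapped to slices and device LBAs:

```python
# Minimal sketch of conventional round-robin data striping.
# Assumptions (not from the patent): four slices; names are hypothetical.

NUM_SLICES = 4

def stripe(hlba: int) -> tuple[int, int]:
    """Map a host LBA (HLBA) to a (slice index, device LBA) pair."""
    slice_idx = hlba % NUM_SLICES  # slices selected in round-robin order
    dlba = hlba // NUM_SLICES      # position within the selected slice
    return slice_idx, dlba

# HLBAs 0..7 land on slices 0, 1, 2, 3, 0, 1, 2, 3, ...
for hlba in range(8):
    print(hlba, "->", stripe(hlba))
```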
- FIG. 1 is a schematic block diagram of a computing system suitable for implementing an approach in accordance with embodiments of the invention.
- FIG. 2 is a schematic block diagram of components of a storage system that may implement an approach in accordance with an embodiment of the present invention.
- FIG. 3 is a schematic block diagram of components for performing data striping in accordance with the prior art.
- FIG. 4 is a schematic block diagram illustrating data striping in accordance with an embodiment of the present invention.
- FIG. 5 is a schematic block diagram of different zones of device logical block addresses (DLBAs) in accordance with an embodiment of the present invention.
- FIG. 6 is a process flow diagram of a method for converting a host logical block address (HLBA) to a slice index and DLBA in accordance with an embodiment of the present invention.
- The invention has been developed in response to the present state of the art and, in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available apparatus and methods.
- Embodiments in accordance with the present invention may be embodied as an apparatus, method, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
- A computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device.
- A computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server.
- The remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- FIG. 1 is a block diagram illustrating an example computing device 100 .
- Computing device 100 may be used to perform various procedures, such as those discussed herein.
- Computing device 100 can function as a server, a client, or any other computing entity.
- Computing device 100 can be any of a wide variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, tablet computer and the like.
- Computing device 100 includes one or more processor(s) 102 , one or more memory device(s) 104 , one or more interface(s) 106 , one or more mass storage device(s) 108 , one or more Input/Output (I/O) device(s) 110 , and a display device 130 all of which are coupled to a bus 112 .
- Processor(s) 102 include one or more processors or controllers that execute instructions stored in memory device(s) 104 and/or mass storage device(s) 108 .
- Processor(s) 102 may also include various types of computer-readable media, such as cache memory.
- Memory device(s) 104 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 114) and/or nonvolatile memory (e.g., read-only memory (ROM) 116). Memory device(s) 104 may also include rewritable ROM, such as flash memory.
- Mass storage device(s) 108 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., flash memory), and so forth. As shown in FIG. 1 , a particular mass storage device is a hard disk drive 124 . Various drives may also be included in mass storage device(s) 108 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 108 include removable media 126 and/or non-removable media.
- I/O device(s) 110 include various devices that allow data and/or other information to be input to or retrieved from computing device 100 .
- Example I/O device(s) 110 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
- Display device 130 includes any type of device capable of displaying information to one or more users of computing device 100 .
- Examples of display device 130 include a monitor, display terminal, video projection device, and the like.
- Interface(s) 106 include various interfaces that allow computing device 100 to interact with other systems, devices, or computing environments.
- Example interface(s) 106 include any number of different network interfaces 120 , such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet.
- Other interface(s) include user interface 118 and peripheral device interface 122 .
- The interface(s) 106 may also include one or more user interface elements 118.
- The interface(s) 106 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pads, etc.), keyboards, and the like.
- Bus 112 allows processor(s) 102 , memory device(s) 104 , interface(s) 106 , mass storage device(s) 108 , and I/O device(s) 110 to communicate with one another, as well as other devices or components coupled to bus 112 .
- Bus 112 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
- Programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 100 and are executed by processor(s) 102.
- The systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware.
- For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
- A typical flash storage system 200 includes a solid state drive (SSD) that may include a plurality of NAND flash memory devices 202.
- One or more NAND devices 202 may interface with a NAND interface 204 that interacts with an SSD controller 206 .
- The SSD controller 206 may receive read and write instructions from a host interface 208 implemented on or for a host device, such as a device including some or all of the attributes of the computing device 100.
- The host interface 208 may be a data bus, memory controller, or other component of an input/output system of a computing device, such as the computing device 100 of FIG. 1.
- The methods described below may be performed by the host, e.g., the host interface 208, alone or in combination with the SSD controller 206.
- The methods described below may be used in a flash storage system 200 or any other type of non-volatile storage device.
- The methods described herein may be executed by any component in such a storage device or be performed completely or partially by a host processor coupled to the storage device.
- The SSD controller 206 may be programmed to implement data striping as described below with respect to FIGS. 3 through 6.
- Each slice of a storage device has to exclusively manage its own NAND block resources and deal with its own defects.
- The randomness of defect locations causes variation in the available blocks of each slice.
- In some slices, the number of defects is such that the number of available blocks is more than the overall target volume size. In others, the number of defects is such that the number of available blocks is less than the overall target volume size. In prior approaches to data striping, this would result in failure to meet the target volume size of the slice. For instance, an SSD may need a total of 40,000 available blocks to meet the target volume size of the SSD device.
- Slice #1 may have 10,300 available blocks, slice #2 may have 10,500, slice #3 may have 10,850, and slice #4 may have 9,950. Even though the combination of slices #1 through #4 has 41,600 available blocks, in the normal round-robin striping scheme, this combination of slices cannot compose the required volume size because slice #4 lacks sufficient available blocks (10,000 per slice would be needed).
- FIG. 3 illustrates a conventional approach 300 to data striping.
- A host 302 transmits operations, such as read or write operations, to a storage device. These operations are then divided among slices, such as by data striping logic 304.
- For example, in the illustrated embodiment, there are slices 0 through 3, each with its corresponding bank of NAND dies 306a-306d, respectively.
- Each slice 0-3 handles its own processing of operations assigned to it by the data striping logic 304, including management of its own NAND block resources and defects.
- The reverse of this approach may be used to convert a DLBA of a slice to an HLBA using the slice index, as in the sketch below.
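- A minimal sketch of this reverse conversion, under the same four-slice assumption as the earlier sketch (names are hypothetical):

```python
NUM_SLICES = 4  # same assumption as the earlier sketch

def unstripe(slice_idx: int, dlba: int) -> int:
    """Recover the host LBA from a slice index and device LBA
    under the conventional round-robin scheme."""
    return dlba * NUM_SLICES + slice_idx

# Example: slice 2, DLBA 3 corresponds to host LBA 3 * 4 + 2 = 14.
assert unstripe(2, 3) == 14
```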
- The limitation of the prior approach is that there is no variation between slices.
- The prior approach requires a fixed and identical volume size for all of the slices.
- Some NAND dies have more defects and cannot be used to meet the target volume size of a slice. Accordingly, these NAND dies must be discarded or may only be used for a smaller target volume size.
- NAND dies with a low number of defects may have available blocks in excess of the target volume size that will not be fully utilized. These excess blocks may be reserved for replacement of blocks that may subsequently fail.
- FIG. 4 illustrates an alternative approach for implementing data striping.
- Each column represents a slice.
- DLBA_SY_X corresponds to LBA X of slice Y in the notation of FIG. 4.
- The bolded text indicates where a slice is skipped during the round-robin distribution of HLBAs.
- HLBA0 is assigned to DLBA_S0_0
- HLBA1 is assigned to DLBA_S1_0
- HLBA2 is assigned to DLBA_S2_0.
- HLBA3 is not assigned to DLBA_S3_0. Instead, DLBA_S3_0 is skipped and HLBA3 is assigned to slice 0: DLBA_S0_1.
- The other HLBAs are assigned in a round-robin fashion with each DLBA that is written in bold font in FIG. 4 being skipped, as in the sketch below.
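- A minimal sketch of this skipping behavior (the skipped (slice, DLBA) set below is hypothetical; in practice the skips are determined from each slice's defects):

```python
# Round-robin assignment of HLBAs with skipped DLBAs.
# Assumptions (not from the patent): four slices and the SKIPPED set below.

NUM_SLICES = 4
SKIPPED = {(3, 0)}  # hypothetical: DLBA_S3_0 is skipped, as in FIG. 4

def assign_hlbas(hlba_count: int) -> dict[int, tuple[int, int]]:
    """Assign each HLBA a (slice, DLBA), stepping over skipped positions."""
    mapping = {}
    next_dlba = [0] * NUM_SLICES  # next unassigned DLBA in each slice
    slice_idx = 0
    for hlba in range(hlba_count):
        # Step over skipped positions; they remain unmapped forever.
        while (slice_idx, next_dlba[slice_idx]) in SKIPPED:
            next_dlba[slice_idx] += 1
            slice_idx = (slice_idx + 1) % NUM_SLICES
        mapping[hlba] = (slice_idx, next_dlba[slice_idx])
        next_dlba[slice_idx] += 1
        slice_idx = (slice_idx + 1) % NUM_SLICES
    return mapping

# HLBA3 lands on (0, 1), i.e. DLBA_S0_1, because (3, 0) is skipped.
assert assign_hlbas(4)[3] == (0, 1)
```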
- A volume size will be a particular value, e.g., 512 GB, which will be divided equally into slices of the SSD, e.g., 128 GB each for four slices.
- The maximum allowed number of defects on each slice is 3% of the physical blocks of the NAND dies making up each slice.
- Many blocks of the NAND dies in a slice will be excluded from the volume size calculation, not only to account for the possible 3% of defects but also because an additional number of blocks is designated for other purposes. For example, 15% may be designated for over-provisioning (OP) and 1% may be reserved for growing defects after manufacturing. In total, approximately 20% of the blocks in the NAND dies of a slice are excluded from the volume size.
- The total volume size of 512 GB would therefore be equal to 80% of the blocks of the NAND dies of the SSD. In a conventional system, this ratio is applied to all slices without exception.
- The target volume size would still be 80% of the blocks of the SSD in order to meet the 512 GB volume size.
- In contrast, in the approach described herein, the volume size of each slice can be different according to the ratio of defects between slices. The number of mapped DLBAs and the number of skipped DLBAs are used to achieve the different volume sizes.
- Each SSD has several dozen NAND dies.
- One NAND die has several thousand physical blocks.
- One physical NAND block has several thousand pages, and one page is 8 or 16 KB. All together, this makes one SSD.
- Each DLBA may refer to a 4 KB block of data aligned with an FTL (flash translation layer) 4 KB mapping.
- This mapping size is equivalent to the possible mapping size of a NAND that ideally has no defects and therefore has the maximum number of available physical blocks. Accordingly, a bigger DLBA table may be required using the approach of FIGS. 4 through 6.
- A skipped DLBA may remain unmapped and therefore won't increase write amplification, nor does it claim more space in the NAND dies.
- Table 1 illustrates the percentages of DLBAs of an SSD, and of each slice of an SSD, that are used to constitute a storage volume.
- The total number of DLBAs defined for the SSD system is 83% of the blocks (e.g., 4 KB blocks) of the NAND dies in the SSD system.
- The total number of DLBAs that are mapped (not skipped) is 80% of the blocks of the SSD system.
- The total number of skipped DLBAs is 3% of the blocks of the SSD system.
- The number of mapped DLBAs defined for each slice may vary, but the total number may be constrained to be less than or equal to a maximum percentage, such as 83%.
- The number of skipped DLBAs on each slice may likewise vary according to variation in numbers of defects.
- The mapped and skipped DLBAs are spread across the slices proportionally according to the ratio of defects of each slice. Some slices can go up to 83% mapped DLBAs and some slices can have mapped DLBAs far below 80%, such that the total mapped DLBAs across all slices is 80%.
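- One plausible way to apportion the mapped DLBAs is sketched below. The patent states only that the split is proportional to the ratio of defects of each slice, so the exact policy, the names, and the per-slice DLBA table size of 10,750 used here are assumptions:

```python
def apportion_mapped(available_blocks: list[int], target_total: int,
                     dlba_table_size: int) -> list[int]:
    """Split target_total mapped DLBAs across slices in proportion to
    each slice's available blocks, capped at its DLBA table size.
    Hypothetical policy sketch, not the patent's exact formula."""
    total_avail = sum(available_blocks)
    mapped = [min(dlba_table_size, round(target_total * a / total_avail))
              for a in available_blocks]
    # Push any rounding drift onto the slice with the most headroom.
    drift = target_total - sum(mapped)
    if drift:
        i = max(range(len(mapped)), key=lambda j: dlba_table_size - mapped[j])
        mapped[i] += drift
    return mapped

# With the example block counts from above, slice #4's shortfall is
# absorbed by the slices with surplus blocks: [9904, 10096, 10433, 9567].
print(apportion_mapped([10300, 10500, 10850, 9950],
                       target_total=40000, dlba_table_size=10750))
```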
- The skipped DLBAs determined for each slice during manufacturing will not be changed for the entire lifespan of the SSD. In some instances, changing them would mean that a slice would not be able to maintain its target size.
- The slice with fewer available blocks will be selected less frequently than the slice with more available blocks. A slice with blocks in excess of the average blocks per slice will be used more, and this excess capacity will be utilized. In this manner, fewer NAND dies need be rejected due to defects, and those NAND dies with fewer defects may be used to make up for the defects of other NAND dies.
- A skip map may be used to implement the above-described asymmetric approach.
- A function may be defined, and a processor provided in an SSD to execute the function, where the function defines the mapping between HLBAs and DLBAs in order to implement skipping.
- HLBAs are mapped to slices and DLBAs in a round-robin fashion with certain DLBAs of certain slices being skipped, as shown in FIG. 4.
- Where an entry in the skip map for a slice and DLBA indicates skipping, that DLBA of that slice will not be mapped to an HLBA. Instead, the next DLBA of the next slice in the round-robin scheme will be mapped to that HLBA.
- The proportion of skips for each slice may be determined as described above, and the skips for each slice may be distributed periodically and possibly uniformly throughout the skip map, as in the sketch below.
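- A minimal sketch of spreading one slice's skips uniformly through its portion of the skip map (the function name and boolean-flag representation are assumptions):

```python
def build_skip_flags(map_len: int, skip_count: int) -> list[bool]:
    """Mark skip_count of map_len skip-map positions for one slice as
    skipped, spread evenly rather than clustered. Uniform spacing is one
    option the text mentions; other distributions are possible."""
    flags = [False] * map_len
    if skip_count == 0:
        return flags
    stride = map_len / skip_count  # average distance between skips
    for k in range(skip_count):
        flags[int(k * stride)] = True
    return flags

# 3 skips spread over 10 positions land at indices 0, 3, and 6.
assert [i for i, f in enumerate(build_skip_flags(10, 3)) if f] == [0, 3, 6]
```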
- The valid count of HLBAs may be less than the capacity of the skip map.
- A variable VALID_CNT may specify the number of valid entries in the skip map, i.e., entries that are not skip entries.
- The number of HLBAs mapped by the skip map is equal to the number of DLBAs mapped by the skip map.
- The total valid count of HLBAs and the count of mapped DLBAs are a function of the target volume size of the SSD and will be the same for SSDs of the same size. However, each slice may have a different valid count as described above.
- The skip map does not cover the entire LBA range.
- A skip map covering the entire LBA range may be too large to be feasible.
- Accordingly, as shown in FIG. 5, the entire range of DLBAs is divided into zones 500a-500c.
- FIG. 6 illustrates a method 600 for converting an HLBA (“the subject HLBA”) to a slice index and DLBA corresponding to the subject HLBA (“the mapped DLBA”).
- The method 600 may be executed by the NAND interface 204, SSD controller 206, host interface 208, a host processor 102, or other device implementing the asymmetric data striping approach described herein.
- HLBA_CNT may be less than or equal to DLBA_CNT, because skip entries in the skip map cause an HLBA to be mapped past skipped positions to a later DLBA.
- The method 600 may include calculating 604 an entry index (ENTRY_IDX) that is the index of an entry in the skip map corresponding to the subject HLBA.
- The method 600 may then include looking up 606 an entry of the skip map corresponding to ENTRY_IDX using the skip map lookup function SMAP(ENTRY_IDX).
- The entry returned may include an offset (OFFSET) within a slice and a slice index (SLICE_IDX), from which the mapped DLBA may be determined.
- The operation referencing the subject HLBA may then be processed 610 in the slice referenced by SLICE_IDX at the mapped DLBA.
- Where the operation is a write operation, data will be written at the mapped DLBA by the slice referenced by SLICE_IDX.
- Where the operation is a read operation, step 610 may include reading data from the mapped DLBA by the slice referenced by SLICE_IDX.
- Each operation processed at step 610 may be part of an operation that has been divided into a number of segments corresponding to the number of slices. Accordingly, the method 600 may be preceded by a segmenting step in which the operation is divided into slice operations, each of which is then processed according to the method 600. Where an operation is segmented, the operation may correspond to several HLBAs such that each slice operation references a different HLBA of the several HLBAs, and the HLBA of each slice operation is converted to a DLBA as described with respect to the method 600.
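- Putting the steps of method 600 together, the following is a hedged end-to-end sketch. The zone arithmetic, the entry layout, and all names are assumptions: the patent describes the steps but not exact formulas.

```python
from typing import NamedTuple

class SkipMapEntry(NamedTuple):
    slice_idx: int  # slice serving this valid entry (SLICE_IDX)
    offset: int     # DLBA offset within that slice for one zone (OFFSET)

def hlba_to_dlba(hlba: int, smap: list[SkipMapEntry],
                 zone_dlbas_per_slice: int) -> tuple[int, int]:
    """Convert a host LBA to (slice index, mapped DLBA) via the skip map.
    The skip map here covers one zone of valid entries (VALID_CNT of
    them) and is reused for every zone, in the spirit of FIG. 5."""
    valid_cnt = len(smap)           # VALID_CNT: valid entries per zone
    zone = hlba // valid_cnt        # which zone the subject HLBA falls in
    entry = smap[hlba % valid_cnt]  # steps 604/606: ENTRY_IDX and lookup
    dlba = zone * zone_dlbas_per_slice + entry.offset  # the mapped DLBA
    return entry.slice_idx, dlba

# Hypothetical zone of 7 valid entries over 4 slices; slice 3 contributes
# no entry at offset 0 because that position is skipped (as in FIG. 4).
smap = [SkipMapEntry(0, 0), SkipMapEntry(1, 0), SkipMapEntry(2, 0),
        SkipMapEntry(0, 1), SkipMapEntry(1, 1), SkipMapEntry(2, 1),
        SkipMapEntry(3, 1)]
print(hlba_to_dlba(3, smap, zone_dlbas_per_slice=2))  # -> (0, 1)
```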
Description
- Yield: The ratio of NAND dies that fail due to strict screening criteria is not trivial from a business perspective, and how to utilize the screened-out NAND dies that still have many valid blocks is another issue.
- Waste of resources: Statistically, each slice used for parallel processing is most likely to have more blocks than required to meet the target volume. The surplus has no other use and is merely reserved for the replacement of possible future defects, called growing defects.
In the conventional round-robin scheme with four slices, host LBAs map to slices as follows:
- Host LBA 0, 4, 8, 12, . . . => SLICE 0
- Host LBA 1, 5, 9, 13, . . . => SLICE 1
- Host LBA 2, 6, 10, 14, . . . => SLICE 2
- Host LBA 3, 7, 11, 15, . . . => SLICE 3
TABLE 1
DLBA Ratios | Total SSD System | Individual Slice
---|---|---
Ratio of DLBAs | 83% | 83%
Ratio of Mapped DLBAs | 80% | Variable (up to 83%)
Ratio of Skipped DLBAs | 3% | Variable (83% − Mapped DLBA %)
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/156,929 US11023154B2 (en) | 2018-10-10 | 2018-10-10 | Asymmetric data striping for uneven NAND defect distribution |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200117382A1 (en) | 2020-04-16
US11023154B2 (en) | 2021-06-01
Family
ID=70160754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/156,929 Active US11023154B2 (en) | 2018-10-10 | 2018-10-10 | Asymmetric data striping for uneven NAND defect distribution |
Country Status (1)
Country | Link |
---|---|
US (1) | US11023154B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12386531B2 (en) | 2022-04-04 | 2025-08-12 | Seagate Technology Llc | Partial block performance management |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020032828A1 (en) * | 2000-05-10 | 2002-03-14 | Seagate Technology, Llc | Seamless defect management conversion |
US20140095827A1 (en) * | 2011-05-24 | 2014-04-03 | Agency For Science, Technology And Research | Memory storage device, and a related zone-based block management and mapping method |
US20150363346A1 (en) * | 2013-04-02 | 2015-12-17 | Hewlett-Packard Development Company, L.P. | Sata initiator addressing and storage device slicing |
US20170286223A1 (en) * | 2016-03-29 | 2017-10-05 | International Business Machines Corporation | Storing data contiguously in a dispersed storage network |
Also Published As
Publication number | Publication date |
---|---|
US20200117382A1 (en) | 2020-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3367251B1 (en) | Storage system and solid state hard disk | |
JP6226830B2 (en) | Information processing apparatus, data access method, and program | |
US11003625B2 (en) | Method and apparatus for operating on file | |
JP6982468B2 (en) | Memory system and control method | |
JP6785204B2 (en) | Memory system and control method | |
EP2843570B1 (en) | File reading method, storage device and reading system | |
CN106708424A (en) | Apparatus and method for performing selective underlying exposure mapping on user data | |
TW201723816A (en) | Storage system, method and system for managing storage media | |
CN106708751A (en) | Storage device including multi-partitions for multimode operations, and operation method thereof | |
US8738624B1 (en) | Increasing distributed database capacity | |
US20180113639A1 (en) | Method and system for efficient variable length memory frame allocation | |
US10216861B2 (en) | Autonomic identification and handling of ad-hoc queries to limit performance impacts | |
US11023154B2 (en) | Asymmetric data striping for uneven NAND defect distribution | |
WO2015087651A1 (en) | Device, program, recording medium, and method for extending service life of memory, | |
CN112181274B (en) | Large block organization method for improving performance stability of storage device and storage device thereof | |
KR101849116B1 (en) | Non-uniform memory access system, and memory management method and program thereof | |
WO2019047842A1 (en) | Logic partition method for solid state drive and device | |
US8468303B2 (en) | Method and apparatus to allocate area to virtual volume based on object access type | |
WO2021232743A1 (en) | Cache management method and apparatus, storage medium, and solid-state non-volatile storage device | |
CN111475279B (en) | System and method for intelligent data load balancing for backup | |
JPWO2018235149A1 (en) | Storage apparatus and storage area management method | |
CN105183375A (en) | Control method and apparatus for service quality of hot spot data | |
CN112181276B (en) | Large-block construction and distribution method for improving service quality of storage device and storage device thereof | |
KR20230028579A (en) | Elastic column cache for cloud databases | |
US20240192881A1 (en) | Overprovisioning Block Mapping for Namespace |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: PETAIO INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: YOON, JONGMAN; REEL/FRAME: 047127/0201. Effective date: 20181003
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
| STCV | Information on status: appeal procedure | Free format text: NOTICE OF APPEAL FILED
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4
| AS | Assignment | Owner name: PETAIO MEMORY TECHNOLOGY (NANJING) CO., LTD., CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: PETAIO INC.; REEL/FRAME: 071686/0234. Effective date: 20250710