US11163489B2 - Workload clusterization for memory system and method of executing the same - Google Patents

Workload clusterization for memory system and method of executing the same

Info

Publication number
US11163489B2
Authority
US
United States
Prior art keywords
workload
cluster
candidate cluster
item
items
Prior art date
Legal status
Active
Application number
US16/420,746
Other languages
English (en)
Other versions
US20190361628A1 (en)
Inventor
Yauheni Yaromenka
Aliaksei CHARNEVICH
Joon Mo Koo
Siarhei ZALIVAKA
Current Assignee
SK Hynix Inc
Original Assignee
SK Hynix Inc
Priority date
Filing date
Publication date
Application filed by SK Hynix Inc filed Critical SK Hynix Inc
Priority to US16/420,746
Assigned to SK Hynix Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZALIVAKA, SIARHEI; MO, KOO JOON; CHARNEVICH, ALIAKSEI; YAROMENKA, YAUHENI
Publication of US20190361628A1
Application granted
Publication of US11163489B2


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1009 Address translation using page tables, e.g. page table structures
    • G06F 12/1018 Address translation using page tables, e.g. page table structures involving hashing techniques, e.g. inverted page tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0644 Management of space entities, e.g. partitions, extents, pools
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7201 Logical to physical mapping or translation of blocks or pages
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Embodiments of the present disclosure relate to a scheme for clustering workload items in a memory system, particularly a flash-based memory system, and a method of executing such a scheme.
  • The computer environment paradigm has shifted to ubiquitous computing systems that can be used anytime and anywhere.
  • As a result, the use of portable electronic devices such as mobile phones, digital cameras, and notebook computers has rapidly increased.
  • These portable electronic devices generally use a memory system having memory device(s), that is, data storage device(s).
  • The data storage device is used as a main memory device or an auxiliary memory device of the portable electronic devices.
  • Data storage devices using memory devices provide excellent stability, durability, high information access speed, and low power consumption, since they have no moving parts. Examples of data storage devices having such advantages include universal serial bus (USB) memory devices, memory cards having various interfaces, and solid state drives (SSDs).
  • An SSD may include flash memory components and a controller, which includes the electronics that bridge the flash memory components to the SSD input/output (I/O) interfaces.
  • The SSD controller may include an embedded processor that executes functional components such as firmware.
  • The SSD functional components are typically device specific, and in most cases, can be updated.
  • The two main types of flash memory are named after the NAND and NOR logic gates.
  • The individual flash memory cells exhibit internal characteristics similar to those of their corresponding gates.
  • NAND-type flash memory may be written to and read from in blocks (or pages) which are generally much smaller than the entire memory space.
  • NOR-type flash allows a single machine word (byte) to be written to an erased location or read independently.
  • NAND-type flash memory operates primarily in memory cards, USB flash drives, solid-state drives (SSDs), and similar products, for general storage and transfer of data.
  • Flash-based storage, e.g., a NAND-type flash memory system, uses a flash translation layer (FTL) to perform logical-to-physical (L2P) mapping.
  • The FTL also performs other operations such as garbage collection and wear leveling.
  • File systems usually store files as a sequence of fragments, i.e., ranges of logical block addresses (LBAs).
  • Such fragmentation degrades read performance in NAND flash storage, because every fragment is read separately instead of being part of a single sequential read.
  • LBA ranges can be merged into a single cluster, i.e., a sequence of fragments that are read or written together in the same order.
  • However, NAND flash storage does not have enough storage and processing resources to store an entire history of commands and perform expensive calculations for this purpose.
  • In one aspect of the present invention, a memory system comprises a memory device from which data is read and to which data is written, and a memory controller configured to control the memory device and to receive workload items from a host in a workload sequence, each workload item being defined by at least a start logical block address (LBA) and a length.
  • The memory controller includes a hash table.
  • The memory controller is further configured to merge sequential workload items in the workload sequence to constitute a single workload item for each set of sequential workload items; identify a start workload item, among the workload items, for a candidate cluster; store the LBA and a hit count of the start workload item in the hash table; identify an end workload item, among the workload items, for the candidate cluster; determine whether the candidate cluster is found in the workload sequence more than a threshold number of times; and accept the candidate cluster when it is determined that the candidate cluster is found in the workload sequence more than the threshold number of times.
  • Another aspect of the present invention includes methods of clustering workload items in such memory systems, which may be performed by one or more components thereof.
  • For example, another aspect of the present invention entails a method of clustering workload items of a specific type in a memory system.
  • The method comprises receiving workload items in a workload sequence, each workload item being defined by at least a start logical block address (LBA) and a length; merging sequential workload items in the workload sequence to constitute a single workload item for each set of sequential workload items; identifying a start workload item, among the workload items, for a candidate cluster; storing the LBA and a hit count of the start workload item in a hash table of the memory system; identifying an end workload item, among the workload items, for the candidate cluster; determining whether the candidate cluster is found in the workload sequence more than a threshold number of times; and accepting the candidate cluster when it is determined that the candidate cluster is found in the workload sequence more than the threshold number of times.
  • FIG. 1 is a diagram illustrating a memory system in accordance with an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a memory system in accordance with an embodiment of the present invention.
  • FIG. 3 is a circuit diagram illustrating a memory block of a memory device of a memory system in accordance with an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating merging of commands in accordance with an embodiment of the present invention.
  • FIG. 5 is a flow chart illustrating selecting a workload item as a start of a cluster in accordance with an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a workload sequence and the state of a hash table after a command considered as a start of a cluster candidate has been added to the hash table, in accordance with an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a workload sequence and the state of a hash table after a candidate command becomes the first command of a cluster candidate, in accordance with an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a workload sequence and the state of a hash table after a command is skipped and a last command of a cluster candidate is added, in accordance with an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating skipping certain commands not in a current cluster candidate and having a low probability of being added to any cluster candidate, in accordance with an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a workload sequence of cluster candidates and rejected candidates and a hash table in that state, in accordance with an embodiment of the present invention.
  • FIG. 11 is a diagram illustrating a workload sequence in which the first command in the sequence is found again with following commands considered cluster candidates and a hash table in that state, in accordance with an embodiment of the present invention.
  • FIG. 12 is a diagram illustrating further processing of a workload sequence in accordance with an embodiment of the present invention.
  • FIG. 13 is a diagram illustrating further processing of a workload sequence in accordance with another embodiment of the present invention.
  • FIG. 14 is a flow chart illustrating processes of clustering workload items, e.g., commands, in accordance with embodiments of the present invention.
  • The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer-readable storage medium; and/or a processor, such as a processor suitable for executing instructions stored on and/or provided by a memory coupled to the processor.
  • These implementations, or any other form that the invention may take, may be referred to as techniques.
  • The order of the steps of disclosed processes may be altered within the scope of the invention.
  • A component such as a processor or a memory described as being suitable for performing a task may be implemented as a general component that is temporarily configured to perform the task at a given time or as a specific component that is manufactured to perform the task.
  • The term 'processor' or the like refers to one or more devices, circuits, and/or processing cores suitable for processing data, such as computer program instructions.
  • FIG. 1 is a block diagram schematically illustrating a memory system in accordance with an embodiment of the present invention.
  • The memory system 10 may include a memory controller 100 and a semiconductor memory device 200, which may represent more than one such device.
  • The semiconductor memory device(s) 200 are preferably flash memory device(s), particularly of the NAND type.
  • The memory controller 100 may control overall operations of the semiconductor memory device 200.
  • The semiconductor memory device 200 may perform one or more erase, program, and read operations under the control of the memory controller 100.
  • The semiconductor memory device 200 may receive a command CMD, an address ADDR and data DATA through input/output (I/O) lines.
  • The semiconductor memory device 200 may receive power PWR through a power line and a control signal CTRL through a control line.
  • The control signal CTRL may include a command latch enable (CLE) signal, an address latch enable (ALE) signal, a chip enable (CE) signal, a write enable (WE) signal, a read enable (RE) signal, and the like.
  • The memory controller 100 and the semiconductor memory device 200 may be integrated in a single semiconductor device such as a solid state drive (SSD).
  • The SSD may include a storage device for storing data therein.
  • When the memory system 10 is used as an SSD, the operating speed of a host (not shown) coupled to the memory system 10 may improve remarkably.
  • FIG. 2 is a detailed block diagram illustrating a memory system in accordance with an embodiment of the present invention.
  • The memory system of FIG. 2 may depict the memory system shown in FIG. 1.
  • The memory device 200 may store data to be accessed by the host device.
  • The memory device 200 may be implemented with a volatile memory device such as a dynamic random access memory (DRAM) and/or a static random access memory (SRAM), or a non-volatile memory device such as a read only memory (ROM), a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a ferroelectric random access memory (FRAM), a phase change RAM (PRAM), a magnetoresistive RAM (MRAM), and/or a resistive RAM (RRAM).
  • The controller 100 may control storage of data in the memory device 200.
  • The controller 100 may control the memory device 200 in response to a request from the host device.
  • The controller 100 may provide data read from the memory device 200 to the host device, and may store data provided from the host device into the memory device 200.
  • The controller 100 may include a storage 110, a control component 120, which may be implemented as a processor such as a central processing unit (CPU), an error correction code (ECC) component 130, a host interface (I/F) 140 and a memory interface (I/F) 150, which are coupled through a bus 160.
  • The storage 110 may serve as a working memory of the memory system 10 and the controller 100, and store data for driving the memory system 10 and the controller 100.
  • The storage 110 may store data used by the controller 100 and the memory device 200 for such operations as read, write, program and erase operations.
  • The ECC component 130 may detect and correct errors in the data read from the memory device 200 during the read operation.
  • The ECC component 130 may not correct error bits when the number of the error bits is greater than or equal to a threshold number of correctable error bits, and instead may output an error correction fail signal indicating failure in correcting the error bits.
  • The memory interface 150 may provide an interface between the controller 100 and the memory device 200 to allow the controller 100 to control the memory device 200 in response to a request from the host device.
  • The memory interface 150 may generate control signals for the memory device 200 and process data under the control of the control component (or CPU) 120.
  • The memory device 200 may include a memory cell array 210, a control circuit 220, a voltage generation circuit 230, a row decoder 240, a page buffer (array) 250, which may be in the form of an array of page buffers, a column decoder 260, and an input/output circuit 270.
  • The memory cell array 210 may include a plurality of memory blocks 211 which may store data. Subsets of the memory blocks may be grouped into respective super blocks (SBs) for certain operations. SBs and their use in the context of embodiments of the present invention are described in more detail below.
  • The voltage generation circuit 230, the row decoder 240, the page buffer (array) 250, the column decoder 260 and the input/output circuit 270 may form a peripheral circuit for the memory cell array 210.
  • The peripheral circuit may perform a program, read, or erase operation on the memory cell array 210.
  • The control circuit 220 may control the peripheral circuit.
  • The page buffer (array) 250 may be in electrical communication with the memory cell array 210 through bit lines BL (shown in FIG. 3).
  • The page buffer (array) 250 may pre-charge the bit lines BL with a positive voltage, transmit data to, and receive data from, a selected memory block in program and read operations, or temporarily store transmitted data, in response to page buffer control signal(s) generated by the control circuit 220.
  • The input/output circuit 270 may transmit a command and an address received from an external device (e.g., the memory controller 100) to the control circuit 220, transmit data from the external device to the column decoder 260, or output data from the column decoder 260 to the external device.
  • As shown in FIG. 3, the exemplary memory block 211 may include a plurality of word lines WL0 to WLn-1, and a drain select line DSL and a source select line SSL coupled to the row decoder 240. These lines may be arranged in parallel, with the plurality of word lines between the DSL and SSL.
  • The memory blocks 211 may include NAND-type flash memory cells. However, the memory blocks 211 are not limited to that cell type, and may include NOR-type flash memory cell(s).
  • The memory cell array 210 may be implemented as a hybrid flash memory in which two or more types of memory cells are combined, or as a one-NAND flash memory in which a controller is embedded inside a memory chip.
  • The workload may be defined as a sequence of commands that the NAND flash memory device, e.g., memory device 200, receives from a host during a period of time, e.g., the lifetime of the memory device 200.
  • Commands are processed to group selected sequential commands, each defined by a command type, a logical block address (LBA) and a command length, into a cluster candidate.
  • The LBA and hit count of a start command and the LBA and hit count of an end command of the cluster candidate are stored in the hash table 115, which may be configured in SRAM or DRAM of the storage 110.
  • A cluster is then formed based on the entries in the hash table 115. Information describing each cluster allows the memory system 10 to perform defragmentation, which improves read performance.
  • The clustering technique generally comprises three parts: preprocessing of the commands, training, and cluster forming. While the present invention is applicable to any type of command, e.g., write, read or discard, the technique is described below in the context of write commands. Those skilled in the art will understand how to extend this teaching to other types of commands.
  • Preprocessing of commands includes merging certain commands. Two or more sequential commands may be considered a single command. For example, the sequential write commands (100, 10), (110, 20), (130, 15), (145, 15) are equivalent to the single write command (100, 60), as shown in FIG. 4. For brevity, each write command is presented with only its start address and length; the same shortened format is used for commands below. It may be assumed that the commands are write commands, although the present invention is not limited to that type of command. A sketch of this merging step follows.
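  • For illustration only (this is not the patent's implementation), a minimal Python sketch of the merging step, assuming each command is a (start_lba, length) tuple in the shortened format above:

    from typing import List, Tuple

    Command = Tuple[int, int]  # (start_lba, length)

    def merge_sequential(commands: List[Command]) -> List[Command]:
        """Merge runs in which each command starts exactly where the
        previous one ends, as in the FIG. 4 example."""
        merged: List[Command] = []
        for start, length in commands:
            if merged and merged[-1][0] + merged[-1][1] == start:
                prev_start, prev_len = merged[-1]
                merged[-1] = (prev_start, prev_len + length)  # extend the run
            else:
                merged.append((start, length))
        return merged

    # (100, 10), (110, 20), (130, 15), (145, 15) merge into (100, 60)
    assert merge_sequential([(100, 10), (110, 20), (130, 15), (145, 15)]) == [(100, 60)]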
  • The training part is described below.
  • The information that describes each cluster is stored in the hash table 115.
  • Each workload item, e.g., command, in the workload sequence is processed as follows.
  • The algorithm may generate a number R in the range [0, 1], which is compared to a threshold probability P th.
  • The hash function can be implemented as a modulo operation in the simplest case, or with a more complicated algorithm, e.g., a polynomial or universal hash function. An illustrative sketch follows.
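  • A sketch of the simplest case named above, plus a polynomial variant (the table size and polynomial base are assumed values, not taken from the patent):

    TABLE_SIZE = 1024  # assumed number of cells in the hash table

    def modulo_hash(lba: int) -> int:
        """Simplest case: map an LBA to a hash-table cell index by modulo."""
        return lba % TABLE_SIZE

    def polynomial_hash(lba: int, base: int = 31) -> int:
        """A more elaborate variant: polynomial hash over the LBA's bytes."""
        h = 0
        for byte in lba.to_bytes(8, "little"):
            h = (h * base + byte) % TABLE_SIZE
        return h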
  • In the process of FIG. 5, a command arrives at a workload predictor, which may be configured as part of the control component 120 (step 501).
  • At step 502, it is determined whether or not the LBA of that command is in the hash table 115, which indicates whether or not the command has previously been encountered in the workload sequence. If so, the LBA is handled within the hash table 115 (step 503). If not ("No" at step 502), a random number R in the range [0, 1] is generated at step 504.
  • At step 505, it is determined whether or not R ≥ P th.
  • If "Yes" at step 505, the LBA of the received command is added to the hash table 115 and the command is considered the first command of a new cluster (step 506). If "No" at step 505, this LBA is skipped, i.e., not added to the hash table 115 (step 507). After either step 506 or step 507, the process ends. A sketch of this selection flow follows.
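  • A compact sketch of this selection flow (illustrative; P_TH is an assumed value, the dict is a simplified stand-in for hash table 115, and the comparison direction follows the later example in which a command whose R is less than P th is skipped):

    import random

    P_TH = 0.5        # assumed threshold probability
    hash_table = {}   # LBA -> hit count

    def on_command(lba: int) -> None:
        if lba in hash_table:        # step 502: LBA seen before?
            hash_table[lba] += 1     # step 503: handle within the table
            return
        r = random.random()          # step 504: R in [0, 1]
        if r >= P_TH:                # step 505
            hash_table[lba] = 1      # step 506: first command of a new cluster
        # else: step 507, skip; the LBA is not added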
  • The command is then considered part of a cluster candidate.
  • In the example of FIG. 6, a first command in a workload sequence, having a start LBA of 128 and a length (Len) of 16, is added to the hash table as the first command of a selected cluster candidate.
  • The LBA (128) of that command and its hit count, which is 1 here, are added to a cell of the hash table 115.
  • FIG. 6 shows the state of the hash table 115 after a candidate cluster has been selected and has a first command.
  • The next CLUSTER_LENGTH-1 commands (two commands, namely (256, 16) and (400, 16) in this example) are not considered as the start of a cluster.
  • CLUSTER_LENGTH is an integer, and is 3 in this example.
  • When storing a command LBA, a cell may be in one of three possible states: Unknown, Rejected and Accepted.
  • Unknown may indicate that the number of processed commands in the workload is less than K, or that the LBA of the first command has appeared fewer than D1 times.
  • K is an integer representing the minimal number of processed commands required to change the state of the cluster to Rejected.
  • D1 is an integer representing the minimal number of times the start of the cluster candidate is met in the workload.
  • Rejected may indicate that the number of processed commands is greater than K and both the start of the cluster and the end of the cluster have appeared fewer than D2 times (while the start of the cluster may have appeared more than D1 times).
  • D2 is an integer representing the minimal number of times the start of the cluster candidate and the end of the cluster candidate are met in the workload. Otherwise, the state is Accepted.
  • The parameters K, D1 and D2 are tuning parameters, which may be determined and adjusted by a developer to optimize the accuracy of the clustering algorithm within the constraints of the memory system 10.
  • A larger K means a larger workload may be analyzed before making a decision.
  • K is constrained on the upper end of its range by limitations of the memory system 10.
  • D1 and D2 pertain to the accuracy of the algorithm. Larger D1 and D2 values indicate more certainty that a candidate is an actual cluster. The constraint on K, however, also constrains D1 and D2: if D1 and D2 are increased too much, it becomes difficult to detect a cluster, because the hit count cannot reach those values within K commands. The state rules are summarized in the sketch below.
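  • The three-state rule can be expressed in a short sketch (K, D1 and D2 are assumed values; the Rejected/Accepted boundary follows the wording above literally):

    from enum import Enum, auto

    K, D1, D2 = 20, 2, 3  # assumed tuning values

    class CellState(Enum):
        UNKNOWN = auto()
        REJECTED = auto()
        ACCEPTED = auto()

    def cell_state(processed_commands: int, start_hits: int, end_hits: int) -> CellState:
        # Unknown: fewer than K commands processed, or start seen fewer than D1 times
        if processed_commands < K or start_hits < D1:
            return CellState.UNKNOWN
        # Rejected: K exceeded, but start and end both seen fewer than D2 times
        if start_hits < D2 and end_hits < D2:
            return CellState.REJECTED
        return CellState.ACCEPTED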
  • In accordance with embodiments of the present invention, each cell of the hash table 115 may contain the number of times that a stored command has been encountered or found during the training stage (hit_count), and the number of LBAs processed before the stored command (hit_first_time). These items of information may be stored in an array in each cell. The command is to be added to this array after CLUSTER_LENGTH commands have been processed.
  • To improve accuracy, this array should hold more than one possible end command. The possible end commands in the array should be sorted in descending order with respect to their hit count values; thus, the possible end commands are stored as an array in descending order of the probability of being the end command (array_of_end_clusters). As new commands are added to the array, the least likely end commands can be removed. A sketch of such a cell structure follows.
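  • One possible in-memory layout for a cell (illustrative only; the capacity bound MAX_END_CANDIDATES is an assumption):

    from dataclasses import dataclass, field
    from typing import List

    MAX_END_CANDIDATES = 4  # assumed capacity of array_of_end_clusters

    @dataclass
    class EndCandidate:
        lba: int
        hit_count: int = 1

    @dataclass
    class HashCell:
        start_lba: int
        hit_count: int = 1       # times the start command was found during training
        hit_first_time: int = 0  # number of LBAs processed before this command
        array_of_end_clusters: List[EndCandidate] = field(default_factory=list)

        def add_end(self, lba: int) -> None:
            """Record a possible end command, keeping the array sorted in
            descending hit-count order and dropping the least likely ends."""
            for cand in self.array_of_end_clusters:
                if cand.lba == lba:
                    cand.hit_count += 1
                    break
            else:
                self.array_of_end_clusters.append(EndCandidate(lba))
            self.array_of_end_clusters.sort(key=lambda c: c.hit_count, reverse=True)
            del self.array_of_end_clusters[MAX_END_CANDIDATES:]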
  • In the example of FIG. 7, a command with LBA 128 and a length of 16 in a workload sequence is considered as the first command of a cluster candidate.
  • The LBA of this command is added to the hash table 115, along with the number of times the command has been found, which at this point is 1.
  • That command is followed by a second command (256, 16) and a third command (400, 16).
  • Information about the third command is added to the hash table 115 as the last command of the cluster candidate (end of cluster).
  • The hit count of this last command is 1 at this point.
  • The cluster candidate thus contains three commands, namely (128, 16), (256, 16) and (400, 16); the start of the cluster (128, 16) and the end of the cluster (400, 16) are stored in the hash table 115.
  • At this point, a first cluster candidate is still defined by a first command (128, 16) and a last command (400, 16), information on which is included in the hash table 115.
  • Command (320, 10) is then selected as the first command of a second cluster candidate.
  • The following two commands, (350, 16) and (370, 8), are considered as belonging to the second cluster, with the latter of the two defining the last command of that cluster candidate.
  • Information for the first command (320, 10) and the last command (370, 8) is added to the hash table 115 as a second cluster candidate, as shown in FIG. 10.
  • Next, a command (450, 15) is received in the workload sequence, and then the same three commands of the first cluster candidate are encountered again in the workload sequence.
  • Command (450, 15) is rejected as a candidate because the number R generated by the algorithm for this command is less than P th.
  • The last three commands in the workload sequence are considered as cluster candidates.
  • The hit count of each of commands (128, 16) and (400, 16) is updated in the hash table 115, as shown at T7 in FIG. 11.
  • The process continues to a point at which the workload sequence includes 21 commands, as shown in FIG. 12.
  • In this example, the cluster starting from command (128, 16) is met fewer than D2 times during the last K commands, and an LBA with the same hash has been selected as a new cluster candidate. Therefore, the cells of the hash table 115 holding information on the first cluster candidate are changed to Rejected. Every K commands, the hit count for each candidate is checked; if the hit count for a given candidate is less than D2, its status is set to Rejected. During the next K commands, if a new command with the same hash value is considered as a new candidate, the previously rejected candidate is replaced with the new one.
  • Here, D2 is 3: the pair (128, 16) (start of cluster) and (400, 16) (end of cluster) is encountered three times, which is not greater than D2. Therefore, cluster candidate (128, 16), (400, 16) is rejected.
  • FIGS. 7-11 and 13 illustrate another example. Steps 1-5 of this example are the same as those of the first example, which are described above in connection with FIGS. 7-11, respectively.
  • The process continues to a point at which the workload sequence includes 21 commands, as shown in FIG. 13.
  • In this case, the cluster candidate starting from command (128, 16) is met 4 times, which is more than D2 (3) times. Therefore, this cluster is accepted, and the hash table 115 is updated accordingly.
  • The cluster forming stage starts as soon as at least one of the cells in the hash table 115 enters the Accepted state.
  • In this stage, the algorithm analyzes each command; if the command is found in the hash table 115 and the corresponding cell is in the Accepted state, this command and the following CLUSTER_LENGTH commands are stored in an array provided by the FTL. Then, the LBA of the assumed end of the cluster, which is stored as the first element of array_of_end_clusters, is compared with the LBA of the (CLUSTER_LENGTH+1)-th command. If the LBAs have the same value, a notification that a cluster has been detected may be sent to the FTL. Otherwise, the accumulated data may be discarded, and the LBA of the (CLUSTER_LENGTH+1)-th command is processed as described in connection with the training stage. A simplified sketch of this detection step follows.
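  • A simplified detection sketch (illustrative only; here the assumed end LBA, i.e., the first element of array_of_end_clusters, is checked against the last command accumulated for the candidate):

    from typing import Dict, List, Optional, Tuple

    Command = Tuple[int, int]  # (start_lba, length)
    CLUSTER_LENGTH = 3         # cluster size used in the examples above

    def detect_cluster(window: List[Command],
                       accepted_ends: Dict[int, int]) -> Optional[List[Command]]:
        """window[0] is a command whose cell is Accepted; accepted_ends maps
        a start LBA to its most likely end LBA. Returns the detected cluster
        (to be reported to the FTL) or None (accumulated data discarded)."""
        if len(window) < CLUSTER_LENGTH:
            return None
        expected_end = accepted_ends.get(window[0][0])
        if expected_end is not None and window[CLUSTER_LENGTH - 1][0] == expected_end:
            return window[:CLUSTER_LENGTH]
        return None

    # Example from FIGS. 7-8: start (128, 16) with assumed end LBA 400
    assert detect_cluster([(128, 16), (256, 16), (400, 16)], {128: 400}) is not None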
  • FIG. 14 is a flow chart describing steps in processes for clustering workload items, in accordance with embodiments of the present invention.
  • The steps shown in flow chart 1400 are exemplary. Those skilled in the art will understand that additional and/or alternative steps may be performed, or that the order of steps may be changed, to effectuate aspects of the present invention without departing from the inventive concepts disclosed herein.
  • At step 1401, workload items, e.g., commands of the same type, are received in a workload sequence.
  • Sequential commands in the workload sequence are merged at step 1402 to constitute one or more single commands. That is, this merging operation may be performed for each set of sequential commands.
  • At step 1403, a start command for a cluster candidate is identified. Such identification may be subject to a probability condition, as previously explained.
  • The LBA and a hit count of that command are stored in a hash table, e.g., hash table 115, at step 1404.
  • At step 1405, an end command is identified for the cluster candidate.
  • The cluster candidate is examined at step 1406 to determine whether it appears in the workload sequence more than a threshold number of times. If so, the cluster candidate is accepted at step 1407. Accepted clusters may then be processed. A sketch tying these steps together follows.
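  • Purely as an illustration of how the steps of flow chart 1400 compose (the names, the selection rule and the parameter values are assumptions, not the patent's implementation):

    import random
    from typing import Dict, List, Tuple

    Command = Tuple[int, int]              # (start_lba, length)
    P_TH, D2, CLUSTER_LENGTH = 0.5, 3, 3   # assumed tuning values

    def cluster_workload(workload: List[Command]) -> List[Tuple[int, int]]:
        """Steps 1401-1407 in miniature: merge sequential commands (1402),
        pick candidate starts probabilistically (1403-1404), record candidate
        ends (1405), and accept (start, end) pairs seen more than D2 times
        (1406-1407). Returns accepted (start_lba, end_lba) pairs."""
        merged: List[Command] = []
        for start, length in workload:                     # step 1402
            if merged and merged[-1][0] + merged[-1][1] == start:
                merged[-1] = (merged[-1][0], merged[-1][1] + length)
            else:
                merged.append((start, length))

        starts: Dict[int, int] = {}                        # hash table 115 stand-in
        pairs: Dict[Tuple[int, int], int] = {}             # (start, end) -> hit count
        i = 0
        while i < len(merged):
            lba = merged[i][0]
            if lba in starts or random.random() >= P_TH:   # steps 1403-1404
                starts[lba] = starts.get(lba, 0) + 1
                end_idx = i + CLUSTER_LENGTH - 1           # step 1405
                if end_idx < len(merged):
                    key = (lba, merged[end_idx][0])
                    pairs[key] = pairs.get(key, 0) + 1
                i = end_idx + 1
            else:
                i += 1
        return [pair for pair, hits in pairs.items() if hits > D2]  # 1406-1407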
  • As described above, embodiments of the present invention provide techniques to detect clusters of workload items, e.g., commands, in a host workload transmitted to a storage device. As such, embodiments of the present invention enable defragmentation of fragmented file chunks to advantageously transform random read access into sequential read access. As a result, read performance is improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/420,746 US11163489B2 (en) 2018-05-23 2019-05-23 Workload clusterization for memory system and method of executing the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862675338P 2018-05-23 2018-05-23
US16/420,746 US11163489B2 (en) 2018-05-23 2019-05-23 Workload clusterization for memory system and method of executing the same

Publications (2)

Publication Number Publication Date
US20190361628A1 (en) 2019-11-28
US11163489B2 (en) 2021-11-02

Family

ID=68615235

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/420,746 Active US11163489B2 (en) 2018-05-23 2019-05-23 Workload clusterization for memory system and method of executing the same

Country Status (2)

Country Link
US (1) US11163489B2 (zh)
CN (1) CN110532195B (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102593757B1 (ko) * 2018-09-10 2023-10-26 SK Hynix Inc. Memory system and operating method of memory system
KR20210055875A (ko) * 2019-11-08 2021-05-18 Samsung Electronics Co., Ltd. Storage device, storage device system, and operating method thereof
CN112597733B (zh) * 2020-12-30 2022-07-15 Beijing Huada Jiutian Technology Co., Ltd. Memory cell identification method, device, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332846A1 * 2009-06-26 2010-12-30 Simplivity Corporation Scalable indexing
US8566317B1 (en) * 2010-01-06 2013-10-22 Trend Micro Incorporated Apparatus and methods for scalable object clustering
US20160216907A1 (en) 2015-01-22 2016-07-28 Silicon Motion, Inc. Data storage device and flash memory control method
US20170109096A1 (en) 2015-10-15 2017-04-20 Sandisk Technologies Inc. Detection of a sequential command stream
US20170228188A1 * 2016-02-09 2017-08-10 Samsung Electronics Co., Ltd. Automatic I/O stream selection for storage devices

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038651A (en) * 1998-03-23 2000-03-14 International Business Machines Corporation SMP clusters with remote resource managers for distributing work to other clusters while reducing bus traffic to a minimum
US9292379B2 (en) * 2013-09-28 2016-03-22 Intel Corporation Apparatus and method to manage high capacity storage devices
KR102593362B1 (ko) * 2016-04-27 2023-10-25 SK Hynix Inc. Memory system and operating method of memory system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332846A1 * 2009-06-26 2010-12-30 Simplivity Corporation Scalable indexing
US8566317B1 (en) * 2010-01-06 2013-10-22 Trend Micro Incorporated Apparatus and methods for scalable object clustering
US20160216907A1 (en) 2015-01-22 2016-07-28 Silicon Motion, Inc. Data storage device and flash memory control method
US20170109096A1 (en) 2015-10-15 2017-04-20 Sandisk Technologies Inc. Detection of a sequential command stream
US20170228188A1 * 2016-02-09 2017-08-10 Samsung Electronics Co., Ltd. Automatic I/O stream selection for storage devices

Also Published As

Publication number Publication date
CN110532195B (zh) 2023-04-07
CN110532195A (zh) 2019-12-03
US20190361628A1 (en) 2019-11-28

Similar Documents

Publication Publication Date Title
US10884947B2 (en) Methods and memory systems for address mapping
US10180805B2 (en) Memory system and operating method thereof
US10108472B2 (en) Adaptive read disturb reclaim policy
US10296452B2 (en) Data separation by delaying hot block garbage collection
US10102146B2 (en) Memory system and operating method for improving rebuild efficiency
US10943669B2 (en) Memory system and method for optimizing read threshold
US10482038B2 (en) Programmable protocol independent bar memory for SSD controller
US10403369B2 (en) Memory system with file level secure erase and operating method thereof
US11003587B2 (en) Memory system with configurable NAND to DRAM ratio and method of configuring and using such memory system
US10693496B2 (en) Memory system with LDPC decoder and method of operating such memory system and LDPC decoder
US10802761B2 (en) Workload prediction in memory system and method thereof
US10089255B2 (en) High performance host queue monitor for PCIE SSD controller
CN107977283B (zh) 具有ldpc解码器的存储器系统及其操作方法
US11163489B2 (en) Workload clusterization for memory system and method of executing the same
US10896125B2 (en) Garbage collection methods and memory systems for hybrid address mapping
CN110750380B (zh) 具有奇偶校验高速缓存方案的存储器系统以及操作方法
US20160283397A1 (en) Memory system and operating method thereof
US11538547B2 (en) Systems and methods for read error recovery
US11335417B1 (en) Read threshold optimization systems and methods using model-less regression
US11087846B1 (en) Memory system with single decoder, multiple memory sets and method for decoding multiple codewords from memory sets using the single decoder
US11356123B2 (en) Memory system with low-complexity decoding and method of operating such memory system
US11531605B2 (en) Memory system for handling program error and method thereof
US11023388B2 (en) Data path protection parity determination for data patterns in storage devices
US11115062B2 (en) Memory system with adaptive threshold decoding and method of operating such memory system
US11093382B2 (en) System data compression and reconstruction methods and systems

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: SK HYNIX INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YAROMENKA, YAUHENI;CHARNEVICH, ALIAKSEI;MO, KOO JOON;AND OTHERS;SIGNING DATES FROM 20190802 TO 20190806;REEL/FRAME:050010/0183

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE