CN106205708B - Cache memory device - Google Patents

Publication number: CN106205708B (application No. CN201510239685.XA)
Authority: CN (China)
Prior art keywords: data, group, unit, erased, area
Legal status: Active (granted)
Language: Chinese (zh); other version: CN106205708A
Inventor: 菅野伸一 (Shinichi Kanno)
Current assignee: Kioxia Corp
Original assignee / applicant: Toshiba Memory Corp
Priority claimed from U.S. application No. 14/656,559
Abstract

According to one embodiment, a cache memory device includes a non-volatile cache memory (4), a write unit (111), a determination unit (112), a selection unit (113), and an erase unit (115). The nonvolatile cache memory (4) includes a plurality of erase unit areas. Each of the erase unit areas includes a plurality of write unit areas. The write unit (111) writes data to the non-volatile cache memory (4). The determination unit (112) determines whether the plurality of erase unit areas satisfy an erase condition. The selection unit (113) selects an area to be erased from the plurality of erase unit areas when the plurality of erase unit areas satisfy the erase condition. The erasing unit (115) erases the data written to the area to be erased.

Description

Cache memory device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority from the following applications: U.S. provisional application No. 62/097,530, filed on December 29, 2014; Japanese patent application No. 2015-038997, filed on February 27, 2015; and U.S. non-provisional application No. 14/656,559, filed on March 12, 2015, all of which are incorporated herein by reference in their entirety.
Technical Field
Embodiments described herein relate generally to a cache memory device.
Background
Solid State Drives (SSDs) include non-volatile semiconductor memory such as NAND flash memory. The NAND flash memory includes a plurality of blocks (physical blocks). Each block includes a plurality of memory cells arranged at intersections of word lines and bit lines.
Disclosure of Invention
In general, according to one embodiment, a cache memory device includes a non-volatile cache memory, a write unit, a determination unit, a selection unit, and an erase unit. The nonvolatile cache memory includes a plurality of erase unit areas. Each of the erase unit areas includes a plurality of write unit areas. The write unit writes data to the non-volatile cache memory. The determination unit determines whether the plurality of erase unit areas satisfy an erase condition. The selection unit selects an area to be erased from the plurality of erase unit areas when the plurality of erase unit areas satisfy the erase condition. The erase unit erases the data written to the area to be erased.
Drawings
FIG. 1 is a block diagram showing a configuration example of an information processing device including a cache memory device according to a first embodiment;
FIG. 2 is a flowchart showing an example of first cache control of the first embodiment;
FIG. 3 is a flowchart showing an example of second cache control of the first embodiment;
FIG. 4 is a flowchart showing an example of third cache control of the first embodiment;
FIG. 5 is a flowchart showing an example of fourth cache control of the first embodiment;
FIG. 6 is a block diagram showing a configuration example of an information processing system according to a second embodiment;
FIG. 7 is a flowchart showing an example of a process performed by the information processing system according to the second embodiment;
FIG. 8 is a block diagram showing an example of a detailed configuration of an information processing system according to a third embodiment; and
FIG. 9 is a perspective view showing an example of a storage system according to the third embodiment.
Detailed Description
Embodiments will be described below with reference to the drawings. In the following description, the same reference numerals denote components having substantially the same functions and arrangements, and a repeated description of such components is given only where necessary.
[ first embodiment ]
A cache memory device including a nonvolatile cache memory is described in the present embodiment.
In the present embodiment, data is erased in common per erase unit area in the nonvolatile cache memory. The erase unit area includes a plurality of write unit areas and a plurality of read unit areas.
In the present embodiment, NAND flash memories are used as the nonvolatile cache memory and the nonvolatile memory. However, each of the nonvolatile cache memory and the nonvolatile memory may be a memory other than a NAND flash memory, provided that the memory satisfies the above-described relationship among the erase unit area, the write unit area, and the read unit area.
When the nonvolatile cache memory and the nonvolatile memory are NAND flash memories, the erase unit area corresponds to one block, and the write unit area and the read unit area each correspond to one page.
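The relationship described above, in which data is written per page but erased only per whole block, can be sketched as follows. This is an illustrative Python model, not the patent's implementation; the class name and page count are assumptions.

```python
PAGES_PER_BLOCK = 4  # illustrative; real NAND blocks hold many more pages

class Block:
    """An erase unit area containing PAGES_PER_BLOCK write unit areas (pages)."""
    def __init__(self):
        self.pages = []  # data items, appended one write unit area at a time

    def write_page(self, data):
        # Writing is performed per write unit area (page).
        if len(self.pages) >= PAGES_PER_BLOCK:
            raise RuntimeError("block full: erase before rewriting")
        self.pages.append(data)

    def erase(self):
        # Erasure is performed per erase unit area (the whole block at once).
        self.pages.clear()

b = Block()
b.write_page("a")
b.write_page("b")
b.erase()  # all pages of the block are erased together
```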
In the present embodiment, the erase unit area may instead be managed in another unit, for example two blocks, as long as the data in that unit can be erased together.
In this embodiment, an access denotes both writing data to and reading data from the memory device.
Fig. 1 is a block diagram showing a configuration example of an information processing device including a cache memory device according to the present embodiment.
The information processing system 35 includes the information processing apparatus 17 and the SSD 5. The information processing device 17 may be a host device corresponding to the SSD 5.
The information processing device 17 includes a processor 2, a memory 3, and a nonvolatile cache memory 4. The SSD 5 may be included in the information processing device 17, or may be connected to the information processing device 17 so as to transmit and receive data via a network or the like. Instead of the SSD 5, another nonvolatile memory device such as a Hard Disk Drive (HDD) may be used.
The information processing device 17 includes a cache memory device including a cache control unit 9, a memory 3 that stores management information 61 to 64, and a nonvolatile cache memory 4. However, all or a part of the cache control unit 9, the management information 61 to 64, the memory 3, and the nonvolatile cache memory 4 may be provided outside the information processing apparatus 17.
The nonvolatile cache memory 4 includes block groups BG 1 to BG 4. The nonvolatile cache memory 4 has a higher access speed than that of the SSD 5.
The block group (first group) BG 1 includes blocks (first erase unit areas) B 1,1 to B 1,K. The block group BG 1 stores data accessed by the processor 2 (i.e., data used by the processor 2).
In the present embodiment, when the block group BG 1 satisfies the erase condition (first erase condition), a block to be erased (a block to be discarded or pushed out) (first area to be erased) is selected from the blocks B 1,1 to B 1,K in the block group BG 1 on a first-in-first-out (FIFO) basis.
For example, an erase condition may be satisfied when the number of pages written to each of blocks B 1,1 -B 1,K of block group BG 1 exceeds a predetermined number.
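One way to read the page-count erase condition above is as a threshold on the number of written pages. The sketch below assumes each block is represented as a list of its written pages; the function name and threshold are illustrative assumptions, not from the patent.

```python
def satisfies_erase_condition(group_blocks, predetermined_number):
    # Erase condition sketch: the number of pages written across the
    # group's blocks exceeds a predetermined number.
    pages_written = sum(len(block) for block in group_blocks)
    return pages_written > predetermined_number

# Each inner list stands for the pages already written to one block.
blocks = [["d0", "d1"], ["d2"]]
satisfies_erase_condition(blocks, 2)  # 3 pages written > 2, so True
```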
Data written to the block to be erased selected from the blocks B 1,1 to B 1,K on a FIFO basis is written to the block group BG 2 when the data is in the first low usage state (for example, when the data is accessed less than a set first number of times or at less than a set first frequency).
The block group (second group) BG 2 includes blocks (second erase unit areas) B 2,1 to B 2,L. The block group BG 2 stores data in the first low usage state among the data written to a block to be erased selected from the block group BG 1.
In the present embodiment, when the block group BG 2 satisfies the erase condition (third erase condition), a block to be erased (third area to be erased) is selected from the blocks B 2,1 to B 2,L in the block group BG 2 based on the FIFO.
In contrast, when data written to the block to be erased selected from the blocks B 2,1 to B 2,L is in a third high usage state (for example, when the data is accessed a set third number of times or more, or at a set third frequency or more), the data is written to the block group BG 3.
The block group (third group) BG 3 includes blocks (third erase unit areas) B 3,1 to B 3,M. The block group BG 3 stores data in a first high usage state among the data written to a block to be erased selected from the block group BG 1. The block group BG 3 also stores data in the third high usage state among the data written to a block to be erased selected from the block group BG 2.
In the present embodiment, when the block group BG 3 satisfies the erase condition (second erase condition), a block to be erased (second area to be erased) is selected from the blocks B 3,1 to B 3,M in the block group BG 3 based on the FIFO.
Data written to the block to be erased selected from the blocks B 3,1 to B 3,M on a FIFO basis is written to the block group BG 4 when the data is in the second low usage state (for example, when the data is accessed less than a set second number of times or at less than a set second frequency).
The block group (fourth group) BG 4 includes blocks (fourth erase unit areas) B 4,1 to B 4,N. The block group BG 4 stores data in the second low usage state among the data written to a block to be erased selected from the block group BG 3.
In the present embodiment, when the block group BG 4 satisfies the erase condition (fourth erase condition), a block to be erased (fourth area to be erased) is selected from the blocks B 4,1 to B 4,N in the block group BG 4 based on the FIFO.
Data written to a block to be erased selected from blocks B 4,1 through B 4,N by FIFO is erased.
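FIFO selection of the block to be erased, as used for all four block groups, can be sketched with a queue ordered by write completion. The class and block identifiers are illustrative assumptions.

```python
from collections import deque

class BlockGroup:
    """Tracks blocks in the order their writing completed (FIFO)."""
    def __init__(self, name):
        self.name = name
        self.order = deque()

    def finish_block(self, block_id):
        # Called when writing to a block completes.
        self.order.append(block_id)

    def select_block_to_erase(self):
        # The block whose writing completed first is selected first.
        return self.order.popleft()

bg1 = BlockGroup("BG1")
bg1.finish_block("B1,1")
bg1.finish_block("B1,2")
bg1.select_block_to_erase()  # returns "B1,1", the oldest block
```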
The management information 61 to 64 corresponds to the block groups BG 1 to BG 4, respectively. The management information 61 to 64 includes, for example, identification information of data, information indicating whether the data is data to be deleted, and usage status information of the data. Based on the management information 61 to 64, the block having the largest amount of invalid data, or a block having an amount of invalid data greater than a predetermined amount, may be selected as the block to be erased.
The memory 3 stores various types of control data such as management information (lists) 61 to 64 and address conversion information 7. The memory 3 may be a volatile memory such as a Dynamic Random Access Memory (DRAM) or a Static Random Access Memory (SRAM), or may be a nonvolatile memory. The memory 3 may be included in a non-volatile cache 4.
In this embodiment, the cache control unit 9 can recognize identification information of cached data (for example, a logical address, such as a logical block address, provided from the host), the location to which the data is written, and the usage status of the data, based on the management information 61 to 64 and the address conversion information 7. For example, based on the management information 61 to 64 and the address conversion information 7, the cache control unit 9 can select the data cached in each of the block groups BG 1 to BG 4 and the blocks to be erased on a FIFO basis.
The management information 61 to 64 is metadata of the data written to the block groups BG 1 to BG 4, respectively. For example, the management information 61 to 64 includes information indicating the usage state of the respective data by the processor 2. For example, the management information 61 to 64 includes identification information of the respective data, deletion information indicating whether the data is data to be deleted, valid/invalid information indicating whether the data is valid data, and cache determination information used to determine whether the erase condition for a block to be erased is satisfied.
The deletion information is information indicating that a deletion command for data is issued. More specifically, the deletion information is information indicating that a deletion command for data is received from an application program or an Operating System (OS) executed by the processor 2, or the like. In the present embodiment, the deletion information includes, for example, information relating the identification information of each block to a logical address indicating data to be deleted written to each block.
The valid/invalid information is information indicating that, for example, the latest data is valid data and data other than the latest data is invalid data when the same data is written to a plurality of locations. In other words, for example, in the case where an update of data written to the nonvolatile cache memory 4 is performed, the valid data is the updated data. For example, in the case of performing an update, the invalid data is data that is not updated. In the present embodiment, the valid/invalid information includes, for example, information relating the identification information of each block to a logical address indicating valid data or invalid data written to each block.
The cache determination information is, for example, information including at least one of write information and read information per data item, or at least one of write information and read information per block.
For example, the writing information includes at least one of writing time, writing times, writing frequency, and writing order.
For example, the read information includes at least one of read time, read times, read frequency, and read order.
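A single management-information entry, with the fields enumerated above (identification, deletion flag, valid/invalid flag, and write/read counts as cache determination information), might be modeled as follows. The field names are assumptions for illustration, not the patent's data layout.

```python
from dataclasses import dataclass

@dataclass
class ManagementEntry:
    logical_address: int         # identification information of the data
    to_be_deleted: bool = False  # deletion information
    valid: bool = True           # valid/invalid information
    write_count: int = 0         # write information
    read_count: int = 0          # read information

    @property
    def access_count(self):
        # "Access" covers both writing and reading, as defined earlier.
        return self.write_count + self.read_count

entry = ManagementEntry(logical_address=0x10)
entry.read_count += 1  # recorded on each read
```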
For example, the address conversion information 7 relates a logical address of data to a physical address of the nonvolatile cache memory 4 corresponding to the logical address (e.g., a physical block address). For example, the address conversion information 7 is managed in a table form.
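The address conversion information can be pictured as a simple logical-to-physical map. The dict below stands in for the table form mentioned above; the physical-address tuple is an illustrative assumption.

```python
address_conversion = {}  # logical address -> physical location

def register(lba, physical):
    # Generated by the address conversion unit when data is written.
    address_conversion[lba] = physical

def translate(lba):
    # Conversion performed when the processor supplies a logical address.
    return address_conversion[lba]

register(42, ("B1,1", 0))  # e.g. block B1,1, page 0 (illustrative)
translate(42)
```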
The processor 2 functions as an address conversion unit 8 and a cache control unit 9 by executing programs stored in the memory of the processor 2, the memory 3, the nonvolatile cache memory 4, or the SSD 5.
In the present embodiment, the program to cause the processor 2 to function as the address conversion unit 8 and the cache control unit 9 may be an OS, middleware, or firmware, for example. In the present embodiment, all or a part of the address conversion unit 8 or all or a part of the cache control unit 9 may be implemented by hardware.
The address conversion unit 8 generates information relating the logical address of write data to a physical address indicating the location in the nonvolatile cache memory 4 where the write data is stored, and registers the generated information in the address conversion information 7.
When receiving a logical address of read data from the processor 2, the address conversion unit 8 converts the logical address into a physical address based on the address conversion information 7.
The cache control unit 9 performs cache control for the nonvolatile cache memory 4 having an access speed higher than that of the SSD 5. For example, the cache control unit 9 manages data and a logical address and a physical address indicating the data by a write-through method or a write-back method.
In the write-through method, data is stored in the nonvolatile cache 4 and also in the SSD 5.
In the write-back method, data stored in the nonvolatile cache 4 is not stored together in the SSD 5. The data is first stored in the non-volatile cache 4 and then the data pushed out of the non-volatile cache 4 is stored in the SSD 5.
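The two policies can be contrasted in a few lines. The dict-backed stores below are illustrative stand-ins for the nonvolatile cache memory 4 and the SSD 5, not the patent's implementation.

```python
cache, ssd = {}, {}

def write_through(lba, data):
    # Write-through: data is stored in the cache and also in the SSD.
    cache[lba] = data
    ssd[lba] = data

def write_back(lba, data):
    # Write-back: data is stored only in the cache at write time.
    cache[lba] = data

def push_out(lba):
    # Data pushed out of the cache reaches the SSD only at this point.
    ssd[lba] = cache.pop(lba)
```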
The cache control unit 9 includes a generation unit 10, control units 11 to 14, and change units 15 and 16.
The generation unit 10 generates the management information 61 to 64 corresponding to the block groups BG 1 to BG 4 and writes the management information 61 to 64 to the memory 3.
The control units 11 to 14 control data writing and block erasing for the block groups BG 1 to BG 4, respectively.
The control unit 11 includes a writing unit 111, a determination unit 112, a selection unit 113, a determination unit 114, and an erasing unit 115.
The write unit (first write unit) 111 writes data accessed by the processor 2 to the block group BG 1.
The determination unit (first determination unit) 112 determines whether the block group BG 1 satisfies an erase condition (first erase condition).
When the block group BG 1 satisfies the erase condition, the selection unit (first selection unit) 113 selects a block to be erased (first area to be erased) from the block group BG 1.
The determination unit (second determination unit) 114 determines, based on the management information 61, whether each data item written to the block to be erased is in the first high usage state or the first low usage state and whether each data item is data to be deleted.
The erasing unit (first erasing unit) 115 erases the block to be erased when each data item written to the block to be erased can be discarded because it has been written to the block group BG 2 or BG 3 or is data to be deleted.
The control unit 12 includes a writing unit 121, a determining unit 122, a selecting unit 123, a determining unit 124, and an erasing unit 125.
When the determination unit 114 determines that the data written to the block to be erased of the block group BG 1 is in the first low usage state and is not data to be deleted, the write unit (second write unit) 121 writes the data to the block group BG 2.
The determination unit (fifth determination unit) 122 determines whether the block group BG 2 satisfies the erase condition (third erase condition).
When the block group BG 2 satisfies the erase condition, the selection unit (third selection unit) 123 selects a block to be erased (third area to be erased) from the block group BG 2.
The determination unit 124 determines, based on the management information 62, whether each data item written to the block to be erased is in the third high usage state or the third low usage state and whether each data item is data to be deleted.
When data written to the block to be erased, which is in the third high usage state and is not data to be deleted, is written to the block group BG 3, the erasing unit (second erasing unit) 125 erases the data written to the block to be erased.
The control unit 13 includes a writing unit 131, a determining unit 132, a selecting unit 133, a determining unit 134, a writing unit 135, an erasing unit 136, and a writing unit 137.
When the determination unit 114 determines that the data written to the block to be erased of the block group BG 1 is in the first high usage state and is not data to be deleted, the write unit (third write unit) 131 writes the data to the block group BG 3.
When the data written to the block group BG 2 is in the third high usage state and is not data to be deleted, the write unit (sixth write unit) 137 writes the data to the block group BG 3. For example, when data written to the block group BG 2 is to be accessed by the processor 2, the write unit 137 may write that data to the block group BG 3.
The determination unit (third determination unit) 132 determines whether the block group BG 3 satisfies the erase condition (second erase condition).
When the block group BG 3 satisfies the erase condition, the selection unit (second selection unit) 133 selects a block to be erased (second area to be erased) from the block group BG 3.
The determination unit (fourth determination unit) 134 determines, based on the management information 63, whether each data item written to the block to be erased is in the second high usage state or the second low usage state and whether each data item is data to be deleted.
When the data written to the block to be erased of the block group BG 3 is determined to be in the second high usage state and is not data to be deleted, the write unit (fifth write unit) 135 writes the data again to another writable block in the block group BG 3.
The erase unit (third erase unit) 136 erases the block to be erased when each item of data written to the block to be erased can be discarded because each data item is written to the block group BG 4, is written again to the block group BG 3, or is data to be deleted.
Control unit 14 includes write unit 141, determination unit 142, selection unit 143, and erase unit 144.
When the determination unit 134 determines that the data written to the block to be erased of the block group BG 3 is in the second low usage state and is not the data to be deleted, the writing unit (fourth writing unit) 141 writes the data to the block group BG 4.
The determination unit (sixth determination unit) 142 determines whether the block group BG 4 satisfies the erase condition (fourth erase condition).
When the block group BG 4 satisfies the erase condition (fourth erase condition), the selecting unit (fourth selecting unit) 143 selects a block to be erased (fourth area to be erased) from the block group BG 4.
The erasing unit (fourth erasing unit) 144 erases data written to the block to be erased of the block group BG 4.
When data written to the block group BG 2 reaches the third high usage state, the change unit (first change unit) 15 increases the number of blocks included in the block group BG 1 and decreases the number of blocks included in the block group BG 3. For example, when data written to the block group BG 2 is accessed by the processor 2, the change unit 15 increases the number of blocks included in the block group BG 1 and decreases the number of blocks included in the block group BG 3.
When data written to the block group BG 4 reaches the fourth high usage state, the change unit (second change unit) 16 increases the number of blocks included in the block group BG 3 and decreases the number of blocks included in the block group BG 1. For example, when data written to the block group BG 4 is accessed by the processor 2, the change unit 16 increases the number of blocks included in the block group BG 3 and decreases the number of blocks included in the block group BG 1.
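The two units that vary the group sizes adjust BG 1 and BG 3 in opposite directions. A minimal sketch, assuming simple block-count targets (the counts and function names are illustrative assumptions):

```python
sizes = {"BG1": 8, "BG3": 8}  # illustrative block counts

def on_bg2_data_accessed():
    # First unit: a hit on BG2 data grows BG1 at BG3's expense.
    sizes["BG1"] += 1
    sizes["BG3"] -= 1

def on_bg4_data_accessed():
    # Second unit: a hit on BG4 data grows BG3 at BG1's expense.
    sizes["BG3"] += 1
    sizes["BG1"] -= 1

on_bg2_data_accessed()  # sizes becomes {"BG1": 9, "BG3": 7}
```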
FIG. 2 is a flowchart showing an example of the first cache control according to the present embodiment. FIG. 2 exemplarily shows a process in which data is written to the block group BG 1, the data is written to the block group BG 2 or BG 3, and a block to be erased in the block group BG 1 is erased.
In step S201, the writing unit 111 writes data accessed by the processor 2 to the block group BG 1.
In step S202, the determination unit 112 determines whether the block group BG 1 satisfies the erase condition.
When the block group BG 1 does not satisfy the erase condition, the process proceeds to step S206.
When the block group BG 1 satisfies the erase condition, in step S203, the selection unit 113 selects a block to be erased from the block group BG 1.
In step S204, the determination unit 114 determines, based on the management information 61, whether each data item written to the block to be erased is in the first high usage state or the first low usage state and whether each data item is data to be deleted (deletion target data).
When the data item is in the first low usage state and the data is not data to be deleted (non-deletion target data), in step S301, the writing unit 121 writes the data item to the block group BG 2.
When the data item is in the first high usage state and the data is not data to be deleted, in step S401, the writing unit 131 writes the data item to the block group BG 3.
When each data item written to the block to be erased can be discarded because it has been written to the block group BG 2 or the block group BG 3 or is data to be deleted, the erasing unit 115 erases the block to be erased in step S205.
In step S206, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S201.
When the cache control unit 9 ends the process, the process ends.
FIG. 3 is a flowchart showing an example of the second cache control according to the present embodiment. FIG. 3 exemplarily shows a process in which data is written to the block group BG 2 and a block to be erased in the block group BG 2 is erased.
When the data written to the block to be erased of the block group BG 1 is determined to be in the first low usage state and is not the data to be deleted in step S204, the writing unit 121 writes the data to the block group BG 2 in step S301.
In step S302, the determination unit 122 determines whether the block group BG 2 satisfies the erase condition.
When the block group BG 2 does not satisfy the erase condition, the process proceeds to step S306.
When the block group BG 2 satisfies the erase condition, the selection unit 123 selects a block to be erased from the block group BG 2 in step S303.
In step S304, the determination unit 124 determines, based on the management information 62, whether each data item written to the block to be erased is in the third high usage state or the third low usage state and whether each data item is data to be deleted.
When the data item is in the third low-use state or the data is to be deleted, the process proceeds to step S305.
When the data item is in the third high usage state and is not data to be deleted, in step S401, the writing unit 137 writes the data item to the block group BG 3.
In step S305, the erasing unit 125 erases the data written to the block to be erased of the block group BG 2.
In step S306, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S301.
When the cache control unit 9 ends the process, the process ends.
FIG. 4 is a flowchart showing an example of the third cache control according to the present embodiment. FIG. 4 exemplarily shows a process from writing data to the block group BG 3 to erasing data in the block group BG 3.
When the data written to the block to be erased of the block group BG 1 is determined in step S204 to be in the first high usage state and not data to be deleted, the write unit 131 writes the data to the block group BG 3 in step S401. When the data written to the block group BG 2 is determined in step S304 to be in the third high usage state (for example, the data is accessed by the processor 2) and not data to be deleted, the write unit 137 writes the data of the block group BG 2 to the block group BG 3.
In step S402, the determination unit 132 determines whether the block group BG 3 satisfies the erasing condition.
When the block group BG 3 does not satisfy the erase condition, the process proceeds to step S407.
When the block group BG 3 satisfies the erase condition, the selection unit 133 selects a block to be erased from the block group BG 3 in step S403.
In step S404, the determination unit 134 determines whether each data item written to the block to be erased is in the second high usage state or the second low usage state and whether each item of the data is data to be deleted, based on the management information 63.
When the data item is in the second low usage state and is not data to be deleted, in step S501, the writing unit 141 writes the data to the block group BG 4.
When the data is in the second high usage state and is not data to be deleted, in step S405, the write unit 135 writes the data written to the block to be erased of the block group BG 3 again to another block in the block group BG 3.
In step S406, when each item of data written to the block to be erased can be discarded because each data item is written to the block group BG 4, written again to the block group BG 3, or data to be deleted, the erase unit 136 erases the block to be erased.
In step S407, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S401.
When the cache control unit 9 ends the process, the process ends.
FIG. 5 is a flowchart showing an example of the fourth cache control according to the present embodiment. FIG. 5 exemplarily shows a process in which data is written to the block group BG 4 and data in the block group BG 4 is erased.
When the data written to the block to be erased of the block group BG 3 is determined in step S404 to be in the second low usage state and not data to be deleted, the writing unit 141 writes the data to the block group BG 4 in step S501.
In step S502, the determination unit 142 determines whether the block group BG 4 satisfies the erasing condition.
When the block group BG 4 does not satisfy the erase condition, the process proceeds to step S505.
When the block group BG 4 satisfies the erase condition, the selection unit 143 selects a block to be erased from the block group BG 4 in step S503.
In step S504, the erasing unit 144 erases the data written to the block to be erased of the block group BG 4.
In step S505, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S501.
When the cache control unit 9 ends the process, the process ends.
In the block group BG 1 of the present embodiment, for example, data is written first to block B 1,1, next sequentially to block B 1,2, and then similarly to blocks B 1,3 to B 1,K. When the amount of data in the blocks B 1,1 to B 1,K included in the block group BG 1 exceeds a predetermined amount, block B 1,1, in which writing was completed first, is erased on a FIFO basis, and data is again written sequentially to the erased block B 1,1. After the writing to block B 1,1 is completed, block B 1,2 is erased on a FIFO basis, and data is then written sequentially to the erased block B 1,2. The same control is repeated.
In the block group BG 1, it is determined, for example based on the management information 61, whether data written to a block to be erased in the block group BG 1 is accessed less than the first number of times or at less than the first frequency. When the data written to the block to be erased in the block group BG 1 is accessed less than the first number of times or at less than the first frequency, the block group BG 2 is selected as the write destination of the data.
In contrast, when data written to a block to be erased in the block group BG 1 is accessed a first number of times or more or at a first frequency or more, the block group BG 3 is selected as a write destination of the data.
When the data written to the block to be erased in the block group BG 1 is data to be deleted, the data is discarded.
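The three BG 1 routing rules above can be collected into one decision function. This is an illustrative sketch; the entry layout, parameter name, and return labels are assumptions, not the patent's interfaces.

```python
def route_from_bg1(entry, first_number):
    # entry: per-data management information (illustrative dict layout)
    if entry["to_be_deleted"]:
        return "discard"                      # data to be deleted is dropped
    if entry["access_count"] < first_number:
        return "BG2"                          # first low usage state
    return "BG3"                              # first high usage state

route_from_bg1({"to_be_deleted": False, "access_count": 1}, first_number=3)  # "BG2"
```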
In the block group BG 2 of the present embodiment, data in the first low usage state from the block group BG 1 is written first sequentially to block B 2,1, next to block B 2,2, and then similarly to blocks B 2,3 to B 2,L. When the amount of data in the blocks B 2,1 to B 2,L included in the block group BG 2 exceeds a predetermined amount, block B 2,1, in which writing was completed first, is erased on a FIFO basis, and data is again written sequentially to the erased block B 2,1. After the writing to block B 2,1 is completed, block B 2,2 is erased on a FIFO basis, and data is then written sequentially to the erased block B 2,2. The same control is repeated.
In the block group BG 2, it is determined whether data written to a block to be erased in the block group BG 2 has been accessed less than a third number of times or at less than a third frequency, for example, based on the management information 62.
In contrast, when data written to a block to be erased in the block group BG 2 is accessed a third number of times or more or at a third frequency or more, the block group BG 3 is selected as a write destination of the data.
When the data written to the block to be erased in the block group BG 2 is data to be deleted, the data is discarded.
In the block group BG 3 of the present embodiment, data in the first high use state from the block group BG 1, data in the third high use state from the block group BG 2, or re-written data from the block group BG 3 is sequentially written first to block B 3,1, next to block B 3,2, and then similarly to blocks B 3,3 to B 3,M. When the amount of data of blocks B 3,1 to B 3,M included in the block group BG 3 exceeds a predetermined amount of data, block B 3,1, in which writing was first completed, is erased according to FIFO, and data is again sequentially written to the erased block B 3,1. After the writing to block B 3,1 is completed, block B 3,2 is erased according to FIFO, and data is then sequentially written to the erased block B 3,2.
When the data written to the block to be erased in the block group BG 3 has been accessed less than a second number of times or at less than a second frequency, the block group BG 4 is selected as the write destination of the data.
In contrast, when the data written to the block to be erased in the block group BG 3 is accessed a second number of times or more or at a second frequency or more, the data is written to the block group BG 3 again.
When the data written to the block to be erased in the block group BG 3 is data to be deleted, the data is discarded.
In the block group BG 4 of the present embodiment, data in the second low usage state from the block group BG 3 is first sequentially written to block B 4,1, next sequentially written to block B 4,2, and then similarly written to blocks B 4,3 to B 4,N. When the data amount of blocks B 4,1 to B 4,N included in the block group BG 4 exceeds a predetermined data amount, block B 4,1, in which writing was first completed, is erased according to FIFO, and data is again sequentially written to the erased block B 4,1. After the writing to block B 4,1 is completed, block B 4,2 is erased according to FIFO, and data is then sequentially written to the erased block B 4,2. The same control is repeated.
When the data written to the block to be erased of the block group BG 4 is determined to be in the fifth high use state, the control unit 13 may write the data to a write destination block of the block group BG 3 in order to maintain the data in the nonvolatile cache memory 4.
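The destination-selection rules of the four block groups can be summarized in one hypothetical helper. The single `threshold` argument stands in for the first, second, third, and fifth numbers of times (or frequencies) in the text, and the behavior for low-use data pushed out of BG 2 or BG 4 (discard from the cache) is inferred from the FIFO push-out examples given below; both are assumptions of this sketch.

```python
def select_destination(group, access_count, threshold, to_delete):
    """Choose where data from a block to be erased goes, per the rules
    above. Returns the destination block group, or None to discard.
    Sketch only: one threshold stands in for the per-group counts."""
    if to_delete:
        return None                      # data to be deleted is discarded
    hot = access_count >= threshold
    if group == "BG1":
        return "BG3" if hot else "BG2"   # demote low-use data to BG2
    if group == "BG2":
        return "BG3" if hot else None    # low-use BG2 data leaves the cache
    if group == "BG3":
        return "BG3" if hot else "BG4"   # high-use data is rewritten to BG3
    if group == "BG4":
        return "BG3" if hot else None    # high-use BG4 data is kept via BG3
    raise ValueError(f"unknown block group: {group}")
```

The helper makes the symmetry visible: every high-use outcome routes data into BG 3, while the two "low-use" tiers BG 2 and BG 4 act as exits from the cache.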
In the present embodiment, data is managed based on four block groups BG 1 to BG 4.
For example, the first data (once accessed data) accessed once by the processor 2 is managed in the block group BG 1.
For example, if the second data in the block group BG 1 is accessed two or more times by the processor 2 and pushed out of the block group BG 1 based on the FIFO, the second data is moved from the block group BG 1 to the block group BG 3.
Note that, in the present embodiment, the size of the block group BG 1 is larger than the size of the block group BG 3.
For example, when the third data in block group BG 1 is pushed out of block group BG 1 based on the FIFO without being accessed by the processor 2, the third data is moved from block group BG 1 to block group BG 2.
For example, if the fourth data in block group BG 3 is cleared from block group BG 3 based on the FIFO without being accessed by the processor 2, the fourth data is moved from block group BG 3 to block group BG 4.
For example, in block groups BG 2 and BG 4, metadata can be cached instead of caching data.
In the present embodiment, for example, when the fifth data is stored in the block group BG 1, the sixth data in the block group BG 2 may be pushed out based on the FIFO.
For example, when the seventh data in the block group BG 1 is accessed and pushed out of the block group BG 1 based on the FIFO, the seventh data may be moved from the block group BG 1 to the block group BG 3, the eighth data in the block group BG 3 may be moved from the block group BG 3 to the block group BG 4 based on the FIFO, and the ninth data in the block group BG 4 may be pushed out of the block group BG 4 based on the FIFO.
If the size of the block group BG 1 increases, the eleventh data in the block group BG 3 is moved to the block group BG 4 based on a FIFO.
For example, when the twelfth data in the block group BG 4 is accessed and pushed out of the block group BG 4 based on the FIFO, the twelfth data is moved to the block group BG 3 and the size of the block group BG 1 is reduced.
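The size adaptation sketched in the last few paragraphs (and made explicit in claim 5) resembles adaptive replacement: a hit on data demoted to BG 2 argues for a larger BG 1, while a hit on data demoted to BG 4 argues for a larger BG 3. A minimal sketch, assuming one block moves per hit and a minimum group size of one block (both assumptions of the example):

```python
def adapt_sizes(sizes, hit_group):
    """Shift capacity between BG1 and BG3 when demoted data is re-accessed.
    sizes: {"BG1": n, "BG3": m} block counts, mutated in place.
    Illustrative sketch; the one-block step is an assumption."""
    if hit_group == "BG2" and sizes["BG3"] > 1:
        sizes["BG1"] += 1     # BG1 was too small: it evicted useful data
        sizes["BG3"] -= 1
    elif hit_group == "BG4" and sizes["BG1"] > 1:
        sizes["BG3"] += 1     # BG3 was too small: it evicted useful data
        sizes["BG1"] -= 1
    return sizes
```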
In the present embodiment described above, the maintenance determination determines whether or not to maintain data on a block-by-block basis, the transfer write writes the block data to be maintained to a destination block, and the data written to the nonvolatile cache memory 4 is erased on a block-by-block basis.
In the embodiment, the effective cache capacity can be increased, the hit rate of the nonvolatile cache memory 4 can be increased, and the speed of the information processing apparatus 17 can be increased.
In the present embodiment, since garbage collection is not performed for the nonvolatile cache memory 4, a decrease in performance caused by garbage collection can be avoided. Since garbage collection is not necessary, the number of writes to the nonvolatile cache memory 4 can be reduced and the lifetime of the nonvolatile cache memory 4 can be increased. Further, since garbage collection is not necessary, an over-provisioning area does not need to be secured. Accordingly, the data capacity usable as a cache memory can be increased, and the use efficiency can be improved.
For example, when using non-volatile memory as a cache and discarding data regardless of block boundaries, garbage collection may be performed frequently to move valid data in a block of non-volatile memory to another block. In the present embodiment, there is no need to perform garbage collection in the nonvolatile cache memory 4. Therefore, as described above, in the present embodiment, the lifetime of the nonvolatile cache memory 4 can be increased.
[ second embodiment ]
The present embodiment is a modified example of the first embodiment. In the present embodiment, data and information transmission and reception between the information processing device 17 including the cache control unit 9 and the SSD5 are described.
In the present embodiment, a logical address is used as identification information of data. However, the data may be identified by other information.
Fig. 6 is a block diagram showing a configuration example of the information processing system 35 according to the present embodiment.
The cache control unit 9 further includes a transmitting unit 18, a receiving unit 19, a writing unit 20, and a transmitting unit 21, in addition to the constituent elements described in the first embodiment.
The transmission unit 18 transmits the write data for the SSD5 and the address of the write data to the SSD 5. In the present embodiment, the address transmitted from the transmission unit 18 to the SSD5 is a logical address, for example.
The reception unit 19 receives, from the SSD5, block information containing a logical address indicating valid data written to a block to be subjected to garbage collection.
In the present embodiment, the block information may include information that correlates identification information of each block in the SSD5 with identification information of data written to each block.
The write unit 20 writes (transcribes) all or a part of valid data indicated by a logical address included in the block information to a memory other than the nonvolatile memory 24 based on the block information and the management information 61 to 64 received from the SSD 5. For example, the other memory may be the non-volatile cache 4.
For example, in the case of receiving a delete command, the write unit 20 excludes logical addresses indicating data to be deleted (deletion candidates) from the logical addresses indicating valid data included in the block information. Thus, of the data written to a block to be subjected to garbage collection, valid data that is not data to be deleted is selected. The write unit 20 writes the selected data to the other memory.
The transmission unit 21 generates deletion information including logical addresses indicating data to be deleted and transmits the deletion information to the SSD 5. For example, the deletion information may include, among the logical addresses indicating valid data included in the block information, the logical addresses of deletion-target data that the write unit 20 did not write to the other memory. Instead of the deletion information, maintenance information including the logical addresses of data to be maintained may be transmitted from the transmission unit 21 to the SSD 5.
SSD5 includes processor 22, memory 23, and non-volatile memory 24.
For example, the memory 23 stores various types of control data such as address conversion information 32, valid/invalid information 33, and deletion information 34. The memory 23 may be a volatile memory such as a DRAM or an SRAM, or may be a nonvolatile memory. Memory 23 may be included in non-volatile memory 24.
The processor 22 functions as an address conversion unit 25, a write unit 26, a valid/invalid generation unit 27, a selection unit 28, a transmission unit 29, a reception unit 30, and a garbage collection unit 31 by executing a program stored in a memory in the processor 22, a program stored in the memory 23, or a program stored in the nonvolatile memory 24.
In the present embodiment, the program to cause the processor 22 to function as the address conversion unit 25, the write unit 26, the valid/invalid generation unit 27, the selection unit 28, the transmission unit 29, the reception unit 30, and the garbage collection unit 31 may be an OS, middleware, or firmware, for example. In the present embodiment, all or a part of the address conversion unit 25, the write unit 26, the valid/invalid generation unit 27, the selection unit 28, the transmission unit 29, the reception unit 30, and the garbage collection unit 31 may be implemented by hardware.
When write data and a logical address of the write data are received from the cache control unit 9, the address conversion unit 25 generates information that correlates the logical address of the write data with a physical address indicating a location in the nonvolatile memory 24 where the write data is stored, and registers the information to the address conversion information 32.
In the present embodiment, the address translation unit 25 is implemented by the processor 22. However, the address translation unit 25 may be configured separately from the processor 22.
Address translation unit 25 translates addresses based on, for example, table-form address translation information 32. Instead, addresses may be translated through key-value retrieval. For example, address translation may be implemented by means of key-value retrieval using a logical address as a key and a physical address as a value.
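Address translation by key-value retrieval, as just described, can be sketched with the logical address as the key and the physical address as the value. The class name and the (block, page) tuple format are illustrative assumptions, not the layout of the address conversion information 32:

```python
class AddressTranslator:
    """Key-value address translation: logical address (LBA) as the key,
    physical address (PBA) as the value. A dict stands in for the
    table-form address conversion information."""

    def __init__(self):
        self.table = {}

    def register(self, lba, pba):
        # on a write: relate the logical address of the write data to
        # the physical location where the data is stored
        self.table[lba] = pba

    def translate(self, lba):
        return self.table.get(lba)   # None means the LBA is not mapped

t = AddressTranslator()
t.register(lba=0x10, pba=(2, 7))     # block 2, page 7 (illustrative)
```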
The writing unit 26 writes write data to a position indicated by the physical address obtained by the address conversion unit 25.
The valid/invalid generation unit 27 generates valid/invalid information 33 indicating whether each item of data written to the nonvolatile memory 24 is valid data or invalid data based on, for example, the address conversion information 32. Subsequently, the valid/invalid generation unit 27 stores the valid/invalid information 33 in the memory 23.
The selection unit 28 selects a block to be subjected to garbage collection.
For example, the selection unit 28 may select the block with the oldest write time from the blocks in the non-volatile memory 24 as the block to be subjected to garbage collection.
For example, the selection unit 28 may randomly select a block to be subjected to garbage collection from the blocks in the non-volatile memory 24.
For example, the selection unit 28 may select, based on the valid/invalid information 33, the block having the largest amount of invalid data, or an amount of invalid data greater than a predetermined amount, as the block to be subjected to garbage collection.
For example, the selection unit 28 may select, based on the valid/invalid information 33 and the deletion information 34, the block having the largest combined amount of invalid data and data to be deleted, or a combined amount greater than a predetermined amount, as the block to be subjected to garbage collection.
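The alternative victim-selection policies listed above might be sketched as follows; the dictionary-based bookkeeping (write times and per-block amounts of invalid and deletion-target data) is an assumption of the example, not the patent's data layout.

```python
import random

def select_gc_victim(blocks, invalid_amount, delete_amount, policy):
    """Select a block to be subjected to garbage collection.
    blocks: {block_id: write_time}; invalid_amount / delete_amount:
    per-block amounts of invalid data and data to be deleted."""
    if policy == "oldest":
        return min(blocks, key=blocks.get)            # oldest write time
    if policy == "random":
        return random.choice(list(blocks))
    if policy == "most_invalid":
        return max(blocks, key=lambda b: invalid_amount.get(b, 0))
    if policy == "most_invalid_and_deleted":
        return max(blocks, key=lambda b: invalid_amount.get(b, 0)
                                         + delete_amount.get(b, 0))
    raise ValueError(f"unknown policy: {policy}")
```

Note how including the deletion-target amount can change the victim: a block full of soon-to-be-deleted data is cheap to reclaim even if little of it is formally invalid yet.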
The transmission unit 29 generates block information by excluding the logical addresses of data determined to be invalid by the valid/invalid information 33 from the logical addresses of data written to a block to be subjected to garbage collection. In other words, the block information contains information that correlates identification information of a block to be subjected to garbage collection with logical addresses indicating valid data written to the block. The transmission unit 29 transmits the block information to the cache control unit 9.
The receiving unit 30 receives the deletion information from the cache control unit 9 and stores the deletion information 34 in the nonvolatile memory 24.
The garbage collection unit 31 excludes invalid data and data to be deleted from data written to a block to be subjected to garbage collection based on the valid/invalid information 33 and the deletion information 34 stored in the nonvolatile memory 24, and performs garbage collection only for valid data that is not data to be deleted.
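The filtering performed by the garbage collection unit 31 amounts to copying only data that is both valid and not marked for deletion before the block is erased. A hedged sketch, with predicates standing in for the valid/invalid information 33 and deletion information 34:

```python
def collect_block(block_data, is_valid, is_delete_target):
    """Return the data that garbage collection must copy to another
    block; everything else is simply dropped when the block is erased."""
    return [d for d in block_data
            if is_valid(d) and not is_delete_target(d)]

data = ["a", "b", "c", "d"]
valid = {"a", "b", "d"}            # "c" is invalid data
deletes = {"b"}                    # "b" is data to be deleted
moved = collect_block(data, valid.__contains__, deletes.__contains__)
# only "a" and "d" need to be rewritten; "b" and "c" vanish with the erase
```

This is what saves writes: the deletion information lets the SSD avoid copying data the host has already decided to drop.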
Fig. 7 is a flowchart showing an example of a process performed by the information processing system according to the present embodiment.
In step S701, the transmission unit 18 transmits the write data and the logical address to the SSD 5.
In step S702, the address conversion unit 25 receives the write data and the logical address, and registers information relating the logical address of the write data and the physical address to the address conversion information 32.
In step S703, the writing unit 26 writes the write data to the position indicated by the physical address in the nonvolatile memory 24.
In step S704, the valid/invalid generation unit 27 generates valid/invalid information 33 indicating whether each data item written to the nonvolatile memory 24 is valid data or invalid data and stores the valid/invalid information 33 in the memory 23.
In step S705, the selection unit 28 selects a block to be subjected to garbage collection.
In step S706, the transmission unit 29 generates block information by excluding the logical addresses of data indicated as invalid by the valid/invalid information 33 from the logical addresses of the data written to the block to be subjected to garbage collection, and transmits the block information to the cache control unit 9.
In step S707, the reception unit 19 receives the block information from the SSD 5.
In step S708, the writing unit 20 writes all or a part of the data indicated by the logical address contained in the block information to a memory other than the nonvolatile memory 24 of the SSD5 based on the block information received from the SSD5 and the management information 61 to 64.
For example, in the case of receiving a delete command, the write unit 20 excludes the logical addresses indicating data to be deleted from the logical addresses included in the block information, and writes the data to be maintained, indicated by the remaining logical addresses, to the other memory.
In step S709, the transmission unit 21 transmits deletion information containing the logical address of the data to be deleted to the SSD 5.
In step S710, the receiving unit 30 receives the deletion information from the cache control unit 9 and stores the deletion information 34 in the memory 23.
In step S711, the garbage collection unit 31 excludes the invalid data and the data to be deleted from the data written to the block to be subjected to garbage collection based on the valid/invalid information 33 and the deletion information 34, and performs garbage collection only on the valid data other than the data to be deleted.
In the present embodiment described above, the cache control unit 9 can acquire information on the data written to the block of the nonvolatile memory 24 from the SSD 5. The cache control unit 9 may thereby recognize the write status of data in the block of non-volatile memory 24. For example, in the present embodiment, it is possible to recognize whether data written to a block of the nonvolatile memory 24 is valid data or invalid data and whether the data can be deleted.
In the present embodiment, the SSD5 includes the valid/invalid information 33 to determine whether data is valid or invalid, and the deletion information 34 to determine whether data can be deleted. Thereby, when garbage collection is performed in the SSD 5, it is possible to determine whether data written to a block to be subjected to garbage collection should be transcribed or simply erased. Accordingly, unnecessary data writes can be avoided and the lifetime of the non-volatile memory 24 can be increased.
In the present embodiment, the cache control unit 9 can prevent deletion-target data among the valid data indicated by the logical addresses contained in the block information received from the SSD 5 from being transcribed from the nonvolatile memory 24 to another memory. In the present embodiment, the SSD 5 may delete, from the SSD 5, data that the cache control unit 9 did not transcribe to another memory (for example, invalid data or deletable valid data).
In the present embodiment described above, the block information relating to the block to be erased is transmitted from the SSD 5 to the information processing device 17. However, the block information may, for example, include information relating each block in the non-volatile memory 24 to identification information of the data written to that block. By receiving this relation information from the SSD 5, the information processing device 17 can recognize the correspondence between blocks and data stored in the SSD 5.
[ third embodiment ]
In the present embodiment, the information processing system 35 including the information processing device 17 and the SSD 5 explained in the first embodiment and the second embodiment is explained in further detail.
Fig. 8 is a block diagram showing an example of a detailed structure of the information processing system 35 according to the present embodiment.
The information processing system 35 includes an information processing device 17 and a memory system 37.
The SSD5 according to the first embodiment and the second embodiment corresponds to the memory system 37.
The processor 22 of the SSD5 corresponds to the CPU 43B.
The address conversion information 32 corresponds to a LUT (look-up table) 45.
The memory 23 corresponds to the DRAM 47.
the information processing apparatus 17 functions as a host apparatus.
The controller 36 of the memory system 37 includes a front end 4F and a back end 4B.
The front end (host communication unit) 4F includes a host interface 41, a host interface controller 42, an encoding/decoding unit (advanced encryption standard (AES))44, and a CPU 43F.
The host interface 41 communicates with the information processing device 17 to exchange requests (write command, read command, erase command), LBAs (logical block addressing), and data.
A host interface controller (control unit) 42 controls communication of the host interface 41 based on control of the CPU 43F.
The encoding/decoding unit 44 encodes write data (plaintext) transmitted from the host interface controller 42 in a data write operation. The encoding/decoding unit 44 decodes encoded read data transmitted from the read buffer RB of the back end 4B in a data read operation. It should be noted that, depending on the command, write data and read data may be transferred without passing through the encoding/decoding unit 44.
The CPU 43F controls the above components 41, 42, and 44 of the front end 4F to control the entire function of the front end 4F.
The back end (memory communication unit) 4B includes a write buffer WB, a read buffer RB, an LUT45, a DDRC 46, a DRAM 47, a DMAC 48, an ECC 49, a randomizer RZ, a NANDC 50, and a CPU 43B.
The write buffer (write data transfer unit) WB temporarily stores write data transmitted from the information processing apparatus 17. Specifically, the write buffer WB temporarily stores the data until it reaches a predetermined data size suitable for the non-volatile memory 24.
The read buffer (read data transfer unit) RB temporarily stores read data read from the nonvolatile memory 24. Specifically, the read buffer RB rearranges the read data into an order suitable for the information processing apparatus 17 (the order of the logical addresses LBA specified by the information processing apparatus 17).
The LUT45 is data to convert the logical address LBA into a physical address PBA (physical block addressing).
The DDRC 46 controls Double Data Rate (DDR) in the DRAM 47.
The DRAM 47 is a volatile memory storing, for example, the LUT 45.
A Direct Memory Access Controller (DMAC)48 transfers write data and read data via the internal bus IB. In FIG. 8, only a single DMAC 48 is shown; however, the controller 36 may include two or more DMACs 48. The DMAC 48 may be set in various locations within the controller 36.
An ECC (error correction unit) 49 adds an Error Correction Code (ECC) to the write data transmitted from the write buffer WB. When the read data is transmitted to the read buffer RB, the ECC 49 corrects the read data read from the non-volatile memory 24 using the added ECC, if necessary.
In a data write operation, the randomizer RZ (or scrambler) spreads the write data so that the write data is not biased toward a certain page or toward the word-line direction of the non-volatile memory 24. By spreading the write data in this way, the number of writes can be equalized and the cell life of the memory cells MC of the nonvolatile memory 24 can be extended. Therefore, the reliability of the nonvolatile memory 24 can be improved. In addition, in a data read operation, read data read from the nonvolatile memory 24 also passes through the randomizer RZ.
The NAND controller (NANDC)50 uses multiple channels (four channels CH 0-CH 3 are shown) to access the non-volatile memory 24 in parallel to meet the demand for a certain speed.
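Parallel access over multiple channels can be pictured as striping consecutive writes across CH0 to CH3; this round-robin sketch is purely illustrative and says nothing about the NANDC's actual scheduling.

```python
def stripe(pages, num_channels=4):
    """Round-robin page writes across NAND channels so the controller
    can issue them in parallel (illustrative; real scheduling differs)."""
    lanes = [[] for _ in range(num_channels)]
    for i, page in enumerate(pages):
        lanes[i % num_channels].append(page)
    return lanes
```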
The CPU 43B controls each of the above components (45 to 50 and RZ) of the backend 4B to control the entire function of the backend 4B.
It should be noted that the structure of the controller 36 is merely an example and is not intended to be limiting.
Fig. 9 is a perspective view showing an example of the storage system according to the present embodiment.
The storage system 100 includes a memory system 37 as an SSD.
For example, the memory system 37 is a relatively small module of approximately 20 mm × 30 mm in external size. It should be noted that the size and dimensions of the memory system 37 are not limited thereto and may be arbitrarily changed to various sizes.
Further, the memory system 37 may be applied to the information processing apparatus 17 as a server used in a data center or a cloud computing system employed in a company (enterprise) or the like. Thus, the memory system 37 may be an enterprise SSD (eSSD).
For example, the memory system 37 includes a plurality of connectors (e.g., slots) 38 that open upward. Each connector 38 is a Serial Attached SCSI (SAS) connector or the like. With the SAS connector, high-speed communication can be established between the information processing device 17 and each memory system 37 through a 6 Gbps dual port. It should be noted that the connector 38 may be a PCI Express (PCIe) or NVM Express (NVMe) connector.
A plurality of memory systems 37 are individually attached to the connectors 38 of the information processing device 17 and are supported in an arrangement such that they stand in a substantially vertical direction. With this structure, a plurality of memory systems 37 can be mounted collectively in a compact size, and the memory systems 37 can be miniaturized. Further, each memory system 37 of the present embodiment has a 2.5-inch small form factor (SFF). Due to this shape, the memory system 37 is compatible with enterprise HDDs (eHDDs), and simple system compatibility with eHDDs can be achieved.
It should be noted that the memory system 37 is not limited to use in an enterprise HDD. For example, the memory system 37 may be used as a storage medium for a consumer electronic device such as a notebook computer or a tablet terminal.
As can be understood from the above, the information processing system 35 and the storage system 100 having the structure of the present embodiment can realize mass storage while providing the same advantages as the second embodiment.
The structure of the memory system 37 according to the present embodiment is applicable to the information processing apparatus 17 according to the first embodiment. For example, the processor 2 according to the first embodiment may correspond to the CPU 43B. The address translation information 7 may correspond to the LUT 45. The memory 3 corresponds to the DRAM 47. The non-volatile cache memory 4 may correspond to the non-volatile memory 24.
while certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the invention. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. It is intended that the appended claims and their equivalents cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (15)

1. A cache memory device, comprising:
A nonvolatile cache memory including a first group including a plurality of first erase unit areas, a second group including a plurality of second erase unit areas, and a third group including a plurality of third erase unit areas, each of the first to third erase unit areas including a plurality of write unit areas;
A first write unit that writes data to the first group;
A generation unit that generates management information indicating a use state of the data written to the first group;
A first determination unit that determines whether the first group satisfies a first erase condition;
a first selection unit that selects a first area to be erased from the first group when the first group satisfies the first erase condition;
A second determination unit that determines whether the data written to the first area to be erased is in a first high-use state or a first low-use state based on the management information;
A second write unit that writes the data to the second group when the data is determined to be in the first low usage state;
A third write unit that writes the data to the third group when the data is determined to be in the first high usage state; and
A first erasing unit that erases the data written to the first area to be erased.
2. The cache memory device according to claim 1, wherein
The non-volatile cache further comprises a fourth group including a plurality of fourth erase unit areas, and
The cache memory device further comprises:
A third determination unit that determines whether the third group satisfies a second erase condition;
A second selecting unit that selects a second region to be erased from the third group when the third group satisfies the second erasing condition;
A fourth determination unit that determines whether the data written to the second area to be erased is in a second high usage state or a second low usage state based on the management information;
a fourth writing unit that writes the data written to the second area to be erased to the fourth group when the data written to the second area to be erased is determined to be in the second low usage state;
A fifth writing unit that writes the data written to the second area to be erased to the third group again when the data written to the second area to be erased is determined to be in the second high usage state; and
a second erasing unit that erases the data written to the second area to be erased.
3. The cache memory device according to claim 1, wherein
The first determination unit determines that the first erase condition is satisfied when an amount of data of the first group exceeds a predetermined amount.
4. The cache memory device according to claim 1, wherein
The first selection unit selects the first area to be erased from the first group on a first-in first-out basis.
5. The cache memory device according to claim 2, further comprising:
A first change unit that increases the number of the first erase unit areas included in the first group and decreases the number of the third erase unit areas included in the third group when the data written to the second group reaches a third high usage state; and
A second change unit that increases the number of the third erase unit areas included in the third group and decreases the number of the first erase unit areas included in the first group when the data written to the fourth group reaches a fourth high use state.
6. The cache memory device according to claim 5, further comprising:
A sixth write unit that writes the data written to the second group to the third group when the data written to the second group reaches the third high use state.
7. The cache memory device according to claim 2, further comprising:
A fifth determination unit that determines whether the second group satisfies a third erase condition;
a third selecting unit that selects a third to-be-erased area from the second group when the second group satisfies the third erasing condition;
A third erasing unit that erases the data written to the third area to be erased;
A sixth determination unit that determines whether the fourth group satisfies a fourth erase condition;
A fourth selecting unit that selects a fourth area to be erased from the fourth group when the fourth group satisfies the fourth erasing condition; and
A fourth erasing unit that erases the data written to the fourth area to be erased.
8. The cache memory device according to claim 7, wherein
The third selection unit selects, as the third area to be erased, an erase unit area indicating that the first write time or the first write order is old based on first write times or first write orders of the plurality of second erase unit areas, and
The fourth selecting unit selects, as the fourth area to be erased, an erase unit area indicating that a second writing time or a second writing order of the plurality of fourth erase unit areas is old, based on the second writing time or the second writing order.
9. A cache memory device, comprising:
a non-volatile cache memory; and
a control unit that controls the non-volatile cache memory, wherein
the non-volatile cache memory includes a first group including a plurality of first erase unit areas, a second group including a plurality of second erase unit areas, and a third group including a plurality of third erase unit areas,
each of the first to third erase unit areas includes a plurality of write unit areas, and
the control unit is configured to:
write data to the first group;
generate management information including information indicating a use state of the write data written to the first to third groups and information indicating whether the write data is data to be deleted;
select an area to be erased from the first group when the first group satisfies an erase condition;
select, based on the management information, the second group as a write destination group of the data written to the area to be erased when that data is determined to be in a low use state and is not data to be deleted, and select the third group as the write destination group when that data is determined to be in a high use state and is not data to be deleted;
write the data written to the area to be erased to the second group when the second group is selected as the write destination group, and write that data to the third group when the third group is selected as the write destination group; and
erase the data written to the area to be erased.
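The control flow of claim 9 resembles garbage collection with hot/cold separation: live data in the victim area is re-written to the second group (low use) or third group (high use), data flagged for deletion is simply dropped, and the area is then erased. A minimal sketch, with illustrative names not taken from the patent:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ManagementInfo:
    """Per-data management entry: identification, delete flag, and use state."""
    data_id: int
    to_be_deleted: bool
    high_use: bool  # True = high use state, False = low use state

def migrate_and_erase(area: List[int],
                      mgmt: Dict[int, ManagementInfo],
                      second_group: List[int],
                      third_group: List[int]) -> None:
    """For each datum in the area to be erased, pick a write destination group
    (or drop it if flagged for deletion), then erase the area."""
    for data_id in area:
        info = mgmt[data_id]
        if info.to_be_deleted:
            continue                      # not maintained in the cache
        if info.high_use:
            third_group.append(data_id)   # high-use data -> third group
        else:
            second_group.append(data_id)  # low-use data -> second group
    area.clear()                          # erase the area to be erased
```

Separating low-use and high-use data into different groups keeps data with similar lifetimes together, which reduces how often future erasures must re-copy live data.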
10. The cache memory device according to claim 9, wherein
the erase condition is that an amount of data written to the first group exceeds a predetermined amount.
11. The cache memory device according to claim 9, wherein
the control unit erases all of the data written to the area to be erased.
12. The cache memory device according to claim 9, wherein
the control unit:
determines, based on the management information, whether to maintain the data written to the area to be erased in the non-volatile cache memory; and
writes the data written to the area to be erased to the write destination group when that data is determined to be maintained in the non-volatile cache memory.
13. The cache memory device according to claim 12, wherein
the management information includes identification information of the write data, information indicating whether the write data is data to be deleted, and information indicating a use state of the write data.
14. The cache memory device according to claim 9, wherein
the control unit selects from the first group, as the area to be erased, an erase unit area whose write time or write order is old, based on the write times or write orders of the plurality of first erase unit areas.
15. The cache memory device according to claim 9, wherein
the control unit randomly selects the area to be erased from the plurality of first erase unit areas.
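Claim 15's random policy avoids keeping per-area write-order metadata entirely. A minimal sketch, with illustrative names not from the patent (areas are represented by integer ids; the optional seed exists only to make the choice repeatable in examples):

```python
import random
from typing import List, Optional

def select_area_randomly(first_group: List[int], seed: Optional[int] = None) -> int:
    """Select the area to be erased uniformly at random from the
    first group's erase unit areas."""
    return random.Random(seed).choice(first_group)
```

Random selection trades the lower copy cost of oldest-first selection (claim 14) for simplicity and a naturally even spread of erasures across areas.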
CN201510239685.XA 2014-12-29 2015-05-12 Cache memory device Active CN106205708B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201462097530P 2014-12-29 2014-12-29
US62/097,530 2014-12-29
JP2015-038997 2015-02-27
JP2015038997A JP6320322B2 (en) 2014-12-29 2015-02-27 Cache memory device and program
US14/656,559 2015-03-12
US14/656,559 US10474569B2 (en) 2014-12-29 2015-03-12 Information processing device including nonvolatile cache memory and processor

Publications (2)

Publication Number Publication Date
CN106205708A CN106205708A (en) 2016-12-07
CN106205708B true CN106205708B (en) 2019-12-10

Family

ID=56357982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510239685.XA Active CN106205708B (en) 2014-12-29 2015-05-12 Cache memory device

Country Status (3)

Country Link
JP (2) JP6320322B2 (en)
CN (1) CN106205708B (en)
TW (1) TW201624288A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019074743A1 (en) * 2017-10-12 2019-04-18 Rambus Inc. Nonvolatile physical memory with dram cache
JP2022114726A (en) 2021-01-27 2022-08-08 キオクシア株式会社 Memory system and control method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1629983A (en) * 2003-12-19 2005-06-22 株式会社瑞萨科技 Nonvolatile semiconductor memory device
CN101499036A (en) * 2008-01-30 2009-08-05 株式会社东芝 Information storage device and control method thereof

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8402242B2 (en) * 2009-07-29 2013-03-19 International Business Machines Corporation Write-erase endurance lifetime of memory storage devices
US8438361B2 (en) * 2010-03-10 2013-05-07 Seagate Technology Llc Logical block storage in a storage device
US20120023144A1 (en) * 2010-07-21 2012-01-26 Seagate Technology Llc Managing Wear in Flash Memory
JP5066241B2 (en) * 2010-09-24 2012-11-07 株式会社東芝 Memory system
WO2012106362A2 (en) * 2011-01-31 2012-08-09 Fusion-Io, Inc. Apparatus, system, and method for managing eviction of data
US8782370B2 (en) * 2011-05-15 2014-07-15 Apple Inc. Selective data storage in LSB and MSB pages
KR101867282B1 (en) * 2011-11-07 2018-06-18 삼성전자주식회사 Garbage collection method for non-volatile memory device
JP5687648B2 (en) * 2012-03-15 2015-03-18 株式会社東芝 Semiconductor memory device and program
US20140089564A1 (en) * 2012-09-27 2014-03-27 Skymedi Corporation Method of data collection in a non-volatile memory

Also Published As

Publication number Publication date
TW201624288A (en) 2016-07-01
JP2018136970A (en) 2018-08-30
JP2016126737A (en) 2016-07-11
JP6595654B2 (en) 2019-10-23
JP6320322B2 (en) 2018-05-09
CN106205708A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
US10303599B2 (en) Memory system executing garbage collection
US10761977B2 (en) Memory system and non-transitory computer readable recording medium
US11726906B2 (en) Memory device and non-transitory computer readable recording medium
US8458394B2 (en) Storage device and method of managing a buffer memory of the storage device
US8954656B2 (en) Method and system for reducing mapping table size in a storage device
US20230342294A1 (en) Memory device and non-transitory computer readable recording medium
US20230281118A1 (en) Memory system and non-transitory computer readable recording medium
CN106205708B (en) Cache memory device
CN106201326B (en) Information processing apparatus
US10331551B2 (en) Information processing device and non-transitory computer readable recording medium for excluding data from garbage collection
US10474569B2 (en) Information processing device including nonvolatile cache memory and processor
JP6276208B2 (en) Memory system and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170808

Address after: Tokyo, Japan

Applicant after: TOSHIBA MEMORY Corp.

Address before: Tokyo, Japan

Applicant before: Toshiba Corp.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Tokyo

Patentee after: TOSHIBA MEMORY Corp.

Address before: Tokyo

Patentee before: Pangea Co.,Ltd.

Address after: Tokyo

Patentee after: Kaixia Co.,Ltd.

Address before: Tokyo

Patentee before: TOSHIBA MEMORY Corp.

CP01 Change in the name or title of a patent holder
TR01 Transfer of patent right

Effective date of registration: 20220129

Address after: Tokyo

Patentee after: Pangea Co.,Ltd.

Address before: Tokyo

Patentee before: TOSHIBA MEMORY Corp.

TR01 Transfer of patent right