CN106201326B - Information processing apparatus - Google Patents


Publication number
CN106201326B
Authority
CN
China
Prior art keywords
data
unit
memory
block
information
Prior art date
Legal status
Active
Application number
CN201510239405.5A
Other languages
Chinese (zh)
Other versions
CN106201326A (en)
Inventor
菅野伸一
Current Assignee
Kioxia Corp
Original Assignee
Toshiba Memory Corp
Priority date
Filing date
Publication date
Priority claimed from US 14/656,524 (external priority, US 10,331,551 B2)
Application filed by Toshiba Memory Corp
Publication of CN106201326A
Application granted
Publication of CN106201326B

Abstract

According to one embodiment, an information processing device (17) includes a transmitting unit (18) and a receiving unit (19). The transmitting unit (18) transmits write data and the logical addresses of the write data to a memory device (5). The memory device (5) includes a plurality of erase unit areas. Each of the erase unit areas includes a plurality of write unit areas. The receiving unit (19) receives, from the memory device (5), area information including data identification information indicating data written to an erase unit area to be subjected to garbage collection.

Description

Information processing apparatus
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority from the following applications: U.S. provisional application No. 62/097,538, filed December 29, 2014; Japanese patent application No. 2015-038999, filed February 27, 2015; and U.S. non-provisional application No. 14/656,524, filed March 12, 2015, all of which are incorporated herein by reference in their entirety.
Technical Field
Embodiments described herein relate generally to an information processing apparatus.
Background
Solid state drives (SSDs) include nonvolatile semiconductor memory such as NAND flash memory. A NAND flash memory includes a plurality of blocks (physical blocks). Each block includes a plurality of memory cells arranged at the intersections of word lines and bit lines.
Disclosure of Invention
In general, according to one embodiment, an information processing device includes a transmitting unit and a receiving unit. The transfer unit transfers write data and a logical address of the write data to a memory device. The memory device includes a plurality of erase unit areas. Each of the erase unit areas includes a plurality of write unit areas. The receiving unit receives, from the memory device, area information including data identification information indicating data written to an erase unit area to be subjected to garbage collection.
Drawings
Fig. 1 is a block diagram showing a configuration example of an information processing system according to a first embodiment;
Fig. 2 is a flowchart showing an example of a process performed by the information processing system according to the first embodiment;
Fig. 3 is a block diagram showing a configuration example of an information processing system according to the second embodiment;
FIG. 4 is a flowchart showing an example of first cache control of the second embodiment;
FIG. 5 is a flowchart showing an example of second cache control of the second embodiment;
Fig. 6 is a flowchart showing an example of third cache control of the second embodiment;
FIG. 7 is a flowchart showing an example of a fourth cache control of the second embodiment;
Fig. 8 is a block diagram showing an example of a detailed configuration of an information processing system according to a third embodiment; and
Fig. 9 is a perspective view showing an example of a storage system according to the third embodiment.
Detailed Description
Embodiments will be described below with reference to the drawings. In the following description, components having almost the same functions and configurations are denoted by the same reference numerals, and a duplicate description is given only where necessary.
In each of the embodiments described below, data is erased in units of an erase unit area in both the nonvolatile memory and the nonvolatile cache memory. The erase unit area includes a plurality of write unit areas and a plurality of read unit areas.
In the present embodiment, a NAND flash memory is used as each of the nonvolatile memory and the nonvolatile cache memory. However, each of the nonvolatile memory and the nonvolatile cache memory may be a memory other than a NAND flash memory, provided that the memory satisfies the above-described relationship among the erase unit area, the write unit area, and the read unit area.
When the nonvolatile memory and the nonvolatile cache memory are NAND flash memories, the erase unit area corresponds to a block, and each of the write unit area and the read unit area corresponds to a page.
In the present embodiment, the erase unit area may instead be managed in another unit, such as a set of two blocks, as long as the data in that unit can be erased together.
In this embodiment, an access indicates both writing data to and reading data from the memory device.
[ first embodiment ]
In the present embodiment, data and information transmission and reception between an information processing device and a memory device are described.
In the present embodiment, a logical address (e.g., logical block addressing) is used as identification information of data. However, the data may be identified by other information.
Fig. 1 is a block diagram showing a configuration example of an information processing system according to the present embodiment.
The information processing system 35 includes the information processing apparatus 17 and the SSD 5. SSD 5 is an example of a memory device. The information processing device 17 may be a host device corresponding to the SSD 5.
The SSD 5 may be included in the information processing device 17, or may be connected to the information processing device 17 so as to transmit and receive data via a network or the like. Instead of the SSD 5, another nonvolatile memory device such as a Hard Disk Drive (HDD) may be used.
The information processing apparatus 17 includes a cache control unit 9, a memory 3 that stores management information 61 to 64, and a nonvolatile cache memory 4. However, all or a part of the cache control unit 9, the management information 61 to 64, the memory 3, and the nonvolatile cache memory 4 may be provided outside the information processing apparatus 17.
The memory 3 stores various types of control data such as the management information (lists) 61 to 64 and the address conversion information 7. The memory 3 may be a volatile memory such as a dynamic random access memory (DRAM) or a static random access memory (SRAM), or may be a nonvolatile memory. The memory 3 may be included in the nonvolatile cache memory 4.
The management information 61 to 64 is metadata of the data written to the later-described block groups BG 1 to BG 4, respectively. For example, the management information 61 to 64 includes information indicating the use state of the respective data by the processor. For example, the management information 61 to 64 includes identification information of the respective data, deletion information indicating whether the data is data to be deleted, valid/invalid information indicating whether the data is valid data, and cache determination information used to determine whether an erase condition for erasing a block is satisfied.
The deletion information is information indicating that a deletion command for data is issued. More specifically, the deletion information is information indicating that a deletion command for data is received from an application program or an Operating System (OS) executed by the processor, or the like. In the present embodiment, the deletion information includes, for example, information relating the identification information of each block to a logical address indicating data to be deleted written to each block.
The valid/invalid information is information indicating that, for example, the latest data is valid data and data other than the latest data is invalid data when the same data is written to a plurality of locations. In other words, for example, in the case where an update of data written to the nonvolatile cache memory 4 is performed, the valid data is the updated data. For example, in the case of performing an update, the invalid data is data that is not updated. In the present embodiment, the valid/invalid information includes, for example, information relating the identification information of each block to a logical address indicating valid data or invalid data written to each block.
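The valid/invalid rule described above can be sketched in a few lines of code. This is a minimal illustration under assumed names (the function and data layout are not the patent's structures): when a logical address is written again, the newest copy becomes the valid data and every older copy becomes invalid data.

```python
# Hypothetical sketch (names assumed, not from the patent) of the valid/invalid
# rule: on an update, the latest copy is valid and all older copies are invalid.
def record_write(copies, logical, physical):
    """copies: dict mapping a logical address to a list of (physical, valid)."""
    entries = copies.setdefault(logical, [])
    for i, (phys, _valid) in enumerate(entries):
        entries[i] = (phys, False)      # older copies are invalidated
    entries.append((physical, True))    # the latest copy is the valid one

copies = {}
record_write(copies, 0x10, 0xA0)
record_write(copies, 0x10, 0xB4)        # update: the copy at 0xA0 becomes invalid
```

In an actual NAND flash device this bookkeeping is what later lets garbage collection skip the stale copies instead of rewriting them.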
The cache determination information is, for example, information including at least one of write information and read information for each data item, or at least one of write information and read information for each block.
For example, the write information includes at least one of a write time, a write count, a write frequency, and a write order.
For example, the read information includes at least one of a read time, a read count, a read frequency, and a read order.
For example, the address translation information 7 correlates a logical address of data with the physical address (e.g., physical block address) of the nonvolatile cache memory 4 corresponding to that logical address. For example, the address translation information 7 is managed in table form.
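A table-form mapping of this kind can be sketched as follows. The class and method names are illustrative assumptions, not the patent's implementation; the point is only that a logical address is the lookup key and the physical address is the value.

```python
# A minimal sketch, assuming a table-form mapping like the address translation
# information 7: logical address in, physical address out.
class AddressTranslation:
    def __init__(self):
        self.table = {}                     # logical address -> physical address

    def register(self, logical, physical):
        self.table[logical] = physical      # register or update a mapping

    def translate(self, logical):
        return self.table.get(logical)      # None if the address is unmapped

at = AddressTranslation()
at.register(100, 7)
```

As the text later notes, the same interface could equally be backed by key-value retrieval with the logical address as the key and the physical address as the value.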
The cache control unit 9 performs cache control for the nonvolatile cache memory 4 having an access speed higher than that of the SSD 5. For example, the cache control unit 9 manages data and a logical address and a physical address indicating the data by a write-through method or a write-back method.
In the write-through method, data is stored in the nonvolatile cache memory 4 and also in the SSD 5.
In the write-back method, data stored in the nonvolatile cache memory 4 is not immediately stored in the SSD 5. The data is first stored in the nonvolatile cache memory 4, and the data pushed out of the nonvolatile cache memory 4 is then stored in the SSD 5.
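The contrast between the two policies can be sketched as follows. This is an illustrative sketch under assumed names, not the patent's implementation: write-through stores data in both memories at write time, while write-back stores it in the backing store only when the data is pushed out of the cache.

```python
# Illustrative sketch of the two cache-write policies described above.
class Cache:
    def __init__(self, backing, write_through):
        self.backing = backing              # stands in for the SSD 5
        self.lines = {}                     # stands in for the nonvolatile cache 4
        self.write_through = write_through

    def write(self, addr, data):
        self.lines[addr] = data
        if self.write_through:
            self.backing[addr] = data       # write-through: cache and SSD together

    def evict(self, addr):
        data = self.lines.pop(addr)
        if not self.write_through:
            self.backing[addr] = data       # write-back: SSD only on eviction

ssd = {}
wb = Cache(ssd, write_through=False)
wb.write(1, "x")                            # not yet in the SSD
wb.evict(1)                                 # now written back to the SSD
```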
In the first embodiment, the cache control unit 9 includes a transmitting unit 18, a receiving unit 19, a writing unit 20, and a transmitting unit 21. All or a part of the cache control unit 9 may be implemented in software or in hardware.
The transfer unit 18 transfers the write data for the SSD 5 and the address of the write data to the SSD 5. In the present embodiment, for example, the address transferred from the transfer unit 18 to the SSD 5 is a logical address.
The reception unit 19 receives, from the SSD 5, block information containing logical addresses indicating the valid data written to a block to be subjected to garbage collection.
In the present embodiment, the block information may include information that correlates identification information of each block in the SSD 5 with identification information of data written to each block.
The write unit 20 writes (transcribes) all or a part of the valid data indicated by the logical addresses included in the block information to a memory other than the nonvolatile memory 24, based on the block information received from the SSD 5 and the management information 61 to 64. For example, the other memory may be the nonvolatile cache memory 4.
For example, in the case of receiving a delete command, the write unit 20 excludes the logical addresses indicating data to be deleted (delete candidates) from the logical addresses indicating valid data included in the block information. Thus, the data that is valid and is not data to be deleted among the data written to a block to be subjected to garbage collection can be selected. The write unit 20 writes the selected data to the other memory.
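The selection just described reduces to a set difference. A minimal sketch, with assumed names (the patent does not prescribe this data structure): from the valid logical addresses reported in the block information, remove the delete candidates; what remains is transcribed.

```python
# Sketch of the selection performed by the write unit 20: keep only valid
# logical addresses that are not delete candidates.
def select_for_transcription(valid_lbas, delete_candidates):
    return sorted(set(valid_lbas) - set(delete_candidates))

kept = select_for_transcription([8, 3, 5, 13], [5, 13])
```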
The transfer unit 21 generates deletion information including a logical address indicating data to be deleted and transfers the deletion information to the SSD 5. For example, the deletion information may include a logical address indicating data that is not written to the deletion target of another memory by the write unit 20 among logical addresses indicating valid data included in the block information. Instead of deleting the information, maintenance information including a logical address of data to be maintained may be transferred from the transfer unit 21 to the SSD 5.
SSD 5 includes processor 22, memory 23, and non-volatile memory 24.
For example, the memory 23 stores various types of control data such as address conversion information 32, valid/invalid information 33, and deletion information 34. The memory 23 may be a volatile memory such as a DRAM or an SRAM, or may be a nonvolatile memory. Memory 23 may be included in non-volatile memory 24.
The processor 22 functions as an address conversion unit 25, a write unit 26, a valid/invalid generation unit 27, a selection unit 28, a transfer unit 29, a reception unit 30, and a garbage collection unit 31 by executing a program stored in a memory in the processor 22, a program stored in the memory 23, or a program stored in the nonvolatile memory 24.
In the present embodiment, the program to cause the processor 22 to function as the address conversion unit 25, the write unit 26, the valid/invalid generation unit 27, the selection unit 28, the transfer unit 29, the reception unit 30, and the garbage collection unit 31 may be an OS, middleware, or firmware, for example. In the present embodiment, all or a part of the address conversion unit 25, the write unit 26, the valid/invalid generation unit 27, the selection unit 28, the transmission unit 29, the reception unit 30, and the garbage collection unit 31 may be implemented by hardware.
When write data and a logical address of the write data are received from the cache control unit 9, the address conversion unit 25 generates information that correlates the logical address of the write data with a physical address indicating a location in the nonvolatile memory 24 where the write data is stored, and registers the information to the address conversion information 32.
In the present embodiment, the address translation unit 25 is implemented by the processor 22. However, the address translation unit 25 may be configured separately from the processor 22.
Address translation unit 25 translates addresses based on, for example, table-form address translation information 32. Instead, addresses may be translated through key-value retrieval. For example, address translation may be implemented by means of key-value retrieval using a logical address as a key and a physical address as a value.
The writing unit 26 writes write data to a position indicated by the physical address obtained by the address conversion unit 25.
The valid/invalid generation unit 27 generates valid/invalid information 33 indicating whether each item of data written to the nonvolatile memory 24 is valid data or invalid data based on, for example, the address conversion information 32. Subsequently, the valid/invalid generation unit 27 stores the valid/invalid information 33 in the memory 23.
The selection unit 28 selects a block to be subjected to garbage collection.
For example, the selection unit 28 may select the block with the oldest write time from the blocks in the non-volatile memory 24 as the block to be subjected to garbage collection.
For example, the selection unit 28 may randomly select a block to be subjected to garbage collection from the blocks in the nonvolatile memory 24.
For example, the selection unit 28 may select, based on the valid/invalid information 33, the block having the largest amount of invalid data, or a block having an amount of invalid data greater than a predetermined amount, as the block to be subjected to garbage collection.
For example, selection unit 28 may select the block having the largest amount of invalid data and data to be deleted or having more than a predetermined amount of invalid data and data to be deleted as the block to be subjected to garbage collection based on valid/invalid information 33 and deletion information 34.
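The "largest amount of invalid data" policy above is a one-line maximum. This is a hedged sketch with assumed inputs (a dict of per-block invalid byte counts, which is not the patent's data structure), shown only to make the policy concrete.

```python
# Sketch of one victim-selection policy of the selection unit 28: choose the
# block with the largest amount of invalid data as the garbage-collection target.
def select_gc_victim(invalid_bytes):
    """invalid_bytes: dict mapping a block id to its invalid-data byte count."""
    return max(invalid_bytes, key=invalid_bytes.get)

victim = select_gc_victim({"B1": 4096, "B2": 65536, "B3": 16384})
```

The same shape works for the combined policy: sum the invalid-data count and the to-be-deleted count per block before taking the maximum.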
The transfer unit 29 generates block information by deleting a logical address indicating invalid data determined to be invalid by the valid/invalid information 33 from a logical address indicating data written to a block to be subjected to garbage collection. In other words, the block information contains information that correlates identification information of a block to be subjected to garbage collection with a logical address indicating valid data written to the block. The transfer unit 29 transfers the block information to the cache control unit 9.
The receiving unit 30 receives the deletion information from the cache control unit 9 and stores it as the deletion information 34 in the memory 23.
The garbage collection unit 31 excludes, based on the valid/invalid information 33 and the deletion information 34, invalid data and data to be deleted from the data written to a block to be subjected to garbage collection, and performs garbage collection only on valid data that is not data to be deleted.
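The exclusion performed during garbage collection can be sketched as a filter. This is an illustration under assumed, simplified inputs (plain sets and dicts rather than the patent's on-device structures): only data that is valid and not a delete target survives and is rewritten; everything else is dropped when the block is erased.

```python
# Sketch of the garbage collection unit 31's filtering step: keep only data
# that is valid and not marked for deletion.
def collect(block_data, valid_lbas, delete_lbas):
    """block_data: dict lba -> data stored in the block to be collected."""
    return {lba: d for lba, d in block_data.items()
            if lba in valid_lbas and lba not in delete_lbas}

survivors = collect({1: "a", 2: "b", 3: "c"}, valid_lbas={1, 3}, delete_lbas={3})
```

Here only the data at logical address 1 is rewritten: address 2 is invalid and address 3 is a delete target, so neither consumes a write during collection.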
Fig. 2 is a flowchart showing an example of a process performed by the information processing system according to the present embodiment.
In step S201, the transfer unit 18 transfers the write data and the logical address to the SSD 5.
In step S202, the address conversion unit 25 receives the write data and the logical address, and registers information relating the logical address of the write data and the physical address to the address conversion information 32.
In step S203, the writing unit 26 writes the write data to the position indicated by the physical address in the nonvolatile memory 24.
In step S204, the valid/invalid generation unit 27 generates valid/invalid information 33 indicating whether each data item written to the nonvolatile memory 24 is valid data or invalid data and stores the valid/invalid information 33 in the memory 23.
In step S205, the selection unit 28 selects a block to be subjected to garbage collection.
In step S206, the transfer unit 29 generates block information by deleting the logical address indicating the invalid data indicated as invalid by the valid/invalid information 33 from the logical address indicating the data written to the block to be subjected to garbage collection, and transfers the block information to the cache control unit 9.
In step S207, the reception unit 19 receives the block information from the SSD 5.
In step S208, the writing unit 20 writes all or a part of the data indicated by the logical address contained in the block information to a memory other than the nonvolatile memory 24 of the SSD 5 based on the block information received from the SSD 5 and the management information 61 to 64.
For example, in the case of receiving a delete command, the write unit 20 excludes a logical address indicating data to be deleted from logical addresses included in the block information, and writes data to be maintained indicated by the logical address to another memory.
In step S209, the transfer unit 21 transfers deletion information containing the logical address of the data to be deleted to the SSD 5.
In step S210, the receiving unit 30 receives the deletion information from the cache control unit 9 and stores the deletion information 34 in the memory 23.
In step S211, the garbage collection unit 31 excludes invalid data and data to be deleted from the data written to a block to be subjected to garbage collection based on the valid/invalid information 33 and the deletion information 34, and performs garbage collection on the valid data other than the data to be deleted.
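The exchange of steps S201 to S211 can be sketched as a small host/SSD interaction. All class and method names below are assumptions for illustration, not the patent's structures: the host sends writes, the SSD reports the valid addresses of a garbage-collection victim block, the host answers with its delete targets while transcribing the rest, and the SSD then collects only the remaining data.

```python
# End-to-end sketch (assumed names) of steps S201-S211.
class Host:
    def __init__(self, delete_targets):
        self.delete_targets = delete_targets
        self.other_memory = {}

    def on_block_info(self, valid):
        # S208: transcribe data that is valid and not a delete target;
        # S209: report the delete targets back to the SSD.
        for lba in valid:
            if lba not in self.delete_targets:
                self.other_memory[lba] = True
        return self.delete_targets

class Ssd:
    def __init__(self):
        self.valid = set()

    def write(self, lba):               # S202-S204: register and write
        self.valid.add(lba)

    def gc(self, host):                 # S205-S207, S210-S211
        deletes = host.on_block_info(sorted(self.valid))
        self.valid -= set(deletes)      # delete targets are not rewritten
        return sorted(self.valid)       # data kept by garbage collection

host = Host(delete_targets={2})
ssd = Ssd()
for lba in (1, 2, 3):
    ssd.write(lba)
kept = ssd.gc(host)
```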
In the present embodiment described above, the cache control unit 9 can acquire information on the data written to the block of the nonvolatile memory 24 from the SSD 5. The cache control unit 9 may thereby recognize the write status of data in the block of non-volatile memory 24. For example, in the present embodiment, it is possible to recognize whether data written to a block of the nonvolatile memory 24 is valid data or invalid data and whether the data can be deleted.
In the present embodiment, the SSD 5 holds the valid/invalid information 33 for determining whether data is valid or invalid, and the deletion information 34 for determining whether data can be deleted. It is thereby possible to determine, when garbage collection is performed in the SSD 5, which data written to the block to be subjected to garbage collection need not be rewritten. Accordingly, unnecessary data writes can be avoided and the lifetime of the nonvolatile memory 24 can be increased.
In the present embodiment, the cache control unit 9 can prevent deletion-target data, among the valid data indicated by the logical addresses contained in the block information received from the SSD 5, from being transcribed from the nonvolatile memory 24 to the other memory. In the present embodiment, the SSD 5 may delete from the SSD 5 the data that the cache control unit 9 does not transcribe to the other memory (for example, invalid data or valid data that may be deleted).
In the present embodiment described above, the block information relating to the block to be erased is transferred from the SSD 5 to the information processing apparatus 17. However, the block information may instead include information relating each block in the nonvolatile memory 24 to identification information of the data written to that block. By receiving this relationship information from the SSD 5, the information processing device 17 can recognize which data is stored in which block of the SSD 5.
[ second embodiment ]
A cache memory device including the nonvolatile cache memory 4 is described in the present embodiment.
Fig. 3 is a block diagram showing a configuration example of the information processing system 35 according to the present embodiment.
The information processing device 17 includes a processor 2, a memory 3, and a nonvolatile cache memory 4.
The nonvolatile cache memory 4 includes block groups BG 1 to BG 4. The nonvolatile cache memory 4 has a higher access speed than the SSD 5.
The block group (first group) BG 1 includes blocks (first erase unit regions) B 1,1 to B 1,K. The block group BG 1 stores data accessed by the processor 2 (i.e., data used by the processor 2).
In the present embodiment, when the block group BG 1 satisfies the erase condition (first erase condition), a block to be erased (a block to be discarded or pushed out) (first area to be erased) is selected from the blocks B 1,1 to B 1,K in the block group BG 1 on a first-in-first-out (FIFO) basis.
For example, an erase condition may be satisfied when the number of pages written in each of the blocks B 1,1 to B 1,K of the block group BG 1 exceeds a predetermined number.
Data written to a block to be erased selected from the blocks B 1,1 to B 1,K based on the FIFO is written to the block group BG 2 when the data is in the first low usage state (e.g., when the data is accessed fewer than a set first number of times or at a frequency lower than a set first frequency).
The block group (second group) BG 2 includes blocks (second erase unit regions) B 2,1 to B 2,L. The block group BG 2 stores data in the first low usage state among the data written to a block to be erased selected from the block group BG 1.
In the present embodiment, when the block group BG 2 satisfies the erase condition (third erase condition), a block to be erased (third area to be erased) is selected from the blocks B 2,1 to B 2,L in the block group BG 2 based on the FIFO.
In contrast, when data written to a block to be erased selected from the blocks B 2,1 to B 2,L is in the third high usage state (e.g., when the data is accessed a set third number of times or more, or at a set third frequency or higher), the data is written to the block group BG 3.
The block group (third group) BG 3 includes blocks (third erase unit regions) B 3,1 to B 3,M. The block group BG 3 stores data in the first high usage state among the data written to a block to be erased selected from the block group BG 1. The block group BG 3 also stores data in the third high usage state among the data written to a block to be erased selected from the block group BG 2.
In the present embodiment, when the block group BG 3 satisfies the erase condition (second erase condition), a block to be erased (second area to be erased) is selected from the blocks B 3,1 to B 3,M in the block group BG 3 based on the FIFO.
Data written to a block to be erased selected from the blocks B 3,1 to B 3,M based on the FIFO is written to the block group BG 4 when the data is in the second low usage state (for example, when the data is accessed fewer than a set second number of times or at a frequency lower than a set second frequency).
The block group (fourth group) BG 4 includes blocks (fourth erase unit regions) B 4,1 to B 4,N. The block group BG 4 stores data in the second low usage state among the data written to a block to be erased selected from the block group BG 3.
In the present embodiment, when the block group BG 4 satisfies the erase condition (fourth erase condition), a block to be erased (fourth area to be erased) is selected from the blocks B 4,1 to B 4,N in the block group BG 4 based on the FIFO.
Data written to a block to be erased selected from the blocks B 4,1 to B 4,N based on the FIFO is erased.
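The FIFO handling shared by the block groups BG 1 to BG 4 can be sketched as a queue of blocks. The names and the particular erase condition (a maximum block count) are assumptions for illustration; the patent leaves the erase condition open (e.g., a page-count threshold would work the same way).

```python
# Illustrative sketch (assumed names and thresholds) of FIFO block handling
# within one block group: blocks queue in write order, and when the group's
# erase condition holds, the oldest block becomes the block to be erased.
from collections import deque

class BlockGroup:
    def __init__(self, max_blocks):
        self.fifo = deque()             # blocks in first-in-first-out order
        self.max_blocks = max_blocks

    def add(self, block):
        self.fifo.append(block)

    def erase_condition(self):          # e.g., the group holds too many blocks
        return len(self.fifo) > self.max_blocks

    def select_victim(self):            # FIFO: the oldest block is selected
        return self.fifo.popleft()

bg1 = BlockGroup(max_blocks=2)
for b in ("B1,1", "B1,2", "B1,3"):
    bg1.add(b)
victim = bg1.select_victim() if bg1.erase_condition() else None
```

Chaining four such groups, with each victim's data demoted, promoted, or dropped according to its usage state, reproduces the BG 1 to BG 4 flow described above.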
The management information 61 to 64 corresponds to the block groups BG 1 to BG 4, respectively. The management information 61 to 64 includes, for example, identification information of data, information indicating whether the data is data to be deleted, and use state information of the data. Based on the management information 61 to 64, the block having the largest amount of invalid data, or a block having an amount of invalid data greater than a predetermined amount, may be selected as the block to be erased.
In this embodiment, the cache control unit 9 can recognize the identification information of the cached data (e.g., a logical address (e.g., a logical block address) provided from the host), the location to which the data is written, and the use state of the data based on the management information 61 to 64 and the address translation information 7. For example, based on the management information 61 to 64 and the address translation information 7, the cache control unit 9 may select the data to be cached in each of the block groups BG 1 to BG 4 and the blocks to be erased on a FIFO basis.
The processor 2 functions as an address conversion unit 8 and a cache control unit 9 by executing programs stored in the memory of the processor 2, the memory 3, the nonvolatile cache memory 4, or the SSD 5.
In the present embodiment, the program to cause the processor 2 to function as the address conversion unit 8 and the cache control unit 9 may be an OS, middleware, or firmware, for example. In the present embodiment, all or a part of the address conversion unit 8 or all or a part of the cache control unit 9 may be implemented by hardware.
The address conversion unit 8 generates information relating the logical address of the write data and a physical address indicating a location in the nonvolatile cache memory 4 where the write data is stored, and registers the generated information to the address conversion information 7.
When receiving a logical address of read data from the processor 2, the address conversion unit 8 converts the logical address into a physical address based on the address conversion information 7.
The cache control unit 9 includes a generation unit 10, control units 11 to 14, and change units 15 and 16.
The generation unit 10 generates the management information 61 to 64 corresponding to the block groups BG 1 to BG 4 in the nonvolatile cache memory 4 and writes the management information 61 to 64 to the memory 3.
The control units 11 to 14 control data writing and block erasing for the block groups BG 1 to BG 4, respectively.
The control unit 11 includes a writing unit 111, a determination unit 112, a selection unit 113, a determination unit 114, and an erasing unit 115.
The write unit (first write unit) 111 writes data accessed by the processor 2 to the block group BG 1.
The determination unit (first determination unit) 112 determines whether the block group BG 1 satisfies an erase condition (first erase condition).
When the block group BG 1 satisfies the erase condition, the selection unit (first selection unit) 113 selects a block to be erased (first area to be erased) from the block group BG 1.
The determination unit (second determination unit) 114 determines whether each data item written to the block to be erased is in the first high-use state or the first low-use state and whether each item of the data is data to be deleted, based on the management information 61.
The erasing unit (first erasing unit) 115 erases the block to be erased when every data item written to the block to be erased can be discarded because it has been written to the block group BG 2 or BG 3 or is data to be deleted.
The control unit 12 includes a writing unit 121, a determining unit 122, a selecting unit 123, a determining unit 124, and an erasing unit 125.
When the determination unit 114 determines that the data written to the block to be erased of the block group BG 1 is in the first low usage state and is not the data to be deleted, the write unit (second write unit) 121 writes the data to the block group BG 2.
The determination unit (fifth determination unit) 122 determines whether the block group BG 2 satisfies the erase condition (third erase condition).
When the block group BG 2 satisfies the erase condition, the selection unit (third selection unit) 123 selects a block to be erased (third area to be erased) from the block group BG 2.
The determination unit 124 determines whether each item of data written to the block to be erased is in the third-high use state or the third-low use state and whether each item of the data is data to be deleted, based on the management information 62.
When data written to the block to be erased, which is in the third high usage state and is not data to be deleted, is written to the block group BG 3, the erasing unit (second erasing unit) 125 erases the data written to the block to be erased.
The control unit 13 includes a writing unit 131, a determining unit 132, a selecting unit 133, a determining unit 134, a writing unit 135, an erasing unit 136, and a writing unit 137.
When the determination unit 114 determines that the data written to the block to be erased of the block group BG 1 is in the first high usage state and is not the data to be deleted, the write unit (third write unit) 131 writes the data to the block group BG 3.
When the data written to the block group BG 2 is in the third high use state and is not data to be deleted, the write unit (sixth write unit) 137 writes the data to the block group BG 3. For example, when the data written to the block group BG 2 is data to be accessed by the processor 2, the write unit 137 may write the data to be accessed of the block group BG 2 to the block group BG 3.
The determination unit (third determination unit) 132 determines whether the block group BG 3 satisfies the erase condition (second erase condition).
When the block group BG 3 satisfies the erase condition, the selection unit (second selection unit) 133 selects a block to be erased (second area to be erased) from the block group BG 3.
The determination unit (fourth determination unit) 134 determines whether each data item written to the block to be erased is in the second high-use state or the second low-use state and whether each item of the data is data to be deleted, based on the management information 63.
When the data written to the block to be erased of the block group BG 3 is determined to be in the second high usage state and is not the data to be deleted, the write unit (fifth write unit) 135 writes the data to another writable block in the block group BG 3 again.
The erase unit (third erase unit) 136 erases the block to be erased when each item of data written to the block to be erased can be discarded because each data item is written to the block group BG 4, is written again to the block group BG 3, or is data to be deleted.
The control unit 14 includes a write unit 141, a determination unit 142, a selection unit 143, and an erase unit 144.
When the determination unit 134 determines that the data written to the block to be erased of the block group BG 3 is in the second low usage state and is not the data to be deleted, the writing unit (fourth writing unit) 141 writes the data to the block group BG 4.
The determination unit (sixth determination unit) 142 determines whether the block group BG 4 satisfies the erase condition (fourth erase condition).
When the block group BG 4 satisfies the erase condition (fourth erase condition), the selecting unit (fourth selecting unit) 143 selects a block to be erased (fourth area to be erased) from the block group BG 4.
The erasing unit (fourth erasing unit) 144 erases data written to the block to be erased of the block group BG 4.
When the data written to the block group BG 2 reaches the third high usage state, the varying unit (first varying unit) 15 increases the number of blocks included in the block group BG 1 and decreases the number of blocks included in the block group BG 3. For example, when the data written to the block group BG 2 is accessed by the processor 2, the varying unit 15 increases the number of blocks included in the block group BG 1 and decreases the number of blocks included in the block group BG 3.
When the data written to the block group BG 4 reaches the fourth high usage state, the varying unit (second varying unit) 16 increases the number of blocks included in the block group BG 3 and decreases the number of blocks included in the block group BG 1. For example, when the data written to the block group BG 4 is accessed by the processor 2, the varying unit 16 increases the number of blocks included in the block group BG 3 and decreases the number of blocks included in the block group BG 1.
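The opposing resizing performed by the varying units 15 and 16 can be sketched as follows. This is a minimal illustration under assumed names (`vary_on_bg2_hit`, `vary_on_bg4_hit`, and the `sizes` dictionary are not from the embodiment); it only shows that a hit in BG 2 grows BG 1 at the expense of BG 3, and a hit in BG 4 does the reverse.

```python
# Hypothetical sketch of the varying units 15 and 16. The names and the
# one-block step size are illustrative assumptions, not the patent's design.

def vary_on_bg2_hit(sizes):
    """On a hit in BG 2 (third high usage state), grow BG 1 at BG 3's expense."""
    if sizes["BG3"] > 1:          # keep at least one block per group
        sizes["BG1"] += 1
        sizes["BG3"] -= 1
    return sizes

def vary_on_bg4_hit(sizes):
    """On a hit in BG 4 (fourth high usage state), grow BG 3 at BG 1's expense."""
    if sizes["BG1"] > 1:
        sizes["BG3"] += 1
        sizes["BG1"] -= 1
    return sizes

sizes = {"BG1": 8, "BG3": 8}
vary_on_bg2_hit(sizes)   # sizes becomes {"BG1": 9, "BG3": 7}
vary_on_bg4_hit(sizes)   # back to {"BG1": 8, "BG3": 8}
```

The two units thus act as a feedback pair: repeated hits in BG 2 or BG 4 shift capacity toward the group that would have retained that data.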
FIG. 4 exemplarily shows a process in which data is written to the block group BG 1, data is written to the block group BG 2 or BG 3, and a block to be erased in the block group BG 1 is erased.
In step S401, the writing unit 111 writes data accessed by the processor 2 to the block group BG 1.
In step S402, the determination unit 112 determines whether the block group BG 1 satisfies the erase condition.
When the block group BG 1 does not satisfy the erase condition, the process proceeds to step S406.
When the block group BG 1 satisfies the erase condition, the selection unit 113 selects a block to be erased from the block group BG 1 in step S403.
In step S404, the determination unit 114 determines whether each data item written to the block to be erased is in the first high usage state or the first low usage state and whether each item of the data is data to be deleted (deletion target data), based on the management information 61.
When the data item is in the first low usage state and the data is not data to be deleted (non-deletion target data), in step S501, the writing unit 121 writes the data item to the block group BG 2.
When the data item is in the first high usage state and is not data to be deleted, in step S601, the writing unit 131 writes the data item to the block group BG 3.
When each item of data written to the block to be erased can be discarded because each item of data is written to the block group BG 2 or the block group BG 3, or is data to be deleted, the erasing unit 115 erases the block to be erased in step S405.
In step S406, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S401.
When the cache control unit 9 ends the process, the process ends.
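The flow of FIG. 4 (steps S401 to S405) can be summarized in a short sketch. The function and data structures below are assumptions for illustration, not the embodiment's implementation; in particular, the erase condition is modeled simply as "more than a fixed number of blocks in BG 1".

```python
# Assumed sketch of the first cache control (FIG. 4). Block groups are modeled
# as deques of blocks; usage[item] is the high/low determination of unit 114,
# and `deleted` is the set of deletion-target items.
from collections import deque

BLOCKS_PER_GROUP = 4   # illustrative erase condition, not from the patent

def first_cache_control(bg1, bg2, bg3, block, usage, deleted):
    bg1.append(block)                    # step S401: write to BG 1
    if len(bg1) <= BLOCKS_PER_GROUP:     # step S402: erase condition not met
        return
    victim = bg1.popleft()               # step S403: FIFO selection
    for item in victim:                  # step S404: classify each item
        if item in deleted:
            continue                     # deletion-target data is discarded
        if usage[item] == "low":
            bg2.append(item)             # step S501: demote to BG 2
        else:
            bg3.append(item)             # step S601: promote to BG 3
    # step S405: the victim block is erased (already removed by popleft)
```

A caller would invoke `first_cache_control` once per block written by the processor 2; the deque's pop order reproduces the FIFO selection of the block whose writing completed first.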
FIG. 5 is a flowchart showing an example of second cache control according to the present embodiment. FIG. 5 exemplarily shows a process in which data is written to the block group BG 2 and a block to be erased in the block group BG 2 is erased.
When the data written to the block to be erased of the block group BG 1 is determined to be in the first low usage state and is not the data to be deleted in step S404, the writing unit 121 writes the data to the block group BG 2 in step S501.
In step S502, the determination unit 122 determines whether the block group BG 2 satisfies the erase condition.
When the block group BG 2 does not satisfy the erase condition, the process proceeds to step S506.
When the block group BG 2 satisfies the erase condition, the selection unit 123 selects a block to be erased from the block group BG 2 in step S503.
In step S504, the determination unit 124 determines whether each data item written to the block to be erased is in the third-high use state or the third-low use state and whether each item of the data is data to be deleted, based on the management information 62.
When the data item is in the third low-use state or the data is to be deleted, the process proceeds to step S505.
When the data item is in the third high usage state and is not data to be deleted, in step S601, the writing unit 137 writes the data item to the block group BG 3.
In step S505, the erasing unit 125 erases the data written to the block to be erased of the block group BG 2.
In step S506, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S501.
When the cache control unit 9 ends the process, the process ends.
FIG. 6 is a flowchart showing an example of third cache control according to the present embodiment. FIG. 6 exemplarily shows a process from writing data into the block group BG 3 to erasing data in the block group BG 3.
When the data written to the block to be erased of the block group BG 1 is determined to be in the first high usage state and is not the data to be deleted in step S404, the write unit 131 writes the data to the block group BG 3 in step S601. When the data written to the block group BG 2 is determined to be in the third high usage state (for example, when the data is accessed by the processor 2) and is not the data to be deleted in step S504, the write unit 137 writes the data of the block group BG 2 to the block group BG 3.
In step S602, the determination unit 132 determines whether the block group BG 3 satisfies the erase condition.
When the block group BG 3 does not satisfy the erase condition, the process proceeds to step S607.
When the block group BG 3 satisfies the erase condition, the selection unit 133 selects a block to be erased from the block group BG 3 in step S603.
In step S604, the determination unit 134 determines whether each data item written to the block to be erased is in the second high usage state or the second low usage state and whether each item of the data is data to be deleted, based on the management information 63.
When the data item is in the second low usage state and is not data to be deleted, in step S701, the writing unit 141 writes the data to the block group BG 4.
When the data is in the second high usage state and the data is not to be deleted, in step S605, the write unit 135 again writes the data written to the to-be-erased block of the block group BG 3 to another block in the block group BG 3.
In step S606, when each item of data written to the block to be erased can be discarded because each data item is written to the block group BG 4, written again to the block group BG 3, or data to be deleted, the erase unit 136 erases the block to be erased.
In step S607, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S601.
When the cache control unit 9 ends the process, the process ends.
FIG. 7 is a flowchart showing an example of fourth cache control according to the present embodiment. FIG. 7 exemplarily shows a process in which data is written to the block group BG 4 and data in the block group BG 4 is erased.
When the data written to the block to be erased of the block group BG 3 is determined to be in the second low usage state and is not the data to be deleted in step S604, the writing unit 141 writes the data to the block group BG 4 in step S701.
In step S702, the determination unit 142 determines whether the block group BG 4 satisfies the erase condition.
When the block group BG 4 does not satisfy the erase condition, the process proceeds to step S705.
When the block group BG 4 satisfies the erase condition, the selection unit 143 selects a block to be erased from the block group BG 4 in step S703.
In step S704, the erasing unit 144 erases the data written to the block to be erased of the block group BG 4.
In step S705, the cache control unit 9 determines whether the process is to be ended.
When the cache control unit 9 does not end the process, the process returns to step S701.
When the cache control unit 9 ends the process, the process ends.
In the block group BG 1 of the present embodiment, for example, data is written first to block B 1,1, then sequentially to block B 1,2, and then similarly to blocks B 1,3 to B 1,K. When the amount of data of blocks B 1,1 to B 1,K included in the block group BG 1 exceeds a predetermined amount of data, block B 1,1, in which writing was first completed, is erased according to FIFO, and data is again written sequentially to the erased block B 1,1. After the writing to block B 1,1 is completed, block B 1,2 is erased according to FIFO, and data is then written sequentially to the erased block B 1,2. The same control is repeated.
In the block group BG 1, it is determined, for example based on the management information 61, whether data written to a block to be erased in the block group BG 1 is accessed less than a first number of times or at less than a first frequency. When the data written to the block to be erased in the block group BG 1 is accessed less than the first number of times or at less than the first frequency, the block group BG 2 is selected as the write destination of the data.
In contrast, when data written to a block to be erased in the block group BG 1 is accessed a first number of times or more or at a first frequency or more, the block group BG 3 is selected as a write destination of the data.
When the data written to the block to be erased in the block group BG 1 is data to be deleted, the data is discarded.
In the block group BG 2 of the present embodiment, data in the first low usage state from the block group BG 1 is first written sequentially to block B 2,1, next to block B 2,2, and then similarly to blocks B 2,3 to B 2,L. When the data amount of blocks B 2,1 to B 2,L included in the block group BG 2 exceeds a predetermined data amount, block B 2,1, in which writing was first completed, is erased according to FIFO, and data is again written sequentially to the erased block B 2,1. After the writing to block B 2,1 is completed, block B 2,2 is erased according to FIFO, and data is then written sequentially to the erased block B 2,2. The same control is repeated.
In the block group BG 2, it is determined whether data written to a block to be erased in the block group BG 2 is accessed less than a third number of times or at less than a third frequency, for example, based on the management information 62.
In contrast, when data written to a block to be erased in the block group BG 2 is accessed a third number of times or more or at a third frequency or more, the block group BG 3 is selected as a write destination of the data.
When the data written to the block to be erased in the block group BG 2 is data to be deleted, the data is discarded.
In the block group BG 3 of the present embodiment, data in the first high use state from the block group BG 1, data in the third high use state from the block group BG 2, or data rewritten from the block group BG 3 is first written sequentially to block B 3,1, next to block B 3,2, and then similarly to blocks B 3,3 to B 3,M. When the amount of data of blocks B 3,1 to B 3,M included in the block group BG 3 exceeds a predetermined amount of data, block B 3,1, in which writing was first completed, is erased according to FIFO, and data is again written sequentially to the erased block B 3,1. After the writing to block B 3,1 is completed, block B 3,2 is erased according to FIFO, and data is then written sequentially to the erased block B 3,2. The same control is repeated.
When the data written to the block to be erased in the block group BG 3 is accessed less than the second number of times or less than the second frequency, the block group BG 4 is selected as the write destination of the data.
In contrast, when the data written to the block to be erased in the block group BG 3 is accessed a second number of times or more or at a second frequency or more, the data is written to the block group BG 3 again.
When the data written to the block to be erased in the block group BG 3 is data to be deleted, the data is discarded.
In the block group BG 4 of the present embodiment, data in the second low usage state from the block group BG 3 is first written sequentially to block B 4,1, next to block B 4,2, and then similarly to blocks B 4,3 to B 4,N. When the data amount of blocks B 4,1 to B 4,N included in the block group BG 4 exceeds a predetermined data amount, block B 4,1, in which writing was first completed, is erased according to FIFO, and data is again written sequentially to the erased block B 4,1. After the writing to block B 4,1 is completed, block B 4,2 is erased according to FIFO, and data is then written sequentially to the erased block B 4,2. The same control is repeated.
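The FIFO write/erase cycle common to the block groups BG 1 to BG 4 can be modeled as follows. The class name and capacity parameters are illustrative assumptions; the sketch returns the items of the erased (oldest) block so that a caller can classify and move them, as the determination units described above do.

```python
# Assumed model of the per-group FIFO cycle: blocks fill in order B_1..B_K;
# once the group exceeds its capacity, the block whose writing completed
# first is erased and its items are handed back for reclassification.

class FifoBlockGroup:
    def __init__(self, num_blocks, block_size):
        self.num_blocks = num_blocks
        self.block_size = block_size
        self.blocks = [[]]                 # list of blocks, oldest first

    def write(self, item):
        """Write one item; return the items of any block erased by FIFO."""
        evicted = []
        if len(self.blocks[-1]) >= self.block_size:
            self.blocks.append([])         # current block full: start the next
            if len(self.blocks) > self.num_blocks:
                evicted = self.blocks.pop(0)   # erase the oldest block
        self.blocks[-1].append(item)
        return evicted

g = FifoBlockGroup(num_blocks=2, block_size=2)
for x in "abcd":
    g.write(x)
g.write("e")   # returns ["a", "b"]: the first-completed block is erased
```

Because erasure is always of a whole, oldest block, no valid-page compaction within the group is ever needed, which is the basis of the "no garbage collection" property claimed later.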
When the data written to the block to be erased of the block group BG 4 is determined to be in the fifth high use state, the control unit 13 may write the data to a writable destination block of the block group BG 3 in order to maintain the data in the nonvolatile cache memory 4.
In the present embodiment, data is managed based on four block groups BG 1 to BG 4.
For example, the first data (once accessed data) accessed once by the processor 2 is managed in the block group BG 1.
For example, if the second data in the block group BG 1 is accessed two or more times by the processor 2 and pushed out of the block group BG 1 based on the FIFO, the second data is moved from the block group BG 1 to the block group BG 3.
Note that, in the present embodiment, the size of the block group BG 1 is larger than the size of the block group BG 3.
For example, when the third data in block group BG 1 is pushed out of block group BG 1 based on the FIFO without being accessed by the processor 2, the third data is moved from block group BG 1 to block group BG 2.
For example, if the fourth data in block group BG 3 is cleared from block group BG 3 based on the FIFO without being accessed by the processor 2, the fourth data is moved from block group BG 3 to block group BG 4.
for example, in block groups BG 2 and BG 4, metadata can be cached instead of caching data.
In the present embodiment, for example, when the fifth data is stored in the block group BG 1, the sixth data in the block group BG 2 may be pushed out based on the FIFO.
For example, when the seventh data in the block group BG 1 is accessed and pushed out of the block group BG 1 based on the FIFO, the seventh data may be moved from the block group BG 1 to the block group BG 3, the eighth data in the block group BG 3 may be moved from the block group BG 3 to the block group BG 4 based on the FIFO, and the ninth data in the block group BG 4 may be pushed out of the block group BG 4 based on the FIFO.
If the size of the block group BG 1 increases, the eleventh data in the block group BG 3 is moved to the block group BG 4 based on a FIFO.
For example, when the twelfth data in the block group BG 4 is accessed and pushed out of the block group BG 4 based on the FIFO, the twelfth data is moved to the block group BG 3 and the size of the block group BG 1 is reduced.
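The data movement among the four block groups described above can be condensed into a small decision function. This is a hedged summary under assumed names; `accessed_again` stands for the high-usage determination (access count or frequency) made by the determination units, and the function only returns the destination group chosen on a FIFO eviction.

```python
# Assumed summary of the promotion/demotion rules among BG 1 to BG 4:
# BG 1 evictions go to BG 3 if re-accessed, else BG 2; BG 3 evictions are
# rewritten to BG 3 if re-accessed, else demoted to BG 4. Evictions from
# BG 2 and BG 4 are discarded (a hit there instead promotes to BG 3).

def eviction_destination(group, accessed_again):
    """Destination group for data pushed out of `group` by FIFO."""
    if group == "BG1":
        return "BG3" if accessed_again else "BG2"
    if group == "BG3":
        return "BG3" if accessed_again else "BG4"
    return None   # BG2 / BG4: evicted data is simply dropped
```

Readers familiar with cache replacement may notice the resemblance to adaptive schemes that track once-accessed and re-accessed data separately; that comparison is an observation, not a statement from the patent.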
In the present embodiment described above, whether or not to maintain data is determined on a block-by-block basis, the data to be maintained is written to a destination block, and the data written to the nonvolatile cache memory 4 is erased on a block-by-block basis.
In the embodiment, the effective cache capacity can be increased, the hit rate of the nonvolatile cache memory 4 can be increased, and the speed of the information processing apparatus 17 can be increased.
In the present embodiment, since garbage collection is not performed for the nonvolatile cache memory 4, a decrease in performance can be avoided. Since garbage collection is not necessary, the number of writes to the nonvolatile cache memory 4 can be reduced and the lifetime of the nonvolatile cache memory 4 can be increased. Further, since garbage collection is not necessary, a provisioning area does not need to be secured. Accordingly, the data capacity usable as a cache memory can be increased, and the use efficiency can be improved.
For example, when using non-volatile memory as a cache and discarding data regardless of block boundaries, garbage collection may be performed frequently to move valid data in a block of non-volatile memory to another block. In the present embodiment, there is no need to perform garbage collection in the nonvolatile cache memory 4. Therefore, as described above, in the present embodiment, the lifetime of the nonvolatile cache memory 4 can be increased.
[ third embodiment ]
In the present embodiment, the information processing system 35 including the information processing device 17 and the SSD 5 explained in the first embodiment and the second embodiment is explained in further detail.
Fig. 8 is a block diagram showing an example of a detailed structure of the information processing system 35 according to the present embodiment.
The information processing system 35 includes an information processing device 17 and a memory system 37.
The SSD 5 according to the first embodiment and the second embodiment corresponds to the memory system 37.
The processor 22 of the SSD 5 corresponds to the CPU 43B.
The address conversion information 32 corresponds to a LUT (look-up table) 45.
The memory 23 corresponds to the DRAM 47.
The information processing apparatus 17 functions as a host apparatus.
The controller 36 of the memory system 37 includes a front end 4F and a back end 4B.
The front end (host communication unit) 4F includes a host interface 41, a host interface controller 42, an encoding/decoding unit (Advanced Encryption Standard (AES)) 44, and a CPU 43F.
The host interface 41 communicates with the information processing device 17 to exchange requests (write command, read command, erase command), LBAs, and data.
A host interface controller (control unit) 42 controls communication of the host interface 41 based on control of the CPU 43F.
The encoding/decoding unit 44 encodes write data (plaintext) transferred from the host interface controller 42 in a data write operation. The encoding/decoding unit 44 decodes encoded read data transferred from the read buffer RB of the back end 4B in a data read operation. It should be noted that the transfer of write data and read data may be performed without using encoding/decoding unit 44 in accordance with a temporary command.
The CPU 43F controls the above components 41, 42, and 44 of the front end 4F to control the entire function of the front end 4F.
The back end (memory communication unit) 4B includes a write buffer WB, a read buffer RB, an LUT45, a DDRC 46, a DRAM 47, a DMAC 48, an ECC 49, a randomizer RZ, a NANDC 50, and a CPU 43B.
The write buffer (write data transfer unit) WB temporarily stores write data transferred from the information processing apparatus 17. Specifically, the write buffer WB temporarily stores the data until it reaches a predetermined data size suitable for the non-volatile memory 24.
The read buffer (read data transfer unit) RB temporarily stores read data read from the nonvolatile memory 24. Specifically, the read buffer RB rearranges the read data into an order suitable for the information processing apparatus 17 (the order of the logical addresses LBA specified by the information processing apparatus 17).
The LUT 45 is data used to convert a logical address (LBA) into a physical address (PBA: physical block address).
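As a rough model of the translation performed with the LUT 45, the mapping from LBA to PBA can be sketched as a dictionary. A real flash look-up table is far more elaborate (paged, partially cached in the DRAM 47); the function names below are illustrative assumptions only.

```python
# Assumed dictionary model of the LUT 45: each write maps an LBA to the
# newly programmed physical location, and each read resolves that mapping.

lut = {}                       # LBA -> PBA

def write_translate(lba, programmed_pba):
    """On a write, record the physical location just programmed for the LBA."""
    lut[lba] = programmed_pba
    return programmed_pba

def read_translate(lba):
    """On a read, resolve the LBA; None means the LBA was never written."""
    return lut.get(lba)
```

Because flash cannot be overwritten in place, a rewrite of the same LBA simply updates the dictionary entry to a new PBA, leaving the old physical location invalid.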
The DDRC 46 controls the double data rate (DDR) operation of the DRAM 47.
The DRAM 47 is a volatile memory storing, for example, the LUT 45.
a Direct Memory Access Controller (DMAC)48 transfers write data and read data via the internal bus IB. In FIG. 8, only a single DMAC 48 is shown; however, the controller 36 may include two or more DMACs 48. The DMAC 48 may be set in various locations within the controller 36.
An ECC (error correction unit) 49 adds an Error Correction Code (ECC) to the write data transferred from the write buffer WB. When the read data is transferred to the read buffer RB, the ECC 49 uses the added ECC to correct the read data read from the non-volatile memory 24, if necessary.
In a data write operation, the randomizer RZ (or scrambler) spreads the write data in such a way that the write data is not biased toward a certain page or the word line direction of the non-volatile memory 24. By spreading the write data in this way, the number of writes can be leveled and the cell life of the memory cells MC of the nonvolatile memory 24 can be extended. Therefore, the reliability of the nonvolatile memory 24 can be improved. In addition, in a data read operation, read data read from the nonvolatile memory 24 passes through the randomizer RZ.
The NAND controller (NANDC)50 uses multiple channels (four channels CH 0-CH 3 are shown) to access the non-volatile memory 24 in parallel to meet the demand for a certain speed.
The CPU 43B controls each of the above components (45 to 50 and RZ) of the backend 4B to control the entire function of the backend 4B.
It should be noted that the structure of the controller 36 is merely an example and is not intended to be limiting thereby.
Fig. 9 is a perspective view showing an example of the storage system according to the present embodiment.
The storage system 100 includes a memory system 37 as an SSD.
For example, the memory system 37 is a relatively small module, approximately 20 mm × 30 mm in external size. It should be noted that the size and dimensions of the memory system 37 are not limited thereto and may be arbitrarily changed to various sizes.
Further, the memory system 37 may be applied to the information processing apparatus 17 as a server used in a data center or a cloud computing system employed in a company (enterprise) or the like. Thus, the memory system 37 may be an enterprise SSD (eSSD).
For example, the memory system 37 includes a plurality of connectors (e.g., slots) 38 that open upward. Each connector 38 is a Serial Attached SCSI (SAS) connector or the like. With the SAS connector, high-speed intercommunication can be established between the information processing device 17 and each memory system 37 through a 6 Gbps dual port. It should be noted that the connector 38 may be a PCI Express (PCIe) or NVM Express (NVMe) connector.
A plurality of memory systems 37 are individually attached to a connector 38 of the information processing device 17 and are supported in an arrangement such that they stand in a substantially vertical direction. With this structure, a plurality of memory systems 37 can be collectively mounted in a compact size, and the memory systems 37 can be miniaturized. Further, each memory system 37 of the present embodiment is in the shape of a Small Form Factor (SFF) of 2.5 inches. Due to this shape, the memory system 37 is compatible with enterprise HDDs (eHDDs) and simple system compatibility with eHDDs can be achieved.
It should be noted that the memory system 37 is not limited to use in an enterprise HDD. For example, the memory system 37 may be used as a storage medium for a consumer electronic device, such as a notebook computer or a tablet computer terminal.
As can be understood from the above, the information processing system 35 and the storage system 100 having the structure of the present embodiment can realize mass storage while retaining the same advantages as the second embodiment.
The structure of the memory system 37 according to the present embodiment is applicable to the information processing apparatus 17 according to the first embodiment. For example, the processor 2 according to the first embodiment may correspond to the CPU 43B. The address translation information 7 may correspond to the LUT 45. The memory 3 corresponds to the DRAM 47. The non-volatile cache memory 4 may correspond to the non-volatile memory 24.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the invention. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. It is intended that the appended claims and their equivalents cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (5)

1. An information processing apparatus in communication with a memory apparatus that translates logical addresses into physical addresses of the memory apparatus according to address translation information,
the information processing apparatus includes a 1 st memory configured to store 1 st valid data,
The memory device includes a 2 nd memory configured to store the 1 st valid data and the 2 nd valid data, and the 2 nd memory is nonvolatile and includes a plurality of erase unit areas, each of the erase unit areas including a plurality of write unit areas;
The information processing apparatus is characterized by further comprising:
A transfer unit that transfers write data and a logical address of the write data to the memory device;
A receiving unit that receives region information from the memory device, the region information including logical address data identification information indicating the 1 st valid data or the 2 nd valid data written to an erase unit region to be subjected to garbage collection;
A deletion information transmitting unit that transmits deletion information to the memory device, the deletion information including: a logical address indicating deletion target data that is data to be deleted from the 2 nd memory, the data to be deleted being not the 1 st valid data and being at least a part of the 2 nd valid data written to the erase unit area of the 2 nd memory to be subjected to garbage collection, and the deletion target data being excluded from garbage collection; and
An erasing unit that moves data to be moved from an area to be erased of the 1 st memory to another area of the 1 st memory and erases the area to be erased, wherein the data to be moved is data that has not been deleted among the data written to the area to be erased.
2. The information processing apparatus according to claim 1, characterized by further comprising:
A write unit that writes at least a portion of the 1 st valid data or the 2 nd valid data indicated by the logical address data identification information included in the area information to the 1 st memory.
3. The information processing apparatus according to claim 2, wherein
The 1 st memory is a cache memory included in the information processing apparatus; and
the writing unit writes the 1 st valid data or the 2 nd valid data excluding the data to be deleted to the 1 st memory.
4. The information processing apparatus according to claim 2, wherein
The 1 st memory is a cache memory included in the information processing apparatus; and
the writing unit does not write the data to be deleted to the 1 st memory.
5. The information processing apparatus according to claim 1, wherein
The non-volatile memory is a NAND flash memory,
The erase unit area is a block,
the write unit area is a page, and
The memory device is a solid state drive.
CN201510239405.5A 2014-12-29 2015-05-12 Information processing apparatus Active CN106201326B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201462097538P 2014-12-29 2014-12-29
US62/097,538 2014-12-29
JP2015-038999 2015-02-27
JP2015038999A JP6378111B2 (en) 2014-12-29 2015-02-27 Information processing apparatus and program
US14/656,524 US10331551B2 (en) 2014-12-29 2015-03-12 Information processing device and non-transitory computer readable recording medium for excluding data from garbage collection
US14/656,524 2015-03-12

Publications (2)

Publication Number Publication Date
CN106201326A CN106201326A (en) 2016-12-07
CN106201326B true CN106201326B (en) 2019-12-10

Family

ID=56359601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510239405.5A Active CN106201326B (en) 2014-12-29 2015-05-12 Information processing apparatus

Country Status (3)

Country Link
JP (2) JP6378111B2 (en)
CN (1) CN106201326B (en)
TW (1) TW201624491A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI639918B (en) * 2017-05-11 2018-11-01 慧榮科技股份有限公司 Data storage device and operating method therefor
TWI649652B (en) * 2017-12-29 2019-02-01 國科美國研究實驗室 Fast and safe data storage device and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163458A (en) * 2010-02-12 2011-08-24 株式会社东芝 Semiconductor memory device
CN103744615A (en) * 2013-12-17 2014-04-23 记忆科技(深圳)有限公司 Dynamic compensation receiver and dynamic compensation receiving method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100823171B1 (en) * 2007-02-01 2008-04-18 삼성전자주식회사 Computer system having a partitioned flash translation layer and flash translation layer partition method thereof
US8977805B2 (en) * 2009-03-25 2015-03-10 Apple Inc. Host-assisted compaction of memory blocks
JP2012123499A (en) * 2010-12-07 2012-06-28 Toshiba Corp Memory system
JP5687648B2 (en) * 2012-03-15 2015-03-18 株式会社東芝 Semiconductor memory device and program
CN104583977B (en) * 2012-08-23 2017-07-14 苹果公司 The compression of the memory block of main frame auxiliary
US9652376B2 (en) * 2013-01-28 2017-05-16 Radian Memory Systems, Inc. Cooperative flash memory control
KR20140128824A (en) * 2013-04-29 2014-11-06 삼성전자주식회사 Method for managing data using attribute data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163458A (en) * 2010-02-12 2011-08-24 株式会社东芝 Semiconductor memory device
CN103744615A (en) * 2013-12-17 2014-04-23 记忆科技(深圳)有限公司 Dynamic compensation receiver and dynamic compensation receiving method

Also Published As

Publication number Publication date
JP2016126739A (en) 2016-07-11
TW201624491A (en) 2016-07-01
JP6689325B2 (en) 2020-04-28
JP6378111B2 (en) 2018-08-22
JP2018195333A (en) 2018-12-06
CN106201326A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
US10761977B2 (en) Memory system and non-transitory computer readable recording medium
US10303599B2 (en) Memory system executing garbage collection
US11726906B2 (en) Memory device and non-transitory computer readable recording medium
US8458394B2 (en) Storage device and method of managing a buffer memory of the storage device
US20230342294A1 (en) Memory device and non-transitory computer readable recording medium
CN108027764B (en) Memory mapping of convertible leaves
US20230281118A1 (en) Memory system and non-transitory computer readable recording medium
CN106201326B (en) Information processing apparatus
CN106205708B (en) Cache memory device
US20160124842A1 (en) Memory system and non-transitory computer readable recording medium
US10331551B2 (en) Information processing device and non-transitory computer readable recording medium for excluding data from garbage collection
US10474569B2 (en) Information processing device including nonvolatile cache memory and processor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170807

Address after: Tokyo, Japan

Applicant after: TOSHIBA MEMORY Corp.

Address before: Tokyo, Japan

Applicant before: Toshiba Corp.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Tokyo

Patentee after: Kaixia Co.,Ltd.

Address before: Tokyo

Patentee before: TOSHIBA MEMORY Corp.

Address after: Tokyo

Patentee after: TOSHIBA MEMORY Corp.

Address before: Tokyo

Patentee before: Pangea Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20220303

Address after: Tokyo

Patentee after: Pangea Co.,Ltd.

Address before: Tokyo

Patentee before: TOSHIBA MEMORY Corp.