CN117806572A - Storage device and memory management method - Google Patents

Storage device and memory management method

Info

Publication number
CN117806572A
CN117806572A (application CN202410232483.1A)
Authority
CN
China
Prior art keywords
cache
controller
buffer
data
read operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410232483.1A
Other languages
Chinese (zh)
Inventor
陈文涛
许建强
苏忠益
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Kangxinwei Storage Technology Co Ltd
Original Assignee
Hefei Kangxinwei Storage Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Kangxinwei Storage Technology Co Ltd filed Critical Hefei Kangxinwei Storage Technology Co Ltd
Priority to CN202410232483.1A priority Critical patent/CN117806572A/en
Publication of CN117806572A publication Critical patent/CN117806572A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a storage device and a memory management method. The storage device comprises a controller in which a random access memory is arranged, and a flash memory divided into a plurality of physical flash blocks, each logical block corresponding to at least two physical flash blocks. The random access memory comprises a block management cache area for caching an erase operation count variable and a read operation count variable, each logical block corresponding to one erase operation count variable and one read operation count variable, and a data cache area divided into a plurality of cache node units, the cache node units forming a cache linked list. The invention can greatly reduce the amount of cache space occupied in the random access memory, so that a random access memory with a smaller cache space can be better adapted to a flash memory product with a larger capacity.

Description

Storage device and memory management method
Technical Field
The present invention relates to the field of memory technologies, and in particular to a storage device and a memory management method.
Background
Storage devices that use flash memory as the storage medium are widely used in many fields. When such a storage device operates, a random access memory is generally used to cache data. At present, the capacity of flash memory products keeps growing while the cache space of the random access memory remains limited, so when a random access memory with a smaller cache space is adapted to a flash memory product with a larger capacity, the cache space of the random access memory often turns out to be insufficient.
Disclosure of Invention
The invention aims to provide a storage device and a memory management method that solve the problem of insufficient cache space in the random access memory when a random access memory with a smaller cache space is adapted to a flash memory product with a larger capacity.
The present invention provides a storage device comprising:
a controller, wherein a random access memory is arranged in the controller; and
a flash memory, which is divided into a plurality of physical flash blocks, the physical flash blocks corresponding to a plurality of logical blocks of the firmware, and each logical block corresponding to at least two physical flash blocks;
wherein the random access memory comprises:
a block management cache area, used for caching an erase operation count variable and a read operation count variable, each logical block corresponding to one erase operation count variable and one read operation count variable; and
a data cache area, divided into a plurality of cache node units, the cache node units forming a cache linked list, wherein the controller caches read operation data or write operation data into free cache node units in the cache linked list.
In an embodiment of the present invention, the flash memory stores a primary mapping table, where the primary mapping table is used to indicate the mapping structure between a plurality of logical addresses and the corresponding physical addresses, and the plurality of logical addresses are divided into a plurality of segmented logical addresses in the primary mapping table; the random access memory includes a mapping table cache area, the mapping table cache area stores a secondary mapping table, and the secondary mapping table includes a plurality of mapping management units, which are respectively used to indicate the mapping structure between the segmented logical addresses and the corresponding physical addresses.
In an embodiment of the invention, the number of mapping management units is equal to the number of segmented logical addresses.
In one embodiment of the present invention, the storage capacity B of each segmented logical address in the primary mapping table satisfies 4 KB ≤ B ≤ 16 KB.
In an embodiment of the present invention, the cache linked list is divided into a free linked list, a write operation linked list and a read operation linked list, where the free linked list manages the free cache node units, the write operation linked list manages the cache node units holding write operation data, and the read operation linked list manages the cache node units holding read operation data; when neither a read operation nor a write operation is performed, the cache node units are mounted on the free linked list; when a write operation is performed, the controller inserts cache node units from the free linked list into the write operation linked list; and when a read operation is performed, the controller inserts cache node units from the free linked list into the read operation linked list.
In one embodiment of the present invention, when the write operation is completed, the controller inserts the freed cache node units in the write operation linked list back into the free linked list.
In one embodiment of the present invention, when the read operation is completed, the controller inserts the freed cache node units in the read operation linked list back into the free linked list.
In an embodiment of the present invention, the controller updates the erase operation count variable corresponding to a logical block each time the controller performs a write operation on that logical block.
In an embodiment of the present invention, the controller updates the read operation count variable corresponding to a logical block each time the controller performs a read operation on that logical block.
The invention also provides a memory management method applied to a storage device, wherein the storage device comprises a controller and a flash memory, a random access memory is arranged in the controller, and the random access memory comprises a data cache area and a block management cache area; the memory management method comprises the following steps:
dividing the flash memory into a plurality of physical flash blocks, wherein the plurality of physical flash blocks correspond to a plurality of logical blocks of the firmware, each logical block corresponds to at least two physical flash blocks, the block management cache area is used for caching an erase operation count variable and a read operation count variable, and each logical block corresponds to one erase operation count variable and one read operation count variable;
dividing the data cache area into a plurality of cache node units, wherein the cache node units form a cache linked list;
and when a read operation is performed, the controller caches the read operation data into a free cache node unit in the cache linked list, and when a write operation is performed, the controller caches the write operation data into a free cache node unit in the cache linked list.
To solve the above technical problem, the invention adopts the technical solutions described above.
As described above, the present invention provides a storage device and a memory management method that can greatly reduce the cache space occupied in the random access memory, so that a random access memory with a smaller cache space can be better adapted to a flash memory product with a larger capacity.
Of course, it is not necessary for any product practicing the invention to achieve all of the advantages described above at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a memory device according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a buffer chain table according to an embodiment of the present invention;
FIG. 3 is a schematic diagram showing the corresponding structures of physical flash blocks and logical blocks according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the corresponding structures of the secondary mapping table and the primary mapping table according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a memory management method according to an embodiment of the invention.
In the figures: 10. controller; 11. host interface; 12. flash memory interface; 13. random access memory; 131. cache node unit; 132. cache linked list; 133. secondary mapping table; 20. flash memory; 21. physical flash block; 22. primary mapping table; 30. logical block.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a storage device according to the present invention, which may include a controller 10 and a flash memory 20. The controller 10 may include a host interface 11 for communicating with a host, a flash interface 12 for interfacing with the flash memory 20, and a random access memory 13. The host may send management commands to the controller 10 of the storage device through the host interface 11, and the controller 10 may send feedback data to the host through the host interface 11. The controller 10 may read data from the flash memory 20 through the flash interface 12, and may write data to the flash memory 20 through the flash interface 12. The random access memory 13 may be used to cache data to be written to the flash memory 20 and data read from the flash memory 20.
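As an illustrative aid only, and not as part of the patent text, the relationship between these components might be modeled in C roughly as follows; every type and field name here is an assumption chosen for this sketch.

```c
#include <stdint.h>

/* Hypothetical sketch of the components in fig. 1; all names are
 * illustrative and not taken from the patent. */
typedef struct {
    uint32_t block_count;         /* number of physical flash blocks 21        */
} flash_memory_t;                 /* flash memory 20                           */

typedef struct {
    void    *host_interface;      /* host interface 11: receives host commands */
    void    *flash_interface;     /* flash interface 12: reads/writes flash 20 */
    uint8_t *ram;                 /* random access memory 13: caches read and  */
    uint32_t ram_size;            /* write data, mapping tables, block counts  */
} controller_t;                   /* controller 10                             */

typedef struct {
    controller_t   controller;
    flash_memory_t flash;
} storage_device_t;
```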
Referring to fig. 1, the storage device can be applied in scenarios where a random access memory 13 with a smaller cache space needs to be adapted to a flash memory product with a larger capacity. Specifically, the data cached in the random access memory 13 may include read and write operation data, an address mapping table, erase operation counts, read operation counts and other data. By reducing the cache space occupied by each of these kinds of data, the storage device can greatly reduce the total cache space occupied in the random access memory 13, so that a random access memory 13 with a smaller cache space is better adapted to a flash memory product with a larger capacity.
Referring to fig. 2, the random access memory 13 may include a data cache area through which read and write operation data are cached. On this basis, the cache space occupied in the random access memory 13 can be reduced by reducing the cache space occupied by the data cache area. In order to manage the cache space in the data cache area more fully, the data cache area may be divided into a plurality of cache node units 131, and the plurality of cache node units 131 may form a cache linked list 132. During a read operation, the controller 10 may cache the read operation data into free cache node units 131 in the cache linked list 132, and during a write operation, the controller 10 may cache the write operation data into free cache node units 131 in the cache linked list 132. Managing the cache through the cache linked list 132 allows the cache space of the random access memory 13 to be used more fully.
Referring to fig. 2, specifically, the cache linked list 132 may be divided into a free linked list for managing the free cache node units 131, a write operation linked list for managing the cache node units 131 holding write operation data, and a read operation linked list for managing the cache node units 131 holding read operation data. All cache node units 131 may be numbered sequentially, and the number identifies each cache node unit 131. Initially, no cache node unit 131 is used for data caching; all units are free and are therefore mounted on the free linked list. During a write operation, the controller 10 obtains cache node units 131 from the free linked list based on the cache space required by the write operation data, and inserts the obtained cache node units 131 into the write operation linked list to cache the write operation data. When the write operation is completed, the controller 10 inserts the freed cache node units 131 of the write operation linked list back into the free linked list for the next cache use. During a read operation, the controller 10 obtains cache node units 131 from the free linked list based on the cache space required by the read operation data, and inserts the obtained cache node units 131 into the read operation linked list to cache the read operation data. When the read operation is completed, the controller 10 inserts the freed cache node units 131 of the read operation linked list back into the free linked list for the next cache use. With this caching scheme, the entire space of the data cache area can be used for read operation caching, and likewise the entire space can be used for write operation caching, which improves data read and write efficiency.
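Purely as an illustration of the free, write and read operation list management described above, and not as the patent's actual implementation, a minimal C sketch might look like the following; the structure and function names (cache_node_t, cache_lists_t, acquire, release) are assumptions.

```c
#include <stddef.h>

/* Cache node unit 131: one slice of the data cache area. */
typedef struct cache_node {
    struct cache_node *prev, *next;
    unsigned int       id;    /* sequential number identifying the unit */
    unsigned char     *buf;   /* cache space managed by this unit       */
} cache_node_t;

/* List head for the free / write operation / read operation linked lists. */
typedef struct {
    cache_node_t *head;
} cache_list_t;

static void list_push(cache_list_t *list, cache_node_t *node)
{
    node->prev = NULL;
    node->next = list->head;
    if (list->head)
        list->head->prev = node;
    list->head = node;
}

static void list_remove(cache_list_t *list, cache_node_t *node)
{
    if (node->prev) node->prev->next = node->next;
    else            list->head       = node->next;
    if (node->next) node->next->prev = node->prev;
    node->prev = node->next = NULL;
}

/* Cache linked list 132, divided into free / write / read sub-lists. */
typedef struct {
    cache_list_t free_list;   /* units not caching any data             */
    cache_list_t write_list;  /* units caching write operation data     */
    cache_list_t read_list;   /* units caching read operation data      */
} cache_lists_t;

/* On a write (or read) operation the controller takes a unit from the
 * free linked list and mounts it on the write (or read) operation list. */
cache_node_t *acquire(cache_lists_t *c, cache_list_t *target)
{
    cache_node_t *node = c->free_list.head;
    if (node == NULL)
        return NULL;                      /* no free unit available      */
    list_remove(&c->free_list, node);
    list_push(target, node);
    return node;
}

/* Once a unit's data has been written to flash or returned to the host,
 * the unit is moved back to the free linked list for the next use. */
void release(cache_lists_t *c, cache_list_t *owner, cache_node_t *node)
{
    list_remove(owner, node);
    list_push(&c->free_list, node);
}
```

Under this sketch, a write operation would call acquire(&lists, &lists.write_list) once for each unit of cache space it needs and release() as the data in each unit reaches the flash, which matches the node movement described above.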
Furthermore, while a section of data is being written to the flash memory 20, once the cached data in some of the cache node units 131 has been written to the flash memory 20, those cache node units 131 become free, and the freed cache node units 131 can be inserted into the read operation linked list, so that this space is reused for read operation caching and the cache space occupied by the data cache area is reduced. Similarly, while a section of data is being cached for a read operation, once the cached data in some of the cache node units 131 has been read out, those cache node units 131 become free, and the freed cache node units 131 can be inserted into the write operation linked list, so that this space is reused for write operation caching; the cache space occupied by the data cache area is thus reduced, and a random access memory 13 with a smaller cache space can be better adapted to a flash memory product with a larger capacity.
Referring to fig. 3, the random access memory 13 may further include a block management cache area, which may be used to cache the erase operation count variables and the read operation count variables. An erase operation count variable records the number of erase operations performed on the flash memory 20, and a read operation count variable records the number of read operations performed on the flash memory 20. As the capacity of the flash memory 20 increases, the amount of erase operation count data and read operation count data in the block management cache area also increases. On this basis, by reducing the space occupied by the erase operation count variables and the read operation count variables in the random access memory 13, the cache space occupied in the random access memory 13 can be greatly reduced.
Referring to fig. 3, it should be noted that the erase operation count variables and the read operation count variables correspond to the logical blocks 30 in the logical space of the firmware, and the logical blocks 30 correspond to the physical flash blocks 21 of the flash memory 20; when a read or write operation is performed on a physical flash block 21, the firmware actually operates on a logical block 30, which in turn performs the read or write operation on the physical flash block 21 and updates the corresponding erase operation count variable and read operation count variable. On this basis, the flash memory 20 may be divided into a plurality of physical flash blocks 21, the plurality of physical flash blocks 21 corresponding to a plurality of logical blocks 30 of the firmware, and each logical block 30 may correspond to two physical flash blocks 21. With this structure, the read operations of two physical flash blocks 21 correspond to one read operation count variable, and the erase operations of two physical flash blocks 21 correspond to one erase operation count variable, so that the space occupied by the read operation count variables and the erase operation count variables is halved, and the cache space occupied in the random access memory 13 is greatly reduced. Further, each logical block 30 may also correspond to three physical flash blocks 21, four physical flash blocks 21 or another number of physical flash blocks; as long as each logical block 30 corresponds to at least two physical flash blocks 21, the occupied space is reduced.
Referring to fig. 3, further, when each logical block 30 corresponds to two physical flash blocks 21, each time an erase operation is performed on a logical block 30, the erase operation is performed on the two corresponding physical flash blocks 21 in succession, and the value of the erase operation count variable corresponding to that logical block 30 is incremented by one to complete the data update. Each time a read operation is performed on a logical block 30, only one of the two physical flash blocks 21 is read, and the value of the read operation count variable corresponding to that logical block 30 is incremented by one to complete the data update.
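The halving of the counter storage can be sketched in C as follows; the structure and the low-level driver hooks (flash_erase_block, flash_read_page) are assumptions introduced only for illustration and are not defined by the patent.

```c
#include <stdint.h>

#define PHYS_PER_LOGICAL 2u  /* each logical block 30 covers two physical flash blocks 21 */

/* Hypothetical low-level flash driver hooks; not from the patent. */
extern void flash_erase_block(uint32_t phys_index);
extern void flash_read_page(uint32_t phys_index, uint32_t page, void *dst);

/* Block management cache entry: one erase count and one read count per
 * logical block, however many physical blocks the logical block covers. */
typedef struct {
    uint32_t phys_block[PHYS_PER_LOGICAL]; /* indices of the covered physical blocks */
    uint32_t erase_count;                  /* erase operation count variable         */
    uint32_t read_count;                   /* read operation count variable          */
} logical_block_t;

/* Erasing a logical block erases both physical blocks in succession and
 * advances the single shared erase operation count variable once. */
void erase_logical_block(logical_block_t *lb)
{
    for (uint32_t i = 0; i < PHYS_PER_LOGICAL; i++)
        flash_erase_block(lb->phys_block[i]);
    lb->erase_count++;
}

/* A read touches only one of the two physical blocks, and advances the
 * single shared read operation count variable once. */
void read_logical_block(logical_block_t *lb, uint32_t which, uint32_t page, void *dst)
{
    flash_read_page(lb->phys_block[which % PHYS_PER_LOGICAL], page, dst);
    lb->read_count++;
}
```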
Referring to fig. 4, the random access memory 13 may further include a mapping table cache area for caching an address mapping table. On this basis, the cache space occupied in the random access memory 13 can be greatly reduced by reducing the space occupied by the address mapping table. Specifically, the storage device manages the address mapping table hierarchically: the flash memory 20 stores a primary mapping table 22, where the primary mapping table 22 is used to indicate the mapping structure between a plurality of logical addresses and the corresponding physical addresses, and the plurality of logical addresses are divided into a plurality of segmented logical addresses in the primary mapping table 22; the random access memory 13 includes a mapping table cache area, the mapping table cache area stores a secondary mapping table 133, and the secondary mapping table 133 includes a plurality of mapping management units, which are respectively used to indicate the mapping structure between the segmented logical addresses and the corresponding physical addresses. It follows that the number of mapping management units is equal to the number of segmented logical addresses, and this number is related to the segment capacity of the segmented logical addresses: the larger the segment capacity, the smaller the number of segments, and the smaller the segment capacity, the larger the number of segments. To reduce the space occupied by the mapping management units in the secondary mapping table 133, the segment capacity of the segmented logical addresses in the primary mapping table 22 may be increased, thereby reducing the number of mapping management units. For example, the storage capacity B of each segmented logical address in the primary mapping table 22 may satisfy 4 KB ≤ B ≤ 16 KB; the segment capacity may be set to 4 KB for a small-capacity flash memory 20, and to 8 KB, 16 KB or another size for a large-capacity storage device.
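The trade-off between segment capacity and mapping table footprint can be made concrete with a small, self-contained C example; the 512 GB flash capacity used below is an assumed figure, and the point is only that raising the segment capacity from 4 KB to 16 KB divides the number of mapping management units, and hence the space the secondary mapping table 133 occupies, by four.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the 512 GB flash capacity is an assumed example,
 * not a value taken from the patent. */
int main(void)
{
    const uint64_t flash_capacity    = 512ULL << 30;                       /* 512 GB      */
    const uint64_t segment_capacity[] = { 4u << 10, 8u << 10, 16u << 10 }; /* 4/8/16 KB   */

    for (unsigned i = 0; i < 3; i++) {
        /* one mapping management unit per segmented logical address */
        uint64_t units = flash_capacity / segment_capacity[i];
        printf("segment capacity %2llu KB -> %llu mapping management units\n",
               (unsigned long long)(segment_capacity[i] >> 10),
               (unsigned long long)units);
    }
    return 0;
}
```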
Referring to fig. 5, the present invention further provides a memory management method, which can be applied to the above-mentioned storage device, so that the random access memory with smaller cache space can be better adapted to the flash memory product with larger capacity. The memory management method may include the steps of:
step S10, dividing the flash memory into a plurality of physical flash blocks, where the plurality of physical flash blocks correspond to a plurality of logic blocks of the firmware, and each logic block corresponds to at least two physical flash blocks, where a block management buffer of the random access memory is used to buffer an erase operation number variable and a read operation number variable, and each logic block corresponds to one erase operation number variable and one read operation number variable.
And step S20, dividing the data cache area into a plurality of cache node units, wherein the plurality of cache node units form a cache linked list.
And step S30, when a read operation is performed, the controller caches the read operation data into a free cache node unit in the cache linked list, and when a write operation is performed, the controller caches the write operation data into a free cache node unit in the cache linked list.
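Tying steps S10 and S20 together, the initialization could proceed roughly as in the sketch below; it assumes the hypothetical structures and helpers from the earlier sketches (logical_block_t, cache_node_t, cache_lists_t, list_push, PHYS_PER_LOGICAL) are available in the same translation unit, and none of these names come from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Step S10: group the physical flash blocks 21 into logical blocks 30, two
 * physical blocks per logical block, each logical block starting with one
 * erase operation count variable and one read operation count variable. */
void init_block_management(logical_block_t *table, uint32_t n_phys_blocks)
{
    for (uint32_t lb = 0; lb < n_phys_blocks / PHYS_PER_LOGICAL; lb++) {
        for (uint32_t i = 0; i < PHYS_PER_LOGICAL; i++)
            table[lb].phys_block[i] = lb * PHYS_PER_LOGICAL + i;
        table[lb].erase_count = 0;
        table[lb].read_count  = 0;
    }
}

/* Step S20: carve the data cache area into cache node units 131 and mount
 * them all on the free linked list, since initially no unit caches data. */
void init_data_cache(cache_lists_t *c, cache_node_t *nodes, unsigned char *area,
                     uint32_t n_nodes, uint32_t node_size)
{
    c->free_list.head = c->write_list.head = c->read_list.head = NULL;
    for (uint32_t i = 0; i < n_nodes; i++) {
        nodes[i].id  = i;
        nodes[i].buf = area + (size_t)i * node_size;
        list_push(&c->free_list, &nodes[i]);
    }
}

/* Step S30 then reuses acquire()/release() from the earlier sketch to cache
 * read operation data and write operation data in free cache node units. */
```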
Referring to fig. 3 and 5, in one embodiment of the present invention, when step S10 is performed, the flash memory 20 is divided into a plurality of physical flash blocks 21, the plurality of physical flash blocks 21 correspond to a plurality of logical blocks 30 of the firmware, and each logical block 30 corresponds to at least two physical flash blocks 21, wherein a block management cache area of the random access memory 13 is used for caching an erase operation count variable and a read operation count variable, and each logical block corresponds to one erase operation count variable and one read operation count variable. Specifically, an erase operation count variable records the number of erase operations performed on the flash memory 20, and a read operation count variable records the number of read operations performed on the flash memory 20. As the capacity of the flash memory 20 increases, the amount of erase operation count data and read operation count data in the block management cache area also increases. On this basis, by reducing the space occupied by the erase operation count variables and the read operation count variables in the random access memory 13, the cache space occupied in the random access memory 13 can be greatly reduced.
It should be noted that the erase operation count variables and the read operation count variables correspond to the logical blocks 30 in the logical space of the firmware, and the logical blocks 30 correspond to the physical flash blocks 21 of the flash memory 20; when an erase operation or a read operation is performed on a physical flash block 21, the firmware actually operates on a logical block 30, which in turn performs the erase operation or the read operation on the physical flash block 21 and updates the corresponding erase operation count variable and read operation count variable. On this basis, the flash memory 20 may be divided into a plurality of physical flash blocks 21, the plurality of physical flash blocks 21 corresponding to a plurality of logical blocks 30 of the firmware, and each logical block 30 may correspond to two physical flash blocks 21. With this structure, the read operations of two physical flash blocks 21 correspond to one read operation count variable, and the erase operations of two physical flash blocks 21 correspond to one erase operation count variable, so that the space occupied by the read operation count variables and the erase operation count variables is halved, and the cache space occupied in the random access memory 13 is greatly reduced. Further, each logical block 30 may also correspond to three physical flash blocks 21, four physical flash blocks 21 or another number of physical flash blocks; as long as each logical block 30 corresponds to at least two physical flash blocks 21, the occupied space is reduced.
Further, when each logical block 30 corresponds to two physical flash blocks 21, each time an erase operation is performed on a logical block 30, the erase operation needs to be performed on the two corresponding physical flash blocks 21 in succession, and the value of the erase operation count variable corresponding to that logical block 30 is incremented by one to complete the data update. Each time a read operation is performed on a logical block 30, only one of the two physical flash blocks 21 is read, and the value of the read operation count variable corresponding to that logical block 30 is incremented by one to complete the data update.
Referring to fig. 2 and 5, in an embodiment of the present invention, when step S20 and step S30 are performed, the data cache area is divided into a plurality of cache node units 131, and the plurality of cache node units 131 form a cache linked list 132. During a read operation, the controller 10 may cache the read operation data into free cache node units 131 in the cache linked list 132, and during a write operation, the controller 10 may cache the write operation data into free cache node units 131 in the cache linked list 132. Managing the cache through the cache linked list 132 allows the cache space of the random access memory 13 to be used more fully.
Referring to fig. 2, specifically, the cache linked list 132 may be divided into a free linked list for managing the free cache node units 131, a write operation linked list for managing the cache node units 131 holding write operation data, and a read operation linked list for managing the cache node units 131 holding read operation data. All cache node units 131 may be numbered sequentially, and the number identifies each cache node unit 131. Initially, no cache node unit 131 is used for data caching; all units are free and are therefore mounted on the free linked list. During a write operation, the controller 10 obtains cache node units 131 from the free linked list based on the cache space required by the write operation data, and inserts the obtained cache node units 131 into the write operation linked list to cache the write operation data. When the write operation is completed, the controller 10 inserts the freed cache node units 131 of the write operation linked list back into the free linked list for the next cache use. During a read operation, the controller 10 obtains cache node units 131 from the free linked list based on the cache space required by the read operation data, and inserts the obtained cache node units 131 into the read operation linked list to cache the read operation data. When the read operation is completed, the controller 10 inserts the freed cache node units 131 of the read operation linked list back into the free linked list for the next cache use. With this caching scheme, the entire space of the data cache area can be used for read operation caching, and likewise the entire space can be used for write operation caching, which improves data read and write efficiency.
Referring to fig. 5, further, while a section of data is being written to the flash memory 20, once the cached data in some of the cache node units 131 has been written to the flash memory 20, those cache node units 131 become free, and the freed cache node units 131 can be inserted into the read operation linked list, so that this space is reused for read operation caching and the cache space occupied by the data cache area is reduced. Similarly, while a section of data is being cached for a read operation, once the cached data in some of the cache node units 131 has been read out, those cache node units 131 become free, and the freed cache node units 131 can be inserted into the write operation linked list, so that this space is reused for write operation caching; the cache space occupied by the data cache area is thus reduced, and a random access memory 13 with a smaller cache space can be better adapted to a flash memory product with a larger capacity.
Referring to fig. 4, it should be noted that after step S20, in which the data cache area is divided into a plurality of cache node units 131 and the plurality of cache node units 131 form the cache linked list 132, the storage capacity B of each segmented logical address in the primary mapping table 22 in the flash memory 20 may be set such that 4 KB ≤ B ≤ 16 KB. Specifically, the storage device manages the address mapping table hierarchically: the flash memory 20 stores a primary mapping table 22, where the primary mapping table 22 is used to indicate the mapping structure between a plurality of logical addresses and the corresponding physical addresses, and the plurality of logical addresses are divided into a plurality of segmented logical addresses in the primary mapping table 22; the random access memory 13 includes a mapping table cache area, the mapping table cache area stores a secondary mapping table 133, and the secondary mapping table 133 includes a plurality of mapping management units, which are respectively used to indicate the mapping structure between the segmented logical addresses and the corresponding physical addresses. It follows that the number of mapping management units is equal to the number of segmented logical addresses, and this number is related to the segment capacity of the segmented logical addresses: the larger the segment capacity, the smaller the number of segments, and the smaller the segment capacity, the larger the number of segments. To reduce the space occupied by the mapping management units in the secondary mapping table 133, the segment capacity of the segmented logical addresses in the primary mapping table 22 may be increased, thereby reducing the number of mapping management units. For example, with the storage capacity B of each segmented logical address in the primary mapping table 22 satisfying 4 KB ≤ B ≤ 16 KB, the segment capacity may be set to 4 KB for a small-capacity flash memory 20, and to 8 KB, 16 KB or another size for a large-capacity storage device.
In summary, the storage device and the memory management method provided by the invention can greatly reduce the space occupied by cached data in the random access memory, so that a random access memory with a smaller cache space is better adapted to a flash memory product with a larger capacity.
In the description of the present specification, the descriptions of the terms "present embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiments of the invention disclosed above are intended only to help illustrate the invention. The examples are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (10)

1. A storage device, comprising:
a controller, wherein a random access memory is arranged in the controller; and
a flash memory, which is divided into a plurality of physical flash blocks, the physical flash blocks corresponding to a plurality of logical blocks of the firmware, and each logical block corresponding to at least two physical flash blocks;
wherein the random access memory comprises:
a block management cache area, used for caching an erase operation count variable and a read operation count variable, each logical block corresponding to one erase operation count variable and one read operation count variable; and
a data cache area, divided into a plurality of cache node units, the cache node units forming a cache linked list, wherein the controller caches read operation data or write operation data into free cache node units in the cache linked list.
2. The memory device of claim 1, wherein the flash memory stores a primary mapping table for indicating a mapping structure between a plurality of logical addresses and corresponding physical addresses, wherein the plurality of logical addresses are divided into a plurality of segmented logical addresses in the primary mapping table, the random access memory includes a mapping table buffer storing a secondary mapping table including a plurality of mapping management units for indicating a mapping structure between the plurality of segmented logical addresses and corresponding physical addresses, respectively.
3. The storage device of claim 2, wherein the number of map management units is equal to the number of segments of the segment logical address.
4. The storage device of claim 2, wherein the storage capacity B of each segmented logical address in the primary mapping table satisfies 4 KB ≤ B ≤ 16 KB.
5. The storage device of claim 1, wherein the cache linked list is divided into a free linked list for managing free cache node units, a write operation linked list for managing cache node units holding write operation data, and a read operation linked list for managing cache node units holding read operation data; the cache node units are mounted on the free linked list when no read operation or write operation is performed, the controller inserts cache node units from the free linked list into the write operation linked list when a write operation is performed, and the controller inserts cache node units from the free linked list into the read operation linked list when a read operation is performed.
6. The storage device of claim 5, wherein the controller inserts the freed cache node units in the write operation linked list into the free linked list upon completion of a write operation.
7. The storage device of claim 5, wherein the controller inserts the freed cache node units in the read operation linked list into the free linked list upon completion of a read operation.
8. The storage device of claim 1, wherein the controller updates the erase operation count variable corresponding to a logical block each time the controller performs a write operation on that logical block.
9. The storage device of claim 1, wherein the controller updates the read operation count variable corresponding to a logical block each time the controller performs a read operation on that logical block.
10. A memory management method, applied to a storage device, wherein the storage device comprises a controller and a flash memory, a random access memory is arranged in the controller, and the random access memory comprises a data cache area and a block management cache area; the memory management method comprises the following steps:
dividing the flash memory into a plurality of physical flash blocks, wherein the plurality of physical flash blocks correspond to a plurality of logical blocks of the firmware, each logical block corresponds to at least two physical flash blocks, the block management cache area is used for caching an erase operation count variable and a read operation count variable, and each logical block corresponds to one erase operation count variable and one read operation count variable;
dividing the data cache area into a plurality of cache node units, wherein the cache node units form a cache linked list;
and when a read operation is performed, the controller caches the read operation data into a free cache node unit in the cache linked list, and when a write operation is performed, the controller caches the write operation data into a free cache node unit in the cache linked list.
CN202410232483.1A 2024-03-01 2024-03-01 Storage device and memory management method Pending CN117806572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410232483.1A CN117806572A (en) 2024-03-01 2024-03-01 Storage device and memory management method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410232483.1A CN117806572A (en) 2024-03-01 2024-03-01 Storage device and memory management method

Publications (1)

Publication Number Publication Date
CN117806572A 2024-04-02

Family

ID=90420250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410232483.1A Pending CN117806572A (en) 2024-03-01 2024-03-01 Storage device and memory management method

Country Status (1)

Country Link
CN (1) CN117806572A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030167A (en) * 2007-01-17 2007-09-05 忆正存储技术(深圳)有限公司 Flash-memory zone block management
CN101122886A (en) * 2007-09-03 2008-02-13 杭州华三通信技术有限公司 Method and device for dispensing cache room and cache controller
CN104461391A (en) * 2014-12-05 2015-03-25 上海宝存信息科技有限公司 Method and system for managing and processing metadata of storage equipment
CN115981555A (en) * 2022-12-21 2023-04-18 浙江宇视科技有限公司 Data processing method and device, electronic equipment and medium
CN116303118A (en) * 2023-05-18 2023-06-23 合肥康芯威存储技术有限公司 Storage device and control method thereof

Similar Documents

Publication Publication Date Title
US9329995B2 (en) Memory device and operating method thereof
US20050015557A1 (en) Nonvolatile memory unit with specific cache
US6587915B1 (en) Flash memory having data blocks, spare blocks, a map block and a header block and a method for controlling the same
KR101185617B1 (en) The operation method of a flash file system by a wear leveling which can reduce the load of an outside memory
US7962687B2 (en) Flash memory allocation for improved performance and endurance
US8180955B2 (en) Computing systems and methods for managing flash memory device
US20190220396A1 (en) Data Storage Device
EP2626792A1 (en) Wear leveling method, memory device, and information system
CN102779096B (en) Page, block and face-based three-dimensional flash memory address mapping method
WO2014074449A2 (en) Wear leveling in flash memory devices with trim commands
WO2009096180A1 (en) Memory controller, nonvolatile storage device, and nonvolatile storage system
CN110287068B (en) NandFlash driving method
CN109542354A (en) A kind of abrasion equilibrium method, device and equipment based on the erasing upper limit
US20090319721A1 (en) Flash memory apparatus and method for operating the same
US8429339B2 (en) Storage device utilizing free pages in compressed blocks
US8423707B2 (en) Data access method for flash memory and storage system and controller using the same
US20220083475A1 (en) Cache memory system and cache memory control method
CN109918316B (en) Method and system for reducing FTL address mapping space
US20050005057A1 (en) [nonvolatile memory unit with page cache]
TWI450271B (en) Method for managing a plurality of blocks of a flash memory, and associated memory device and controller thereof
CN114968096A (en) Control method of memory, memory and storage system
CN108733576B (en) Solid state disk and mapping method of memory conversion layer thereof
CN116540950B (en) Memory device and control method for writing data thereof
CN117806572A (en) Storage device and memory management method
CN116737613A (en) Mapping table management method and memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination