CN117215485A - ZNS SSD management method, data writing method, storage device and controller - Google Patents

ZNS SSD management method, data writing method, storage device and controller

Info

Publication number
CN117215485A
Authority
CN
China
Prior art keywords
data
storage
writing
partition
storage units
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311177299.3A
Other languages
Chinese (zh)
Inventor
詹伟钦
赖振楠
吴斯奇
杨超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hongxinyu Microelectronics Technology Co ltd
Original Assignee
Shanghai Hongxinyu Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hongxinyu Microelectronics Technology Co ltd filed Critical Shanghai Hongxinyu Microelectronics Technology Co ltd
Priority to CN202311177299.3A priority Critical patent/CN117215485A/en
Publication of CN117215485A publication Critical patent/CN117215485A/en
Pending legal-status Critical Current


Abstract

The application provides a ZNS SSD management method, a data writing method, a storage device and a controller. By partitioning and repartitioning the plurality of accessed storage units, the method can be made compatible with solid state disks of different types from different manufacturers; by dynamically adjusting the size and combination of the partitions, it can optimize the storage layout, improve read-write efficiency, reduce management complexity, and provide a good storage experience for users.

Description

ZNS SSD management method, data writing method, storage device and controller
Technical Field
The application relates to the technical field of memory management, and in particular to a ZNS SSD (Zoned Namespace SSD, zoned namespace solid state disk) management method, a data writing method, a storage device and a controller.
Background
Solid state drives have become commonplace in large data centers and enterprise software applications. A new type of solid state disk, commercially known as the ZNS SSD, has recently been introduced in the industry. Unlike the random writes and random reads of a conventional solid state disk, a partition (zone) of a ZNS SSD only allows sequential writes and does not allow random writes, but still supports random reads. The ZNS SSD closely reflects the physical layout of the underlying flash storage, thereby significantly simplifying the FTL (Flash Translation Layer) in the ZNS SSD. This improvement can greatly reduce the amount of DRAM (Dynamic Random Access Memory) required by the ZNS SSD and can significantly reduce its cost. ZNS SSDs are used in servers to provide an efficient storage solution, but the SSDs produced by different manufacturers differ in type and are difficult to make compatible. In addition, a ZNS SSD divides its storage units into a plurality of partitions, each with different sizes and performance characteristics, so when SSDs of different types are accessed, the host side needs to manage a large number of storage units with different partition sizes, which increases management complexity and affects read-write efficiency.
Disclosure of Invention
In view of this, the application provides a ZNS SSD management method, a data writing method, a storage device and a controller, which can solve the problems that solid state disks of different types from different manufacturers are difficult to make compatible, that management complexity is high, and that read-write efficiency is low.
The application provides a ZNS SSD management method, which comprises the following steps:
acquiring the storage space of the current partition of the plurality of storage units;
obtaining the least common multiple of the storage space of the current partition of the plurality of storage units;
and re-dividing each storage unit into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple.
The application provides a ZNS SSD management method, which comprises the following steps:
acquiring the storage space of the current partition of the plurality of storage units;
acquiring the storage unit whose current partition has the smallest storage space, pairing it with each of the other storage units one by one, and acquiring the least common multiple of the storage spaces of the current partitions of each pair;
selecting a plurality of storage units corresponding to the least common multiple with the smallest value from the plurality of storage units as a group;
and re-dividing the storage units of each group into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple corresponding to each group.
The application provides a data writing method, which comprises the following steps:
in response to receiving a write request, obtaining the occupation (i.e., the size) of the data contained in the write request;
judging whether the occupation of the data reaches a preset threshold value or not;
if yes, acquiring storage spaces of current partitions of a plurality of storage units, then acquiring the least common multiple of the storage spaces of the current partitions of the plurality of storage units, and re-dividing each storage unit into a plurality of target partitions, wherein the storage spaces of the target partitions are equal to the least common multiple;
if not, acquiring the storage space of the current partition of the plurality of storage units, then acquiring the storage unit whose current partition has the smallest storage space, pairing it with each of the other storage units one by one, and acquiring the least common multiple of the storage spaces of the current partitions of each pair; selecting, from the plurality of storage units, the storage units corresponding to the least common multiple with the smallest value as a group, and re-dividing the storage units of each group into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple corresponding to that group;
and writing the data into the corresponding target partition.
Optionally, when the occupation of the data does not reach the preset threshold, writing the data into the corresponding target partition includes: obtaining the remaining erase counts of the storage units of each group, and writing the data into the target partition corresponding to the group with the largest remaining erase count; and/or obtaining the remaining erase counts of the storage units of each group, and writing the data into the target partition corresponding to a group whose remaining erase count is smaller than a preset erase count threshold.
Optionally, when the occupation of the data does not reach the preset threshold, writing the data into the corresponding target partition includes: selecting the target partition whose storage space leaves the largest remainder when the occupation of the data is divided by that storage space, and writing the data into the target partition with the largest remainder.
Optionally, the preset threshold is the least common multiple of the storage space of the current partition of all the currently accessed storage units.
Optionally, the method further comprises: selecting a plurality of storage units from all the accessed storage units according to preset information; and executing any of the above ZNS SSD management methods on the selected storage units.
Optionally, the preset information includes at least one of the type of the stored data, the occupation of the stored data, and the used space of each storage unit.
The application provides a storage device, comprising:
the upper layer interface is used for receiving a write-in request of an upper layer program;
the storage interface is used for connecting the solid state disk;
a central processor for performing the method of any of the above.
The application provides a controller storing an adaptive management program which, when executed by a processor, implements any of the methods described above.
As described above, by partitioning and repartitioning the plurality of accessed storage units, the present application can be made compatible with solid state disks of different types from different manufacturers, and by dynamically adjusting the size and combination of the partitions it can optimize the storage layout, improve read-write efficiency, reduce management complexity, and provide a good storage experience for users.
Drawings
Fig. 1 is a flow chart of a ZNS SSD management method according to a first embodiment of the application;
fig. 2 is a flow chart of a ZNS SSD management method according to a second embodiment of the application;
FIG. 3a is a schematic diagram of a storage unit in a ZNS SSD;
FIG. 3b is a schematic diagram of data writing according to an embodiment of the present application;
fig. 4 is a flow chart of a ZNS SSD management method according to a third embodiment of the application;
FIG. 5 is a schematic diagram of obtaining a target memory address according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a memory device according to an embodiment of the present application.
Detailed Description
In order to solve the above problems in the prior art, the embodiments of the present application provide a ZNS SSD management method, a data writing method, a storage device, and a controller. Since these protected subject matters are based on the same conception and the principles by which they solve the problems are basically the same or similar, their embodiments may be referred to one another, and repeated description is omitted.
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly described below with reference to specific embodiments and the corresponding drawings. It will be apparent that the embodiments described below are only some, but not all, embodiments of the application. Where no conflict arises, the following embodiments and their technical features may be combined with one another, and such combinations also belong to the technical solutions of the present application.
First embodiment
Fig. 1 is a schematic flow chart of a ZNS SSD management method according to the first embodiment of the application. The communication framework on which the method is based consists of a host and a plurality of solid state disks connected to the host, and the execution subject of the method may be the host. The host may be embodied as a server and send instructions to each solid state disk; the CPU of each solid state disk (also called the SSD CPU or SSD controller) parses the information contained in an instruction. For example, in a data writing scenario, the instruction includes the data to be stored and a storage address (the instruction may of course also include other information, which is not described here), and the SSD CPU controls the storage unit to write the data into the corresponding storage address. The process of writing data into a storage unit can be found in the prior art.
The method comprises at least the following steps S11 to S13.
S11: the storage space of the current partition of the plurality of storage units is acquired.
The plurality of storage units may come from the plurality of solid state disks currently accessed by the host, where the solid state disks may be of different types from different manufacturers, of different types from the same manufacturer, or of the same type from different manufacturers. In such cases, the specifications of the plurality of storage units differ.
Alternatively, the host determines whether the types are the same through page information of a storage unit of the solid state disk, where the page information refers to the number of pages stored in a single Block of the storage unit, for example, 4K or 8K.
The method may first identify the solid state disks that support changing the partition size (i.e., support repartitioning); for example, whether a given solid state disk supports changing the partition size can be determined by consulting its specification or by contacting its manufacturer. Steps S11 to S13 are then performed on the plurality of solid state disks that support changing the partition size.
In an implementation scenario, the plurality of solid state disks currently accessed by the host may not all support changing the partition size; one or more solid state disks that do not support changing the partition size may also be included, and the host may manage these solid state disks in a manner known in the art.
The host obtains the storage space of the current partition of each storage unit from the SSD CPU through a corresponding command or API (Application Programming Interface). The storage space may represent the storage space of a partition, i.e. the size of the partition.
Optionally, the storage unit is a NAND flash, and the storage space is a Block capacity.
S12: and obtaining the least common multiple of the storage space of the current partition of the plurality of storage units.
The host may compare the storage units two at a time to obtain the least common multiple of the storage spaces of the current partitions of the two storage units, and repeat this until all the storage units have been compared; the final least common multiple is set as the size of the partitions to be ultimately divided, which is called the storage space of the target partition.
By way of example: assume the host is currently connected to 6 solid state disks A, B, C, D, E, F. The current partition sizes of solid state disks A, B, C, D are different; for example, the storage spaces of their storage units are 4K, 6K, 8K and 10K in sequence. The current partition size of solid state disk E is the same as that of solid state disk A, and the current partition size of solid state disk F is the same as that of solid state disk B, so the host only needs to match solid state disks (or their storage units) whose current partition sizes differ. In the first step, the host obtains the least common multiple of the storage spaces of the current partitions of solid state disk A and solid state disk B, which is 12K; in the second step, it obtains the least common multiple of 12K and the storage space of the current partition of solid state disk C, which is 24K; in the third step, it obtains the least common multiple of 24K and the storage space of the current partition of solid state disk D, which is 120K. The least common multiple of the storage spaces of the current partitions of the 6 solid state disks is therefore 120K. If a new solid state disk is accessed later, i.e. a new storage unit is accessed, the above process is repeated for all the storage units and a new least common multiple is recalculated.
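The least-common-multiple reduction described above can be sketched as follows. This is an illustrative host-side sketch only; the function names (lcm, target_zone_size) and the assumption that partition sizes are expressed in KiB are ours, not the patent's.

```python
from math import gcd
from functools import reduce

def lcm(a: int, b: int) -> int:
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

def target_zone_size(current_zone_sizes_kib: list[int]) -> int:
    """Fold the pairwise LCM over the current zone sizes of all accessed
    storage units (step S12); the result is the storage space of every
    target partition after repartitioning (step S13)."""
    return reduce(lcm, current_zone_sizes_kib)

# Example from the text: units A..D have zone sizes 4K, 6K, 8K and 10K
# (E matches A, F matches B), so the target zone size is 120K.
print(target_zone_size([4, 6, 8, 10]))  # -> 120
```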
S13: and re-dividing each storage unit into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple.
By partitioning and repartitioning the plurality of accessed storage units, the embodiment of the application can be compatible with solid state disks of different types from different manufacturers, and by dynamically adjusting the size and combination of the partitions it can optimize the storage layout, improve read-write efficiency, reduce management complexity, and provide a good storage experience for users.
After each target partition is obtained by repartitioning, the logical address interval (range of LBAs) corresponding to each target partition changes compared with each partition before repartitioning (i.e. the current partition), and a new address mapping table is formed and stored in the SSD CPU. Subsequent data reads and writes between the host and each accessed solid state disk are performed based on the new address mapping table.
The storage space M1 of a single target partition is larger than the storage space M0 of any partition before repartitioning, i.e. M1 > M0, and M1 is an integer multiple of M0. Therefore, in a scenario where large data (data with a large occupation) is written, the writing speed is fast.
Second embodiment
As shown in fig. 2, the ZNS SSD management method provided by the embodiment of the application is also based on a communication architecture composed of a host and a plurality of solid state disks (and storage units thereof) connected to the host. The same features of the present embodiment as those of the first embodiment can be referred to the foregoing, and will not be repeated here.
The method comprises at least the following steps S21 to S25.
S21: the storage space of the current partition of the plurality of storage units is acquired.
S22: and acquiring the storage unit with the smallest storage space of the current partition, pairing with other storage units one by one in pairs, and acquiring the least common multiple of the storage space of the current partition of each pairing.
S23: and selecting a plurality of storage units corresponding to the least common multiple with the smallest value from the plurality of storage units as a group.
As shown in fig. 2, in an example, steps S22 and S23 are repeated for the plurality of storage units outside the group, and step S24 is performed: judging whether grouping has been completed for all storage units. If not, steps S22 and S23 continue to be executed until all the storage units have been grouped; if yes, step S25 is performed.
S25: and re-dividing the storage units of each group into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple corresponding to each group.
By way of example: assume the host is currently connected to 6 solid state disks A, B, C, D, E, F, the current partition sizes of solid state disks A, B, C, D are different, the storage spaces of their storage units are 4K, 6K, 8K and 10K in sequence, the current partition size of solid state disk E is the same as that of solid state disk A, and the current partition size of solid state disk F is the same as that of solid state disk B.
In step S22: the storage units corresponding to the solid state disk A and the solid state disk B are paired in pairs, and the least common multiple of the storage space of the current partition of the two pairs is 12K; the storage units corresponding to the solid state disk A and the solid state disk C are paired in pairs, and the least common multiple of the storage space of the current partition of the two pairs is 8K; the storage units corresponding to the solid state disk A and the solid state disk D are paired in pairs, and the least common multiple of the storage space of the current partition of the two pairs is 20K.
In step S23: the least common multiple of the minimum numerical value is 8K, so that the storage units corresponding to the solid state disk a and the solid state disk C are used as a group 1, and the storage units corresponding to the solid state disk E are also divided into the group 1 because the partition sizes of the solid state disk E and the solid state disk a are consistent, and the group 1 comprises the storage units corresponding to the solid state disk A, C, E.
The storage units corresponding to the solid state disks B, D, F outside group 1 are then paired: the least common multiple of the storage spaces of the current partitions of solid state disk B and solid state disk D is 30K, and the partition sizes of solid state disk B and solid state disk F are equal. Therefore, the storage units corresponding to solid state disks B, D and F can be divided into another group, called group 2, distinct from group 1.
In step S25, the storage space of a single target partition in group 1 is 8K, and the storage space of a single target partition in group 2 is 30K. In any group, the storage space of a single target partition is larger than the storage space of any partition before division and is an integer multiple of it.
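A minimal sketch of the grouping procedure of steps S22 to S25 follows, assuming each storage unit is identified by a label together with its current zone size in KiB; the function group_units and the data layout are our assumptions for illustration, not the patent's implementation.

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

def group_units(zone_sizes: dict[str, int]) -> list[tuple[list[str], int]]:
    """Repeatedly take the ungrouped unit with the smallest current zone size,
    pair it with each other ungrouped unit of a different size, and keep the
    pairing with the smallest LCM (S22-S23); units whose zone size equals that
    of a grouped unit join the same group. Returns (members, target zone size)
    per group, the target zone size being used in step S25."""
    remaining = dict(zone_sizes)
    groups = []
    while remaining:
        smallest = min(remaining, key=remaining.get)
        size_s = remaining.pop(smallest)
        members = [smallest]
        same_sizes = {size_s}  # zone sizes already absorbed into this group
        candidates = {u: s for u, s in remaining.items() if s != size_s}
        if candidates:
            partner = min(candidates, key=lambda u: lcm(size_s, candidates[u]))
            target = lcm(size_s, candidates[partner])
            same_sizes.add(candidates[partner])
            members.append(partner)
            remaining.pop(partner)
        else:
            target = size_s  # nothing left to pair with
        for u in [u for u, s in remaining.items() if s in same_sizes]:
            members.append(u)
            remaining.pop(u)
        groups.append((members, target))
    return groups

# Example from the text: A=4K, B=6K, C=8K, D=10K, E matches A, F matches B.
print(group_units({"A": 4, "B": 6, "C": 8, "D": 10, "E": 4, "F": 6}))
# -> [(['A', 'C', 'E'], 8), (['B', 'D', 'F'], 30)]
```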
During the data writing phase, the present application may select the appropriate group to perform the write operation based on the size of the data.
In an example, where the data size is x and the storage space of a group's target partition is y, the group with the largest remainder after dividing x by y may be selected to perform the write operation.
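The group selection by remainder can be sketched as follows, implementing the rule exactly as stated above; the function name and the example sizes are illustrative assumptions only.

```python
def pick_group_by_remainder(data_size_kib: int,
                            group_zone_sizes_kib: dict[str, int]) -> str:
    """Select the group whose target-partition size y leaves the largest
    remainder when the data size x is divided by y, as stated in the text."""
    return max(group_zone_sizes_kib,
               key=lambda g: data_size_kib % group_zone_sizes_kib[g])

# Illustrative sizes only: 14 % 8 = 6, 14 % 30 = 14, so group2 is chosen.
print(pick_group_by_remainder(14, {"group1": 8, "group2": 30}))  # -> group2
```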
In another example, the present application may perform the write operation according to the remaining erase counts of the storage units of each group, for example by selecting the group to which the storage unit with the largest remaining erase count belongs and performing the write operation there; and/or by writing the data into the target partition corresponding to a group whose remaining erase count is smaller than a preset erase count threshold. For example, because the preset erase count threshold can indicate that the current remaining erase count has reached a preset value (e.g. a guard value), the data may be preferentially written to the target partitions corresponding to the groups whose remaining erase count is smaller than the preset erase count threshold, so that the groups that are about to reach or have already reached the preset value (e.g. the guard value) are erased and written relatively less, thereby achieving more balanced erasing and writing (i.e. wear leveling). As another example, when it is determined that the remaining erase counts of the target partitions corresponding to preset groups, or to all groups, are smaller than the preset erase count threshold, the group to which the storage unit with the largest remaining erase count belongs is selected and the write operation is performed there. As yet another example, in a scenario where 2 write requests are received within a preset period, for one write request the group to which the storage unit with the largest remaining erase count belongs is selected and the write operation is performed, while for the other write request the data is written into the target partition corresponding to a group whose remaining erase count is smaller than the preset erase count threshold; by presetting these two wear-leveling modes and flexibly selecting between them according to the size of the written data, balanced erasing and writing can be achieved to the greatest extent.
In the foregoing first embodiment, the storage space of the target partition is larger, and the data written into the solid state disk may be scattered across different positions within the target partition, for example different pages of different Blocks as shown in fig. 3a, or different pages of the same Block. This may cause a write amplification problem, i.e. the ratio between the data actually written to the solid state disk and the data the host intended to write becomes large.
Taking two blocks, Block0 and Block1, as an example, two invalid pages G eventually remain, and the valid pages indicate that data has already been written in the current page, as shown in fig. 3b. If data needs to be written to a new page, then according to the storage principle of the storage unit the write operation can only be executed after an erase operation has been executed on the two invalid pages; at this point, additional space provided by the ZNS SSD, namely Block2, is required. The solid arrows represent the specific process, which is as follows:
First, a Block2 whose pages are all completely free is found in the ZNS SSD; then, pages A, B, C, D, E, F and H of Block0 and page A of Block1 are copied into the free Block2; then, all data of Block0 is erased and pages B, C, D, E, F and H of Block1 are copied to Block0, at which point Block0 has two free pages; then, all data of Block1 is erased. In this way the data has been written into all pages of Block2 and some pages of Block0, and free pages in Block0 and Block1 are finally obtained, so the write operation can be executed according to the normal flow. As can be seen from this process, in the first embodiment, because target partitions with a larger storage space are set, additional erase and write operations are required in the data writing stage, which increases the number of erases and the amount of erased data, thereby reducing the lifetime of the solid state disk.
Compared with the first embodiment, the partitions of each group in the second embodiment have different sizes, so when a write operation is performed for data with a smaller occupation, the host can preferentially find a suitable group and write the data into a target partition of that group, thereby reducing additional erase and write operations, reducing the number of erases and the amount of erased data, and reducing the impact on the lifetime of the solid state disk.
In addition, in the foregoing first embodiment, if some target partitions hold relatively little data while others hold more, some target partitions may undergo frequent erase and write operations while other target partitions undergo relatively few; such unbalanced erasing and writing (i.e. unbalanced wear) will eventually shorten the lifetime of the solid state disks to which those target partitions belong.
Compared with the first embodiment, the second embodiment can select a suitable group according to the data size (for example, preferentially selecting the group to which the storage unit with the largest remaining erase count belongs) to perform the erase and write operations, thereby optimizing the way data is stored and achieving more balanced erasing and writing.
Third embodiment
As shown in fig. 4, the data writing method provided by the embodiment of the present application is also based on a communication architecture composed of a host and a plurality of solid state disks (and storage units thereof) connected to the host. The same features as those of the above embodiment can be referred to in the foregoing embodiments, and will not be repeated here.
The data writing method includes at least the following steps S31 to S34.
S31: in response to receiving the write request, a footprint of data contained in the write request is obtained.
S32: judging whether the occupation of the data reaches a preset threshold value or not.
If so, step S331 is performed, that is, the steps of the ZNS SSD management method described in the first embodiment are performed, so as to re-divide each storage unit into a plurality of target partitions.
If not, step S332, i.e. the steps of the ZNS SSD management method described in the second embodiment, is performed, thereby obtaining groups each including a plurality of storage units.
And, S34: and writing the data into the corresponding target partition.
For convenience of description and understanding, the partitioning of the aforementioned first embodiment may be referred to as the SUPER ZONE mode, and the partitioning of the second embodiment may be referred to as the MIN ZONE mode.
The host may be regarded as being provided with a determination module configured to select the appropriate mode according to the occupation of the data, so that the SSD partitions can be used effectively. The SUPER ZONE mode is entered when the occupation of the data is greater than or equal to a preset threshold (e.g. 120K), and the MIN ZONE mode is entered when the occupation of the data is less than the preset threshold (e.g. 120K).
The preset threshold can be set according to actual requirements; for example, it may be the least common multiple of the storage spaces of the current partitions of all the currently accessed storage units. Taking the access to solid state disks A, B, C, D, E, F in the first embodiment as an example, the least common multiple is 120K. Larger data is written in the SUPER ZONE mode, which ensures a faster data writing rate; for smaller data the MIN ZONE mode is used to select a suitable group to perform the write operation, which can reduce the number of erases and the amount of erased data and is beneficial to wear leveling of the solid state disks.
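A minimal sketch of this mode selection, assuming the 120K threshold of the example above; the Mode enum, the function choose_mode, and the KiB units are illustrative names and assumptions, not the patent's.

```python
from enum import Enum

class Mode(Enum):
    SUPER_ZONE = "SUPER ZONE"  # repartition all units to one large LCM-sized zone
    MIN_ZONE = "MIN ZONE"      # group units and repartition per group

def choose_mode(data_size_kib: int, threshold_kib: int = 120) -> Mode:
    """Large writes (size >= threshold) use SUPER ZONE for throughput;
    small writes use MIN ZONE to limit extra erases and write amplification."""
    return Mode.SUPER_ZONE if data_size_kib >= threshold_kib else Mode.MIN_ZONE

print(choose_mode(256))  # -> Mode.SUPER_ZONE
print(choose_mode(16))   # -> Mode.MIN_ZONE
```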
When the occupation of the data does not reach the preset threshold, in one implementation the host may first obtain the remaining erase counts of the storage units of each group, and then write the data into the target partition corresponding to the group with the largest remaining erase count, and/or write the data into the target partition corresponding to a group whose remaining erase count is smaller than a preset erase count threshold. For example, because the preset erase count threshold can indicate that the current remaining erase count has reached a preset value (e.g. a guard value), the data may be preferentially written to the target partitions corresponding to the groups whose remaining erase count is smaller than the preset erase count threshold, so that the groups that are about to reach or have already reached the preset value (e.g. the guard value) are erased and written relatively less, thereby achieving more balanced erasing and writing (i.e. wear leveling). As another example, when it is determined that the remaining erase counts of the target partitions corresponding to preset groups, or to all groups, are smaller than the preset erase count threshold, the group to which the storage unit with the largest remaining erase count belongs is selected and the write operation is performed there. As yet another example, in a scenario where 2 write requests are received within a preset period, for one write request the group to which the storage unit with the largest remaining erase count belongs is selected and the write operation is performed, while for the other write request the data is written into the target partition corresponding to a group whose remaining erase count is smaller than the preset erase count threshold; by presetting these two wear-leveling modes and flexibly selecting between them according to the size of the written data, balanced erasing and writing can be achieved to the greatest extent.
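As one possible reading of the first rule above (preferring the group whose storage units retain the most remaining erase cycles), here is a minimal sketch; the Group dataclass, its field names, and the example numbers are our assumptions, and the threshold-based variants described above are not shown.

```python
from dataclasses import dataclass

@dataclass
class Group:
    name: str
    target_zone_size_kib: int
    remaining_erase_count: int  # remaining program/erase cycles of the group's units

def pick_group_by_wear(groups: list[Group]) -> Group:
    """Select the group whose storage units have the largest remaining erase
    count, so that erases are spread more evenly across groups."""
    return max(groups, key=lambda g: g.remaining_erase_count)

groups = [Group("group1", 8, 1200), Group("group2", 30, 3500)]
print(pick_group_by_wear(groups).name)  # -> group2
```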
In another implementation, the host may first select the target partition whose storage space leaves the largest remainder when the occupation of the data is divided by that storage space, and write the data into the target partition with the largest remainder. For example, if the occupation of the data is x and the storage space of a group's target partition is y, the group with the largest remainder after dividing x by y may be selected (the smallest possible remainder being zero), and the write operation is performed on that group.
In step S34, the logical storage page address is used as an index to search the new address mapping table, the physical storage page address corresponding to the logical storage page address is determined and used as the target storage address, and the data is then written to the target storage address. As shown in fig. 5, a first-level mapping table is searched using logical word line address 4 as the index, and the corresponding physical word line address 4 and second-level mapping table address 4 are determined; then the logical storage page address is used as an index to search the corresponding second-level mapping table 4, the physical storage page address 1 corresponding to logical storage page address 1 is determined and used as the target storage address, and the data is then written.
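The two-level lookup of fig. 5 can be sketched roughly as follows; the table layout and the names l1_table, l2_tables and resolve_target_address are assumptions made for illustration, not the patent's data structures.

```python
# First-level table: logical word-line address -> (physical word-line address,
# second-level table id). Second-level tables: logical page -> physical page.
l1_table = {4: (4, "l2_4")}
l2_tables = {"l2_4": {1: 1}}  # logical storage page 1 -> physical storage page 1

def resolve_target_address(logical_wl: int, logical_page: int) -> tuple[int, int]:
    """Look up the first-level table by logical word-line address, then the
    referenced second-level table by logical page address, yielding the
    physical (word line, page) pair used as the target storage address."""
    physical_wl, l2_id = l1_table[logical_wl]
    physical_page = l2_tables[l2_id][logical_page]
    return physical_wl, physical_page

print(resolve_target_address(4, 1))  # -> (4, 1), as in the fig. 5 example
```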
When a write request is received, the number of storage units currently accessed by the host may be greater than the number of storage units on which the foregoing step S331 or S332 is performed; that is, the present application may select only part of the storage units for the SUPER ZONE mode or the MIN ZONE mode instead of all the storage units currently accessed by the host (e.g. all the storage units that support changing the partition size). In this case, the data writing method further includes: selecting a plurality of storage units from all the accessed storage units according to preset information; and performing the ZNS SSD management method described in the first embodiment and/or the second embodiment on the selected plurality of storage units.
Optionally, the preset information includes at least one of the type of the stored data, the occupation of the stored data, and the used space of each storage unit. The types of stored data may be classified, for example, into data accessed by an APP, user configuration information data, and other types of data. The occupation of the stored data may be divided according to the aforementioned preset threshold, or divided into a plurality of size classes according to a plurality of different thresholds.
Prior to data writing, the host may perform the steps of:
s301: and determining the preset information and the type quantity thereof, and distributing a plurality of corresponding storage units for each type of preset information to form a flash memory group.
S302: and in response to receiving the write request, determining a corresponding target flash memory group according to the characteristic information of the write request. Namely the selected plurality of memory cells.
Further optionally, it is determined whether the flash group has sufficient physical space to write the data of the write request. If yes, writing the data into a target partition obtained based on the flash memory group; if not, one or more flash memory groups are selected from the currently accessed storage units, and after the SUPER ZONE mode or MIN ZONE mode is executed, the data is written into the corresponding target partition.
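A rough sketch of the flash-group selection and capacity check (S301, S302 and the optional fallback); the FlashGroup structure, its fields, and select_flash_group are illustrative assumptions, not the patent's definitions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlashGroup:
    data_type: str            # e.g. "app_data", "user_config", "other"
    free_space_kib: int
    unit_ids: list[str]       # storage units allocated to this flash group (S301)

def select_flash_group(groups: list[FlashGroup], request_type: str,
                       data_size_kib: int) -> Optional[FlashGroup]:
    """Pick the flash group matching the write request's characteristic
    information (here: the data type, S302). Return None if no matching
    group exists or it lacks physical space, in which case the caller falls
    back to repartitioning other flash groups (SUPER ZONE or MIN ZONE)
    before writing."""
    for g in groups:
        if g.data_type == request_type:
            return g if g.free_space_kib >= data_size_kib else None
    return None
```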
It should be noted that although step numbers such as S31 and S32 are used in this embodiment, they are intended to describe the corresponding contents more clearly and briefly and do not constitute a substantive limitation on the order; for example, a person skilled in the art may perform S331 before S31 when implementing the present application, and this still falls within the scope of the present application. For example, the host of the present application may execute steps S331 and S332 first, i.e. complete the repartitioning in the SUPER ZONE mode and the MIN ZONE mode before a write request is received, store the new address mapping tables obtained by the two modes, for example, in the SSD CPU respectively, and, when a write request is received, select the corresponding new address mapping table according to the occupation of the data of the write request, so as to write the data into the target partition obtained by the corresponding mode. Of course, the host of the present application may also execute only one of steps S331 and S332, and then select the corresponding mode according to the occupation of the data when a write request is received.
The embodiment of the application also provides a storage device 5, as shown in fig. 6, which comprises an upper layer interface 51, a storage interface 52 and a central processing unit 53.
The upper layer interface 51 is configured to receive a write request of an upper layer program;
the storage interface 52 is used for connecting with a solid state disk;
the central processor 53 is configured to perform the steps of any of the foregoing embodiments (including the ZNS SSD management method described in the foregoing first and second embodiments and the data writing method described in the third embodiment). For specific principles and processes, reference may be made to the above embodiments, which are not described herein.
The embodiment of the present application further provides a controller, which includes the storage device 5 described above, or performs the method of any embodiment described above, so as to implement repartitioning of the ZNS SSD partitions; for the relevant description, reference may be made to the above embodiments, which is not repeated here.
It should be understood that, in the embodiments of the present application, the storage device 5 and the controller are each complete devices with structures corresponding to those of known devices; only the components related to repartitioning of the ZNS SSD partitions are described herein, and the other components are not described in detail.
The foregoing description is only a partial embodiment of the present application and is not intended to limit the scope of the present application, and all equivalent structural modifications made by those skilled in the art using the present description and accompanying drawings are included in the scope of the present application.
Although the terms first, second, etc. are used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. In addition, the singular forms "a", "an" and "the" are intended to include the plural forms as well. The terms "or" and/or "are to be construed as inclusive, or mean any one or any combination. An exception to this definition will occur only when a combination of elements, functions, steps or operations are in some way inherently mutually exclusive.

Claims (10)

1. A ZNS SSD management method, comprising:
acquiring the storage space of the current partition of the plurality of storage units;
obtaining the least common multiple of the storage space of the current partition of the plurality of storage units;
and re-dividing each storage unit into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple.
2. A ZNS SSD management method, comprising:
acquiring the storage space of the current partition of the plurality of storage units;
acquiring the storage unit whose current partition has the smallest storage space, pairing it with each of the other storage units one by one, and obtaining the least common multiple of the storage spaces of the current partitions of each pair;
selecting a plurality of storage units corresponding to the least common multiple with the smallest value from the plurality of storage units as a group;
and re-dividing the storage units of each group into a plurality of target partitions, wherein the storage space of each target partition is equal to the least common multiple corresponding to each group.
3. A data writing method, comprising:
responding to a received writing request, and obtaining the occupation of data contained in the writing request;
judging whether the occupation of the data reaches a preset threshold value or not;
if yes, executing the ZNS SSD management method as recited in claim 1;
if not, executing the ZNS SSD management method as recited in claim 2;
and writing the data into the corresponding target partition.
4. The method of claim 3, wherein writing the data into the corresponding target partition when the occupancy of the data does not reach a preset threshold comprises:
obtaining the remaining erase counts of the storage units of each group, and writing the data into the target partition corresponding to the group with the largest remaining erase count; and/or,
obtaining the remaining erase counts of the storage units of each group, and writing the data into the target partition corresponding to a group whose remaining erase count is smaller than a preset erase count threshold.
5. The method of claim 3, wherein writing the data into the corresponding target partition when the occupancy of the data does not reach a preset threshold comprises:
and selecting a target partition with the largest remainder of the division of the occupied storage space of the data by the storage space, and writing the data into the target partition with the largest remainder.
6. The method according to any of claims 3 to 5, wherein the preset threshold is a least common multiple of the memory space of the current partition of all memory units currently accessed.
7. The method according to any one of claims 3 to 5, further comprising: selecting a plurality of storage units from all the accessed storage units according to preset information;
and performing the ZNS SSD management method of claim 1 and/or claim 2 on the selected plurality of storage units.
8. The method of claim 7, wherein the preset information includes at least one of the type of the stored data, the occupation of the stored data, and the available space of each storage unit.
9. A memory device, comprising:
the upper layer interface is used for receiving a write-in request of an upper layer program;
the storage interface is used for connecting the solid state disk;
a central processor for performing the method of any one of claims 1 to 8.
10. A controller, characterized in that an adaptive management program is stored thereon, which when executed by a processor implements the method according to any one of claims 1 to 8.
CN202311177299.3A 2023-09-12 2023-09-12 ZNS SSD management method, data writing method, storage device and controller Pending CN117215485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311177299.3A CN117215485A (en) 2023-09-12 2023-09-12 ZNS SSD management method, data writing method, storage device and controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311177299.3A CN117215485A (en) 2023-09-12 2023-09-12 ZNS SSD management method, data writing method, storage device and controller

Publications (1)

Publication Number Publication Date
CN117215485A 2023-12-12

Family

ID=89040116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311177299.3A Pending CN117215485A (en) 2023-09-12 2023-09-12 ZNS SSD management method, data writing method, storage device and controller

Country Status (1)

Country Link
CN (1) CN117215485A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination