CN115934002A - Solid state disk access method, solid state disk, storage system and cloud server - Google Patents

Solid state disk access method, solid state disk, storage system and cloud server

Info

Publication number
CN115934002A
CN115934002A (application CN202310218662.5A)
Authority
CN
China
Prior art keywords
data
physical
partition
virtual
partitions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310218662.5A
Other languages
Chinese (zh)
Other versions
CN115934002B (en)
Inventor
李碧涵
杜宇
李启阳
吴忠杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202310218662.5A
Publication of CN115934002A
Application granted
Publication of CN115934002B
Active legal status (current)
Anticipated expiration

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application provides an access method of a solid state disk, a solid state disk, a storage system and a cloud server. In the embodiment of the application, for the same physical partition in a solid state disk, different types of physical pages in the physical partition are classified into different virtual partitions in advance, and the physical pages included in the same virtual partition are used for writing data whose read-delay requirement matches the read delay of those physical pages. In this way, in the data writing stage, based on the sensitivity of the data to be written to read delay and the pre-established correspondence between read-delay sensitivity levels and virtual partitions, the data to be written can be written into physical pages matching its sensitivity to read delay. Therefore, the data layout in the solid state disk can be accurately controlled: data that is more sensitive to read delay is written into physical pages with lower read delay, and data that is less sensitive to read delay is written into physical pages with higher read delay, which provides a reliable basis for improving the read performance of the storage system.

Description

Solid state disk access method, solid state disk, storage system and cloud server
Technical Field
The application relates to the technical field of computers, in particular to an access method of a solid state disk, the solid state disk, a storage system and a cloud server.
Background
A Block Storage system provides low-latency, durable, highly reliable block-level random storage for cloud servers, and the read latency of data is a key performance indicator of the block storage system. To provide low read latency on the order of a hundred microseconds, block storage systems use ZNS (Zoned Namespace) SSDs (Solid State Disks) to store data.
The ZNS SSD manages storage capacity in the form of zones (partitions). Each Zone includes a plurality of physical pages and supports a sequential write mode inside the Zone, that is, data is written into the physical pages of the Zone sequentially, according to the arrangement order of those physical pages. The read-latency characteristics of different types of physical pages are different; the read latency of a physical page refers to the time required to read data from that physical page. For example, for a QLC (Quad-Level Cell) type ZNS SSD, the types of physical pages it includes, sorted from low to high read latency, are LP (Lower Page) type physical pages, UP (Upper Page) type physical pages, XP (Extra Page) type physical pages, and TP (Top Page) type physical pages. The sequential write mode leaves no control over the read-latency requirements of the data written to each physical page, so data sensitive to read latency may be written to a physical page with high read latency, which degrades the read performance of the block storage system.
Disclosure of Invention
Various aspects of the present application provide an access method for a solid state disk, a solid state disk, a storage system, and a cloud server, so as to accurately control the data layout in the solid state disk and improve the read performance of the storage system.
The embodiment of the application provides an access method of a solid state disk, wherein the solid state disk comprises a plurality of physical partitions, each physical partition comprises a plurality of physical pages of different types, and different types of physical pages have different read delays. The method comprises the following steps: in response to a write request, determining, from the plurality of physical partitions included in the solid state disk, a target physical partition to which the current data to be written needs to be written, wherein the current data to be written comprises at least one first data; determining a read-delay sensitivity level of the first data according to the degree of sensitivity of the first data to read delay, wherein the read-delay sensitivity level is positively correlated with the degree of sensitivity to read delay; according to the read-delay sensitivity level of the at least one first data, querying a pre-established correspondence between read-delay sensitivity levels and virtual partitions in a physical partition, and determining the virtual partitions that belong to the target physical partition and respectively correspond to the at least one first data, wherein different types of physical pages in a physical partition are classified into different virtual partitions; and writing the at least one first data into physical pages under the respectively corresponding virtual partitions belonging to the target physical partition.
An embodiment of the present application further provides a solid state disk, including a memory and a processor; the memory is used for storing a computer program; the processor is coupled to the memory and is used for executing the computer program to perform the steps in the above access method of a solid state disk.
The embodiment of the application further provides a storage system, comprising a controller and a solid state disk, wherein the solid state disk comprises a plurality of physical partitions, each physical partition corresponds to a plurality of different types of virtual partitions, each virtual partition comprises at least one physical page of the same type, and different types of physical pages have different read delays; the controller is used for executing the steps in the above access method of a solid state disk.
An embodiment of the present application further provides a cloud server, which at least includes a storage system. The storage system comprises a solid state disk and a controller, wherein the solid state disk comprises a plurality of physical partitions, each physical partition corresponds to a plurality of different types of virtual partitions, each virtual partition comprises at least one physical page of the same type, and different types of physical pages have different read delays; the controller is used for executing the steps in the above access method of a solid state disk.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps in the method for accessing a solid state disk.
In the embodiment of the application, for the same physical partition in the solid state disk, different types of physical pages in the physical partition are classified into different virtual partitions in advance, and the physical pages included in the same virtual partition are used for writing data whose read-delay requirement matches the read delay of those physical pages. In this way, in the data writing stage, based on the sensitivity of the data to be written to read delay and the pre-established correspondence between read-delay sensitivity levels and virtual partitions, the data to be written can be written into physical pages matching its sensitivity to read delay. Therefore, the data layout in the solid state disk can be accurately controlled: data that is more sensitive to read delay is written into physical pages with lower read delay, and data that is less sensitive to read delay is written into physical pages with higher read delay. This meets the different read-delay requirements of different data, improves the working performance of the solid state disk, expands its application range, and provides a reliable basis for improving the read performance of a storage system.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic structural diagram of a solid state disk according to an embodiment of the present application;
fig. 2 is a flowchart of an access method for a solid state disk according to an embodiment of the present application;
FIG. 3 is a diagram illustrating exemplary relationships between storage capacity and offset addresses of virtual partitions according to an embodiment of the present application;
fig. 4 is a flowchart of another access method for a solid state disk according to an embodiment of the present application;
fig. 5 is a mapping relationship diagram between a write cache unit and a virtual partition according to an embodiment of the present application;
fig. 6 is an exemplary application scenario diagram provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of another access apparatus for a solid state disk according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another solid state disk according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiments of the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the access relationship of the associated object, meaning that there may be three relationships, e.g., A and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the description of the text of the present application, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In the embodiments of the present application, the terms "first", "second", "third", and the like are used only for distinguishing the contents of different objects, and do not have any special meaning.
A Block Storage system provides low-latency, durable, highly reliable block-level random storage for cloud servers, and the read latency of data is a key performance indicator of the block storage system. To provide low read latency on the order of a hundred microseconds, block storage systems use ZNS (Zoned Namespace) SSDs (Solid State Disks) to store data.
The ZNS SSD manages storage capacity in the form of zones (partitions). Each Zone includes a plurality of physical pages and supports a sequential write mode inside the Zone, that is, data is written into the physical pages of the Zone sequentially, according to the arrangement order of those physical pages. The read-delay characteristics of different types of physical pages are different; for example, for a QLC (Quad-Level Cell) type ZNS SSD, the types of physical pages it includes, sorted from low to high read delay, are LP (Lower Page) type physical pages, UP (Upper Page) type physical pages, XP (Extra Page) type physical pages, and TP (Top Page) type physical pages. The sequential write mode leaves no control over the read-latency requirements of the data written to each physical page, so data sensitive to read latency may be written to a physical page with high read latency, which degrades the read performance of the block storage system.
Therefore, the embodiment of the application provides an access method of a solid state disk, a solid state disk, a storage system and a cloud server. In the embodiment of the application, for the same physical partition in the solid state disk, different types of physical pages in the physical partition are classified into different virtual partitions in advance, and the physical pages included in the same virtual partition are used for writing data whose read-delay requirement matches the read delay of those physical pages. In this way, in the data writing stage, based on the sensitivity of the data to be written to read delay and the pre-established correspondence between read-delay sensitivity levels and virtual partitions, the data to be written can be written into physical pages matching its sensitivity to read delay. Therefore, the data layout in the solid state disk can be accurately controlled: data that is more sensitive to read delay is written into physical pages with lower read delay, and data that is less sensitive to read delay is written into physical pages with higher read delay. This meets the different read-delay requirements of different data, improves the working performance of the solid state disk, expands its application range, and provides a reliable basis for improving the read performance of a storage system.
Fig. 1 is a schematic structural diagram of a solid state disk according to an embodiment of the present application. The solid state disk shown in fig. 1 may be any of various types of solid state disks, for example, a ZNS SSD. Referring to fig. 1, the solid state disk at least includes a storage medium, including but not limited to NAND Flash storage media. The storage space of the storage medium of the solid state disk is divided into a plurality of physical partitions. Each physical partition comprises a plurality of physical pages (Pages) of different types; the physical partition is the minimum unit in which the solid state disk erases data, the physical page is the minimum unit in which the solid state disk reads and writes, and different types of physical pages have different read delays. For example, ZNS SSDs using NAND Flash as a storage medium are mainly TLC (Triple-Level Cell) SSDs and QLC (Quad-Level Cell) SSDs. The types of physical pages in a TLC SSD include LP (Lower Page), UP (Upper Page) and XP (Extra Page), each type of physical page providing 33% of the storage capacity. The types of physical pages in a QLC SSD include LP, UP, XP and TP (Top Page), each type of physical page providing 25% of the storage capacity. Sorted from low to high read delay, the order is LP type physical pages, UP type physical pages, XP type physical pages and TP type physical pages. An LP type physical page is a page that requires only one voltage comparison when data is read from the solid state disk; its read speed is the fastest and its read delay the shortest. A UP type physical page requires two voltage comparisons when data is read; its read speed is slower than that of an LP type physical page and its read delay longer. An XP type physical page requires four voltage comparisons when data is read; its read speed is slower than that of a UP type physical page and its read delay longer. A TP type physical page requires eight voltage comparisons when data is read; its read speed is slower than that of an XP type physical page and its read delay longer.
In this embodiment, for any physical partition, the physical pages in the physical partition are classified in advance according to their page type: physical pages of the same type are classified into the same virtual partition, and different virtual partitions correspond to different types of physical pages. For example, referring to FIG. 1, for any one of the plurality of physical partitions, a plurality of LP type physical pages are classified as virtual partition 1; a plurality of UP type physical pages are classified as virtual partition 2; a plurality of XP type physical pages are classified as virtual partition 3; and a plurality of TP type physical pages are classified as virtual partition 4. Of course, the virtual partitions shown in FIG. 1 are merely an example; in practice, the number of virtual partitions a physical partition has is associated with the page types of its physical pages.
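As an illustration of this grouping, the following Python sketch models the classification of a QLC physical partition's pages into virtual partitions by page type. The PageType enum, the classify_pages helper, the 16-page partition size and the assumption that page types repeat in word-line order are all illustrative and are not taken from the embodiment itself.

```python
from enum import IntEnum
from collections import defaultdict

class PageType(IntEnum):
    # Ordered from lowest to highest read delay (LP < UP < XP < TP).
    LP = 0  # one voltage comparison per read
    UP = 1  # two voltage comparisons per read
    XP = 2  # four voltage comparisons per read
    TP = 3  # eight voltage comparisons per read

def classify_pages(pages):
    """Group the physical pages of one physical partition into virtual
    partitions: pages of the same type fall into the same virtual partition
    (LP -> virtual partition 1, ..., TP -> virtual partition 4)."""
    virtual_partitions = defaultdict(list)
    for page_index, page_type in pages:
        virtual_partitions[page_type].append(page_index)
    return dict(virtual_partitions)

# Toy QLC partition: assume page types repeat LP, UP, XP, TP in word-line order.
partition_pages = [(i, PageType(i % 4)) for i in range(16)]
for page_type, page_indices in sorted(classify_pages(partition_pages).items()):
    print(f"virtual partition {page_type + 1} ({page_type.name}): pages {page_indices}")
```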
In practical applications, the data to be written into a physical page can be written into the physical page directly. Further optionally, in order to reduce the write amplification of the solid state disk and improve its read-write performance and service life, a unit having a data caching function may additionally be provided; for convenience of understanding and distinction, this unit is referred to as a write cache unit. Data that needs to be written into a physical page is first cached in the write cache unit, and the cached data is migrated to the physical page after the write cache unit satisfies a condition. For example, when the write cache unit reaches a trigger time or its cached capacity reaches a certain capacity, it is determined that the write cache unit satisfies the condition.
In practical application, a plurality of write cache units may be set for the whole solid state disk, or a plurality of write cache units may be set for each physical partition. Further optionally, in order to facilitate management of data writing and reduce erroneous data-write operations, a write cache unit may be configured for each virtual partition in the physical partition. For example, in fig. 1, virtual partition 2, virtual partition 3 and virtual partition 4 each correspond to one write cache unit.
Of course, the solid state disk may also include other components, such as a controller communicatively coupled to the storage medium. The controller includes, for example but not limited to, an MCU (Microcontroller Unit), a CPU (Central Processing Unit) or an MPU (Microprocessor Unit). The controller serves as the brain of the whole solid state disk and performs various tasks such as data scheduling and transfer, garbage collection and wear leveling. For example, referring to fig. 1, the host system may perform data read-write interaction with the controller, so that the host system can write data into the solid state disk and read data from it. The host system may take any device form such as a notebook computer, a desktop computer, a smart phone, a tablet computer or an industrial computer. The host system and the solid state disk may be in the same device or in different devices, which is not limited here.
The solid state disk shown in fig. 1 is merely an example, and the specific structure of the solid state disk is not limited in the embodiment of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 2 is a flowchart of an access method of a solid state disk according to an embodiment of the present application. The method can be executed by an access device of the solid state disk, and the device can be composed of software and/or hardware and can be generally configured in the solid state disk.
Referring to fig. 2, the method may include the steps of:
201. In response to a write request, determine a target physical partition to which the current data to be written needs to be written from a plurality of physical partitions included in the solid state disk, wherein the current data to be written includes at least one first data.
202. Determine a read-delay sensitivity level of the first data according to the degree of sensitivity of the first data to read delay, wherein the read-delay sensitivity level is positively correlated with the degree of sensitivity to read delay.
203. According to the read-delay sensitivity level of the at least one first data, query a pre-established correspondence between read-delay sensitivity levels and virtual partitions in a physical partition, and determine the virtual partitions that belong to the target physical partition and respectively correspond to the at least one first data, wherein different types of physical pages in a physical partition are classified into different virtual partitions.
204. Write the at least one first data into physical pages under the respectively corresponding virtual partitions belonging to the target physical partition.
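To make the relationship between steps 201 to 204 concrete, the following minimal sketch orchestrates them in Python. The data layout (a 0-100 sensitivity value, the thresholds, the level_to_vp dictionary and the single enabled partition) is an assumption chosen for illustration and is not defined by the embodiment.

```python
def handle_write_request(data_items, ssd):
    """Steps 201-204: route each first data to a physical page whose read
    delay matches the data's read-delay sensitivity level."""
    # Step 201: determine the target physical partition (simplest possible policy here).
    target_partition = ssd["enabled_partitions"][0]
    for data in data_items:
        # Step 202: the sensitivity level grows with sensitivity to read delay
        # (the 0-100 scale and the thresholds are illustrative assumptions).
        level = 0 if data["sensitivity"] < 30 else (1 if data["sensitivity"] < 70 else 2)
        # Step 203: query the pre-established level -> virtual partition correspondence.
        virtual_partition = target_partition["level_to_vp"][level]
        # Step 204: append the data to the next free physical page of that virtual partition.
        virtual_partition["pages"].append(data["payload"])

# Level 2 (most sensitive) maps to the lowest-latency virtual partition, level 0 to the highest.
ssd = {"enabled_partitions": [{"level_to_vp": {0: {"pages": []},
                                               1: {"pages": []},
                                               2: {"pages": []}}}]}
handle_write_request([{"payload": "cold log entry", "sensitivity": 10},
                      {"payload": "hot index block", "sensitivity": 90}], ssd)
print(ssd["enabled_partitions"][0]["level_to_vp"])
```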
In practical applications, the data to be written in the solid state disk may be data acquired from outside the solid state disk, or data generated by a solid state disk triggered Garbage Collection (GC) operation, but is not limited thereto. For convenience of understanding, the data which needs to be written into the solid state disk currently in the write request is referred to as current data to be written.
After receiving a write request for requesting to write data to be written to a solid state disk, first, a physical partition to which the data to be written needs to be written is determined from a plurality of physical partitions included in the solid state disk.
In practical applications, the solid state disk may not limit the type of data written into the physical partition, and at this time, after a write request requesting to write the current data to be written is received, a target physical partition for writing the current data to be written may be randomly selected from a plurality of physical partitions included in the solid state disk. As a further alternative, the solid state disk may have constraints on the data types of data written to the physical partitions in order to facilitate data management and data security considerations. For example, the data types may be divided into data externally obtained by the host system and data generated by a garbage collection operation, and accordingly, some physical partitions are used for writing data externally obtained by the host system, and other physical partitions are used for writing data generated by the garbage collection operation. For another example, the data types are divided into critical data and non-critical data, and the non-critical data is less important than the critical data. Which data belongs to the key data and which data belongs to the non-key data can be flexibly defined according to requirements. Accordingly, some physical partitions are used to write critical data and other physical partitions are used to write non-critical data. Of course, the data type of the data to be written in the solid state disk may be flexibly defined as required, and is not limited thereto.
Based on the above, a physical partition matched with the data type of the current data to be written may be determined as a target physical partition to which the current data to be written needs to be written, from a plurality of physical partitions included in the solid state disk, based on the data type of the current data to be written.
In practical applications, some physical partitions in the solid state disk may be in an enabled state, and some physical partitions may be in a disabled state. A physical partition in an enabled state may allow data writes and a physical partition in a disabled state may prohibit data writes. The physical partition currently in the enabled state may be a full physical partition or a partial physical partition. Further optionally, in order to improve the success rate of data writing, a target physical partition for writing data to be written currently may be selected from physical partitions of the solid state disk currently in an enabled state.
In the present embodiment, the solid state disk supports a new data writing mode, and for ease of understanding, the new data writing mode is referred to as a sequential writing mode based on virtual partitions. A conventional sequential write mode (which may also be referred to as a default sequential write mode) sequentially writes data in a plurality of physical pages included in a Zone according to an arrangement order of the plurality of physical pages included in the Zone. In a conventional sequential write mode, data sensitive to a read delay may be written into a physical page with a lower read delay or a physical page with a higher read delay, and data less sensitive to a read delay may be written into a physical page with a lower read delay or a physical page with a higher read delay.
Different from the traditional sequential writing mode, the sequential writing mode based on the virtual partition can realize that the physical page with lower reading delay writes data sensitive to the reading delay, and the physical page with higher reading delay writes data not sensitive to the reading delay, thereby meeting the requirements of different data for different reading delays.
Further optionally, in order to meet different use requirements of users and improve the flexibility of data writing, the solid state disk can support both a default sequential write mode and a sequential write mode based on virtual partitions. Based on this, as one example, an operating mode may be selected from the selectable operating modes of the solid state disk before step 202 is performed, the selectable operating modes including the default sequential write mode and the sequential write mode based on virtual partitions. Of course, the default sequential write mode of the solid state disk may also be disabled, so that the only available operating mode of the solid state disk is the sequential write mode based on virtual partitions, in which case no operating mode needs to be selected. When a user selects the default sequential write mode of the solid state disk, the current data to be written is written into the first physical page without data following the physical page most recently written in the target physical partition, that is, data is written into the plurality of physical pages included in a Zone sequentially, according to the arrangement order of the Physical Addresses (PAs) of those physical pages. For more details on solid state disks supporting the default sequential write mode, reference may be made to the related art. When a user selects the sequential write mode based on virtual partitions, steps 202 to 204 are executed so as to write the current data to be written into physical pages under virtual partitions based on the sensitivity of the current data to be written to read delay.
It should be noted that, in a scenario where the selection of the operation mode is required, the execution sequence for selecting the operation mode is not limited as long as the operation mode can be selected before the execution of step 202, for example, before step 201, or after step 201 and before step 202.
In this embodiment, the data to be written currently may include one or more data, that is, the data to be written currently is composed of a plurality of data. Examples of data currently to be written include, but are not limited to: basic data information of the user, user behavior data, fault log data, operation data of various application services, and the like. For ease of understanding, data included in the data to be currently written is referred to as first data, and the data to be currently written may include one or more first data.
In practical application, different data have different sensitivity degrees to the reading delay, and the higher the sensitivity degree of the data to the reading delay, the smaller the expected reading delay is when the data is read; the lower the sensitivity of the data to read latency, the greater the read latency is expected when the data is read. The sensitivity of each data to the reading delay may be obtained by mining the big data, or the sensitivity of each data may be determined by performing statistical analysis on the massive data by using an empirical statistical method, or the sensitivity of each data may be determined by evaluating the massive data by using an expert evaluation method, without limitation.
In this embodiment, in order to accurately write data into a physical page whose sensitivity to the read delay matches the sensitivity of the first data, a read delay sensitivity level of the first data may be further determined according to the sensitivity of the first data to the read delay, where the read delay sensitivity level is positively correlated with the sensitivity of the read delay, that is, the read delay sensitivity level increases with the increase of the sensitivity to the read delay. For example, sensitivity ranges corresponding to the respective read delay sensitivity levels may be set in advance, and the read delay sensitivity level to which the first data belongs may be determined based on the sensitivity range in which the sensitivity of the first data to the read delay falls. For another example, a plurality of sensitivity level thresholds may be set, and a plurality of read delay sensitivity levels may be divided by using the plurality of sensitivity level thresholds. Illustratively, the sensitivity thresholds are a first sensitivity threshold and a second sensitivity threshold respectively according to the numerical order from small to large; if the sensitivity of the first data to the reading delay is smaller than a first sensitivity threshold, the first data belongs to a low reading delay sensitivity level; if the sensitivity of the first data to the reading delay is greater than or equal to the first sensitivity threshold and less than the second sensitivity threshold, the first data belongs to a medium reading delay sensitivity level; if the sensitivity of the first data to the read delay is greater than or equal to the second sensitivity threshold, the first data belongs to a high read delay sensitivity level.
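A minimal sketch of the two-threshold scheme just described, assuming the sensitivity of data to read delay is expressed as a number; the concrete threshold values are illustrative parameters, not values specified by the embodiment:

```python
LOW, MEDIUM, HIGH = 0, 1, 2

def read_delay_sensitivity_level(sensitivity, first_threshold=30, second_threshold=70):
    """Map a sensitivity value to a read-delay sensitivity level using the two
    thresholds described above (the concrete threshold values are illustrative)."""
    if sensitivity < first_threshold:
        return LOW
    if sensitivity < second_threshold:
        return MEDIUM
    return HIGH

print(read_delay_sensitivity_level(10))   # 0 (low read-delay sensitivity level)
print(read_delay_sensitivity_level(50))   # 1 (medium)
print(read_delay_sensitivity_level(95))   # 2 (high)
```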
In this embodiment, in order to accurately write data into a physical page whose sensitivity to a read delay matches the data, a correspondence relationship between a read delay sensitivity level and a virtual partition in a physical partition is established in advance for one physical partition in a part or all of physical partitions in a solid state disk. Specifically, for physical partitions in the solid state disk, classifying at least one type of physical page in the physical partitions so as to classify at least one same type of physical page in the physical partitions into the same virtual partition; determining the respective read delay sensitivity level of at least one virtual partition according to the read delay of the physical page corresponding to the at least one virtual partition; and establishing a corresponding relation between the reading delay sensitivity level and the virtual partition in the physical partition according to the reading delay sensitivity level of at least one virtual partition.
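The correspondence can be built once per physical partition. The sketch below assumes the virtual partitions are already ordered by the read delay of their physical pages and pairs the lowest-latency virtual partitions with the highest sensitivity levels; the particular three-level grouping shown is an illustrative choice rather than something mandated by the embodiment:

```python
def build_level_to_virtual_partitions(virtual_partitions_by_latency, levels):
    """virtual_partitions_by_latency: virtual partitions ordered from lowest to
    highest read delay of their physical pages. levels: sensitivity levels
    ordered from highest to lowest. The fastest partitions serve the most
    sensitive data; surplus partitions are attached to the least sensitive level."""
    correspondence = {level: [] for level in levels}
    for i, virtual_partition in enumerate(virtual_partitions_by_latency):
        level = levels[min(i, len(levels) - 1)]
        correspondence[level].append(virtual_partition)
    return correspondence

print(build_level_to_virtual_partitions(["vp1 (LP)", "vp2 (UP)", "vp3 (XP)", "vp4 (TP)"],
                                        ["HIGH", "MEDIUM", "LOW"]))
# {'HIGH': ['vp1 (LP)'], 'MEDIUM': ['vp2 (UP)'], 'LOW': ['vp3 (XP)', 'vp4 (TP)']}
```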
It is to be understood that the correspondence between the read delay sensitivity level and the virtual partition in the physical partition may indicate the read delay sensitivity level corresponding to the virtual partition, and may also indicate a virtual partition to which data having the read delay sensitivity level may be written.
In the embodiment, different types of physical pages are classified into different virtual partitions aiming at the same physical partition, and the reading delay sensitivity level is increased along with the increase of the reading delay of the physical pages. Therefore, data layout in the solid state disk can be accurately controlled, data which are sensitive to reading delay are written into the physical pages with lower reading delay, data which are not sensitive to reading delay are written into the physical pages with higher reading delay, and the requirements of different data for different reading delays are met.
In this embodiment, according to the read latency sensitivity level of at least one first data, a correspondence between a read latency sensitivity level established in advance and a virtual partition in a physical partition is queried, a virtual partition belonging to a target physical partition and corresponding to the at least one first data is determined, and the at least one first data is written into a physical page under the virtual partition belonging to the target physical partition and corresponding to the at least one first data. It is worth noting that inside each virtual partition, data is written into the physical pages of the virtual partition, which are not written with data, in a sequential writing mode, that is, a plurality of physical pages of the virtual partition are sequentially written in sequence; after each write of data in the virtual partition, the physical address of the physical page to which the data was most recently written is recorded to determine the physical address of the physical page to which the data is next to be written.
According to the technical scheme provided by the embodiment of the application, for the same physical partition in the solid state disk, different types of physical pages in the physical partition are classified into different virtual partitions in advance, and the physical pages included in the same virtual partition are used for writing data whose read-delay requirement matches the read delay of those physical pages. In this way, in the data writing stage, based on the sensitivity of the data to be written to read delay and the pre-established correspondence between read-delay sensitivity levels and virtual partitions, the data to be written can be written into physical pages matching its sensitivity to read delay. Therefore, the data layout in the solid state disk can be accurately controlled: data that is more sensitive to read delay is written into physical pages with lower read delay, and data that is less sensitive to read delay is written into physical pages with higher read delay. This meets the different read-delay requirements of different data, improves the working performance of the solid state disk, expands its application range, and provides a reliable basis for improving the read performance of a storage system.
In this embodiment, the data reading method is not limited. For example, after data is written into a physical page in the solid state disk, the identification information of the data and the LBA (Logical Block Address) mapped to the physical address of the written physical page are recorded and maintained. In the data reading stage, a read request including the identification information of the data to be read is initiated; according to the identification information of the data to be read in the read request, the pre-recorded identification information and the LBAs mapped to the physical addresses of written physical pages are queried, the LBA mapped to the physical address of the physical page into which the data to be read was written is determined, and the physical address mapped by that LBA is determined; the corresponding physical page is then accessed according to that physical address so as to read the data to be read.
For another example, after data is written into a physical page in the solid state disk, the identification information of the data and the physical address of the written physical page are recorded and maintained. In the data reading stage, a read request including the identification information of the data to be read is initiated; according to the identification information of the data to be read in the read request, the pre-recorded identification information and physical addresses are queried, and the physical address of the physical page into which the data to be read was written is determined; the corresponding physical page is then accessed according to that physical address so as to read the data to be read.
Further optionally, in order to improve the accuracy and convenience of reading data, data reading may be performed based on a mapping relationship between the virtual partition identification, the start offset address, and the physical address.
Based on the above, after determining the virtual partition corresponding to each of the at least one first data, the starting offset address of each first data in its associated virtual partition may also be determined. Correspondingly, after the at least one first data is written into the physical pages under the corresponding virtual partitions belonging to the target physical partition, the mapping relationship among the virtual partition identifier, the starting offset address and the physical address can be established according to the virtual partition identifier of the virtual partition associated with each first data, its starting offset address in that virtual partition, and the physical address of the physical page into which it is written.
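One possible in-memory representation of this mapping relationship is a dictionary keyed by (virtual partition identifier, starting offset address), updated each time a first data is written; the helper names below are assumptions for illustration, and a real controller would persist such a table in its translation metadata:

```python
# (virtual partition identifier, starting offset address) -> physical address
address_map = {}

def record_write(vp_id, start_offset, physical_address):
    """Record the mapping after a first data has been written to a physical page."""
    address_map[(vp_id, start_offset)] = physical_address

def lookup_physical_address(vp_id, start_offset):
    """Resolve a (virtual partition, starting offset) pair back to the physical page."""
    return address_map[(vp_id, start_offset)]

record_write(vp_id=2, start_offset=4096, physical_address=0x00A32000)
print(hex(lookup_physical_address(2, 4096)))   # 0xa32000
```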
It should be noted that the virtual partition identifier is used to uniquely identify a virtual partition, and for any physical partition, after determining a plurality of virtual partitions under the physical partition, a virtual partition identifier corresponding to each virtual partition is allocated to each virtual partition.
In this embodiment, the start offset address of the first data in the virtual partition reflects a start writing position of the first data in the virtual partition, the end offset address of the first data in the virtual partition reflects an end writing position of the first data in the virtual partition, and the start offset address of the first data in the virtual partition and the data amount of the first data determine an end offset address of the first data in the virtual partition. It can be understood that the storage capacity corresponding to the address range defined by the start offset address and the end offset address of the first data in the virtual partition is the storage capacity occupied by the data amount of the first data.
In this embodiment, the storage capacity of the virtual partition is the sum of the storage capacities provided by all the physical pages included in the virtual partition, and the address range corresponding to the storage space of the virtual partition is determined according to the storage capacity of the virtual partition, and for further description about the relationship between the storage capacity and the address range, reference may be made to related technologies.
Referring to fig. 3, assume that one physical page provides a storage capacity of 4 KB and there are 16 physical pages under a virtual partition, so the storage capacity of the virtual partition is 64 KB. For data 1, the first data written into the virtual partition, the starting offset address of data 1 in the virtual partition is 0; if the data amount of data 1 occupies 4 KB of storage capacity, the ending offset address of data 1 in the virtual partition is 4 KB. For data 2, the second data written into the virtual partition, the starting offset address of data 2 in the virtual partition is 4 KB; if the data amount of data 2 occupies 8 KB, the ending offset address of data 2 in the virtual partition is 12 KB. For data 3, the third data written into the virtual partition, the starting offset address of data 3 in the virtual partition is 12 KB; if the data amount of data 3 occupies 4 KB, the ending offset address of data 3 in the virtual partition is 16 KB. That is, the ending offset address of the previous data in the virtual partition is the starting offset address of the next data in the virtual partition. Further optionally, in order to determine the starting offset address of data in the virtual partition accurately and conveniently, a write pointer may be maintained for each virtual partition; the write pointer reflects the currently used storage capacity of the virtual partition and points to the starting offset address of the next data to be written in the virtual partition. Following the above example, after data 1 is written into a physical page in the virtual partition, the write pointer of the virtual partition points to the offset address corresponding to 4 KB; from the write pointer it can be known that the starting offset address of the next data to be written is 4 KB, and the used storage capacity of the virtual partition is 4 KB at this time. After data 2 is written into a physical page in the virtual partition, the write pointer points to the offset address corresponding to 12 KB; from the write pointer it can be known that the starting offset address of the next data to be written is 12 KB, and the used storage capacity of the virtual partition is 12 KB.
Based on the above, determining a starting offset address of each of the at least one first data in its associated virtual partition comprises: for any first data in the at least one first data, determining a starting offset address of the next data to be written, which is currently pointed by a write pointer of an associated virtual partition, in the virtual partition as the starting offset address of the first data in the associated virtual partition; and updating the write pointer of the virtual partition associated with the first data according to the data amount of the first data.
It is to be understood that, when the write pointer of its associated virtual partition is updated according to the data amount of the first data, the termination offset address of the first data in its associated virtual partition may be determined according to the data amount of the first data and the start offset address of the first data in its associated virtual partition, and the termination offset address of the first data in its associated virtual partition may be updated to the start offset address of the next data to be written, which is currently pointed to by the write pointer, in the virtual partition.
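The write-pointer bookkeeping described above can be sketched as follows; the VirtualPartition class and the 4 KB / 8 KB data sizes (chosen to reproduce the fig. 3 example) are illustrative assumptions:

```python
class VirtualPartition:
    def __init__(self, vp_id):
        self.vp_id = vp_id
        self.write_pointer = 0   # starting offset of the next data to be written

    def allocate(self, data_length):
        """Return the starting offset for a new piece of data and advance the
        write pointer to that data's ending offset (= the next starting offset)."""
        start_offset = self.write_pointer
        self.write_pointer = start_offset + data_length
        return start_offset

vp = VirtualPartition(vp_id=1)
print(vp.allocate(4 * 1024))   # 0     -> data 1 occupies [0 KB, 4 KB)
print(vp.allocate(8 * 1024))   # 4096  -> data 2 occupies [4 KB, 12 KB)
print(vp.allocate(4 * 1024))   # 12288 -> data 3 occupies [12 KB, 16 KB)
```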
Based on the above, in the data reading phase, a read request may be received, where the read request includes a target virtual partition identifier, a target start offset address, and a data read amount; responding to the read request, inquiring a mapping relation according to the target virtual partition identification and the target initial offset address, and determining a target physical address corresponding to the target virtual partition identification and the target initial offset address; and reading data of the data reading amount from the physical page corresponding to the target physical address.
Specifically, the target virtual partition identification refers to an identification of a virtual partition in which a read request requests to operate, the target starting offset address refers to a starting offset address of data in the virtual partition requested to be read by the read request, and the data reading amount refers to the amount of data required to be read. Based on the mapping relationship between the virtual partition identification, the starting offset address, and the physical address, a target physical address corresponding to the target virtual partition identification and the target starting offset address may be determined. And determining a target ending offset address of the data to be read in the virtual partition according to the target starting offset address and the data reading amount, and reading the data in the physical page corresponding to the target starting offset address and the target ending offset address, namely reading the data of the data reading amount from the physical page corresponding to the target physical address.
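Reading follows the inverse path. The sketch below reuses the kind of mapping shown earlier and uses a flat byte array as a stand-in for the storage medium; both simplifications are assumptions made only for illustration:

```python
flash = bytearray(1 << 20)        # stand-in for the storage medium
address_map = {(1, 0): 0x4000}    # (virtual partition id, starting offset) -> physical address

def handle_read(target_vp_id, target_start_offset, read_amount):
    """Resolve the target physical address from the mapping relationship,
    then read read_amount bytes starting at that physical address."""
    physical_address = address_map[(target_vp_id, target_start_offset)]
    return bytes(flash[physical_address:physical_address + read_amount])

flash[0x4000:0x4005] = b"hello"   # pretend this was written earlier
print(handle_read(target_vp_id=1, target_start_offset=0, read_amount=5))   # b'hello'
```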
Fig. 4 is a flowchart of another access method for a solid state disk according to an embodiment of the present application. The method can be executed by an access device of the solid state disk, and the device can be composed of software and/or hardware and can be generally configured in the solid state disk.
Referring to fig. 4, the method may include the steps of:
401. In response to a write request, determine a target physical partition to which the current data to be written needs to be written from a plurality of physical partitions included in the solid state disk, wherein the current data to be written includes at least one first data.
402. Determine a read-delay sensitivity level of the first data according to the degree of sensitivity of the first data to read delay, wherein the read-delay sensitivity level is positively correlated with the degree of sensitivity to read delay.
403. According to the read-delay sensitivity level of the at least one first data, query a pre-established correspondence between read-delay sensitivity levels and virtual partitions in a physical partition, and determine the virtual partitions that belong to the target physical partition and respectively correspond to the at least one first data, wherein different types of physical pages in a physical partition are classified into different virtual partitions.
404. Cache the at least one first data in the write cache units associated with the respectively corresponding virtual partitions belonging to the target physical partition.
405. In response to a preset data migration condition being satisfied, write the cached data in the respective write cache units of the plurality of virtual partitions under the target physical partition into the physical pages included in their associated virtual partitions.
The implementation manners of step 401, step 402, and step 403 in this embodiment are the same as the implementation manners of step 201, step 202, and step 203 in the foregoing embodiment, and are not described again here.
In this embodiment, in order to reduce the write amplification of the solid state disk and improve its read-write performance and service life, a write cache unit may be set for each virtual partition in the physical partition. Therefore, when the at least one first data is to be written into physical pages under the respective virtual partitions, the at least one first data may first be cached in the write cache units associated with the corresponding virtual partitions belonging to the target physical partition; and in response to the target physical partition satisfying the preset condition, the cached data in the respective write cache units of the virtual partitions under the target physical partition is written into the physical pages included in the respective associated virtual partitions.
Taking fig. 5 as an example, the gray squares in fig. 5 represent the buffered data, and the arrows in fig. 5 represent the write pointers. The data is firstly cached in the write cache unit corresponding to each virtual partition, and then is migrated to the physical page of the virtual partition from the write cache unit. And after the data is migrated into the physical page of the virtual partition, updating the current direction of the write pointer, wherein the write pointer points to the initial offset address of the next data to be written in the virtual partition.
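A minimal sketch of the cache-then-migrate behaviour of steps 404 and 405, with an assumed per-virtual-partition byte buffer as the write cache unit and a simple capacity-based migration trigger; the 4 KB page size and the CachedVirtualPartition class are illustrative:

```python
PAGE_SIZE = 4 * 1024          # illustrative physical page size

class CachedVirtualPartition:
    def __init__(self):
        self.write_cache = bytearray()   # write cache unit of this virtual partition
        self.pages = []                  # physical pages already written

    def cache(self, data: bytes):
        """Step 404: buffer the data instead of writing it immediately."""
        self.write_cache.extend(data)

    def migrate_if_ready(self):
        """Step 405: once at least a full page worth of data is cached, flush
        whole pages from the write cache unit to physical pages."""
        while len(self.write_cache) >= PAGE_SIZE:
            self.pages.append(bytes(self.write_cache[:PAGE_SIZE]))
            del self.write_cache[:PAGE_SIZE]

vp = CachedVirtualPartition()
vp.cache(b"\x01" * 3000)
vp.migrate_if_ready()          # nothing flushed yet, less than one page cached
vp.cache(b"\x02" * 3000)
vp.migrate_if_ready()          # one full page flushed, remainder stays cached
print(len(vp.pages), len(vp.write_cache))   # 1 1904
```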
In this embodiment, when a target physical partition satisfies a preset data migration condition, migration of data from the write cache units to the physical pages in the corresponding virtual partitions is triggered. The preset data migration condition can be set flexibly as required. For example, the preset data migration condition is that the cache duration of data in the write cache unit reaches a specified duration; or the preset data migration condition is that the currently used cache capacity of the write cache unit reaches a specified cache capacity; or the preset data migration condition is that the write cache unit caches key data, and so on, which is not limited here.
In practical applications, when it is determined that the preset data migration condition is satisfied, the cached data in the respective write cache units of the multiple virtual partitions under the target physical partition may be directly written into the physical pages included in the respective associated virtual partitions; alternatively, the minimum currently cached capacity among the currently cached capacities of the write cache units of at least some of the virtual partitions under the target physical partition is determined, and from each write cache unit, the data occupying that minimum currently cached capacity is written into the physical pages included in its associated virtual partition.
Further optionally, in order to better reduce the write amplification of the solid state disk and better improve its read-write performance and service life, before the cached data in the respective write cache units of the multiple virtual partitions under the target physical partition is written into the physical pages included in the respective associated virtual partitions, it may be determined that the currently cached capacity of the write cache units of at least some of the virtual partitions under the target physical partition occupies the storage capacity provided by at least one whole physical page; specified padding data is then cached in the write cache units of at least some of the virtual partitions under the target physical partition until the currently cached capacities of the respective write cache units of all the virtual partitions under the target physical partition are the same and each occupies the storage capacity provided by a whole number of physical pages.
Specifically, if the currently cached capacity of the write cache units of one or more virtual partitions in the target physical partition occupies the storage capacity provided by one or more whole physical pages, the currently cached capacities of the write cache units of all virtual partitions in the target physical partition are made the same by data padding, with each currently cached capacity occupying the storage capacity provided by a whole number of physical pages. At this point, the write cache unit corresponding to each virtual partition in the target physical partition can be triggered to perform a data migration operation so as to migrate the data from the write cache units to the corresponding physical pages. The padding data is, for example, composed of a plurality of 0 bits, which is not limited. In addition, to facilitate data management, the padding data may be marked to indicate that the corresponding data is padding data. Assume that the storage capacity of one physical page is denoted as S and the currently cached capacity of a write cache unit is denoted as M, with M = N × S, where N is a positive integer whose value is set flexibly as needed, for example 3. That is, under the condition that the currently cached capacity of each write cache unit is N times the storage capacity of a physical page, migration of data from the write cache units to the corresponding physical pages under the virtual partitions can be triggered, so that whole physical pages are written at a time. Following the example of fig. 5, the currently cached capacity of the write cache unit of virtual partition 1 occupies 2 physical pages, that of virtual partition 2 occupies 3 physical pages, that of virtual partition 3 occupies 1 physical page, and that of virtual partition 4 occupies 4 physical pages; data padding is performed in the write cache units of virtual partitions 1 to 3 until the currently cached capacity of each of them occupies 4 physical pages, at which point migration of the data from the write cache units of virtual partitions 1 to 4 to the corresponding physical pages is triggered.
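The padding step can be sketched as follows. The snippet assumes each write cache unit is a plain byte buffer, uses zero bytes as the designated padding data, and aligns every cache to the largest whole-page-aligned cached capacity in the partition before migration; all of these choices, including the 4 KB page size, are illustrative assumptions. The example values reproduce the fig. 5 case of 2, 3, 1 and 4 cached pages:

```python
PAGE_SIZE = 4 * 1024

def pad_caches_for_migration(write_caches):
    """Pad each write cache unit with zero bytes until every cache holds the
    same amount of data and that amount is a whole multiple of PAGE_SIZE."""
    # Target: the largest cache, rounded up to a multiple of the page size.
    largest = max(len(cache) for cache in write_caches)
    target = -(-largest // PAGE_SIZE) * PAGE_SIZE     # ceiling division
    for cache in write_caches:
        cache.extend(b"\x00" * (target - len(cache)))
    return target

caches = [bytearray(2 * PAGE_SIZE),       # virtual partition 1: 2 pages cached
          bytearray(3 * PAGE_SIZE),       # virtual partition 2: 3 pages cached
          bytearray(1 * PAGE_SIZE),       # virtual partition 3: 1 page cached
          bytearray(4 * PAGE_SIZE)]       # virtual partition 4: 4 pages cached
print(pad_caches_for_migration(caches) // PAGE_SIZE)   # 4 pages each after padding
print([len(c) // PAGE_SIZE for c in caches])           # [4, 4, 4, 4]
```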
Further optionally, in order to reduce the influence on the write pointer, after the specified padding data is cached in the respective write cache units of at least part of the virtual partitions under the target physical partition, the respective write pointers of at least part of the virtual partitions under the target physical partition may also be updated according to the data amount of the corresponding specified padding data. Since the designated padding data is finally written into the physical page of the virtual partition, that is, the designated padding data is also data to be written into the physical page of the virtual partition, after the designated padding data is written into the physical page of the virtual partition, the current pointing direction of the write pointer of the virtual partition is updated. The content of the update of the current pointing direction of the write pointer after data writing can be referred to the related description above, and is not described in detail here.
In some optional embodiments, power-off protection may be provided for the write cache units; especially when the storage capacity provided by a write cache unit is large, power-off protection can effectively avoid data loss and ensure data security.

In some alternative embodiments, for example for data writing in a garbage collection scenario, no additional power-off protection may be provided for the write cache units. If a power failure occurs, whether data was lost during the migration process can be determined by comparing the data length before the migration with the data length after the migration.
According to the technical solutions provided by the embodiments of the present application, for the same physical partition in the solid state disk, different types of physical pages in the physical partition are classified into different virtual partitions in advance, and the physical pages included in the same virtual partition are used for writing data matched with the read delay of those physical pages. In this way, in the data writing stage, based on the sensitivity of the data to be written to read delay and the pre-established correspondence between read delay sensitivity levels and virtual partitions, the data to be written can be written into physical pages matching its sensitivity to read delay. Furthermore, in order to reduce the write amplification of the solid state disk and improve its read-write performance and service life, data is first cached and then written into physical pages. Therefore, the data layout in the solid state disk can be accurately controlled: data sensitive to read delay is written into physical pages with lower read delay, and data insensitive to read delay is written into physical pages with higher read delay, so that the different read delay requirements of different data are met, the working performance of the solid state disk is improved, and the application range of the solid state disk is expanded.
In order to better understand the technical solutions provided in the embodiments of the present application, a specific scenario embodiment is described below.
Referring to fig. 6, a storage system in a cloud server uses various solid state disks, such as ZNS SSDs, for data storage; such a storage system includes, but is not limited to, a block storage (Block Storage) system, an object-based storage (Object-based Storage) system, and the like.

In practical applications, the solid state disk triggers a garbage collection operation to release more available storage space. During execution of the garbage collection operation, valid data is migrated from one physical partition to another. Specifically, during execution of the garbage collection operation on the solid state disk, the controller in the solid state disk determines the physical partition from which data needs to be migrated (which may be referred to as the source physical partition) and the physical partition to which data needs to be migrated (the destination physical partition). The controller takes the data of the source physical partition that needs to be migrated as the current data to be written, and executes the access method of the solid state disk provided by the embodiments of the present application, so as to migrate the data from the source physical partition to the destination physical partition. Referring to fig. 6, valid data in each type of physical page under each virtual partition of the source physical partition is first migrated to the write cache unit corresponding to the respective virtual partition of the destination physical partition, and is then migrated from that write cache unit to the physical pages of the corresponding type under the respective virtual partition of the destination physical partition.
In practical applications, a power failure may occur while the solid state disk is in the garbage collection stage. If the write cache units are protected against power failure, then after power is restored, the data in the write cache units that has not yet been fully migrated can continue to be migrated to the physical pages of the destination physical partition, so that data loss before and after the data migration can be prevented. If the write cache units are not protected against power failure, the data amount of a virtual partition of the source physical partition before the data migration can be compared with the data amount of the corresponding virtual partition of the destination physical partition after the data migration; if the former is greater than the latter, it indicates that data loss occurred during the data migration due to the power failure, and if the two are equal, it indicates that no data loss occurred during the data migration due to the power-failure factor.
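As a minimal illustration of the power-failure check described above, assuming the per-virtual-partition data amounts before and after the migration are available as simple byte counts (the function and parameter names below are hypothetical and not taken from the embodiments):

# Illustrative sketch: compare per-virtual-partition data amounts before and after a
# garbage-collection migration to detect loss caused by a power failure when the
# write cache units have no power-off protection.
def detect_migration_loss(source_amounts, dest_amounts):
    """Return the virtual partitions whose data amount shrank across the migration."""
    lossy = []
    for vp_id, before in source_amounts.items():
        after = dest_amounts.get(vp_id, 0)
        if before > after:
            lossy.append(vp_id)       # data was lost during the migration
        # before == after means the migration completed without loss for this partition
    return lossy

# Example: virtual partition 2 lost data when power failed mid-migration.
before = {1: 4096, 2: 8192, 3: 2048}
after = {1: 4096, 2: 4096, 3: 2048}
print(detect_migration_loss(before, after))   # -> [2]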
Fig. 7 is a schematic structural diagram of another access apparatus for a solid state disk according to an embodiment of the present application. The apparatus may be implemented by software and/or hardware, and may generally be provided in a solid state disk. The solid state disk comprises a plurality of physical partitions, the physical partitions comprise a plurality of physical pages of different types, and the read delays of the physical pages of different types are different.
Referring to fig. 7, the apparatus may include:
a first determining module 71, configured to determine, in response to a write request, a target physical partition to which data to be currently written needs to be written from among multiple physical partitions included in a solid state disk, where the data to be currently written includes at least one first data;
a second determining module 72, configured to determine a read delay sensitivity level of the first data according to a sensitivity degree of the first data to the read delay, where the read delay sensitivity level is positively correlated to the sensitivity degree of the read delay;
the query module 73 is configured to query a correspondence between a pre-established reading delay sensitivity level and a virtual partition in a physical partition according to the reading delay sensitivity level of at least one first data, and determine that each of the at least one first data corresponds to a virtual partition belonging to a target physical partition, where different types of physical pages are categorized into different virtual partitions;
and a writing module 74, configured to write at least one piece of first data into a corresponding physical page under a virtual partition belonging to the target physical partition.
Further optionally, before querying the correspondence between the pre-established read delay sensitivity levels and the virtual partitions in the physical partition, the querying module 73 is further configured to: classify at least one type of physical page in the physical partition, so as to classify physical pages of the same type in the physical partition into the same virtual partition; determine a read delay sensitivity level of each of at least one virtual partition according to the read delay of the physical pages corresponding to that virtual partition; and establish the correspondence between the read delay sensitivity levels and the virtual partitions in the physical partition according to the read delay sensitivity level of the at least one virtual partition.
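For illustration only, the classification and level-mapping logic performed by the querying module 73 might be sketched as follows; the page type names and latency figures are assumptions (loosely modelled on multi-bit-per-cell NAND, where different page types have different read latencies) and are not values given in the embodiments:

# Illustrative sketch: group physical pages of the same type into one virtual partition
# and map read-delay sensitivity levels to virtual partitions, with the highest level
# (most sensitive data) assigned to the lowest-latency virtual partition.
PAGE_TYPE_READ_LATENCY_US = {"lower": 50, "upper": 90, "extra": 130, "top": 170}  # assumed

def build_virtual_partitions(physical_pages):
    """physical_pages: iterable of (page_id, page_type) within one physical partition."""
    virtual_partitions = {}
    for page_id, page_type in physical_pages:
        virtual_partitions.setdefault(page_type, []).append(page_id)
    return virtual_partitions

def build_sensitivity_mapping(virtual_partitions):
    """Return {read-delay sensitivity level: virtual partition (page type)}."""
    by_latency = sorted(virtual_partitions, key=lambda t: PAGE_TYPE_READ_LATENCY_US[t])
    # level is positively correlated with sensitivity: highest level -> lowest latency
    return {len(by_latency) - rank: page_type for rank, page_type in enumerate(by_latency)}

# Example: pages 0-7 of one physical partition, two of each page type.
vps = build_virtual_partitions(enumerate(["lower", "upper", "extra", "top"] * 2))
print(build_sensitivity_mapping(vps))   # -> {4: 'lower', 3: 'upper', 2: 'extra', 1: 'top'}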
Further optionally, after determining the virtual partition, belonging to the target physical partition, that corresponds to each of the at least one first data, the first determining module 71 is further configured to: determine a starting offset address of each of the at least one first data in its associated virtual partition;

accordingly, after the writing module 74 writes the at least one first data into the physical pages under the corresponding virtual partitions belonging to the target physical partition, the first determining module 71 is further configured to: establish a mapping relation among the virtual partition identifier, the starting offset address and the physical address according to the virtual partition identifier of each of the at least one first data, its starting offset address in the associated virtual partition, and the physical address corresponding to the physical page written in that virtual partition.
Further optionally, when determining the starting offset address of each of the at least one first data in its associated virtual partition, the first determining module 71 is specifically configured to: for any first data in the at least one first data, determine the starting offset address, in the associated virtual partition, that is currently pointed to by the write pointer of that virtual partition for the next data to be written, as the starting offset address of the first data in the associated virtual partition; and update the write pointer of the virtual partition associated with the first data according to the data amount of the first data.
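A rough sketch of this write-side bookkeeping, under the assumption of a per-virtual-partition write pointer counted in bytes and a hypothetical helper write_to_page that returns the physical address of the page just written (none of these names come from the embodiments):

# Illustrative sketch: the write pointer of the associated virtual partition supplies the
# start offset of each first data, and a mapping of (virtual partition id, start offset)
# to physical address is recorded after the write.
class VirtualPartitionState:
    def __init__(self, vp_id):
        self.vp_id = vp_id
        self.write_pointer = 0           # start offset of the next data to be written

mapping_table = {}                       # (vp_id, start_offset) -> physical address

def write_first_data(vp, data, write_to_page):
    start_offset = vp.write_pointer                     # offset currently pointed to
    physical_address = write_to_page(vp.vp_id, data)    # hypothetical page-write helper
    mapping_table[(vp.vp_id, start_offset)] = physical_address
    vp.write_pointer += len(data)                       # advance by the data amount
    return start_offset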
Further optionally, the apparatus further comprises a reading module, configured to: receive a read request, where the read request includes a target virtual partition identifier, a target starting offset address and a data read amount; in response to the read request, query the mapping relation according to the target virtual partition identifier and the target starting offset address, and determine a target physical address corresponding to the target virtual partition identifier and the target starting offset address; and read data of the data read amount from the physical page corresponding to the target physical address.
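Correspondingly, the read path described above might be sketched as follows, reusing the hypothetical mapping_table from the previous sketch together with a hypothetical helper read_physical_page:

# Illustrative sketch: resolve the target physical address from the target virtual
# partition identifier and target start offset, then read the requested amount of data
# from the corresponding physical page.
def handle_read_request(mapping_table, vp_id, start_offset, read_amount, read_physical_page):
    physical_address = mapping_table.get((vp_id, start_offset))
    if physical_address is None:
        raise KeyError("no mapping for the given virtual partition id and start offset")
    return read_physical_page(physical_address, read_amount)    # read read_amount bytes of data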
Further optionally, when writing the at least one first data into the physical pages under the corresponding virtual partitions belonging to the target physical partition, the writing module 74 is specifically configured to: cache the at least one first data into the write cache units associated with the corresponding virtual partitions belonging to the target physical partition; and, in response to a preset data migration condition being met, write the cached data in the write cache units of the multiple virtual partitions under the target physical partition into the physical pages included in the respective associated virtual partitions.

Further optionally, before the cached data in the write cache units of the multiple virtual partitions under the target physical partition is written into the physical pages included in the respective associated virtual partitions, the writing module 74 is further configured to: determine that the currently cached capacity of the write cache units of at least some of the virtual partitions under the target physical partition needs to occupy the storage capacity provided by at least one physical page; and cache specified padding data in the write cache units of at least some of the virtual partitions under the target physical partition until the currently cached capacities of the write cache units of all virtual partitions under the target physical partition are the same and each needs to occupy the storage capacity provided by at least one physical page.

Further optionally, after caching the specified padding data in the write cache units of at least some of the virtual partitions under the target physical partition, the writing module 74 is further configured to: update the write pointers of the at least some virtual partitions under the target physical partition according to the data amount of the corresponding specified padding data.
Further optionally, the first determining module 71 is further configured to: select a virtual-partition-based sequential write mode from the selectable working modes of the solid state disk, where the selectable working modes include a default sequential write mode and the virtual-partition-based sequential write mode.
The apparatus shown in fig. 7 may perform the method of the embodiment shown in fig. 2 or fig. 4, and the implementation principle and technical effect thereof are not described in detail here. The specific manner in which each module and unit of the apparatus shown in fig. 7 performs operations has been described in detail in the embodiments related to the method, and is not repeated here.
It should be noted that, the executing subjects of the steps of the method provided in the foregoing embodiments may be the same device, or different devices may also be used as the executing subjects of the method. For example, the execution subjects of step 201 to step 204 may be device a; for another example, the execution subject of steps 201 and 202 may be device a, and the execution subject of steps 203 and 204 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations occurring in a specific order are included, but it should be clearly understood that these operations may be executed out of order or in parallel as they appear herein, and the sequence numbers of the operations, such as 201, 202, etc., are used merely to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 8 is a schematic structural diagram of another solid state disk provided in an embodiment of the present application. As shown in fig. 8, the solid state disk includes: a memory 81 and a processor 82;
The memory 81 is configured to store a computer program, and may be configured to store various other data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and the like.

The memory 81 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.

The processor 82 is coupled to the memory 81 and is configured to execute the computer program in the memory 81 so as to perform the steps of the access method of the solid state disk.

Further, as shown in fig. 8, the solid state disk further includes: a communication component 83, a display 84, a power component 85, an audio component 86, and other components. Only some components are schematically shown in fig. 8, which does not mean that the solid state disk includes only the components shown in fig. 8. In addition, the components within the dashed-line box in fig. 8 are optional rather than mandatory components, which may be determined according to the product form of the solid state disk.
For details of the implementation process of each action performed by the processor, reference may be made to the foregoing method embodiment or the related description in the device embodiment, and details are not described herein again.
Correspondingly, an embodiment of the present application further provides a storage system, including: the solid state disk comprises a plurality of physical partitions, the physical partitions correspond to a plurality of virtual partitions of different page types, the virtual partitions comprise at least one physical page of the same page type, and the reading delays of the physical pages of the different page types are different; and the controller is used for executing each step of the access method of the solid state disk.
Correspondingly, an embodiment of the present application further provides a cloud server, which at least includes: a storage system, the storage system comprising: the solid state disk comprises a plurality of physical partitions, the physical partitions correspond to a plurality of virtual partitions of different page types, the virtual partitions comprise at least one physical page of the same page type, and the reading delays of the physical pages of different page types are different; and the controller is used for executing each step of the access method of the solid state disk.
Accordingly, the embodiment of the present application further provides a computer readable storage medium storing a computer program, and the computer program can implement the steps of the access method for the solid state disk when being executed.
Accordingly, embodiments of the present application also provide a computer program product, which includes a computer program/instructions, when the computer program/instructions are executed by a processor, cause the processor to implement the steps of the access method of the solid state disk.
The communication component is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The Display includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply assembly provides power for various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.

It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A method for accessing a solid state disk, the solid state disk comprising a plurality of physical partitions, the physical partitions comprising a plurality of physical pages of different types, the physical pages of different types having different read latencies, the method comprising:
in response to a write request, determining a target physical partition to which current data to be written needs to be written from a plurality of physical partitions included in the solid state disk, wherein the current data to be written includes at least one first data;
determining a read delay sensitivity level of the first data according to the sensitivity degree of the first data to the read delay, wherein the read delay sensitivity level is positively correlated with the sensitivity degree of the read delay;
inquiring a corresponding relation between a pre-established reading delay sensitivity level and a virtual partition in a physical partition according to the reading delay sensitivity level of the at least one first data, and determining the virtual partition which belongs to the target physical partition and corresponds to the at least one first data respectively, wherein different types of physical pages are classified into different virtual partitions;
and respectively writing the at least one piece of first data into a physical page under a virtual partition which belongs to the target physical partition.
2. The method of claim 1, prior to querying the pre-established correspondence between read latency sensitivity levels and virtual partitions in the physical partitions, further comprising:
classifying at least one type of physical page in the physical partition so as to classify at least one same type of physical page in the physical partition into the same virtual partition;
determining a respective read delay sensitivity level of at least one virtual partition according to a respective read delay of a physical page corresponding to the at least one virtual partition;
and establishing a corresponding relation between the reading delay sensitive grade and the virtual partition in the physical partition according to the reading delay sensitive grade of at least one virtual partition.
3. The method of claim 1, after determining that each of the at least one first datum corresponds to a virtual partition belonging to the target physical partition, further comprising:
determining a starting offset address of each of the at least one first data in its associated virtual partition;
correspondingly, after writing the at least one first data into the physical pages under the virtual partitions respectively corresponding to the target physical partition, the method further includes:
and establishing a mapping relation among the virtual partition identification, the starting offset address and the physical address according to the respective virtual partition identification of the at least one first data, the starting offset address in the associated virtual partition and the physical address corresponding to the physical page written by the virtual partition.
4. The method of claim 3, wherein determining a starting offset address for each of the at least one first data in its associated virtual partition comprises:
for any first data in the at least one first data, determining a starting offset address of the next data to be written, which is currently pointed by a write pointer of the associated virtual partition, in the virtual partition as the starting offset address of the first data in the associated virtual partition;
and updating the write pointer of the associated virtual partition according to the data volume of the first data.
5. The method of claim 4, further comprising:
receiving a read request, wherein the read request comprises a target virtual partition identifier, a target starting offset address and a data read amount;
responding to the read request, inquiring the mapping relation according to the target virtual partition identification and the target starting offset address, and determining a target physical address corresponding to the target virtual partition identification and the target starting offset address;
and reading the data of the data reading amount from the physical page corresponding to the target physical address.
6. The method according to any of claims 1 to 5, wherein writing the at least one first data into the physical pages under the corresponding virtual partitions belonging to the target physical partition respectively comprises:
caching the at least one piece of first data into write cache units associated with virtual partitions which respectively correspond to the target physical partitions;
and in response to that a preset data migration condition is met, writing the cached data in the respective write cache units of the virtual partitions under the target physical partition into the physical pages included in the respective associated virtual partitions respectively.
7. The method according to claim 6, before writing the cached data in the cache unit of each of the plurality of virtual partitions under the target physical partition into the physical page included in the associated virtual partition, further comprising:
determining that the current cached capacity of each write cache unit of at least part of virtual partitions under the target physical partition needs to occupy the storage capacity provided by at least one physical page;
and caching specified filling data in respective write cache units of at least part of the virtual partitions under the target physical partition until the current cached capacities of the respective write cache units of the virtual partitions under the target physical partition are the same, and the current cached capacities of the write cache units need to occupy the storage capacity provided by at least one physical page.
8. The method of claim 7, further comprising, after caching specified fill data in respective write cache locations of at least some of the virtual partitions below the target physical partition:
and updating respective write pointers of at least part of the virtual partitions under the target physical partition according to the data volume of the corresponding specified filling data.
9. The method of any of claims 1 to 5, wherein before determining the read delay sensitivity level of the first data according to the sensitivity of the first data to read delay, further comprising:
and selecting a sequential writing mode based on the virtual partition from the selectable working modes of the solid state disk, wherein the selectable working modes comprise a default sequential writing mode and a sequential writing mode based on the virtual partition.
10. A solid state disk, comprising: a memory and a processor; the memory for storing a computer program; the processor is coupled to the memory for executing the computer program for performing the steps of the method of any of claims 1-9.
11. A storage system, comprising: the solid state disk comprises a plurality of physical partitions, the physical partitions correspond to a plurality of virtual partitions of different types, the virtual partitions comprise at least one physical page of the same type, and the reading delays of the physical pages of different types are different;
the controller for performing the steps of the method of any one of claims 1-9.
12. A cloud server, comprising at least: a storage system, the storage system comprising: the solid state disk comprises a plurality of physical partitions, the physical partitions correspond to a plurality of virtual partitions of different types, the virtual partitions comprise at least one physical page of the same type, and the reading delays of the physical pages of different types are different;
the controller for performing the steps of the method of any one of claims 1-9.
13. A computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1 to 9.
CN202310218662.5A 2023-03-08 2023-03-08 Solid state disk access method, solid state disk, storage system and cloud server Active CN115934002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310218662.5A CN115934002B (en) 2023-03-08 2023-03-08 Solid state disk access method, solid state disk, storage system and cloud server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310218662.5A CN115934002B (en) 2023-03-08 2023-03-08 Solid state disk access method, solid state disk, storage system and cloud server

Publications (2)

Publication Number Publication Date
CN115934002A true CN115934002A (en) 2023-04-07
CN115934002B CN115934002B (en) 2023-08-04

Family

ID=86649372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310218662.5A Active CN115934002B (en) 2023-03-08 2023-03-08 Solid state disk access method, solid state disk, storage system and cloud server

Country Status (1)

Country Link
CN (1) CN115934002B (en)


Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100268907A1 (en) * 2009-04-16 2010-10-21 International Business Machines Corporation Selecting A Target Number of Pages for Allocation to a Partition
US20120226850A1 (en) * 2011-03-04 2012-09-06 Sony Corporation Virtual memory system, virtual memory controlling method, and program
CN102263818A (en) * 2011-07-07 2011-11-30 北京飞杰信息技术有限公司 Method for storing and reading file data, and apparatus thereof
CN103605623A (en) * 2013-10-31 2014-02-26 北京智谷睿拓技术服务有限公司 Memory device reading-writing control method and reading-writing control device
CN106326134A (en) * 2015-06-30 2017-01-11 华为技术有限公司 Flash Translation Layer (FTL) address mapping method and device
WO2017083170A1 (en) * 2015-11-13 2017-05-18 Microsoft Technology Licensing, Llc Latency-based energy storage device selection
CN107817945A (en) * 2016-09-13 2018-03-20 中国科学院微电子研究所 A kind of method for reading data and system for mixing internal storage structure
US20200174926A1 (en) * 2017-06-22 2020-06-04 Telefonaktiebolaget Lm Ericsson (Publ) Apparatuses and methods for allocating memory in a data center
CN110471618A (en) * 2018-05-10 2019-11-19 阿里巴巴集团控股有限公司 Fast side channel access stores equipment
CN108762681A (en) * 2018-05-31 2018-11-06 郑州云海信息技术有限公司 A kind of solid state disk and its reading/writing method and device
CN109002706A (en) * 2018-06-08 2018-12-14 中国科学院计算技术研究所 Data isolation guard method and system in a kind of process based on user class page table
CN111880750A (en) * 2020-08-13 2020-11-03 腾讯科技(深圳)有限公司 Method, device and equipment for distributing read-write resources of disk and storage medium
CN115437553A (en) * 2021-06-03 2022-12-06 美光科技公司 Tracking data locations to improve memory performance
CN115706711A (en) * 2021-08-05 2023-02-17 北京车和家信息技术有限公司 Data transmission method, data transmission device, data transmission equipment and storage medium
CN114153553A (en) * 2021-10-29 2022-03-08 郑州云海信息技术有限公司 High-availability control method and system for virtual machine and related components
CN113721862A (en) * 2021-11-02 2021-11-30 腾讯科技(深圳)有限公司 Data processing method and device
CN114020218A (en) * 2021-11-25 2022-02-08 建信金融科技有限责任公司 Mixed repeating data deleting and scheduling method and system
CN115037694A (en) * 2022-04-26 2022-09-09 上海地面通信息网络股份有限公司 Data transmission method and device, electronic equipment and storage medium
CN115421651A (en) * 2022-08-19 2022-12-02 阿里巴巴(中国)有限公司 Data processing method of solid state disk, electronic device and medium
CN115469816A (en) * 2022-11-02 2022-12-13 摩尔线程智能科技(北京)有限责任公司 Read-write switching method, device and equipment of memory and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
""基于数据访问特性的闪存读写冲突优化"", 《中国优秀硕士学位论文全文数据库电子期刊信息科技辑》, pages 137 - 96 *
张萍;郭玉东;: "虚拟外存管理技术研究", 计算机工程与设计, no. 20, pages 86 - 89 *
杨腾飞: ""对象云存储中分类分级数据的访问控制方法"", 《软件学报》, pages 2334 - 2353 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116360711A (en) * 2023-06-02 2023-06-30 杭州沃趣科技股份有限公司 Distributed storage processing method, device, equipment and medium
CN116360711B (en) * 2023-06-02 2023-08-11 杭州沃趣科技股份有限公司 Distributed storage processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN115934002B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN111506262B (en) Storage system, file storage and reading method and terminal equipment
TWI739859B (en) Method of operating storage device managing multi-namespace
US9645924B2 (en) Garbage collection scaling
TWI607306B (en) Readdressing memory for non-volatile storage devices
US9262313B2 (en) Provisioning in heterogenic volume of multiple tiers
US20140013032A1 (en) Method and apparatus for controlling writing data in storage unit based on nand flash memory
CN107908571B (en) Data writing method, flash memory device and storage equipment
CN114860163B (en) Storage system, memory management method and management node
US9116904B2 (en) File system operation on multi-tiered volume
CN110554999B (en) Cold and hot attribute identification and separation method and device based on log file system and flash memory device and related products
US20170075614A1 (en) Memory system and host apparatus
US11151052B2 (en) Reading sequential data from memory using a pivot table
US20140280397A1 (en) Heterogenic volume generation and use system
US20190004968A1 (en) Cache management method, storage system and computer program product
US20140372673A1 (en) Information processing apparatus, control circuit, and control method
KR20200110547A (en) Storage device and computing device including storage device
CN115421651A (en) Data processing method of solid state disk, electronic device and medium
CN115934002B (en) Solid state disk access method, solid state disk, storage system and cloud server
CN118051179A (en) Techniques for partition namespace storage using multiple partitions
US10073851B2 (en) Fast new file creation cache
KR101026634B1 (en) A method of data storage for a hybrid flash memory
CN110018987B (en) Snapshot creating method, device and system
US11972143B2 (en) Techniques for balancing write commands on solid state storage devices (SSDs)
US20200073572A1 (en) Storage system and storage control method
CN112783420A (en) Data deleting and garbage recycling method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant