Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
For ease of understanding, several terms referred to in this application are first explained below.
Physical address (Flash Physical Address, FPA): information is stored in memory in units of bytes, and each byte unit is assigned a unique memory address, called a physical address, so that information can be correctly stored and retrieved.
Logical address (Logic Translation Unit, LTU): the address generated by the CPU; from the viewpoint of an application program in the computer architecture, it refers to the address of a memory cell, a storage element, or a network host.
The scheme provided by the embodiment of the application relates to storage and other technologies, and is specifically described through the following embodiments.
FIG. 1 shows a schematic block diagram of a computer system according to an embodiment of the present disclosure. The computer system 100 includes a host 110 and a solid state disk. The solid state disk includes a main control chip 120 and a flash memory chip 130, and the storage medium of the flash memory chip 130 may be a flash memory chip array.
The host interface 122 of the main control chip 120 is connected to the host 110 to transmit instructions. The processor 124 may be a front-end processor of a solid state disk, and is connected to the host interface 122, the memory controller 126, and the cache module 128, where the cache module 128 is used to store an address mapping table and the like.
The memory controller 126 of the main control chip 120 is connected to the flash memory chip 130, and performs data access operation on corresponding memory cells of the flash memory chip 130 according to the physical address provided by the processor 124.
Flash memory chip 130 includes an array of flash memory chips. To improve data read/write performance, the memory controller 126 of the main control chip 120 may read and write the flash memory chips of the flash memory chip 130 through multiple channels CH0 and CH1. Each channel connects a set of flash memory chips; each flash memory chip includes a plurality of physical blocks (e.g., physical block 131 and physical block 132 in fig. 1), and each physical block includes a plurality of physical pages. Data access operations on the flash memory chips include read, write, and erase. Because of the physical characteristics of flash memory, the basic unit of a data operation is, for example, a physical page, and the basic unit of an erase operation is, for example, a physical block; one physical page is, for example, 8KB or 4KB in size, one physical block has 256 physical pages, and one logical unit number LUN has 4096 physical blocks.
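The geometry figures above (8KB pages, 256 pages per block, 4096 blocks per LUN) can be checked with a short worked example. The constants come from the text; the flat-index decomposition scheme and function names are illustrative assumptions, not part of the disclosure.

```python
# Worked example of the flash geometry described above: an 8 KB physical
# page, 256 pages per physical block, and 4096 blocks per LUN. The
# decomposition of a flat page index into (block, page) is illustrative.
PAGE_SIZE = 8 * 1024        # bytes per physical page
PAGES_PER_BLOCK = 256       # physical pages per physical block
BLOCKS_PER_LUN = 4096       # physical blocks per logical unit number

# Capacity of one LUN: 8 KB * 256 * 4096 = 8 GiB
LUN_CAPACITY = PAGE_SIZE * PAGES_PER_BLOCK * BLOCKS_PER_LUN

def locate_page(flat_page_index: int) -> tuple:
    """Split a flat page index within a LUN into (block, page-in-block)."""
    return divmod(flat_page_index, PAGES_PER_BLOCK)
```

For instance, flat page index 257 falls on the second page of the second physical block.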
When the host 110 performs a data operation, the main control chip 120 receives an instruction from the host 110. The main control chip 120 maps logical addresses in the instruction to physical addresses, which characterize locations in the flash memory chip 130.
Specifically, data in the flash memory chip, i.e., the memory, is indexed in units of 4KB (or another size), and the address of data in the flash memory chip is called a physical address. The host side, i.e., the sender of write and read requests, uses the index as a logical address when accessing data. The correspondence between a physical address and a logical address is called an address mapping relation, and every 1024 address mapping relations form a mapping table. Because the mapping table occupies a certain amount of memory space, it must also be written to the flash memory, and the corresponding mapping data must be modified every time 4KB of data is written. Therefore, if mapping data is written once for every 4KB of data written, the address mapping relations are written frequently during data writing, which reduces the performance of the SSD.
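The indexing just described can be sketched in a few lines. The 4KB index unit and the 1024 mappings per table come from the text; the function names are illustrative assumptions.

```python
# Sketch of the logical-to-physical indexing described above: data is
# addressed in 4 KB index units, and every 1024 address mapping relations
# form one mapping table. Names are illustrative, not from the disclosure.
INDEX_UNIT = 4 * 1024          # 4 KB per index unit
ENTRIES_PER_TABLE = 1024       # mapping relations per mapping table

def logical_index(byte_offset: int) -> int:
    """Logical address (LTU) of the 4 KB unit containing byte_offset."""
    return byte_offset // INDEX_UNIT

def table_id(ltu: int) -> int:
    """Which mapping table a given logical index falls into."""
    return ltu // ENTRIES_PER_TABLE
```

A write that modifies a single 4KB unit therefore dirties exactly one entry of one mapping table, which is why per-4KB mapping writes are costly.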
In order to solve the problems in the above-described drawbacks, each step of the data request processing method in the present exemplary embodiment will be described in more detail with reference to the drawings and examples.
Fig. 2 shows a flow chart of a data request processing method in an embodiment of the disclosure. The method provided by the embodiments of the present disclosure may be performed by any storage device having computing processing capabilities.
As shown in fig. 2, a main control chip according to one embodiment of the present disclosure performs a data request processing method, including the steps of:
step S202, in response to a writing request for target data, writing management information of the target data into a linear data structure, and caching the management information into a first cache module based on the linear data structure, wherein if the writing of the linear data structure is completed, an address mapping relation generated based on the linear data structure is cached into an address mapping table of a second cache module.
Wherein, the linear data structure refers to a data structure for sequentially storing data, and in the present disclosure, MCMD is used to refer to the linear data structure.
Preferably, the linear data structure is in particular a queue structure.
Management information refers to information for managing the target data, including but not limited to: the logical address LTU, the physical address FPA, and the cache number assigned to each 4KB of target data when it is cached in the main control chip.
The first cache module may specifically be an area of the main control chip or of a cache chip dedicated to storing management information.
The address mapping relation generated based on the linear data structure is specifically an address mapping relation between a physical address and a logical address of the flash memory chip.
The second cache module may specifically be an area of the main control chip or of a cache chip dedicated to storing the address mapping table.
Specifically, after management information is written in one linear data structure, the address mapping relation generated based on the linear data structure is cached to a second cache module of the main control chip.
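The linear data structure and its data units can be sketched as follows. The MCMD/SCMD names come from the disclosure; the fields, capacity parameter, and method names are illustrative assumptions.

```python
from collections import deque
from dataclasses import dataclass

# Sketch of the linear data structure (MCMD) described above: a queue of
# data units (SCMD), each holding the management information of one 4 KB
# write (logical address LTU, physical address FPA, cache number).

@dataclass
class Scmd:               # one data unit of management information
    ltu: int              # logical address
    fpa: int              # physical address
    cache_no: int         # cache number in the data cache region

class Mcmd:
    def __init__(self, capacity: int):
        self.capacity = capacity      # the "first number" of data units
        self.units = deque()

    def write(self, scmd: Scmd) -> bool:
        """Append one unit; return True once the structure is full."""
        self.units.append(scmd)
        return len(self.units) == self.capacity

    def mappings(self):
        """Extract the batch of (LTU, FPA) address mapping relations."""
        return [(u.ltu, u.fpa) for u in self.units]
```

Only when `write` reports the structure full is the batch of mapping relations handed to the second cache module, which is the batching that reduces mapping-table writes.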
Step S204, in response to a read request for the target data, reading the target data based on the state of the management information, wherein the state of the management information is determined based on whether the address mapping relation is cached in the second cache module.
The states of the address mapping relation in the second cache module include unwritten, written, erased, and the like. If the address mapping relation is in the written state, it is determined that the address mapping relation is cached in the second cache module; if it is in the unwritten or erased state, it is determined that the address mapping relation is not cached in the second cache module.
Reading the target data based on the state of the management information specifically means determining, based on the state of the management information, from which area of the storage device the target data is to be read.
Specifically, there may be multiple linear data structures, and each linear data structure may correspond to a group of address mapping relations. The arrangement of linear data structures thus enables batch caching of the management information of target data, which in turn enables batch writing of the address mapping relations into the second cache module. If a read instruction is received during the writing process, the manner of reading the target data is determined by detecting the state of the management information, so that the data read is indeed the target data.
In this embodiment, during the writing of target data, the management information of the target data is cached in the first cache module of the main control chip, and the address mapping relation generated from the management information is cached in the second cache module of the main control chip. When the target data needs to be read, the state of the management information is determined from the state of the address mapping relation in the second cache module, and how to read the target data is decided from the state of the management information. This ensures consistency between the data to be read and the written data while allowing fast reads from the cache, thereby achieving accurate and reliable reading of the target data.
Furthermore, caching the address mapping relations generated from the linear data structure in the second cache module reduces the frequency with which the address mapping relations are written and modified in the second cache module, and avoids frequent writing and modification of the flash memory chip, which helps optimize the performance of the solid state disk.
As shown in fig. 3, in one embodiment of the disclosure, a specific implementation of step S202, in which, in response to a write request for target data, a linear data structure is adopted to cache the management information of the target data to the first cache module of the main control chip, includes:
Step S302, in response to the write request, entering the first stage of writing the target data: the target data is written into the data cache region, and a corresponding cache number is obtained.
The cache number refers to the number assigned to each 4KB of target data when it is cached in the cache region.
Step S304, establishing a first index structure based on the cache number and the physical address of the target data.
The first index structure is an index structure established with the physical address as the index key and the cache number as the index target.
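A minimal sketch of such an index, keyed by physical address with the cache number as the target, is given below; a hash index (here, a Python dictionary) is one natural realization, and the function names are illustrative assumptions.

```python
# Sketch of the first index structure described above: a hash index keyed
# by physical address (FPA) whose target is the cache number. Names are
# illustrative, not from the disclosure.
first_index = {}   # FPA -> cache number

def index_write(fpa: int, cache_no: int) -> None:
    """Record the cache number of a 4 KB unit under its physical address."""
    first_index[fpa] = cache_no

def lookup_cache_no(fpa: int):
    """Return the cache number for a physical address, or None on a miss."""
    return first_index.get(fpa)
```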
In step S306, the linear data structure includes a first number of data units; management information is written into the data units of the linear data structure based on the first index structure, and management information that has only been written into the data units is recorded as being in a first state.
Wherein, in the present disclosure, SCMD is used to denote a data unit, i.e., one MCMD is made up of a first number of SCMDs.
The first stage refers to the stage of writing management information into the linear data structure, during which the address mapping relation has not yet been written into the address mapping table in the second cache module.
In the first stage, the state of the management information is configured based on the state of the corresponding address mapping relation in the second cache module. Specifically, the first state refers to the state in which the management information has been written into the first cache module, and is called the inserted state.
The first number may be specifically configured based on a size of the first cache module.
In this embodiment, during the writing of target data, the first stage of the write is determined based on the transfer states of the management information in the first and second cache modules, and the state of the management information in the first stage is determined to be the first state. In this state, the cached target data corresponding to the management information is guaranteed to be in the data cache region. Therefore, if a read command is received in the first state and hits the cache, the target data can be read directly from the data cache region, ensuring both consistency between the read data and the written target data and a quick response to the read command.
As shown in fig. 4, in an embodiment of the present disclosure, a specific implementation of caching the address mapping relation generated based on the linear data structure to the second cache module of the main control chip includes:
step S402, if the writing of the linear data structure is completed, entering a second stage of writing the target data, and establishing a second index structure based on the physical address and the corresponding logical address.
The second stage refers to a stage of writing the corresponding address mapping relation into the second cache module after writing management information into the linear data structure.
The second index structure is an index structure established by taking the logical address as an index key value and taking the physical address as an index target.
In step S404, a group of address mapping relations to be written is extracted from the linear data structure based on the second index structure.
Step S406, writing the address mapping relations to be written into the address mapping table; during this writing of the address mapping relations, the target data is written into the flash memory chip, and management information whose address mapping relation has been written into the second cache module is recorded as being in a second state.
In the second stage, the state of the management information is configured based on the state of the corresponding address mapping relation in the second cache module. Specifically, the second state refers to the state of the management information while the address mapping relation is being written into the address mapping table, and can be understood as the upgrade modification state.
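The second stage described above, extracting the batch of mapping relations from a full linear structure via the second index structure and writing them into the address mapping table at once, can be sketched as follows; the function and variable names are illustrative assumptions.

```python
# Sketch of the second stage described above: build the second index
# structure (logical address LTU as key, physical address FPA as target)
# from a full linear structure, and merge the whole batch into the address
# mapping table of the second cache module in one operation.
def flush_mcmd_to_mapping_table(mcmd_entries, mapping_table: dict) -> dict:
    """mcmd_entries: list of (ltu, fpa) pairs from a full linear structure."""
    second_index = {ltu: fpa for ltu, fpa in mcmd_entries}  # LTU -> FPA
    mapping_table.update(second_index)   # one batched write, not per-4KB
    return second_index
```

The point of the batch is that the mapping table is touched once per full linear structure rather than once per 4KB write.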
In this embodiment, during the writing of target data, the second stage of the write is determined based on the transfer states of the management information in the first and second cache modules, and the state of the management information in the second stage is determined to be the second state. In this state, the cached target data corresponding to the management information has started to be written into the flash memory chip of the storage device. Therefore, if the management information is detected to be in the second state and a read command hits the cache, i.e., if the target data were read from the data cache region, it would be difficult to guarantee that the data read is the target data to be read. In this state, to ensure the consistency of data reading and writing, the target data can instead be read from the flash memory chip, ensuring reliable data reading.
In one embodiment of the present disclosure, the first index structure and the second index structure are both hash index structures.
In addition, the first index structure and the second index structure may also be binary search tree or B-tree structures, etc.
As shown in fig. 5, a main control chip according to another embodiment of the present disclosure performs a data request processing method, including the steps of:
in step S502, in response to a write request for the target data, the management information of the target data is cached to the first cache module of the main control chip by adopting a linear data structure.
Step S504, the address mapping relation generated based on the linear data structure is cached to a second cache module of the main control chip, so that the access operation to the address mapping table is completed based on the address mapping relation in the second cache module.
In step S506, the address mapping table meeting the writing condition is written into the flash memory chip of the storage device.
Specifically, the address mapping relations generated based on the linear data structure are cached to the second cache module of the main control chip, i.e., within the main control chip. When the changed address mapping relations meet the writing condition, they are written into the flash memory chip at one time in a centralized manner, which reduces the access frequency to the flash memory chip.
The writing condition means that the address mapping relations have accumulated to a certain quantity; that is, when the number of address mapping relations is detected to reach a second number, the address mapping table formed from the second number of address mapping relations is written into the flash memory chip.
In addition, it may also be determined whether to write the address mapping table to the flash memory chip of the memory device based on the detection of the write state of the linear data structure.
In this embodiment, whether the address mapping table in the second cache module has reached the writing condition is detected; when it has, the generated address mapping table is written into the flash memory chip at one time, reducing the write frequency of the flash memory chip and thereby optimizing the performance of the storage device that includes the flash memory chip.
As shown in fig. 6, in one embodiment of the present disclosure, step S506, a specific implementation of writing an address mapping table reaching a writing condition to a flash memory chip of a storage device includes:
in step S602, in response to the write request, the target data is written into the data buffer, and the corresponding buffer number is determined.
In step S604, a first index structure is established based on the physical address of the target data and the corresponding cache number.
In step S606, management information is sequentially written to the first number of data units based on the first index structure in the first stage, the management information being in the first state in the first stage.
In step S608, in the second phase, an operation of writing the target data into the flash memory chip of the storage device is performed, and the management information enters the second state in the second phase.
Step S610, entering the third stage of writing the target data: when the number of address mapping relations cached in the address mapping table is detected to reach a second number, the writing condition is determined to be reached, and the address mapping table is written into the flash memory chip.
The third stage refers to the stage of writing the address mapping table into the flash memory chip at one time; in this stage, the writing of the target data into the flash memory chip has been completed.
Step S612, deleting the corresponding management information in the first buffer module and the target data in the data buffer.
In addition, in the third stage, the corresponding management information in the first cache module and the target data in the data cache region may be deleted. After deletion, even if a read command hits the cache, the data in the data cache region cannot be used, so the target data must be read from the flash memory chip.
Correspondingly, those skilled in the art will understand that the state of the management information enters the deleted state at this point.
In this embodiment, based on the write operation on the target data, the writing process is divided into three stages. The first stage is the stage of writing the management information of the target data into the first cache module. The second stage is the stage of caching the address mapping relation generated from the linear data structure into the second cache module, while the target data is written into the flash memory chip. The third stage is the stage of writing the generated address mapping table into the flash memory chip and deleting the corresponding cached entries. The state of the management information is configured differently in each stage, so that when a read request for the target data is received, it can be determined from the state of the management information whether the target data is read from the data cache region or from the flash memory chip, ensuring that the data read is the target data.
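The stage-dependent states and the read-source decision they imply can be sketched as a small state machine; the enum member names and the helper function are illustrative assumptions over the inserted / upgrade-modification / deleted states named in the text.

```python
from enum import Enum, auto

# Sketch of the management-information states across the three write stages
# described above. Member names are illustrative stand-ins for the inserted,
# upgrade-modification, and deleted states of the disclosure.
class MgmtState(Enum):
    INSERTED = auto()    # first state: a read hit may use the data cache
    UPGRADING = auto()   # second state: data is moving to flash; cache unsafe
    DELETED = auto()     # third stage: cache entries removed

def read_source(state: MgmtState) -> str:
    """Where a read request should be served from, per the text."""
    return "data_cache" if state is MgmtState.INSERTED else "flash_chip"
```

Only the inserted state permits the fast path through the data cache region; both later states fall back to the flash memory chip.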
As shown in fig. 7, in one embodiment of the present disclosure, a specific implementation of step S204, in which the target data is read based on the state of the management information in response to a read request for the target data, includes:
in step S702, in response to the read request, a target logical address corresponding to the read request is determined.
In step S704, the target logical address is used as the first index, and the corresponding target physical address is queried in the second cache module based on the second index structure.
The implementation of querying the corresponding target physical address in the second cache module based on the second index structure specifically includes: performing a hash search based on the target logical address to query the target physical address.
If the target physical address is queried, reading target data based on the state of the target physical address and the management information, wherein the method specifically comprises the following steps:
in step S706, the target physical address is used as the second index, and the corresponding target cache number is retrieved from the first cache module based on the first index structure.
The implementation of searching for the corresponding target cache number in the first cache module based on the first index structure specifically includes: performing a hash search based on the target physical address to query the target cache number.
Reading target data based on the state of the target cache number and the management information, specifically comprising:
in step S708, if the target cache number is detected, the state of the management information is detected.
One specific way to detect the status of the management information includes: the status of the management information is detected based on the phase flag, which is determined based on the processing timing of the management information.
Another specific way of detecting the status of the management information includes: based on the state flag detecting the state of the management information, the state flag may be determined by adding a flag in the corresponding state.
In step S710, if the management information is detected to be in the first state, the target data is read from the data buffer based on the target buffer number.
If the management information is detected to be in the first state, the target data is guaranteed to be in the data cache region, so the target data is read from the data cache region based on the target cache number. This ensures a fast read of the target data and consistency between the objects of the write and read operations.
In step S712, if the management information is detected to be in the second state, the target data is read from the flash memory chip.
If the management information is detected to be in the second state, the cached target data corresponding to the management information has started to be written into the flash memory chip of the storage device. In this state, if a read command hits the cache, i.e., if the target data were read from the data cache region, it would be difficult to guarantee that the data read is the target data to be read; therefore the target data must be read from the flash memory chip to ensure reliable reading.
In one embodiment of the present disclosure, further comprising:
in step S714, if the target physical address is not found, the target data is read from the flash memory chip.
If the target physical address is not queried, this indicates that the writing of the target data has entered the third stage.
In one embodiment of the present disclosure, reading the target data based on the target cache number and the state of the management information further includes:
in step S716, if the target cache number is not detected, the target data is read from the flash memory chip.
If the target physical address is not queried or if the target cache number is not detected, the target data needs to be read from the flash memory chip so as to ensure the reliability of target data reading.
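The full read-path decision of steps S702 to S716 can be sketched as one function: logical address to physical address via the second index, physical address to cache number via the first index, then the state check. The function signature and the string labels are illustrative assumptions.

```python
# Sketch of the read path described in steps S702-S716: look up the target
# physical address by logical address (second index structure), then the
# cache number by physical address (first index structure), then decide
# the read source from the state of the management information.
def handle_read(ltu, second_index, first_index, states):
    fpa = second_index.get(ltu)
    if fpa is None:
        return ("flash_chip", None)      # third stage: slow read (S714)
    cache_no = first_index.get(fpa)
    if cache_no is None or states.get(fpa) != "inserted":
        return ("flash_chip", fpa)       # cache unusable: slow read (S716)
    return ("data_cache", cache_no)      # fast read from data cache (S710)
```

Only a hit on both indexes with the management information still in the inserted state takes the fast path; every other outcome falls through to the flash memory chip.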
In this embodiment, for a read operation on the target data, the state of the management information is detected, i.e., whether it is in the first, second, or third stage. The first stage is the stage of writing the management information of the target data into the first cache module; the second stage is the stage of caching the address mapping relation generated from the linear data structure into the second cache module; the third stage is the stage of writing the generated address mapping table and the target data into the flash memory chip. Since the state of the management information is configured differently in each stage, when a read request for the target data is received, it can be determined from the state of the management information whether the target data is read from the data cache region or from the flash memory chip, ensuring that the data read is the target data.
In one embodiment of the present disclosure, the management information written by each data unit corresponds to 4KB of target data.
Accordingly, when the written address mapping relations have accumulated to 20488, the generated address mapping table is written into the flash memory chip NAND.
As shown in fig. 8, a data request processing method according to another embodiment of the present disclosure specifically includes:
In step S802, in response to a write request from the host side for the target data, the cache number (held in a data unit SCMD) is indexed by the physical address FPA to obtain a first index structure, thereby completing the caching of the management information of one 4KB of data into the first cache module.
The first index structure is indexed by hash according to the physical address FPA.
The management information includes the address mapping relation between the FPA and the SCMD, and the address mapping relation between the FPA and the LTU.
In step S804, management information is written into all of the data units SCMD in the linear data structure MCMD.
The management information is in the first state, i.e., the inserted state: the management information in the SCMDs has been written into the first cache module, and at this point a read that hits the write can be served from within the main control chip.
Step S806, generating a second index structure from the address mapping relation between logical and physical addresses, writing the address mapping relations in the MCMD structure that is full of management information into the second cache module based on the second index structure, and simultaneously starting to write the target data in the data cache region into the flash memory chip.
The management information is in the second state, i.e., the upgrade modification state; in this state, if a read request hits the write, the data in the data cache region is not safe to use.
In step S808, the writing of the 4KB data into the flash memory is completed, and the management information in the first cache module and the target data in the data cache region are removed.
At this time, the data in the data cache region is invalid; even if a read request hits the write, the data in the cache cannot be used.
In step S810, when the address mapping relation is accumulated to the second number, the generated address mapping table is written into the flash memory chip of the storage device.
The operations of steps S802 to S810 are repeatedly performed based on the write operation of the new target data.
In step S812, the logical address provided by the host is obtained in response to the read request of the host for the target data.
In step S814, a hash search is performed in the second cache module according to the logical address, and the physical address FPA is searched.
In step S816, if the physical address FPA is searched, the cache number is searched with the physical address FPA as an index.
In step S818, if the physical address FPA is not searched, 4KB data is read from the flash memory chip NAND, so as to realize slow reading.
Step S820, if the cache number is searched, the target data is read from the data cache area, and quick reading is realized.
In step S822, if the cache number is not found, or if it is found but the management information is not in the inserted state, the 4KB data is read from the flash memory chip NAND, realizing a slow read.
It is noted that the above-described figures are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
A data request processing apparatus 900 according to this embodiment of the present invention is described below with reference to fig. 9. The data request processing apparatus 900 shown in fig. 9 is merely an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
The data request processing apparatus 900 is embodied in the form of a hardware module. The components of the data request processing apparatus 900 may include, but are not limited to: a writing module 902, configured to respond to a writing request for target data, write management information of the target data into a linear data structure, and cache the management information into a first cache module based on the linear data structure, where if writing of the linear data structure is completed, an address mapping relationship generated based on the linear data structure is cached into an address mapping table of a second cache module; and a reading module 904, configured to read the target data based on the state of the management information in response to a read request for the target data, where the state of the management information is determined based on whether the address mapping relationship is cached in the second cache module.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in this specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the invention described in the "exemplary method" section of this specification.
A program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps must be performed, in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.