CN115712388A - Data storage method, device and equipment of solid-state disk and storage medium - Google Patents


Info

Publication number
CN115712388A
CN115712388A
Authority
CN
China
Prior art keywords
data
queue
cache
page data
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211338727.1A
Other languages
Chinese (zh)
Inventor
许研
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN202211338727.1A
Publication of CN115712388A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the invention disclose a data storage method, apparatus, device, and storage medium for a solid-state disk. The method comprises the following steps: when a write request is received, judging whether a cache queue in the solid-state disk has free cache space; if the cache queue has no free cache space, reading page data in the cache queue in a set order; if the memory chip corresponding to the read page data is in a data erasing state, retaining the read page data in the cache queue and continuing to read the next page data; if the memory chip corresponding to the read page data is not in a data erasing state, writing the read page data to the memory chip and deleting it from the cache queue; and writing the data corresponding to the write request into the cache queue. This technical scheme avoids conflicts between write requests and the erase operations of the solid-state disk, thereby reducing the write response time of the solid-state disk as a whole.

Description

Data storage method, device and equipment of solid-state disk and storage medium
Technical Field
Embodiments of the invention relate to the technical field of data storage, and in particular to a data storage method, apparatus, device, and storage medium for a solid-state disk.
Background
Compared with a traditional mechanical disk, a solid-state disk has no seek time, a high read-write speed, low energy consumption, a small volume, and shock resistance, and has become a data storage solution that new data centers race to deploy. At present, a solid-state disk storage medium generally refers to a flash memory medium, which does not support immediately rewriting a storage unit to which data has already been written: the data in the storage unit must be erased before the unit can be written again. Therefore, when write operations continue on a solid-state disk that is fully written, a data erase operation, i.e., garbage collection, must be performed first, which inevitably causes some loss of write performance.
Disclosure of Invention
Embodiments of the present invention provide a data storage method, apparatus, device and storage medium for a solid-state disk, which can avoid a conflict between a write request and an erase operation of the solid-state disk, thereby reducing the write response time of the entire solid-state disk.
According to an aspect of the present invention, there is provided a data storage method of a solid-state disk, including:
when a write request is received, judging whether a cache queue in a solid-state disk has a free cache space;
if the cache queue does not have a free cache space, reading page data in the cache queue according to a set sequence;
if the memory chip corresponding to the read page data is in a data erasing state, keeping the read page data in the cache queue, and continuously reading the next page data;
if the memory chip corresponding to the read page data is not in a data erasing state, writing the read page data into the memory chip, and deleting the read page data from the cache queue;
and writing the data corresponding to the write request into the cache queue.
Optionally, the cache queue caches data using any one of the following policies: a least recently used (LRU) policy, a first-in-first-out policy, or a first-in-last-out policy.
Optionally, if the cache queue caches data using the LRU policy, the set order is from the head of the queue to the tail of the queue;
if the cache queue caches data using the first-in-first-out policy, the set order is from the head of the queue to the tail of the queue;
if the cache queue caches data using the first-in-last-out policy, the set order is from the tail of the queue to the head of the queue.
Optionally, after determining whether the buffer queue in the solid state disk has a free buffer space, the method further includes:
and if the cache queue has an idle cache space, writing the data corresponding to the write request into the cache queue.
Optionally, writing the data corresponding to the write request into the buffer queue includes:
dividing data corresponding to the write request into at least one page of data; wherein the size of the page data is a set value;
and sequentially writing the at least one page of data into the buffer queue.
Optionally, after continuing to read the next page of data, the method further includes:
and if the page data in the cache queue is read completely and the memory chips corresponding to the read page data are all in a data erasing state, re-reading the page data in the cache queue according to the set sequence.
Optionally, the obtaining of the memory chip corresponding to the read page data includes:
and acquiring the memory chip corresponding to the read page data through the page mapping relation.
According to another aspect of the present invention, there is provided a data storage device of a solid-state disk, including:
the cache space judgment module is used for judging whether a cache queue in the solid-state disk has an idle cache space or not when a write request is received;
the page data reading module is used for reading page data in the cache queue according to a set sequence if the cache queue does not have a free cache space;
the page data retention module is used for retaining the read page data in the cache queue and continuously reading the next page data if the memory chip corresponding to the read page data is in a data erasing state;
the page data writing module is used for writing the read page data into the memory chip and deleting the read page data from the cache queue if the memory chip corresponding to the read page data is not in a data erasing state;
and the data writing module is used for writing the data corresponding to the writing request into the cache queue.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to execute the data storage method of the solid-state disk according to any embodiment of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement a data storage method of a solid-state disk according to any one of the embodiments of the present invention when the computer instructions are executed.
The method comprises: when a write request is received, judging whether a cache queue in the solid-state disk has free cache space; if the cache queue has no free cache space, reading page data in the cache queue in a set order; if the memory chip corresponding to the read page data is in a data erasing state, retaining the read page data in the cache queue and continuing to read the next page data; if the memory chip corresponding to the read page data is not in a data erasing state, writing the read page data to the memory chip and deleting it from the cache queue; and writing the data corresponding to the write request into the cache queue. This technical scheme avoids conflicts between write requests and the erase operations of the solid-state disk, thereby shortening the write response time of the solid-state disk as a whole.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a data storage method of a solid-state disk according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example solid state disk storage system provided in accordance with an embodiment of the present invention;
fig. 3 is an exemplary diagram of a solid-state disk cache management process according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a data storage device of a solid-state disk according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a data storage method of a solid-state disk according to an embodiment of the present invention, where the present embodiment is applicable to a case where data of the solid-state disk is stored, and the method may be performed by a data storage device of the solid-state disk, where the data storage device of the solid-state disk may be implemented in a form of hardware and/or software, and the data storage device of the solid-state disk may be configured in an electronic device with data processing capability. As shown in fig. 1, the method includes:
step 110, when a write request is received, judging whether a buffer queue in the solid-state disk has a free buffer space.
A write request may be understood as a request to store data in the built-in cache of a solid-state disk. The write request may contain the data that needs to be written to the solid-state disk. The solid-state disk may be built from flash memory chips composed of floating-gate MOS transistors. The storage system of the solid-state disk may include a cache queue portion and a flash chip portion.
Specifically, in this embodiment, the data structure in the built-in cache of the solid-state disk may be set to cache data based on an LRU linked list, and the solid-state disk performs cache mapping in a page-level mapping manner; that is, the data of a single linked-list node is exactly one page in size. The solid-state disk may be composed of a plurality of flash memory chips, a single flash memory chip may include a plurality of physical blocks, and a single physical block may include a plurality of physical pages, where a physical page is the minimum access unit. The cache queue may be the cache queue of the solid-state disk, and the cache queue in this embodiment may use different policies for caching data. Cache space may be understood as space that can be used for caching data.
For example, as shown in fig. 2, the host system may include an application, a file system, and a block device driver, while the solid-state disk storage system may include a cache queue and a plurality of flash memory chips. Here, garbage collection can be understood as data erasure. The minimum unit of a data erase operation is a single physical block. The solid-state disk can be accessed in parallel in a multi-channel manner through data buses: one data bus corresponds to one channel, and one channel corresponds to a plurality of flash memory chips. Generally, the average access time of a host to memory is expressed by the following equation: AMAT = Hit_Time + Miss_Rate × Miss_Penalty, where AMAT is the average access time of the solid-state disk cache, Hit_Time is the hit time, Miss_Rate is the miss rate, and Miss_Penalty is the miss penalty. Generally, the garbage collection time of a solid-state disk falls within a certain range, so optimization usually targets the hit rate: the lower the miss rate, the lower the average access time and the latency, and the better the user experience of the solid-state disk. In this embodiment, it may instead be assumed that the miss rate Miss_Rate is unchanged, and the overall average access time may be reduced by reducing the miss penalty.
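The AMAT relation above is simple enough to check numerically. The sketch below uses illustrative numbers that are not from the patent; it shows how shrinking the miss penalty lowers the average access time even when the miss rate is fixed:

```python
def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time: AMAT = Hit_Time + Miss_Rate * Miss_Penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical figures: 0.05 ms hit time, 10% miss rate, and a 2 ms miss
# penalty when a write-back stalls behind garbage collection.
baseline = amat(0.05, 0.10, 2.0)   # 0.25 ms
# Halving the penalty (e.g. by steering write-backs away from erasing
# chips) lowers AMAT even though the miss rate is unchanged.
improved = amat(0.05, 0.10, 1.0)   # 0.15 ms
```

This mirrors the embodiment's argument: the hit rate is left alone, and latency is won back purely on the miss-penalty term.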
In this embodiment, when a write request is received, it is determined whether a buffer queue in the solid state disk has an empty buffer space.
In this embodiment, optionally, the cache queue caches data using any one of the following policies: a least recently used (LRU) policy, a first-in-first-out policy, or a first-in-last-out policy.
LRU is an abbreviation of Least Recently Used and is a commonly used page replacement algorithm that evicts the least recently used data first. The LRU policy in this embodiment may be understood as reading the least recently used data from the cache queue and storing it into a flash memory chip. The first-in-first-out policy may be understood as reading the data stored earliest in the cache queue into a flash memory chip first. The first-in-last-out policy may be understood as reading the data stored most recently in the cache queue into a flash memory chip first. In this embodiment, the set reading order differs for each caching policy.
In this embodiment, the cache queue may employ any one of a least recently used LRU policy, a first-in first-out policy, or a first-in last-out policy to cache data. By the arrangement, different strategies can be flexibly selected for caching data according to actual requirements.
In this embodiment, optionally, if the cache queue caches data using the LRU policy, the set order is from the head of the queue to the tail of the queue; if the cache queue caches data using the first-in-first-out policy, the set order is from the head of the queue to the tail of the queue; if the cache queue caches data using the first-in-last-out policy, the set order is from the tail of the queue to the head of the queue.
Here, the head of the queue holds the data stored earliest in the cache queue, and the tail of the queue holds the data stored most recently. In this embodiment, if the cache queue uses the LRU policy, the set order may be from the head of the queue to the tail of the queue; if the cache queue uses the first-in-first-out policy, the set order may be from the head of the queue to the tail of the queue; if the cache queue uses the first-in-last-out policy, the set order may be from the tail of the queue to the head of the queue. Because each caching policy has its own set order, a policy can be chosen reasonably according to actual requirements.
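The three policies differ only in the direction in which the queue is scanned. A minimal sketch, with function and policy names that are illustrative rather than taken from the patent:

```python
from collections import deque

def read_order(cache_queue: deque, policy: str) -> list:
    """Return the pages of the cache queue in the set write-back order.

    The queue is kept head-first: index 0 is the head (stored earliest),
    the last index is the tail (stored most recently).
    """
    if policy in ("lru", "fifo"):
        # LRU and first-in-first-out both scan head -> tail.
        return list(cache_queue)
    if policy == "filo":
        # First-in-last-out scans tail -> head.
        return list(reversed(cache_queue))
    raise ValueError(f"unknown policy: {policy}")

queue = deque(["D1", "D2", "D3"])   # D1 is the head, D3 the tail
```

For example, `read_order(queue, "filo")` yields the pages tail-first, matching the set order for the first-in-last-out policy.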
In this embodiment, optionally, after determining whether the buffer queue in the solid-state disk has a free buffer space, the method further includes: and if the cache queue has an idle cache space, writing the data corresponding to the write request into the cache queue.
In this embodiment, after determining whether the buffer queue in the solid-state disk has an idle buffer space, if the buffer queue has an idle buffer space, the data corresponding to the write request may be written into the buffer queue. According to the scheme, when the buffer queue is judged to have the free buffer space, the data corresponding to the write request can be directly written into the buffer queue, and the method is more convenient and faster.
In this embodiment, optionally, writing the data corresponding to the write request into the buffer queue includes: dividing data corresponding to the write request into at least one page of data; wherein the size of the page data is a set value; and sequentially writing the at least one page of data into the buffer queue.
Here, page data may be understood as data divided into page-sized chunks, and there may be multiple pages of data. The size of a page may be a set value, which can be configured according to actual requirements. Illustratively, the page size in this embodiment may be 4 KB.
In this embodiment, the data corresponding to the write request may be divided into at least one page of data of the set size, and the pages may be sequentially written into the cache queue. With this arrangement, the data is first divided into pages of the set size and then written into the cache queue, which is more convenient.
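The page-splitting step above can be sketched as follows. The 4 KB page size is the embodiment's example value; the function names are illustrative:

```python
PAGE_SIZE = 4 * 1024  # the 4 KB set value used as an example in the embodiment

def split_into_pages(data: bytes, page_size: int = PAGE_SIZE) -> list:
    """Divide a write request's payload into page-sized chunks.

    The final chunk may be shorter than a full page; a real flash
    translation layer would pad it or track its valid length.
    """
    return [data[i:i + page_size] for i in range(0, len(data), page_size)]

def enqueue_write(cache_queue: list, data: bytes) -> None:
    """Append each page to the tail of the cache queue, in order."""
    cache_queue.extend(split_into_pages(data))
```

A 9000-byte write, for instance, becomes two full 4096-byte pages plus one 808-byte remainder, appended to the queue tail in order.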
And step 120, if the cache queue does not have a free cache space, reading the page data in the cache queue according to a set sequence.
The set order may be understood as a preset order, which can also be configured according to actual requirements. Page data may be understood as the chunks of the set size obtained by dividing the write data. In this embodiment, if the cache queue has no free cache space, the page data in the cache queue may be read in the set order.
Step 130, if the memory chip corresponding to the read page data is in a data erasing state, retaining the read page data in the cache queue, and continuing to read the next page data.
Here, a correspondence between page data and memory chips can be established in advance. Data erasure may be understood as erasing the invalid data stored in a storage unit so that it becomes a free storage unit.
In this embodiment, optionally, the obtaining of the memory chip corresponding to the read page data includes: and acquiring the memory chip corresponding to the read page data through the page mapping relation.
The page mapping relation may be understood as a mapping between page data and memory chips, and may be a preset relation in the solid-state disk. In this embodiment, the memory chip corresponding to the read page data may be obtained through the page mapping relation. With this arrangement, the memory chip corresponding to the page data can be obtained, which facilitates data caching.
In this embodiment, if the memory chip corresponding to the read page data is in a data erasing state, the read page data may be retained in the cache queue, and then the next page data is continuously read.
In this embodiment, optionally, after continuing to read the next page data, the method further includes: and if the page data in the cache queue is read completely and the memory chips corresponding to the read page data are all in a data erasing state, re-reading the page data in the cache queue according to the set sequence.
In this embodiment, if all the page data in the cache queue has been read and the memory chips corresponding to the read page data are all in a data erasing state, the page data in the cache queue may be re-read in the set order. This arrangement helps avoid write conflicts, thereby reducing the cache miss penalty and the user-perceived latency.
Step 140, if the memory chip corresponding to the read page data is not in the data erasing state, writing the read page data into the memory chip, and deleting the read page data from the cache queue.
In this embodiment, if the memory chip corresponding to the read page data is not in a data erasing state, the read page data may be written into the memory chip and deleted from the cache queue.
And 150, writing the data corresponding to the write request into the cache queue.
In this embodiment, data corresponding to the write request may be written into the buffer queue.
An exemplary diagram of the solid-state disk cache management process in an embodiment of the present invention is shown in fig. 3. In this embodiment, after the user layer initiates a write request, the write data passes through the file system and block device layers to the built-in cache of the solid-state disk, where it is divided into pages (usually 4 KB) and added page by page to the tail of the LRU linked list, as shown in fig. 3. When the cache queue is full and another user write request arrives, a write-back operation of the built-in cache is triggered: the head node of the linked list, Dn (which has existed the longest), should be written to the flash memory medium, and the page mapping determines that Dn will be written to a physical block of chip 1. If that physical block in chip 1 is found to be undergoing garbage collection, the write would have to wait for garbage collection to complete. Instead, Dn can be temporarily retained in the cache queue, and data pages are searched in order from the head of the queue toward the tail, for example D4. If the page mapping shows that the chip to which D4 is to be written, say chip n, is not undergoing garbage collection, the data page D4 is written back to the flash memory medium and the linked-list node D4 is deleted, making room for the newly arrived write request. Similarly, if the chip to which D4 maps is also undergoing garbage collection, D4 is retained and the search continues, for example to D3, until the end of the queue is reached.
If the chips corresponding to all the data pages in the cache queue are undergoing garbage collection, the data page at the head of the queue is written back. This scheme avoids write conflicts, thereby reducing the cache miss penalty and the user-perceived latency. In this embodiment, the corresponding chip is looked up through the page mapping table, and whether that chip is in a garbage collection state determines whether the candidate cache node can be written back to the flash memory medium immediately; if it cannot, other nodes whose chips are idle are written back in sequence instead, so that waiting on data erasure is avoided and the latency cost of a cache miss is reduced.
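The write-back walk described above can be sketched as follows. This is a simplified model under stated assumptions: `page_to_chip` stands in for the page mapping table, `chips_in_gc` for the set of chips currently erasing, and the queue is an `OrderedDict` whose first entry is the head (LRU) node; none of these names come from the patent.

```python
from collections import OrderedDict

def write_back_one(cache_queue, page_to_chip, chips_in_gc):
    """Evict one page from a full cache queue, preferring a page whose
    target chip is not performing garbage collection.

    Scans head -> tail (the LRU set order). The first page mapped to an
    idle chip is "written back" and removed from the queue. If every
    chip is erasing, the head node is evicted anyway, as the text
    describes. Returns the id of the evicted page.
    """
    for page_id in cache_queue:                  # head-to-tail scan
        if page_to_chip[page_id] not in chips_in_gc:
            cache_queue.pop(page_id)             # write to chip, drop from cache
            return page_id
    head = next(iter(cache_queue))               # all chips erasing: fall back to head
    cache_queue.pop(head)
    return head
```

For example, with a queue Dn, D4, D3 where Dn and D4 map to chips under garbage collection and D3 does not, the scan skips Dn and D4 and writes back D3, matching the walkthrough in fig. 3.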
The method comprises: when a write request is received, judging whether a cache queue in the solid-state disk has free cache space; if the cache queue has no free cache space, reading page data in the cache queue in a set order; if the memory chip corresponding to the read page data is in a data erasing state, retaining the read page data in the cache queue and continuing to read the next page data; if the memory chip corresponding to the read page data is not in a data erasing state, writing the read page data to the memory chip and deleting it from the cache queue; and writing the data corresponding to the write request into the cache queue. This technical scheme avoids conflicts between write requests and the erase operations of the solid-state disk, thereby reducing the write response time of the solid-state disk as a whole.
Example two
Fig. 4 is a schematic structural diagram of a data storage apparatus of a solid-state disk according to a second embodiment of the present invention, where the apparatus can execute a data storage method of the solid-state disk according to any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the execution method. As shown in fig. 4, the apparatus includes:
a buffer space determining module 410, configured to determine whether a buffer queue in the solid state disk has an idle buffer space when a write request is received;
a page data reading module 420, configured to, if the cache queue does not have a free cache space, read page data in the cache queue according to a set order;
the page data retaining module 430 is configured to retain the read page data in the cache queue and continue to read the next page data if the memory chip corresponding to the read page data is in a data erasing state;
a page data writing module 440, configured to write the read page data into the memory chip and delete the read page data from the cache queue if the memory chip corresponding to the read page data is not in a data erasing state;
a first data writing module 450, configured to write data corresponding to the write request into the buffer queue.
Optionally, the cache queue caches data using any one of the following policies: a least recently used (LRU) policy, a first-in-first-out policy, or a first-in-last-out policy.
Optionally, if the cache queue caches data using the LRU policy, the set order is from the head of the queue to the tail of the queue;
if the cache queue caches data using the first-in-first-out policy, the set order is from the head of the queue to the tail of the queue;
if the cache queue caches data using the first-in-last-out policy, the set order is from the tail of the queue to the head of the queue.
Optionally, the apparatus further comprises: and a second data writing module, configured to, after determining whether a buffer queue in the solid state disk has an idle buffer space, write data corresponding to the write request into the buffer queue if the buffer queue has the idle buffer space.
Optionally, the first data writing module 450 is specifically configured to:
dividing data corresponding to the write request into at least one page of data; wherein the size of the page data is a set value;
and sequentially writing the at least one page of data into the buffer queue.
Optionally, the apparatus further comprises: a re-reading module, configured to re-read the page data in the cache queue in the set order if, after the next page data has been read, all the page data in the cache queue has been read and the memory chips corresponding to the read page data are all in a data erasing state.
Optionally, the page data retaining module 430 is specifically configured to:
and acquiring the memory chip corresponding to the read page data through the page mapping relation.
The data storage device of the solid-state disk provided by the embodiment of the invention can execute the data storage method of the solid-state disk provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE III
FIG. 5 illustrates a schematic diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or loaded from a storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data necessary for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The processor 11 performs the various methods and processes described above, such as the data storage method of a solid-state disk.
In some embodiments, the data storage method of the solid-state disk may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the data storage method of the solid-state disk described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the data storage method of the solid-state disk by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that, when executed by the processor, they cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (virtual private server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for storing data in a solid state disk, comprising:
when a write request is received, determining whether a cache queue in the solid-state disk has free cache space;
if the cache queue has no free cache space, reading page data in the cache queue according to a set sequence;
if the memory chip corresponding to the read page data is in a data erasing state, keeping the read page data in the cache queue and continuing to read the next page data;
if the memory chip corresponding to the read page data is not in a data erasing state, writing the read page data into the memory chip and deleting the read page data from the cache queue; and
writing the data corresponding to the write request into the cache queue.
2. The method of claim 1, wherein the cache queue caches data using any one of the following policies: a least-recently-used (LRU) policy, a first-in-first-out policy, or a first-in-last-out policy.
3. The method according to claim 2, wherein if the cache queue uses the LRU policy to cache data, the set sequence is from the head of the queue to the tail of the queue;
if the cache queue uses the first-in-first-out policy to cache data, the set sequence is from the head of the queue to the tail of the queue; and
if the cache queue uses the first-in-last-out policy to cache data, the set sequence is from the tail of the queue to the head of the queue.
4. The method of claim 1, further comprising, after determining whether the cache queue in the solid-state disk has free cache space:
if the cache queue has free cache space, writing the data corresponding to the write request into the cache queue.
5. The method according to claim 1 or 4, wherein writing the data corresponding to the write request into the cache queue comprises:
dividing the data corresponding to the write request into at least one page of data, wherein the size of each page of data is a set value; and
writing the at least one page of data into the cache queue in sequence.
6. The method of claim 1, further comprising, after continuing to read the next page of data:
if all page data in the cache queue has been read and the memory chips corresponding to the read page data are all in a data erasing state, re-reading the page data in the cache queue according to the set sequence.
7. The method according to claim 1, wherein obtaining the memory chip corresponding to the read page data comprises:
obtaining the memory chip corresponding to the read page data through a page mapping relation.
8. A data storage device for a solid state disk, comprising:
a cache space determining module, configured to determine, when a write request is received, whether a cache queue in the solid-state disk has free cache space;
a page data reading module, configured to read page data in the cache queue according to a set sequence if the cache queue has no free cache space;
a page data retention module, configured to keep the read page data in the cache queue and continue reading the next page data if the memory chip corresponding to the read page data is in a data erasing state;
a page data writing module, configured to write the read page data into the memory chip and delete the read page data from the cache queue if the memory chip corresponding to the read page data is not in a data erasing state; and
a data writing module, configured to write the data corresponding to the write request into the cache queue.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, enabling the at least one processor to perform the data storage method of a solid-state disk according to any one of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions which, when executed, cause a processor to implement the data storage method of the solid-state disk according to any one of claims 1 to 7.
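The set sequence defined in claims 2 and 3 (head-to-tail for the LRU and first-in-first-out policies, tail-to-head for the first-in-last-out policy) can be sketched as a small helper. This is an illustrative sketch only; the function and policy names are not taken from the patent.

```python
def scan_order(queue, policy):
    """Return the order in which cached page data is read, per claims 2-3.

    queue is a list ordered from the head of the queue to the tail;
    policy is one of "LRU", "FIFO", or "FILO" (illustrative names).
    """
    if policy in ("LRU", "FIFO"):
        return list(queue)            # head of the queue to the tail
    if policy == "FILO":
        return list(reversed(queue))  # tail of the queue to the head
    raise ValueError(f"unknown policy: {policy}")
```

For example, a queue `["a", "b", "c"]` (head first) is scanned as `["a", "b", "c"]` under FIFO or LRU, and as `["c", "b", "a"]` under FILO.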
CN202211338727.1A 2022-10-28 2022-10-28 Data storage method, device and equipment of solid-state disk and storage medium Pending CN115712388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211338727.1A CN115712388A (en) 2022-10-28 2022-10-28 Data storage method, device and equipment of solid-state disk and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211338727.1A CN115712388A (en) 2022-10-28 2022-10-28 Data storage method, device and equipment of solid-state disk and storage medium

Publications (1)

Publication Number Publication Date
CN115712388A true CN115712388A (en) 2023-02-24

Family

ID=85231617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211338727.1A Pending CN115712388A (en) 2022-10-28 2022-10-28 Data storage method, device and equipment of solid-state disk and storage medium

Country Status (1)

Country Link
CN (1) CN115712388A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117075822A (en) * 2023-10-17 2023-11-17 苏州元脑智能科技有限公司 Data reading and writing method, device, equipment and storage medium
CN117075822B (en) * 2023-10-17 2024-02-06 苏州元脑智能科技有限公司 Data reading and writing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US10599637B2 (en) Granular buffering of metadata changes for journaling file systems
CN110275841B (en) Access request processing method and device, computer equipment and storage medium
EP2542989B1 (en) Buffer pool extension for database server
EP2478441B1 (en) Read and write aware cache
CN108268219B (en) Method and device for processing IO (input/output) request
US20150143045A1 (en) Cache control apparatus and method
CN105677580A (en) Method and device for accessing cache
CN111338561B (en) Memory controller and memory page management method
US11593268B2 (en) Method, electronic device and computer program product for managing cache
CN113127382A (en) Data reading method, device, equipment and medium for additional writing
CN112540731A (en) Data additional writing method, device, equipment, medium and program product
US9087392B2 (en) Techniques for efficient GPU triangle list adjacency detection and handling
CN115712388A (en) Data storage method, device and equipment of solid-state disk and storage medium
US20170351609A1 (en) Storage drive dependent track removal in a cache for storage
CN113094392A (en) Data caching method and device
US11074189B2 (en) FlatFlash system for byte granularity accessibility of memory in a unified memory-storage hierarchy
CN107748649B (en) Method and device for caching data
CN111858393B (en) Memory page management method, memory page management device, medium and electronic equipment
US10635594B1 (en) Dynamically redistribute cache space based on time savings
CN114528229A (en) Cache data access method and device and electronic equipment
CN110658999B (en) Information updating method, device, equipment and computer readable storage medium
US10592420B1 (en) Dynamically redistribute cache space with min-max technique
CN113742131B (en) Method, electronic device and computer program product for storage management
CN115964391A (en) Cache management method, device, equipment and storage medium
CN110502458B (en) Command queue control method, control circuit and address mapping equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination