CN112540933A - Cache reading and writing method and device and electronic equipment - Google Patents

Cache reading and writing method and device and electronic equipment

Info

Publication number
CN112540933A
CN112540933A (application CN202011350425.7A)
Authority
CN
China
Prior art keywords
cache
read
data
level
caches
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011350425.7A
Other languages
Chinese (zh)
Inventor
吴业亮
朱正东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huayun Data Holding Group Co Ltd
Original Assignee
Huayun Data Holding Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huayun Data Holding Group Co Ltd filed Critical Huayun Data Holding Group Co Ltd
Priority to CN202011350425.7A priority Critical patent/CN112540933A/en
Publication of CN112540933A publication Critical patent/CN112540933A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 - Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811 - Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to the technical field of storage, and in particular to a cache read-write method, a cache read-write apparatus and an electronic device. The method includes obtaining a data read-write request and the processing state of each level of cache, where the processing state includes queue depth and IO delay, and determining, according to the processing state of each level of cache, a target cache corresponding to the data read-write request so as to perform the corresponding read-write operation on the target cache. Because the target cache is determined from the processing state of each level of cache after the data read-write request is obtained, the method avoids always reading and writing data through the same cache, so the performance of each level of cache can be utilized to the greatest extent and the read-write efficiency of the cache is improved.

Description

Cache reading and writing method and device and electronic equipment
Technical Field
The invention relates to the technical field of storage, in particular to a cache reading and writing method and device and electronic equipment.
Background
A cache is a buffer area used for data exchange; the concept originally comes from the relationship between the memory and the CPU. When a piece of hardware needs to read data, it first searches the cache for the required data; if the data is found, it is used directly, and if not, it is fetched from the memory. Because a cache runs much faster than memory, the role of the cache is to help the hardware run faster.
Fig. 1 shows the principle of reading data through a cache that is commonly used in the prior art. As shown in fig. 1, to improve performance during a read, the data is first read from the high-speed device; if the data exists on the high-speed device, success is returned, and if it does not, the request is sent on to the low-speed device. Fig. 2 shows the principle of writing data through a cache that is commonly used in the prior art. As shown in fig. 2, to improve performance, a layer of cache is added above the low-speed disk: when writing data, the client first writes the data into the high-speed device and, if the write succeeds, success is returned; when the high-speed device reaches its threshold, it flushes infrequently used data down to the low-speed device.
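For illustration only, the conventional flow of fig. 1 and fig. 2 can be summarized in the following minimal Python sketch; the device objects and the method names get, put, used_capacity and evict_cold are assumptions introduced here, not interfaces defined by the application.

```python
def read(key, high_speed, low_speed):
    # Prior-art read path (fig. 1): try the high-speed device first and
    # fall back to the low-speed device only on a miss.
    data = high_speed.get(key)
    if data is not None:
        return data
    return low_speed.get(key)


def write(key, value, high_speed, low_speed, capacity_threshold):
    # Prior-art write path (fig. 2): always write to the high-speed device;
    # once it reaches its threshold, flush infrequently used data downward.
    high_speed.put(key, value)
    if high_speed.used_capacity() >= capacity_threshold:
        for cold_key, cold_value in high_speed.evict_cold():
            low_speed.put(cold_key, cold_value)
```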
However, for the above read process, the data is read from the high-speed device first and only read from the low-speed device on a miss, so the process is still relatively slow for service scenarios with high performance requirements. For the write process, suppose the bandwidth of the high-speed device is 1000 and the bandwidth of the low-speed device is 200. If the client momentarily produces an IO peak of 1100 while the high-speed device offers only 1000, the client fills up the storage bandwidth of the high-speed device and writes to it become very slow. Meanwhile, because the high-speed device has not yet reached its flush condition, the low-speed device receives no data IO, so its bandwidth of 200 is wasted and the performance of each device is not fully utilized.
Disclosure of Invention
In view of this, embodiments of the present invention provide a cache read-write method, an apparatus, and an electronic device, so as to solve the problem of low cache read-write efficiency.
According to a first aspect, an embodiment of the present invention provides a cache read-write method, where the method includes:
acquiring a data read-write request and processing states of caches at all levels;
and determining a target cache corresponding to the data read-write request according to the processing state of each level of cache so as to perform corresponding read-write operation on the target cache.
According to the cache read-write method provided by the embodiment of the invention, after the data read-write request is obtained, the processing state of each level of cache is used to determine the target cache, which avoids always reading and writing data through the same cache, so the performance of each level of cache can be utilized to the greatest extent and the read-write efficiency of the cache is improved.
With reference to the first aspect, in the first implementation manner of the first aspect, when the data read/write request is a read data request, the determining, according to the processing state of each level of cache, a target cache corresponding to the data read/write request to perform corresponding read/write operation on the target cache includes:
judging whether the processing states of all the caches meet a first preset condition or not;
and when the processing states of all the caches meet the first preset condition, determining that the target cache corresponding to the read data request is all the caches, so as to read data from all the caches simultaneously.
According to the cache reading and writing method provided by the embodiment of the invention, when the processing states of all the caches meet the first preset condition, data can be read from all the caches at different levels at the same time, the bandwidth of all the caches at different levels is fully utilized in the data reading process, and the data reading efficiency is improved.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the determining, according to the processing state of each level of cache, a target cache corresponding to the data read-write request to perform corresponding read-write operation on the target cache further includes:
and when caches with processing states which do not meet the first preset condition exist in all the caches, sequentially determining the target caches based on the priority of each level of cache.
According to the cache reading and writing method provided by the embodiment of the invention, when the caches with the processing states not meeting the first preset condition exist, the target caches are sequentially determined according to the priority of each level of cache, so that the normal reading of data is ensured.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the sequentially determining the target caches based on the priorities of the caches at different levels includes:
judging whether corresponding data exists in the cache with the highest priority or not by using the data reading request;
and when the cache with the highest priority does not have corresponding data, sequentially judging whether the next-level cache has corresponding data or not so as to determine the target cache.
With reference to the first aspect, in a fourth implementation manner of the first aspect, when the data read/write request is a data write request, the determining, according to the processing state of each level of cache, a target cache corresponding to the data read/write request to perform a corresponding read/write operation on the target cache includes:
acquiring the priority of each level of cache;
judging whether the processing state of the cache with the highest priority meets a second preset condition or not;
and when the processing state of the cache with the highest priority meets the second preset condition, sequentially determining the target cache from the next-level cache.
According to the cache read-write method provided by the embodiment of the invention, when data is written, whether the processing state of the cache with the highest priority meets the second preset condition is judged. When the second preset condition is met, the cache with the highest priority cannot satisfy the data-writing requirement, so the target cache needs to be determined sequentially starting from the next-level cache. This avoids the waste of the other caches' bandwidth that would result if every data write request had to be written into the cache with the highest priority, and thus improves the data-writing efficiency.
With reference to the fourth implementation manner of the first aspect, in the fifth implementation manner of the first aspect, the sequentially determining the target cache from the next-level cache includes:
judging whether the processing states of all the next-level caches meet a third preset condition or not;
and when the processing states of all the next-level caches meet the third preset condition, determining the cache with the highest priority as the target cache.
According to the cache read-write method provided by the embodiment of the invention, when the processing states of all the next-level caches meet the third preset condition, every next-level cache is already heavily loaded; if the data were written into a next-level cache, the data-writing efficiency would be low, so at this time the data needs to be written into the cache with the highest priority, which improves the data-writing efficiency.
With reference to the fourth implementation manner or the fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the determining, according to the processing state of each level of cache, a target cache corresponding to the data read-write request to perform corresponding read-write operation on the target cache further includes:
judging whether the cache with the highest priority meets a disk dropping condition or not;
and when the cache with the highest priority meets the disk-dropping condition, flushing the data of the cache with the highest priority down to the next-level cache of the cache with the highest priority.
According to a second aspect, an embodiment of the present invention further provides a cache read/write apparatus, where the apparatus includes:
the acquisition module is used for acquiring data read-write requests and processing states of all levels of caches;
and the determining module is used for determining a target cache corresponding to the data read-write request according to the processing state of each level of cache so as to perform corresponding read-write operation on the target cache.
According to the cache read-write device provided by the embodiment of the invention, after the data read-write request is obtained, the processing state of each level of cache is used to determine the target cache, which avoids always starting data reads and writes from the same cache, so the performance of each level of cache can be utilized to the greatest extent and the read-write efficiency of the cache is improved.
According to a third aspect, an embodiment of the present invention further provides an electronic device, including:
the cache reading and writing method comprises a memory and a processor, wherein the memory and the processor are connected in communication with each other, the memory stores computer instructions, and the processor executes the computer instructions to execute the cache reading and writing method according to the first aspect of the present invention or any embodiment of the first aspect.
According to a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and the computer instructions are configured to enable a computer to execute the method for reading and writing a cache according to the first aspect of the present invention or any implementation manner of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 illustrates a schematic diagram of buffering read data in the prior art;
FIG. 2 is a schematic diagram illustrating a prior art cache write data;
FIG. 3 is a flow chart of a cache read/write method according to an embodiment of the present invention;
FIG. 4 is a flow chart of a cache read/write method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the principle of buffering read data according to an embodiment of the present invention;
FIG. 6 is a flow chart of a cache read/write method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a cache write data according to an embodiment of the invention;
FIG. 8 is a block diagram of a cache read/write apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
According to an embodiment of the present invention, an embodiment of a cache read-write method is provided. It should be noted that the steps illustrated in the flowchart of the figure may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one shown here.
In this embodiment, a cache read-write method is provided, which can be used in electronic devices such as computers, mobile phones and tablet computers. Fig. 3 is a flowchart of the cache read-write method according to an embodiment of the present invention, and as shown in fig. 3, the flow includes the following steps:
and S11, acquiring the data read-write request and the processing state of each level of cache.
The processing state includes a queue depth and an IO delay.
The electronic equipment acquires a data read-write request from the client, wherein the data read-write request is used for reading corresponding data from the cache or writing the corresponding data into the cache.
The electronic equipment also acquires the processing state of each level of cache, and distributes the acquired data read-write requests to the various levels of cache for processing. The processing state of each level of cache represents the read-write requests that the cache currently needs to process. Each level of cache corresponds to a storage queue in which the read-write requests to be processed are stored, and the queue depth of each level of cache is determined from the length of its storage queue; the IO delay represents the time taken to process an IO. Using the queue depth and IO delay of each level of cache, the electronic equipment can obtain the bandwidth required by all the IO requests in that level's processing queue; in other words, by obtaining the processing state of each level of cache, the electronic equipment determines the load condition of each level of cache, so that the acquired data read-write request can be distributed to a suitable cache for processing.
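As a minimal illustrative sketch (the class and attribute names CacheLevel, queue_depth and io_latency_ms are assumptions introduced here, not terms used by the application), the processing state of one level of cache and the load estimate derived from it can be modeled as follows:

```python
from dataclasses import dataclass


@dataclass
class CacheLevel:
    name: str              # e.g. "first-level high-speed", "low-speed"
    priority: int          # smaller value = higher priority (faster device)
    queue_depth: int       # number of pending IO requests in this level's queue
    io_latency_ms: float   # average IO delay of this level, in milliseconds

    def estimated_load_ms(self) -> float:
        # Rough estimate of how long it will take to drain the queued IO:
        # queue depth multiplied by the per-IO delay.
        return self.queue_depth * self.io_latency_ms
```

A 2-level or 3-level hierarchy such as those described below can then be represented as a list of CacheLevel objects sorted by priority.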
For the electronic equipment, the number of cache levels it contains may be set according to the actual situation and is not limited here. For example, the electronic equipment may include a 2-level cache consisting of one high-speed cache and one low-speed cache, or a 3-level cache consisting of a first-level high-speed cache, a second-level high-speed cache and a low-speed cache. High speed and low speed here are distinguished by the processing rate of each level of cache: a high-speed cache has a fast processing rate but a small storage space, while a low-speed cache has a low processing rate but a large storage space.
And S12, determining a target cache corresponding to the data read-write request according to the processing state of each level of cache, so as to perform corresponding read-write operation on the target cache.
After the processing states of the caches at all levels are acquired, the electronic equipment compares the processing states of the caches at all levels to determine the target cache corresponding to the acquired data read-write request. For example, when the processing states of all levels of cache are good, data can be read from all the caches simultaneously and the operations on the other caches are stopped once the data has been read; when the processing state of some level of cache is not ideal, the data can be written into a more suitable level of cache first, and so on. The details of this step will be described later.
According to the cache read-write method provided by this embodiment, after the data read-write request is obtained, the processing state of each level of cache is used to determine the target cache, which avoids always reading and writing data through the same cache, so the performance of each level of cache can be utilized to the greatest extent and the read-write efficiency of the cache is improved.
In this embodiment, a cache read-write method is provided, and a data read-write request is taken as a read data request as an example. The cache read-write method can be used in electronic devices such as computers, mobile phones and tablet computers. Fig. 4 is a flowchart of the cache read-write method according to an embodiment of the present invention, and as shown in fig. 4, the flow includes the following steps:
and S21, acquiring the data read-write request and the processing state of each level of cache.
Please refer to S11 in fig. 3 for details, which are not described herein.
And S22, determining a target cache corresponding to the data read-write request according to the processing state of each level of cache, so as to perform corresponding read-write operation on the target cache.
In this embodiment, a detailed description is given by taking a data read-write request acquired by an electronic device as a read-data request.
Specifically, the above S22 may include the following steps:
S221, determining whether the processing states of all the caches satisfy a first preset condition.
As described above, the processing state includes the queue depth and the IO delay; therefore, when the processing state is compared with the first preset condition, the queue depth and the IO delay need to be compared separately. The first preset condition indicates that the queue depth of each level of cache is smaller than a queue-depth threshold and the IO delay is smaller than a time threshold. For convenience of description, the queue-depth threshold and the time threshold are collectively referred to as thresholds hereinafter, but in actual operation the two thresholds need to be set separately and each compared against its own value.
The first preset condition corresponding to each level of cache may be different. For example, for a cache with a higher processing rate, the condition may be that its processing state is smaller than a first threshold; for a cache with a lower processing rate, that its processing state is smaller than a second threshold, where the first threshold is larger than the second threshold. The thresholds may also be set in proportion to the processing rates of the various levels of cache, and so on. The way the first preset condition is set is not limited here and may be chosen according to the actual situation.
Taking fig. 5 as an example, the cache in this embodiment includes 3 levels, namely a first-level high-speed device, a second-level high-speed device and a low-speed device, and the first preset conditions corresponding to the three levels are that their processing states are smaller than a first threshold, a second threshold and a third threshold, respectively.
After the processing states of the caches at all levels are obtained, the electronic equipment compares the processing states with corresponding threshold values respectively to determine whether the processing states of all the caches meet a first preset condition.
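Building on the CacheLevel sketch above, the check of S221 may look like the following (the per-level threshold values are hypothetical examples):

```python
def meets_first_condition(level, depth_threshold, latency_threshold_ms):
    # A level satisfies the first preset condition only when both its queue
    # depth and its IO delay are below the thresholds chosen for that level.
    return (level.queue_depth < depth_threshold
            and level.io_latency_ms < latency_threshold_ms)


def all_levels_meet_first_condition(levels, thresholds):
    # thresholds maps a level name to its (queue-depth, IO-delay) limits,
    # e.g. {"first-level high-speed": (32, 1.0), "low-speed": (8, 10.0)}.
    return all(meets_first_condition(level, *thresholds[level.name])
               for level in levels)
```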
Executing S222 when the processing states of all the caches meet the first preset condition; otherwise, S223 is executed.
S222, determining that the target cache corresponding to the read data request is all caches, so as to read data from all caches simultaneously.
The electronic device determines that the processing states of all the caches satisfy the first preset condition through the determination in S221, and at this time, the electronic device determines that the target cache corresponding to the read data request acquired in S21 is all the caches, that is, the electronic device reads data from all the caches simultaneously.
Specifically, when data is read, data of the primary high-speed device, the secondary high-speed device and the low-speed device are read at the same time, and success is returned after the data is read first.
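One possible way to issue the read to every level at the same time and return as soon as any level succeeds is sketched below with a thread pool; read_from is an assumed device interface, and cancel_futures requires Python 3.9 or later.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def read_from_all_levels(key, levels):
    # Submit the same read to every cache level in parallel and hand back the
    # first non-empty result; lookups that have not started yet are cancelled.
    pool = ThreadPoolExecutor(max_workers=len(levels))
    futures = [pool.submit(level.read_from, key) for level in levels]
    try:
        for future in as_completed(futures):
            data = future.result()
            if data is not None:
                return data
        return None  # the data is not present in any level of cache
    finally:
        pool.shutdown(wait=False, cancel_futures=True)
```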
And S223, sequentially determining target caches based on the priorities of the caches at all levels.
The electronic device determines, through the determination in S221, that there are caches whose processing states do not satisfy the first preset condition in all the caches, and at this time, it is necessary to sequentially determine the target caches based on the priorities of the caches at all levels.
The priority of the cache is used to indicate the processing rate of the cache, for example, as shown in fig. 5, the priority of the first level high speed device is greater than the priority of the second level high speed device, and the priority of the second level high speed device is greater than the priority of the low speed device.
The electronic equipment then stops reading from all the caches simultaneously and reads the caches sequentially according to their priority. For example, it first reads from the primary high-speed device and, when the data cannot be read there, reads from the secondary high-speed device, and so on.
When the caches with the processing states not meeting the first preset condition exist, the target caches are sequentially determined according to the priority of each level of cache, so that normal reading of data is guaranteed.
As an optional implementation manner of this embodiment, the step S223 may include the following steps:
(1) and judging whether corresponding data exists in the cache with the highest priority by using the read data request.
The data required by the client can be indicated in the read data request, which can be used by the electronic device to confirm the data to be read. Therefore, the electronic device first reads the corresponding data from the cache with the highest priority by using the read data request.
(2) And when the cache with the highest priority does not have corresponding data, sequentially judging whether the next-level cache has corresponding data or not so as to determine a target cache.
If no corresponding data exists in the cache with the highest priority, the data is read from the next-level cache, and so on down the hierarchy until the data is read out.
Taking fig. 5 as an example, the electronic device first reads data from the first-level high-speed device; when the first-level high-speed equipment does not have corresponding data, reading the data from the second-level high-speed equipment; and when the corresponding data does not exist in the secondary high-speed device, reading the data from the low-speed device.
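The priority-based fall-through described above can be sketched as follows (again using the assumed read_from interface and the CacheLevel objects from the earlier sketch):

```python
def read_by_priority(key, levels):
    # Try each level from highest to lowest priority and stop at the first hit.
    for level in sorted(levels, key=lambda lvl: lvl.priority):
        data = level.read_from(key)
        if data is not None:
            return data
    return None  # no level holds the requested data
```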
According to the cache read-write method provided by the embodiment, when the processing states of all the caches meet the first preset condition, data can be simultaneously read from all the caches at all levels, the bandwidth of all the caches at all levels is fully utilized in the data reading process, and the data reading efficiency is improved.
In this embodiment, a data read-write request is taken as a write data request as an example. The cache read-write method can be used in electronic devices such as computers, mobile phones and tablet computers. Fig. 6 is a flowchart of the cache read-write method according to an embodiment of the present invention, and as shown in fig. 6, the flow includes the following steps:
and S31, acquiring the data read-write request and the processing state of each level of cache.
The processing state includes queue depth and IO delay.
Please refer to S11 in fig. 3 for details, which are not described herein.
And S32, determining a target cache corresponding to the data read-write request according to the processing state of each level of cache, so as to perform corresponding read-write operation on the target cache.
In this embodiment, a detailed description is given by taking a data read-write request acquired by an electronic device as a data write request as an example.
Specifically, the above S32 may include the following steps:
S321, acquiring the priority of each level of cache.
As described above, the priority of each level of cache can be determined according to its processing speed.
S322, determining whether the processing status of the cache with the highest priority satisfies a second preset condition.
The second preset condition may be that the processing state of the cache with the highest priority is greater than a certain threshold. The second preset condition is not limited here and may be set according to the actual situation.
Executing S323 when the processing state of the cache with the highest priority meets the second preset condition; otherwise, determining the cache with the highest priority as the target cache.
And S323, sequentially determining a target cache from the next-level cache.
When the processing state of the cache with the highest priority meets a second preset condition, the cache with the highest priority is busy at the moment, and data can be written into the next-level cache. For example, as shown in fig. 7, the electronic device preferentially writes data into the primary high-speed device, and when the queue time of the IO request queue of the primary high-speed device exceeds 2ms, the primary high-speed device is bypassed at this time, and it is determined whether the data can be written into the secondary high-speed device. If the queuing time of the second-level high-speed device also exceeds the corresponding threshold value, the second-level high-speed device can be bypassed at the moment, and whether the data can be written into the low-speed device or not is judged.
Wherein, the number 1 on the lines with arrows in fig. 7 indicates that the client can directly write data into the corresponding primary high-speed device, secondary high-speed device or low-speed device; the number 2 on the arrowed lines indicates that the data in the primary high-speed device can be flushed into the secondary high-speed device; and the number 3 on the arrowed lines indicates that the data in the secondary high-speed device can be flushed to the low-speed device.
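A minimal sketch of the bypass logic of fig. 7, built on the CacheLevel sketch above and treating that sketch's load estimate as the queueing time; the per-level limits (for example the 2 ms used for the first-level high-speed device in the example above) are hypothetical values supplied by the caller:

```python
def choose_write_target(levels, busy_thresholds_ms):
    # busy_thresholds_ms maps a level name to the maximum acceptable queueing
    # time for that level, e.g. {"first-level high-speed": 2.0, ...}.
    for level in sorted(levels, key=lambda lvl: lvl.priority):
        if level.estimated_load_ms() <= busy_thresholds_ms[level.name]:
            return level   # this level can still absorb the write
    return None            # every level is over its limit
```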
As an optional implementation manner of this embodiment, the foregoing S323 may include the following steps:
(1) and judging whether the processing states of all the next-level caches meet a third preset condition or not.
The third preset condition indicates that the processing state of the corresponding cache is greater than its corresponding threshold, and the threshold of each level of cache may be set as a corresponding proportion according to the service scenario. The setting of the third preset condition is not limited here and may be chosen according to the actual situation.
(2) And when the processing states of all the next-level caches meet a third preset condition, determining the cache with the highest priority as a target cache.
For example, when the client issues a write data request, the first step is to queue for writing. When the write queue depth and IO delay of the low-speed device are higher than those of the first-level high-speed device, the write data request is transferred to the first-level high-speed device, and the low-speed device no longer directly accepts the service request of the client.
When the processing states of all the next-level caches meet the third preset condition, all the next-level caches are busy at the moment; if the data were written into a next-level cache, the data-writing efficiency would be low, so at this time the data needs to be written into the cache with the highest priority, which improves the data-writing efficiency.
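Combining this with the choose_write_target sketch above, the complete write decision might look like the following (write_to is an assumed device interface):

```python
def handle_write_request(key, value, levels, busy_thresholds_ms):
    target = choose_write_target(levels, busy_thresholds_ms)
    if target is None:
        # Third preset condition: every lower level is also over its limit,
        # so the write is queued at the highest-priority (fastest) level.
        target = min(levels, key=lambda lvl: lvl.priority)
    target.write_to(key, value)
```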
According to the cache read-write method provided by this embodiment, when data is written, whether the processing state of the cache with the highest priority meets the second preset condition is judged. When the second preset condition is met, the cache with the highest priority cannot satisfy the data-writing requirement, so the target cache is determined sequentially starting from the next-level cache. This avoids the waste of the other caches' bandwidth that would result if every data write request had to be written into the cache with the highest priority, and thus improves the data-writing efficiency.
As an optional implementation manner of this embodiment, the step S32 may further include the following steps:
(1) and judging whether the cache with the highest priority meets the disk-dropping condition or not.
The disk-dropping condition may be a storage threshold or a time threshold. The electronic device determines whether the cache with the highest priority satisfies the disk-dropping condition by comparing its processing state with the storage threshold and the time threshold.
(2) And when the cache with the highest priority meets the disk-dropping condition, flushing the data in the cache with the highest priority down to its next-level cache.
For example, when the primary high-speed device reaches the storage threshold or the time threshold, the primary high-speed device flushes to the secondary high-speed device, and the data in the primary high-speed device is landed in the secondary high-speed device.
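A sketch of the disk-dropping check, with hypothetical attribute and method names used_capacity, oldest_dirty_age_s, drain_dirty and write_to standing in for the device interface:

```python
def maybe_flush(level, next_level, capacity_threshold, age_threshold_s):
    # Flush when the level's used capacity or the age of its oldest unwritten
    # data exceeds the configured threshold, landing the data one level down.
    if (level.used_capacity() >= capacity_threshold
            or level.oldest_dirty_age_s() >= age_threshold_s):
        for key, value in level.drain_dirty():
            next_level.write_to(key, value)
```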
As a specific implementation manner of this embodiment, the method for reading and writing the cache data according to this embodiment may include the following steps when writing the data:
(1) When writing data, the client queues at the first-level high-speed device; when the queue length of the first-level high-speed device reaches its threshold, the client directly bypasses the first-level high-speed device and queues at the second-level high-speed device for writing, and if the queue of the second-level high-speed device is still long, the client queues directly at the low-speed device for writing.
(2) Each level of device is provided with a queue-length threshold, and after the threshold is reached, requests queue directly at the next level.
(3) If the queue depth of the low-speed device is higher than that of the high-speed device, the requests return to the high-speed device for queuing.
The cache data read-write method provided by the embodiment of the invention makes full use of the bandwidth of each level of cache during data reads and writes and brings the performance of the caches into full play. The cache data read-write method can be applied to a storage system and can also be used for CPU caches.
In this embodiment, a cache read-write apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations, and details that have already been described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
This embodiment provides a cache read/write apparatus, as shown in fig. 8, including:
an obtaining module 41, configured to obtain a data read/write request and processing states of each level of cache, where the processing states include queue depth and IO delay;
and the determining module 42 is configured to determine, according to the processing state of each level of cache, a target cache corresponding to the data read-write request, so as to perform corresponding read-write operation on the target cache.
The cache read-write device provided in this embodiment determines the target cache by using the processing states of the caches of different levels after acquiring the data read-write request, and avoids performing data read-write from the same cache, so that the performance of the caches of different levels can be maximally utilized, and the read-write efficiency of the cache is improved.
The cache read/write apparatus in this embodiment is presented in the form of a functional unit, where the unit refers to an ASIC circuit, a processor and a memory that execute one or more software or fixed programs, and/or other devices that can provide the above-mentioned functions.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which has the cache read-write apparatus shown in fig. 8; its hardware structure is shown in fig. 9.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention. As shown in fig. 9, the electronic device may include: at least one processor 51, such as a CPU (Central Processing Unit), at least one communication interface 53, a memory 54, and at least one communication bus 52, wherein the communication bus 52 is used to enable connection and communication between these components. The communication interface 53 may include a display (Display) and a keyboard (Keyboard), and the optional communication interface 53 may also include a standard wired interface and a standard wireless interface. The memory 54 may be a high-speed RAM (volatile random access memory) or a non-volatile memory, such as at least one disk memory. The memory 54 may alternatively be at least one storage device located remotely from the processor 51. The processor 51 may be connected to the apparatus described in fig. 8, the memory 54 stores an application program, and the processor 51 calls the program code stored in the memory 54 to perform any of the above-mentioned method steps.
The communication bus 52 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 52 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The memory 54 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 54 may also comprise a combination of the above types of memories.
The processor 51 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 51 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
Optionally, the memory 54 is also used to store program instructions. The processor 51 may call program instructions to implement the cache read/write method as shown in the embodiments of fig. 3, 4 and 6 of the present application.
The embodiment of the invention also provides a non-transient computer storage medium, wherein the computer storage medium stores computer-executable instructions which can execute the cache read-write method in any of the method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above types of memories.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A cache read-write method, the method comprising:
acquiring a data read-write request and processing states of all levels of caches, wherein the processing states comprise queue depth and IO time delay;
and determining a target cache corresponding to the data read-write request according to the processing state of each level of cache so as to perform corresponding read-write operation on the target cache.
2. The method according to claim 1, wherein when the data read/write request is a read data request, the determining, according to the processing state of the caches at different levels, a target cache corresponding to the data read/write request to perform a corresponding read/write operation on the target cache includes:
judging whether the processing states of all the caches meet a first preset condition or not;
and when the processing states of all the caches meet the first preset condition, determining that the target cache corresponding to the read data request is all the caches, so as to read data from all the caches simultaneously.
3. The method according to claim 2, wherein the determining, according to the processing state of each level of the cache, a target cache corresponding to the data read-write request to perform a corresponding read-write operation on the target cache further comprises:
and when caches with processing states which do not meet the first preset condition exist in all the caches, sequentially determining the target caches based on the priority of each level of cache.
4. The method according to claim 3, wherein the sequentially determining the target cache based on the priorities of the caches at different levels comprises:
judging whether corresponding data exists in the cache with the highest priority or not by using the data reading request;
and when the cache with the highest priority does not have corresponding data, sequentially judging whether the next-level cache has corresponding data or not so as to determine the target cache.
5. The method according to claim 1, wherein when the data read/write request is a write data request, the determining, according to the processing state of the caches at different levels, a target cache corresponding to the data read/write request to perform a corresponding read/write operation on the target cache includes:
acquiring the priority of each level of cache;
judging whether the processing state of the cache with the highest priority meets a second preset condition or not;
and when the processing state of the cache with the highest priority meets the second preset condition, sequentially determining the target cache from the next-level cache.
6. The method of claim 5, wherein said sequentially determining the target cache from the next-level cache comprises:
judging whether the processing states of all the next-level caches meet a third preset condition or not;
and when the processing states of all the next-level caches meet the third preset condition, determining the cache with the highest priority as the target cache.
7. The method according to claim 5 or 6, wherein the determining, according to the processing state of each level of cache, a target cache corresponding to the data read-write request to perform corresponding read-write operation on the target cache further comprises:
judging whether the cache with the highest priority meets a disk dropping condition or not;
and when the cache with the highest priority meets the disk-dropping condition, flushing the data in the cache with the highest priority to the next-level cache of the cache with the highest priority.
8. A cache read-write apparatus, comprising:
the acquisition module is used for acquiring a data read-write request and processing states of each level of cache, wherein the processing states comprise queue depth and IO time delay;
and the determining module is used for determining a target cache corresponding to the data read-write request according to the processing state of each level of cache so as to perform corresponding read-write operation on the target cache.
9. An electronic device, comprising:
a memory and a processor, wherein the memory and the processor are communicatively connected to each other, the memory stores computer instructions, and the processor executes the computer instructions to perform the cache read-write method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the cache read-write method of any one of claims 1-7.
CN202011350425.7A 2020-11-26 2020-11-26 Cache reading and writing method and device and electronic equipment Pending CN112540933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011350425.7A CN112540933A (en) 2020-11-26 2020-11-26 Cache reading and writing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011350425.7A CN112540933A (en) 2020-11-26 2020-11-26 Cache reading and writing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112540933A true CN112540933A (en) 2021-03-23

Family

ID=75016838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011350425.7A Pending CN112540933A (en) 2020-11-26 2020-11-26 Cache reading and writing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112540933A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115061972A (en) * 2022-07-05 2022-09-16 摩尔线程智能科技(北京)有限责任公司 Processor, data read-write method, device and storage medium
CN117075822A (en) * 2023-10-17 2023-11-17 苏州元脑智能科技有限公司 Data reading and writing method, device, equipment and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6076157A (en) * 1997-10-23 2000-06-13 International Business Machines Corporation Method and apparatus to force a thread switch in a multithreaded processor
US6408345B1 (en) * 1999-07-15 2002-06-18 Texas Instruments Incorporated Superscalar memory transfer controller in multilevel memory organization
US20030088740A1 (en) * 2001-10-23 2003-05-08 Ip-First, Llc. Microprocessor and method for performing selective prefetch based on bus activity level
CN102096556A (en) * 2010-12-03 2011-06-15 成都市华为赛门铁克科技有限公司 Method for copying data as well as method, device and system for reading data
CN105531665A (en) * 2013-06-21 2016-04-27 微软技术许可有限责任公司 Cache destaging for virtual storage devices
CN105988730A (en) * 2015-03-02 2016-10-05 华为技术有限公司 Cache data reading method, bypass apparatus and cache system
CN106933750A (en) * 2015-12-31 2017-07-07 北京国睿中数科技股份有限公司 For data in multi-level buffer and the verification method and device of state
CN107451146A (en) * 2016-05-31 2017-12-08 北京京东尚科信息技术有限公司 The method of data and data cached multi-level buffer device are read using multi-level buffer
CN108733310A (en) * 2017-04-17 2018-11-02 伊姆西Ip控股有限责任公司 Method, equipment and computer readable storage medium for managing storage system
CN111837110A (en) * 2018-03-20 2020-10-27 超威半导体公司 Prefetcher-based speculative dynamic random access memory read request techniques
CN109408415A (en) * 2018-10-10 2019-03-01 郑州云海信息技术有限公司 A kind of caching method and device of read request
CN111258497A (en) * 2018-11-30 2020-06-09 慧与发展有限责任合伙企业 Bypassing storage level memory read cache based on queue depth threshold
CN111831219A (en) * 2019-04-19 2020-10-27 慧与发展有限责任合伙企业 Storage class memory queue depth threshold adjustment
CN111984552A (en) * 2020-08-21 2020-11-24 苏州浪潮智能科技有限公司 Cache management method and device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115061972A (en) * 2022-07-05 2022-09-16 摩尔线程智能科技(北京)有限责任公司 Processor, data read-write method, device and storage medium
CN115061972B (en) * 2022-07-05 2023-10-13 摩尔线程智能科技(北京)有限责任公司 Processor, data read-write method, device and storage medium
CN117075822A (en) * 2023-10-17 2023-11-17 苏州元脑智能科技有限公司 Data reading and writing method, device, equipment and storage medium
CN117075822B (en) * 2023-10-17 2024-02-06 苏州元脑智能科技有限公司 Data reading and writing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US8893146B2 (en) Method and system of an I/O stack for controlling flows of workload specific I/O requests
CN107870732B (en) Method and apparatus for flushing pages from solid state storage devices
US11218163B2 (en) Memory system and information processing system
CN112540933A (en) Cache reading and writing method and device and electronic equipment
US11681623B1 (en) Pre-read data caching method and apparatus, device, and storage medium
CN110209502B (en) Information storage method and device, electronic equipment and storage medium
CN108984104B (en) Method and apparatus for cache management
CN109726137B (en) Management method of garbage collection task of solid state disk, controller and solid state disk
CN111563052B (en) Caching method and device for reducing read delay, computer equipment and storage medium
US20170004087A1 (en) Adaptive cache management method according to access characteristics of user application in distributed environment
CN112783807B (en) Model calculation method and system
US9330033B2 (en) System, method, and computer program product for inserting a gap in information sent from a drive to a host device
CN111400052A (en) Decompression method, decompression device, electronic equipment and storage medium
CN117056054A (en) Interrupt control method, interrupt controller, computer device, and storage medium
CN112925472A (en) Request processing method and device, electronic equipment and computer storage medium
CN115437572A (en) Data dropping method, device, equipment and medium
CN115499513A (en) Data request processing method and device, computer equipment and storage medium
CN111338567B (en) Mirror image caching method based on Protocol Buffer
CN112416564A (en) Interrupt processing method and processing device
CN113138718A (en) Storage method, apparatus, system, and medium for distributed block storage system
CN116991781B (en) Request processing device, method, chip, storage medium and electronic equipment
CN113986134B (en) Method for storing data, method and device for reading data
CN114371810B (en) Data storage method and device of HDFS
CN113806249B (en) Object storage sequence lifting method, device, terminal and storage medium
CN115794446B (en) Message processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210323)