CN114860625A - Data access method, device, equipment and readable storage medium


Info

Publication number
CN114860625A
Authority
CN
China
Prior art keywords
reading
read
data
hard disk
cache page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210345380.7A
Other languages
Chinese (zh)
Inventor
林烽
陈骁
陈文生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd
Priority to CN202210345380.7A
Publication of CN114860625A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 Cache access modes
    • G06F 12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with prefetch
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

After receiving a data access request, a server determines whether the target data requested by the data access request exists in the cache or the memory. If the target data exists in neither the memory nor the cache, the server determines to initiate a read request to the hard disk. The server then determines the media type of the hard disk storing the target data; when the media type indicates that the hard disk is a solid state disk, the server disables the pre-reading behavior of the read request and initiates the read request, with pre-reading disabled, to the hard disk to acquire the target data. With this scheme, when the media type indicates a solid state disk, the pre-reading behavior is disabled, which prevents read requests from being probabilistically blocked by garbage collection, improves the concurrency capability of the server, increases the data access speed, and reduces the possibility of system jitter.

Description

Data access method, device, equipment and readable storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a data access method, an apparatus, a device, and a readable storage medium.
Background
With the development of electronic information technology, hard disks are widely used in storage systems due to their large storage capacity. Common hard disks include mechanical hard disks, solid state disks, hybrid hard disks, and the like.
Usually, a Central Processing Unit (CPU) is much faster than a hard disk and cannot operate on the hard disk directly. Therefore, current operating system designs introduce a memory and a cache, and read data into the memory in advance through pre-reading, so that the CPU does not stall waiting for the slow hard disk. When the CPU accesses target data, it first searches the cache. If the data is found, it is read immediately and sent to the CPU for processing. If the cache does not contain the target data, the memory is searched. If the memory does not contain the target data either, an Input/Output (IO) request is initiated to the storage device, the target data is read into the memory, and the target data is then called from the memory into the cache. The CPU can then read the target data directly from the cache.
Most current data pre-reading algorithms are designed for mechanical hard disks, whose random IO capability, measured in Input/Output operations Per Second (IOPS), is usually 60-120 operations per second. With the development of storage media, solid state disks and other media with far stronger random IO capability have appeared, and the existing data pre-reading algorithms are no longer well suited to them.
Disclosure of Invention
The present application provides a data access method, apparatus, device and readable storage medium, in which whether to enable the pre-reading behavior is determined according to the media type of the hard disk. Different pre-reading modes are adopted for different hard disks, giving strong adaptability and achieving the purpose of increasing the data access speed.
In a first aspect, an embodiment of the present application provides a data access method, including:
receiving a data access request;
when the target data requested by the data access request exists in neither the cache nor the memory, determining to initiate a read request to a hard disk;
determining the media type of the hard disk;
determining, according to the media type, whether to enable the pre-reading behavior of the read request when acquiring the target data, and acquiring the target data.
In a second aspect, an embodiment of the present application provides a data access apparatus, including:
the receiving and sending module is used for receiving a data access request;
the determining module is used for determining to initiate a read request to the hard disk when the target data requested by the data access request exists in neither the cache nor the memory;
the processing module is used for determining the type of the hard disk medium;
and the reading module is used for determining, according to the media type, whether to enable the pre-reading behavior of the read request when acquiring the target data.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor when executing the computer program causing the electronic device to carry out the method according to the first aspect or the various possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which computer instructions are stored, and when executed by a processor, the computer instructions are configured to implement the method according to the first aspect or various possible implementation manners of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program, which when executed by a processor, implements the method according to the first aspect or the various possible implementations of the first aspect.
According to the data access method, apparatus, device and readable storage medium provided herein, after the server receives a data access request, it determines whether the target data requested by the data access request exists in the cache or the memory. If the target data exists in neither the memory nor the cache, the server determines to initiate a read request to the hard disk. The server then determines the media type of the hard disk storing the target data and decides, according to the media type, whether to enable the pre-reading behavior; different pre-reading modes are adopted for different hard disks, giving strong adaptability and increasing the data access speed. When the media type indicates that the hard disk is a solid state disk, the pre-reading behavior is disabled, which prevents read requests from being probabilistically blocked by garbage collection, improves the concurrency capability of the server, increases the data access speed, and reduces the possibility of system jitter.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic network architecture diagram of the data access method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a server in a data access method provided in an embodiment of the present application;
FIG. 3 is a flow chart of a data access method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a stripe in a data access method provided in an embodiment of the present application;
FIG. 5 is another flow chart of the data access method provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a data access device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
A server generally includes a Central Processing Unit (CPU) that provides computing power, a hard disk, a memory, and other hardware. The hard disk and the memory are both storage media. Because the hard disk is driven by mechanical parts, the speed at which the CPU reads data from the hard disk is far lower than the speed at which it reads data from the memory; that is, the hard disk is much slower than the memory. If the CPU performed read/write operations directly on the hard disk (also referred to as initiating an IO request to the hard disk), it would wait on the slow hard disk and CPU utilization would be low. For this reason, current operating system designs avoid CPU waiting through pre-reading: when the hard disk needs to be read, the target data and a portion of the data around it (usually the data following it) are read into the memory in advance through the pre-reading algorithm and then called into the cache. The CPU can thus acquire the target data directly from the cache, which prevents the CPU from blocking on hard disk operations and achieves the purpose of improving system performance.
Experiments show that existing pre-reading algorithms have at least the following disadvantages:

Disadvantage 1: they are not suitable for solid state disks and the like, so read requests to a solid state disk are probabilistically blocked and the concurrency capability of the server is reduced.

When neither the cache nor the memory holds the target data, the operating system initiates an IO request to the disk to read the data. IO requests include random IO requests and sequential IO requests. A random IO request means that the target data corresponding to several consecutive IO requests is scattered across different sectors of different pages of the disk, so the target data is read slowly. Sequential IO requests mean that the target data corresponding to consecutive IO requests is sequentially adjacent, so the target data is read quickly. IO requests include read requests and write requests.

The development and evolution of current pre-reading algorithms are to a great extent based on the Hard Disk Drive (HDD), which has poor random IO capability and strong sequential IO capability. However, with the development of storage media, Solid State Drives (SSDs) have been widely used in high-performance computing environments thanks to their near-zero access latency and insensitivity to random access; the random IO capability of an SSD reaches tens of thousands to hundreds of thousands of operations per second, hundreds or even thousands of times that of a mechanical hard disk. Existing pre-reading algorithms do not take the characteristics of the solid state disk into account. For example, when there are concurrent write IOs on the solid state disk, internal hardware Garbage Collection (GC) may cause a read request to be probabilistically blocked. The larger the number of read requests, the greater the probability that a read request is blocked, which in turn degrades concurrency.

Disadvantage 2: in the initial stage, if the read request is small, the pre-read window suffers from "slow start", that is, the window must be expanded many times before reaching the ideal state. A small read request means that the data size of the target data of the read request is small; such a request is also called a small request.
In the process of reading and writing data on a mechanical hard disk, the time spent mainly comprises the seek time T1, the rotational delay T2 and the transfer time T3. The seek time T1 is the time required for the head arm to move to a specified track; it varies with the offset between the current track and the specified track, so the "average seek time" is usually used to represent T1. This time is relatively fixed, generally about 6 milliseconds (ms).

The rotational delay T2 is the time for the target sector to rotate under the head; it is closely related to the rotational speed of the hard disk, and for mainstream mechanical hard disks T2 is approximately 3ms-6ms.
The transfer time T3 refers to the time to read or write the data itself, and is proportional to the amount of data, i.e., T3 = data amount ÷ transfer speed, where the transfer speed is approximately in the range of 100MB/s-200MB/s.
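As a worked example (assuming, for illustration, a transfer speed of 150MB/s, within the range above), transferring 512KB takes roughly:

$$ T_3 = \frac{\text{data amount}}{\text{transfer speed}} = \frac{512\,\mathrm{KB}}{150\,\mathrm{MB/s}} \approx 3.3\,\mathrm{ms} $$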
Due to the above characteristics, the ability of a mechanical hard disk to process read requests is greatly limited, and it is further limited by the size of each read request. The random IO capability of existing mechanical hard disks is typically 60-120 IOPS, i.e., a mechanical hard disk can handle 60-120 random IO requests per second.
Pre-reading requires data to be read from the hard disk into the memory in advance; the data read in advance is generally larger than the target data requested by the read request and contains the target data. The data read in advance into the memory is called pre-read data. The two critical properties of pre-reading are timeliness and accuracy.

Timeliness means that the pre-read data should be read from the hard disk into the memory before the CPU accesses it, which implies reading a relatively large amount of data on each pre-read. The larger the amount of pre-read data, the better timeliness can be guaranteed. However, the larger the amount of pre-read data, the larger the aforementioned transfer time T3. Moreover, because the memory is limited in size, old pre-read data must be evicted from the memory each time new pre-read data is read in. If pre-read data is evicted before it has been accessed, the same data will be read in repeatedly, causing jitter in system performance. For this reason, the operating system typically specifies an upper limit on the amount of pre-read data.

Accuracy means the utilization of the pre-read data: the higher the utilization, the higher the accuracy. A pre-reading algorithm usually maintains certain historical information as a basis for prediction and dynamically adjusts the amount of pre-read data for the next IO request, so as to avoid reading too much useless data. The amount of pre-read data is referred to as the size of the pre-read window.
Taking a server running the Linux operating system as an example, under the current pre-reading algorithm, when a file is accessed, the operating system initiates read requests to the hard disk multiple times so that the file is read into the memory. By default, the operating system determines the initial pre-read window according to the request size (rqsize) of the first read request and the maximum read request size supported by the hard disk (ramax). For example, when the read request is small, i.e., rqsize <= ramax/32, the initial pre-read window is 4 × rqsize. When the read request is medium, i.e., rqsize <= ramax/4, the initial pre-read window is 2 × rqsize. When the read request is large, i.e., rqsize > ramax/4, the initial pre-read window is ramax.
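A minimal sketch of this default sizing rule (the function and parameter names are illustrative, not actual kernel symbols):

```c
/* Initial pre-read window sizing as described above.
 * rqsize: size of the first read request; ramax: maximum read
 * request size supported by the hard disk. Illustrative only. */
static unsigned long initial_readahead_window(unsigned long rqsize,
                                              unsigned long ramax)
{
    if (rqsize <= ramax / 32)   /* small request  */
        return 4 * rqsize;
    if (rqsize <= ramax / 4)    /* medium request */
        return 2 * rqsize;
    return ramax;               /* large request  */
}
```

With ramax = 512KB, a 4KB first request yields a 16KB initial window, matching the example below.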
After the initial pre-read window is determined, the pre-reading algorithm enters a dynamic window adjustment process. Ideally, when the window expansion condition is met, the pre-read window of the next read request is 2 or 4 times the current pre-read window. Therefore, under the current pre-reading algorithm, multiple window expansions are needed to reach the maximum window, i.e., ramax.
For example, if ramax is 512KB and the initial read request is 4KB, i.e., a small request, the initial pre-read window is 4 × rqsize = 16KB, and ramax is only reached after 5 steps: the pre-read window takes the values 16KB, 64KB, 128KB, 256KB and 512KB in sequence.

For another example, for a medium read request of 32KB, the initial pre-read window is 2 × rqsize = 64KB, and ramax is reached after 4 steps: the pre-read window takes the values 64KB, 128KB, 256KB and 512KB in sequence.

From the above it can be seen that for small or medium read requests the existing pre-reading algorithm is unfriendly: the pre-read window must be adjusted several times before reaching the ideal size.
In particular, when the pre-reading algorithm runs on a server in cloud computing, multiple applications (tenants) share the CPU and storage capacity of the server. The server cannot directly control the behavior of the applications, and an application may, through unreasonable design or even malicious behavior, trigger the operating system to continuously initiate small read requests to the hard disk. The pre-reading algorithm dynamically expands, shrinks or keeps the pre-read window through probing. This gives it wide adaptability, but in scenarios with fixed service characteristics its performance is poor, especially when the file size is within 2 times the ideal pre-read window. For example, suppose the server reads a 1MB file through multiple read requests with an initial pre-read window of 64KB; the window only reaches 512KB on the fourth pre-read, by which time the 1MB file has almost finished being read from the hard disk into the memory, so the pre-reading algorithm gains little. By contrast, when the server reads a 1000MB file through multiple read requests, expanding a 64KB pre-read window to 512KB does improve efficiency. As another example, for an application with strong local randomness, such as a database, if the application triggers the operating system to continuously issue unreasonable small requests to the hard disk, the random IO capability of a host machine using a mechanical hard disk as its storage device is easily exhausted. Even for a host machine using a solid state disk as its storage device, the concurrency of random requests grows with the services, and expanding and shrinking the pre-read window with the existing pre-reading algorithm makes system stability difficult to guarantee.
Therefore, the embodiments of the present application provide a data access method: when the media type indicates that the hard disk is a solid state disk, the pre-reading behavior is disabled, which prevents read requests from being probabilistically blocked by garbage collection, improves the concurrency capability of the server, and increases the data access speed. Meanwhile, when the hard disk is a mechanical hard disk, the pre-reading algorithm is improved so that small requests, including those triggered by services with strong local randomness, can be processed efficiently while the performance of sequential IO requests is preserved.
Fig. 1 is a schematic network architecture diagram of the data access method according to an embodiment of the present application. Referring to fig. 1, the network architecture includes a server 11 and a terminal device 12.
The server 11 has enormous computing power, storage capacity, and the like, and is capable of providing services to the terminal devices. The server 11 may be hardware or software. When the server 11 is hardware, the server 11 is a single server or a distributed server cluster composed of a plurality of servers. When the server 11 is software, it may be a plurality of software modules or a single software module, and the embodiments of the present application are not limited.
The server 11 is provided with a CPU, a cache, a memory, a hard disk, and the like. When the CPU accesses the target data, the target data is firstly searched from the cache. If the data is found, the data is immediately read and sent to the CPU for processing. And if the cache does not have the target data, searching in the memory. And if the target data does not exist in the memory, initiating a read request to the storage device.
In the embodiments of the present application, when the server 11 is used in a cloud computing scenario, the server 11 serves as a host machine and provides shared hardware through software technologies such as Linux containers or virtual machines. For example, different tenants (applications) share CPU computing power or storage capacity, thereby achieving resource sharing. Different servers 11 carry hard disks of different media types, so the random IO capability differs considerably between servers 11. Typically, a cloud computing provider deploys services with the same characteristics on the same server 11. For example, if a server 11 carries a solid state disk with strong random IO capability, services that easily trigger random requests are deployed on it; if another server 11 carries a mechanical hard disk with poor random IO capability, services that rarely trigger random requests are deployed on it.
Fig. 2 is a schematic diagram of a server in the data access method according to an embodiment of the present application. Referring to fig. 2, under the Linux operating system, the server has a user space and a kernel space. The applications in user space can be viewed as different tenants or services deployed on the server. When a user logged in on the terminal device 12 needs to acquire target data, the user initiates a data access request to the server 11. After the server 11 receives the data access request, if the target data is in neither the cache nor the memory, the application program in user space triggers the operating system to initiate a read request to the hard disk. When the hard disk is a solid state disk, the pre-reading behavior is disabled, preventing read requests from being probabilistically blocked by garbage collection and improving the concurrency capability of the server. When the hard disk is a mechanical hard disk, the pre-reading algorithm is improved so that small requests, including those triggered by services with strong local randomness, can be processed efficiently while the performance of sequential IO requests is preserved.
The terminal device 12 may be hardware or software. When the terminal device 12 is hardware, it is, for example, a mobile phone, a tablet computer, a personal computer, an electronic book reader, a laptop portable computer or a desktop computer, running an Android, Microsoft, Symbian, Linux or Apple iOS operating system. When the terminal device 12 is software, it can be installed in the hardware devices listed above; in this case the terminal device 12 is, for example, a plurality of software modules or a single software module, and the embodiments of the present application are not limited in this respect.
It should be understood that the number of servers 11 and terminal devices 12 in fig. 1 is merely illustrative. In practical implementation, any number of servers 11 and terminal devices 12 are deployed according to practical requirements.
Next, a data access method provided in the embodiment of the present application will be described in detail based on the implementation environment shown in fig. 1 and the server shown in fig. 2. For example, please refer to fig. 3. Fig. 3 is a flowchart of a data access method provided in an embodiment of the present application. The present embodiment is explained from the perspective of a server, and the present embodiment includes:
301. A data access request is received.
As shown in fig. 1, when a user accesses the Internet with a terminal device, the user sends a data access request to the server. For example, a user watching a short video sends a data access request to the server to request video data; the video data is the target data. For another example, a user browsing a network picture sends a data access request to the server to request picture data; the picture data is the target data.
302. When the target data requested by the data access request exists in neither the cache nor the memory, determine to initiate a read request to the hard disk.
After receiving the data access request, the server first checks whether the target data requested by the data access request exists in the cache. If the target data exists in the cache, the server directly encodes the target data and responds to the terminal device. If the target data does not exist in the cache, the server checks whether it exists in the memory. If the target data exists in the memory, it is called into the cache; the CPU then reads the target data from the cache, encodes it, and sends it to the terminal device.
If the target data does not exist in the memory, the server determines to initiate a reading request to the hard disk, namely, the target data is read from the hard disk.
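A minimal sketch of this lookup order (the helper functions stand in for the cache, memory and disk layers and are assumptions for illustration, not a real API):

```c
#include <stddef.h>

/* Hypothetical helpers representing the cache, memory and disk layers. */
void *cache_lookup(long key);
void *memory_lookup(long key);
void  cache_insert(long key, void *data);
void *disk_read(long key);

void *fetch_target_data(long key)
{
    void *data = cache_lookup(key);   /* 1. search the cache first     */
    if (data)
        return data;
    data = memory_lookup(key);        /* 2. then search the memory     */
    if (data) {
        cache_insert(key, data);      /*    call it into the cache     */
        return data;
    }
    return disk_read(key);            /* 3. otherwise initiate a read
                                            request to the hard disk   */
}
```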
303. The media type of the hard disk is determined.
304. Determine, according to the media type, whether to enable the pre-reading behavior of the read request when acquiring the target data, and acquire the target data.
Illustratively, in step 303 and step 304, the server distinguishes hard disks and adopts different pre-reading modes for hard disks of different media types. In the existing pre-reading method, hard disks are not distinguished according to media types, and no matter which hard disk is used, a part of data (pre-reading data) is read into a memory in advance through a pre-reading algorithm.
In the embodiments of the present application, after the server determines the media type of the hard disk, it decides according to the media type whether to enable or disable the pre-reading behavior. If pre-reading is enabled, a portion of data (the pre-read data) is read into the memory in advance through the pre-reading algorithm; if pre-reading is not enabled, no pre-read is performed.
After the target data are obtained, the server carries out processing such as coding on the target data and sends the target data to the terminal equipment so as to respond to the terminal equipment.
According to the data access method provided by this embodiment of the present application, after the server receives a data access request, it determines whether the target data requested by the data access request exists in the cache or the memory. If the target data exists in neither the memory nor the cache, the server determines to initiate a read request to the hard disk. The server then determines the media type of the hard disk storing the target data and decides, according to the media type, whether to enable the pre-reading behavior. Different pre-reading modes are adopted for different hard disks, giving strong adaptability and achieving the purpose of increasing the data access speed.
Optionally, in the above embodiment, when the media type of the hard disk indicates that the hard disk is a solid state disk, the pre-reading behavior of the read request is disabled. A read request with pre-reading disabled is then initiated to the hard disk to acquire the target data.
Illustratively, when the target data needs to be read from the hard disk, the server determines the media type of the hard disk storing the target data. When the media type indicates that the hard disk is a solid state disk, the server disables the pre-reading behavior of the read request. Solid state disks include common Serial Advanced Technology Attachment (SATA) solid state disks, mid-range PCIe NVMe solid state disks, high-end storage-class memory drives, and the like.
For a mechanical hard disk, the time required for a read request includes the aforementioned seek time T1, rotational delay T2, and transfer time T3. However, for the solid state disk with stronger random IO capability, since there is no seek time T1 and no rotation delay T2, the time required for a read request is the transfer time T3, and the transfer time T3 is proportional to the amount of data transferred. Therefore, the size of the read-ahead window set by the read-ahead algorithm will directly affect the time required for this read request.
In general, a solid state disk may block a read request for several milliseconds to several tens of milliseconds due to a garbage collection operation. The larger the read request, the greater this possibility; therefore a large pre-read window is not appropriate for a solid state disk.
In addition, because the memory and the cache are limited in size, other old cached data in the memory is evicted after the pre-read data enters the memory. In the typical high-concurrency scenarios of cloud computing, if the pre-read window is too large, different processes interfere with each other and evict each other's soon-to-be-used data. If the existing approach of repeatedly expanding the pre-read window is adopted, the window frequently cycles through expansion, shrinking and re-probing, making stable system performance difficult to guarantee.
In the embodiments of the present application, it is considered that the transfer speed of a solid state disk during reads is generally 500MB per second or even 1GB per second. A read request whose target data is relatively large is also called a large request. Taking target data of 512KB as an example, the transfer time T3 is within 1 millisecond. In a high-concurrency cloud computing service scenario, the CPU is likely to execute other tasks while waiting for the target data; therefore, a read request taking about 1 millisecond is within the CPU's acceptable range.
Therefore, in the embodiments of the present application, when the hard disk is a solid state disk, the server chooses not to pre-read, which avoids the time required by a read request being amplified under the influence of garbage collection, and reduces the possibility of system jitter caused by pre-read data.
In the embodiments of the present application, disabling the pre-reading behavior of the read request means that the target data is read directly, without pre-reading. Thus, when the target data exists in neither the cache nor the memory, the operating system sends a read request to the hard disk that reads only the target data, rather than a larger amount of pre-read data. For example, suppose a read request is 4KB in size. If pre-reading is enabled, the amount of pre-read data may be 64KB or so, with the 64KB of pre-read data containing the 4KB of target data. If pre-reading is disabled, the operating system reads only the 4KB of target data from the hard disk.
With pre-reading disabled, the server's operating system reads the 4KB of target data directly from the hard disk. After reading the target data, it calls the target data into the memory and further into the cache, and the CPU reads the target data directly from the cache.
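On Linux, one way to achieve this effect from user space is posix_fadvise with POSIX_FADV_RANDOM, which turns off kernel read-ahead for a file so a subsequent read fetches only the requested bytes. This is a sketch of the mechanism described above, not the patent's own implementation:

```c
#include <fcntl.h>
#include <unistd.h>

/* Read `len` bytes at `offset` with kernel read-ahead disabled. */
ssize_t read_without_readahead(const char *path, char *buf,
                               size_t len, off_t offset)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    /* offset 0, len 0 = apply the advice to the whole file */
    posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
    ssize_t n = pread(fd, buf, len, offset);  /* e.g. len = 4KB */
    close(fd);
    return n;
}
```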
With this scheme, when the media type indicates that the hard disk is a solid state disk, the pre-reading behavior is disabled, which prevents read requests from being probabilistically blocked by garbage collection, improves the concurrency capability of the server, increases the data access speed, and reduces the possibility of system jitter.
Optionally, in the above embodiment, when the hard disk is a mechanical hard disk, the server determines the size of the pre-read window and enables the pre-reading behavior of the read request. A read request with pre-reading enabled is then initiated to the hard disk to read pre-read data according to the size of the pre-read window, the pre-read data containing the target data.
Illustratively, for a mechanical hard disk with weak random IO capability, data pre-reading can play a great role. From the foregoing it can be seen that for a small read request, if the initial pre-read window is set small, the pre-reading algorithm needs several rounds of probing before the window reaches its ideal size, and data access efficiency is low. This inefficiency is further amplified when the file size is within 2 times the ideal pre-read window. In this embodiment, when the media type of the hard disk indicates a mechanical hard disk, the operating system of the server determines the size of the pre-read window, which is larger than the initial pre-read window of the existing algorithm (2 or 4 times the size of the initial read request).
In the embodiments of the present application, the size of the pre-read window is fixed, for example 128KB. A cloud computing provider deploys services with the same characteristics on the same server, and different services correspond to pre-read windows of different sizes.
By adopting the scheme, for the mechanical hard disk, the problem of slow start of the pre-reading window caused by the fact that the initial pre-reading window is small can be solved by determining the large pre-reading window.
Optionally, in the above embodiment, after the operating system determines the size of the pre-read window for the initial read request to the hard disk, subsequent pre-reads use the same window. That is, the size of the pre-read window indicates the amount of pre-read data for that read request and for all read requests that follow it, until the entire file is read.
For example, when the hard disk is a mechanical hard disk, the server's operating system determines the size of the pre-read window, say 128KB. The operating system initiates a read request to the hard disk, reads 128KB of pre-read data from the hard disk, and calls it into the memory. When the server initiates a read request again, it keeps using the same window size, i.e., it again reads 128KB of pre-read data from the hard disk into the memory. Subsequent read requests pre-read in the same manner until the entire file is read.
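A minimal sketch of this fixed-window scheme (the disk_read_into_memory helper is an assumption for illustration):

```c
#define READAHEAD_WINDOW (128L * 1024)   /* fixed pre-read window: 128KB */

/* Hypothetical helper: read `len` bytes of a file starting at `off`
 * from the hard disk into the memory. */
void disk_read_into_memory(long file_id, long off, long len);

void read_file_fixed_window(long file_id, long file_size)
{
    for (long off = 0; off < file_size; off += READAHEAD_WINDOW) {
        long len = file_size - off;
        if (len > READAHEAD_WINDOW)
            len = READAHEAD_WINDOW;
        /* one fixed-size pre-read per request: the window is never
         * probed, expanded or shrunk */
        disk_read_into_memory(file_id, off, len);
    }
}
```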
When the pre-read window is large, the amount of data to be transferred increases, and the transfer time T3 described above increases. For a mechanical hard disk, however, this effect is very limited. Taking a transfer speed of 120MB per second as an example, for pre-read windows of 4KB, 32KB, 128KB and 256KB, the transfer time T3 is 0.03ms, 0.26ms, 1.04ms and 2.08ms respectively. Assuming the seek time T1 and the rotational delay T2 add up to 10ms, the total time required for one read request is 10.03ms, 10.26ms, 11.04ms and 12.08ms respectively. It can be seen that for requests below 256KB, the duration of a read request is dominated by the seek time and the rotational delay, and increasing the pre-read window within a reasonable range does not significantly increase the time required. The total time required for one read request comprises the aforementioned seek time T1, rotational delay T2 and transfer time T3, and is also referred to as the latency of the read request.
With this scheme, once the size of the pre-read window of the initial read request is determined, every subsequent read request pre-reads according to that window size, eliminating the probing, expansion and shrinking of the window performed by the existing pre-reading algorithm. Moreover, because the initial pre-read window is relatively large and is reused for every subsequent pre-read, the slow-start problem of the pre-read window is solved. And since the window is fixed and never expands, the pressure on the cache and the memory caused by an over-expanded pre-read window is reduced.
Optionally, in the above embodiment, a default value is preset. And when the medium type of the hard disk indicates that the hard disk is a mechanical hard disk, determining the size of the pre-reading window according to a default value. For example, the preset default value is 128KB, and the size of the pre-reading window is 128KB no matter what service is deployed on the server.
Optionally, in the foregoing embodiment, when the media type of the hard disk indicates that the hard disk is a mechanical hard disk, the server determines the size of the read-ahead window according to the delivered configuration file, where the sizes of the read-ahead windows corresponding to different services are different.
For example, a cloud computing provider deploys services with the same features on the same server. The service characteristics of the same server are relatively fixed, and the pre-reading behavior is predictable. Therefore, the user can learn the size of the pre-reading window suitable for the service through machine learning and the like, and send the pre-reading window to the server through the configuration file. When the operating system initially initiates a reading request, the size of a pre-reading window is determined according to the configuration file, and then pre-reading is carried out according to the pre-reading window during each pre-reading. For example, a virtual machine is used to provide video data, the size of the read-ahead window is 256KB, and the virtual machine is deployed on the server shown in fig. 1. For another example, a virtual machine is used to provide picture data, and the size of the pre-reading window is 128 KB.
By adopting the scheme, the size of the pre-reading window is related to the service type deployed on the server, and the throughput rate and the resource utilization rate of the server can be improved.
In general, target data requested by a read request is only a part of a file, and an operating system can read the complete file by initiating the read request multiple times. Hereinafter, for clarity, the file containing the target data will be referred to as the target file.
In the above embodiment, the server divides the target file on the hard disk into a plurality of stripes in advance according to the size of the pre-read window; the stripes store data, and the target file is the file containing the target data requested by the read request. When the operating system of the server initiates a read request with pre-reading enabled to the hard disk, to read pre-read data according to the size of the pre-read window, the read request carries a start address, and the operating system determines the start address from the read request. The operating system then determines, from the plurality of stripes, the stripe containing the start address, referred to as the target stripe. That is, the operating system determines which stripe the start address falls into, takes the stripe containing the start address as the target stripe, and takes the data stored in the target stripe as the pre-read data.
Fig. 4 is a schematic diagram of a stripe in the data access method provided in an embodiment of the present application. Referring to fig. 4, the server divides the target file into N stripes, each the size of the pre-read window. For example, if the pre-read window is 128KB, each stripe is 128KB. Assuming the start address of the read request falls into stripe M, the server takes the data in stripe M as the pre-read data, that is, it reads the entire stripe M into the memory. The benefit of this is that small read requests are normalized: a small read request becomes a read request with a larger amount of pre-read data, and reading a whole stripe at a time as the pre-read data keeps disk efficiency high.
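A minimal sketch of mapping a start address to its target stripe (names and the 128KB size are illustrative):

```c
#define STRIPE_SIZE (128L * 1024)   /* stripe = pre-read window size */

/* Return the byte range of the stripe containing `start_addr`. */
void target_stripe(long start_addr, long *stripe_begin, long *stripe_end)
{
    long index = start_addr / STRIPE_SIZE;   /* stripe the address falls into */
    *stripe_begin = index * STRIPE_SIZE;
    *stripe_end   = *stripe_begin + STRIPE_SIZE;  /* whole stripe pre-read */
}
```

For example, a 4KB read request starting at offset 200KB falls into stripe 1 (bytes 128KB-256KB), so the whole 128KB stripe is read as pre-read data.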
Moreover, the target file is stripe-divided according to the size of the pre-reading window, so that unfriendly reading requests in cloud computing leasing business can be effectively avoided. For example, some applications trigger the operating system to continue to initiate small random requests, i.e., to initiate large numbers of read requests with small amounts of target data. In the embodiment of the application, the whole target stripe data is used as the pre-read data, and the data is read into the memory in advance, so that the CPU can directly read the data from the memory without initiating a read request to the hard disk.
Furthermore, "striping" copes well with locally random read behavior: as long as the span of the randomness stays within a limited range, a matching pre-read window can absorb it.
By adopting the scheme, the small requests or the read requests with local random characteristics can be efficiently processed through the striping.
Optionally, in this embodiment, after the server determines the target stripe containing the start address from the plurality of stripes, a first cache page, a second cache page and a third cache page are further determined within the target stripe, where the second cache page is a cache page at a preset position in the target stripe, and the first cache page and the third cache page are located within preset ranges on the left and right sides of the second cache page, respectively. Then, when the second cache page is accessed, the pre-read data of the next read request is determined according to the first cache page and the third cache page.
For example, each time the operating system initiates a read request, the target stripe of the next read request can be predicted according to the target stripe corresponding to the current read request. For example, if the target stripe corresponding to the current read request is the stripe M, when the operating system accesses the second cache page in the stripe M, the first cache page and the third cache page can be determined according to the second cache page, and further, according to the first cache page and the third cache page, whether the stripe (M-1) or the stripe N is used for reading the pre-read data of the next read request is determined.
Referring to fig. 4, the second cache page is, for example, the middle-most cache page in the stripe M, i.e., the cache page at the position marked 2 in the figure. The first cache page is located on the left side of the second cache page, the third cache page is located on the right side of the second cache page, and the third cache page cannot be the last cache page in the stripe M. Therefore, the read-ahead data corresponding to the next read request is predicted in advance, so that the CPU can be prevented from being in a waiting state, and the data access speed is improved.
Optionally, referring to fig. 4, in the above embodiment, the first cache page, the second cache page and the third cache page are located at the one-quarter, one-half and three-quarter positions of the target stripe, respectively. For example, if stripe M totals 256KB, the three pages sit at offsets 64KB, 128KB and 192KB, at the quarter, half and three-quarter positions. In this way, the position of each cache page can be determined quickly.
It should be noted that although the above description takes the first, second and third cache pages located at the one-quarter, one-half and three-quarter positions of the target stripe as an example, the embodiments of the present application are not limited to this. In other feasible implementations it is only necessary to ensure that the first cache page and the third cache page are on the two sides of the second cache page, that the first cache page is not the cache page at the beginning of the target stripe, and that the third cache page is not the last cache page in the target stripe.
Optionally, in the above embodiment, the pre-read data is read into the memory through the pre-reading algorithm, which is one mechanism under the Linux operating system. Another mechanism under the Linux operating system is the triggering of asynchronous pre-reading.
In the embodiments of the present application, the triggering of asynchronous pre-reading is guided by introducing "direction marks": by presetting a checkpoint, the pre-read data of the next pre-read window is predicted, i.e., whether the next read request reads forward or backward, so as to keep sequential IO (including reverse access) to the disk efficient. Illustratively, please refer to Table 1, which relates the pre-read direction to the access state of each cache page.
TABLE 1 (√ = accessed, × = not accessed)

Mark 2   Mark 1   Mark 3   Pre-read direction
√        √        ×        Backward (the stripe after the target stripe)
√        ×        √        Forward (the stripe before the target stripe)
√        √        √        No pre-reading
√        ×        ×        No pre-reading
Referring to the second row of table 1, optionally, in the foregoing embodiment, when the second cache page is accessed, if the first cache page is accessed and the third cache page is not accessed, it is determined that the read-ahead data of the next read request is the first stripe located behind the target stripe. For example, if the target stripe of the current read request is stripe M, then the target stripe of the next read request is stripe N.
Referring to the third line of table 1, when the second cache page is accessed, if the first cache page is not accessed and the third cache page is accessed, it is determined that the read-ahead data of the next read request is the first stripe located before the target stripe. For example, the target stripe of the current read request is stripe M, then the target stripe of the next read request is stripe (M-1).
With this scheme, the middle point of the target stripe is selected as the checkpoint for triggering asynchronous pre-reading, i.e., the position of the second cache page indicated by mark 2 in fig. 4; whether the first cache page and the third cache page have been accessed is then checked to determine the direction of the stripe for the next read request. This is both fast and accurate.
Referring to the fourth and fifth rows of Table 1, when the second cache page is accessed, if both the first cache page and the third cache page have been accessed, it is determined that the next read request has no pre-read data. Likewise, when the second cache page is accessed, if neither the first cache page nor the third cache page has been accessed, it is determined that the next read request has no pre-read data.
Illustratively, the middle point of the target stripe is selected as the checkpoint for triggering asynchronous pre-reading, i.e., the position of the second cache page indicated by mark 2 in fig. 4, and whether the first cache page and the third cache page have been accessed is then checked. If both have been accessed, or neither has, the access is judged to be a local random request, asynchronous pre-reading is not triggered, and invalid pre-reading is avoided.
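A minimal sketch of this direction decision from Table 1 (the enum and function names are illustrative):

```c
enum readahead_dir {
    RA_NONE,       /* local random request: no pre-reading          */
    RA_BACKWARD,   /* "backward" in Table 1: pre-read the stripe
                      after the target stripe (sequential access)   */
    RA_FORWARD     /* "forward" in Table 1: pre-read the stripe
                      before the target stripe (reverse access)     */
};

/* Called when the checkpoint page (mark 2, middle of the target
 * stripe) is accessed; mark1/mark3 report whether the pages at the
 * quarter and three-quarter positions have been accessed. */
enum readahead_dir next_readahead(int mark1_accessed, int mark3_accessed)
{
    if (mark1_accessed && !mark3_accessed)
        return RA_BACKWARD;
    if (!mark1_accessed && mark3_accessed)
        return RA_FORWARD;
    return RA_NONE;   /* both or neither accessed */
}
```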
Fig. 5 is another flowchart of the data access method provided in an embodiment of the present application. The embodiment comprises the following steps:
501. Determine that the target data exists in neither the memory nor the cache.
502. Determine the media type of the hard disk storing the target data; when the media type indicates that the hard disk is a solid state disk, execute step 503; when the media type indicates that the hard disk is a mechanical hard disk, execute step 504.
503. Disable the pre-reading behavior of the read request, and initiate the read request with pre-reading disabled to the hard disk to acquire the target data.
The server is, for example, a host machine in cloud computing. When the storage device of the host machine is a solid state disk, disabling the pre-reading behavior improves the concurrency capability of the server and, to a certain extent, reduces the possibility of read requests being blocked by hardware garbage collection and the like.
504. Determine the size of the pre-read window and stripe the target file.
Illustratively, the server determines the size of the pre-read window according to a default value or a configuration file issued by the user, and then stripes the target file according to the size of the pre-read window. The target file is the file containing the target data; in general, the target data requested by one read request is only part of a file, and the operating system reads the complete file by initiating read requests multiple times.
505. Determine the target stripe from the start address of the read request.
Illustratively, the server determines a target stripe containing a start address from the plurality of stripes according to the start address carried by the read request, and takes the data stored in the target stripe as pre-read data.
506. Initiate the read request to the hard disk.
507. In the process of reading the target stripe, when the triggering condition of asynchronous read-ahead is met, the target stripe of the next read request is predicted.
For example, when the server reads a cache page at a preset position, such as a cache page located at the middle of the target stripe (i.e., a second cache page), the target stripe of the next read request is predicted according to a first cache page and a third cache page within a preset range on the left and right sides of the second cache page. And when one accessed cache page and one unaccessed cache page exist in the first cache page and the third cache page, predicting a target stripe of the next read request.
When the first cache page and the third cache page have both been accessed, or neither has been accessed, it is determined that the next read request performs no pre-read, i.e., the pre-reading behavior is disabled.
The server is, for example, a host in cloud computing. When a mechanical hard disk is used as a storage device of a server, the pre-reading algorithm in the embodiment of the present application can reduce inefficient reading requests submitted by upper-layer services, such as continuous small requests or locally random small requests, by means of "striping" on the premise of maintaining efficient sequential IO, and can cope with complex rental services, so that the hard disk maintains higher IO service capability.
In addition, in the above embodiments, when the server is used as a host machine, two types of hard disks may be installed on it, i.e., a solid state disk and a mechanical hard disk at the same time. The target data requested by a data access request, however, is located on either the mechanical hard disk or the solid state disk, so the data access method provided by the embodiments of the present application still applies.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 6 is a schematic diagram of a data access device according to an embodiment of the present application. The data access apparatus 600 includes: a transceiver module 61, a determination module 62, a processing module 63 and a reading module 64.
A transceiver module 61, configured to receive a data access request;
a determining module 62, configured to determine to initiate a read request to a hard disk when the target data requested by the data access request exists in neither the cache nor the memory;
a processing module 63, configured to determine a media type of the hard disk;
and a reading module 64, configured to determine, according to the media type, whether to start a pre-reading action of the reading request when acquiring the target data, and to acquire the target data.
In a feasible implementation manner, when determining, according to the media type, whether to start the pre-reading action of the reading request and acquiring the target data, the reading module 64 is configured to: close the pre-reading action of the reading request when the media type of the hard disk indicates that the hard disk is a solid state disk; and initiate a reading request with the pre-reading action closed to the hard disk to acquire the target data.
In a feasible implementation manner, when determining, according to the media type, whether to start the pre-reading action of the reading request and acquiring the target data, the reading module 64 is configured to: determine the size of a pre-reading window when the media type of the hard disk indicates that the hard disk is a mechanical hard disk; start the pre-reading action of the reading request; and initiate a reading request with the pre-reading action started to the hard disk, so as to read pre-read data according to the size of the pre-reading window, where the pre-read data includes the target data.
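Taken together, the two implementations above amount to a dispatch on the media type. The following is a minimal sketch, assuming a Linux host where /sys/block/<dev>/queue/rotational reports the media type, and using posix_fadvise as a stand-in for switching the pre-reading action; it illustrates the idea only and is not the patent's implementation.

```python
# A minimal sketch, assuming a Linux host. /sys/block/<dev>/queue/rotational
# is 1 for a mechanical (rotational) disk and 0 for a solid state disk.
# posix_fadvise(POSIX_FADV_RANDOM) asks the kernel to disable read-ahead
# on the descriptor, while POSIX_FADV_SEQUENTIAL enlarges it; this stands
# in for the patent's pre-reading switch, not its actual mechanism.
import os

def is_mechanical(dev: str) -> bool:
    with open(f"/sys/block/{dev}/queue/rotational") as f:
        return f.read().strip() == "1"

def configure_pre_reading(fd: int, dev: str) -> None:
    if is_mechanical(dev):
        # Mechanical hard disk: start the pre-reading action.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    else:
        # Solid state disk: close the pre-reading action.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)
```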
In one possible implementation, the size of the pre-reading window is used to indicate the size of the pre-read data of the read request and of read requests subsequent to the read request.
In a possible implementation manner, the reading module 64 is configured to determine a start address from the read request, determine a target stripe including the start address from multiple stripes, and use data in the target stripe as the read-ahead data, where the multiple stripes are obtained by dividing a target file according to the size of the read-ahead window, and the target file is a file including the target data.
In a possible implementation manner, after the reading module 64 determines the target stripe containing the start address from the multiple stripes, the processing module 63 is further configured to determine a first cache page, a second cache page, and a third cache page from the target stripe, where the second cache page is a cache page at a preset position in the target stripe, and the first cache page and the third cache page are located within preset ranges on the left and right sides of the second cache page, respectively; when the second cache page is accessed, the processing module determines the pre-read data of the next read request according to the first cache page and the third cache page.
In a possible implementation manner, the processing module 63 is configured to, when the second cache page is accessed, determine that read-ahead data of a next read request is a first stripe located after the target stripe if the first cache page has been accessed and the third cache page has not been accessed; when the second cache page is accessed, if the first cache page is not accessed and the third cache page is accessed, determining that the pre-read data of the next read request is the first stripe located before the target stripe.
In a possible implementation manner, the processing module 63 is configured to: when the second cache page is accessed, if both the first cache page and the third cache page have been accessed, determine that the next read request has no pre-read data; and when the second cache page is accessed, if neither the first cache page nor the third cache page has been accessed, determine that the next read request has no pre-read data.
In a possible implementation manner, the first cache page, the second cache page, and the third cache page are located at one-quarter, one-half, and three-quarters of the target stripe, respectively.
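As a concrete illustration of these positions, the sketch below computes the byte offsets of the three marker pages within a stripe, assuming 4 KiB cache pages; the page size and helper name are assumptions, not taken from the patent.

```python
# A small sketch of locating the three marker pages at one-quarter,
# one-half, and three-quarters of a stripe, assuming 4 KiB cache pages.
PAGE_SIZE = 4096  # assumed cache page size

def marker_pages(stripe_begin: int, stripe_len: int) -> tuple:
    pages = stripe_len // PAGE_SIZE
    first = stripe_begin + (pages // 4) * PAGE_SIZE       # one-quarter
    second = stripe_begin + (pages // 2) * PAGE_SIZE      # one-half
    third = stripe_begin + (3 * pages // 4) * PAGE_SIZE   # three-quarters
    return first, second, third

# For a 1 MiB stripe starting at offset 0 (256 pages), the marker pages
# begin at byte offsets 262144, 524288, and 786432.
print(marker_pages(0, 1024 * 1024))
```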
In a possible implementation manner, the determining module 62 is configured to determine the size of the pre-reading window according to a default value when the media type of the hard disk indicates that the hard disk is a mechanical hard disk; or, when the media type of the hard disk indicates that the hard disk is a mechanical hard disk, determining the size of the read-ahead window according to the issued configuration file, where the sizes of the read-ahead windows corresponding to different services are different.
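As a final illustration, a hedged sketch of selecting the pre-reading window size per service follows: the JSON file format, the key names, and the 1 MiB default are assumptions for illustration, not the patent's configuration scheme.

```python
# A hedged sketch of choosing the pre-reading window size. A per-service
# value from an issued configuration file wins when present; otherwise a
# default applies. File format and the 1 MiB default are assumptions.
import json

DEFAULT_WINDOW = 1024 * 1024  # assumed default: 1 MiB

def pre_reading_window(service: str, config_path: str = "readahead.json") -> int:
    try:
        with open(config_path) as f:
            per_service = json.load(f)  # e.g. {"video": 4194304, "log": 131072}
    except FileNotFoundError:
        return DEFAULT_WINDOW
    return int(per_service.get(service, DEFAULT_WINDOW))
```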
The data access device provided in the embodiment of the present application may perform the actions of the server in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device 700 is, for example, the server, and the electronic device 700 includes:
a processor 71 and a memory 72;
the memory 72 stores computer instructions;
the processor 71 executes the computer instructions stored in the memory 72, so that the processor 71 executes the data access method implemented by the server as described above.
For a specific implementation process of the processor 71, reference may be made to the above method embodiments, which implement similar principles and technical effects, and details of this embodiment are not described herein again.
Optionally, the electronic device 700 further includes a communication component 73, where the processor 71, the memory 72, and the communication component 73 may be connected by a bus 74.
Embodiments of the present application further provide a computer-readable storage medium, in which computer instructions are stored, and when executed by a processor, the computer instructions are used to implement the data access method implemented by the server.
Embodiments of the present application further provide a computer program product, which contains a computer program, and when the computer program is executed by a processor, the computer program implements the data access method implemented by the server as above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (13)

1. A method of data access, comprising:
receiving a data access request;
when target data requested by the data access request does not exist in the cache and the memory, determining to initiate a reading request to a hard disk;
determining the media type of the hard disk;
and determining, according to the media type, whether to start a pre-reading behavior of the reading request when acquiring the target data, and acquiring the target data.
2. The method of claim 1, wherein determining, according to the media type, whether to start a pre-reading behavior of the reading request when acquiring the target data, and acquiring the target data comprises:
when the media type of the hard disk indicates that the hard disk is a solid state disk, closing the pre-reading behavior of the reading request;
and initiating a reading request with the pre-reading behavior closed to the hard disk to acquire the target data.
3. The method of claim 1, wherein determining, according to the media type, whether to start a pre-reading behavior of the reading request when acquiring the target data, and acquiring the target data comprises:
when the medium type of the hard disk indicates that the hard disk is a mechanical hard disk, determining the size of a pre-reading window;
starting a pre-reading behavior of the reading request;
and initiating a reading request with a pre-reading behavior started to the hard disk so as to read pre-reading data according to the size of the pre-reading window, wherein the pre-reading data comprises the target data.
4. The method of claim 3,
the size of the read-ahead window is used to indicate the size of read-ahead data of the read request and read requests subsequent to the read request.
5. The method of claim 3, wherein initiating a read request with read-ahead behavior turned on to the hard disk to read-ahead data according to the size of the read-ahead window comprises:
determining a starting address from the read request;
determining a target stripe containing the starting address from a plurality of stripes, and taking data in the target stripe as the pre-reading data, wherein the plurality of stripes are obtained by dividing a target file according to the size of a pre-reading window, and the target file is a file containing the target data.
6. The method of claim 5, wherein after determining a target stripe from the plurality of stripes that includes the starting address, further comprising:
determining a first cache page, a second cache page and a third cache page from the target band, wherein the second cache page is a cache page at a preset position in the target band, and the first cache page and the third cache page are respectively located in preset ranges at the left side and the right side of the second cache page;
and when the second cache page is accessed, determining the pre-reading data of the next reading request according to the first cache page and the third cache page.
7. The method of claim 6, wherein determining the pre-read data of the next read request according to the first cache page and the third cache page when the second cache page is accessed comprises:
when the second cache page is accessed, if the first cache page is accessed and the third cache page is not accessed, determining that the pre-read data of the next read request is the first stripe behind the target stripe;
when the second cache page is accessed, if the first cache page is not accessed and the third cache page is accessed, determining that the pre-read data of the next read request is the first stripe located before the target stripe.
8. The method of claim 6, wherein determining the pre-read data of the next read request according to the first cache page and the third cache page when the second cache page is accessed comprises:
when the second cache page is accessed, if the first cache page and the third cache page are both accessed, determining that the next read request has no pre-read data;
and when the second cache page is accessed, if the first cache page and the third cache page are not accessed, determining that the next read request has no pre-read data.
9. The method of claim 6,
the first cache page, the second cache page, and the third cache page are located at one-quarter, one-half, and three-quarters of the target stripe, respectively.
10. The method of any of claims 3-9, wherein determining the size of the read-ahead window when the media type of the hard disk indicates that the hard disk is a mechanical hard disk comprises:
when the medium type of the hard disk indicates that the hard disk is a mechanical hard disk, determining the size of the pre-reading window according to a default value;
or,
and when the medium type of the hard disk indicates that the hard disk is a mechanical hard disk, determining the size of the pre-reading window according to the issued configuration file, wherein the sizes of the pre-reading windows corresponding to different services are different.
11. A data access device, comprising:
the receiving and sending module is used for receiving a data access request;
the determining module is used for determining to initiate a reading request to the hard disk when target data requested by the data access request does not exist in the cache and the memory;
the processing module is used for determining the type of the hard disk medium;
and a reading module, configured to determine, according to the media type, whether to start a pre-reading behavior of the reading request when acquiring the target data, and to acquire the target data.
12. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, causes the electronic device to carry out the method of any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 10.
CN202210345380.7A 2022-03-31 2022-03-31 Data access method, device, equipment and readable storage medium Pending CN114860625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210345380.7A CN114860625A (en) 2022-03-31 2022-03-31 Data access method, device, equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114860625A true CN114860625A (en) 2022-08-05

Family

ID=82629307


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117032588A (en) * 2023-09-26 2023-11-10 苏州元脑智能科技有限公司 Data reading method and device, electronic equipment and storage medium
CN117032588B (en) * 2023-09-26 2024-02-09 苏州元脑智能科技有限公司 Data reading method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination