WO2021238260A1 - Read-ahead data caching method, apparatus, device and storage medium - Google Patents
Read-ahead data caching method, apparatus, device and storage medium
- Publication number
- WO2021238260A1 (PCT/CN2021/073442; CN2021073442W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- queue
- data
- read
- reading
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/24578—Query processing with adaptation to user needs using ranking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- This application relates to the field of computer application technology, and in particular to a read-ahead data caching method, apparatus, device, and storage medium.
- a storage system has the advantage of being able to store a large number of files, but at the same time it suffers from slow file reading and loading.
- to read a file, the storage server where the file resides must first be located, and the corresponding data then obtained from that server. If data is obtained only when an actual file read request arrives, then, limited by bandwidth, disk performance, and similar factors, data loading may be slow and read latency long.
- file pre-reading is usually used to solve slow data loading and long read latency.
- the pre-read data is loaded into memory, and subsequent read or write operations on the file then load data faster with lower read latency.
- in the related art, pre-read data, written data, and read data share one cache queue, and the data in the entire cache queue is aged by heat. Handled this way, pre-read data that has not yet been accessed is aged out first. In fact, pre-read data is often more likely to be accessed again later than written or already-read data; if pre-read data is always aged out first, it cannot be used when the corresponding file read demand arrives, which slows data loading, lengthens read latency, and hurts system performance.
- the purpose of this application is to provide a read-ahead data caching method, apparatus, device, and storage medium, so as to protect the validity of pre-read data and improve system performance.
- a read-ahead data caching method, including:
- the invalidation priority of the pre-read queue is the lowest.
- the method further includes:
- the target pre-reading data in the pre-reading queue and/or the second-level cache queue is moved into the reset queue.
- the method further includes:
- when the target pre-read data has been read and a write operation needs to be performed on it, the target pre-read data is moved from the reset queue into the write queue.
- the data in the reset queue, the second-level cache queue, and the pre-read queue is aged according to a preset invalidation priority order.
- the invalidation priority order from high to low is: the reset queue, the second-level cache queue, and the pre-read queue.
- aging the data in the reset queue, the second-level cache queue, and the pre-read queue according to the preset invalidation priority order includes:
- the aging operation is stopped.
- aging each piece of data in the reset queue in turn includes:
- aging each piece of data in the second-level cache queue in turn includes:
- aging each piece of data in the pre-read queue in turn includes:
- each piece of data in the pre-read queue is aged in turn, in order of popularity from low to high.
- a read-ahead data caching apparatus includes:
- a read instruction receiving module, configured to receive a read instruction for a target file;
- a first data moving module, configured to move the target pre-read data from the pre-read queue into a second-level cache queue if it is determined that target pre-read data of the target file exists in the pre-read queue;
- a data reading module, configured to read the target pre-read data in the second-level cache queue;
- a second data moving module, configured to move the target pre-read data from the second-level cache queue into a reset queue after reading is completed;
- the invalidation priority of the pre-read queue is the lowest.
- a pre-reading data caching device including:
- a memory, configured to store a computer program;
- the processor is configured to implement the steps of any one of the above-mentioned pre-reading data caching methods when the computer program is executed.
- a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any one of the above read-ahead data caching methods.
- after a read instruction for the target file is received, if it is determined that target pre-read data of the target file exists in the pre-read queue, the target pre-read data is moved from the pre-read queue into the second-level cache queue, where it is read; after reading is completed, it is moved from the second-level cache queue into the reset queue, and the pre-read queue has the lowest invalidation priority. A multi-level cache is set up according to the extent to which the data may still be read, which protects the validity of the pre-read data and improves overall pre-read efficiency and read performance.
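The queue transitions described above can be sketched in Python. This is a minimal illustration with invented names (`ReadaheadCache`, `prefetch`, `read`); the patent does not specify an implementation:

```python
from collections import OrderedDict

class ReadaheadCache:
    """Minimal sketch of the three-queue flow: readahead -> L2 cache -> reset."""

    def __init__(self):
        self.readahead = OrderedDict()  # prefetched data, never read
        self.l2 = OrderedDict()         # data currently being read
        self.reset = OrderedDict()      # data whose read has completed

    def prefetch(self, file_id, data):
        # Pre-read data for a file is first stored in the readahead queue.
        self.readahead[file_id] = data

    def read(self, file_id):
        # A read hit on the readahead queue promotes the data to the L2 queue.
        if file_id in self.readahead:
            self.l2[file_id] = self.readahead.pop(file_id)
        if file_id not in self.l2:
            return None  # miss: the data would have to come from the storage servers
        data = self.l2[file_id]
        # Once reading completes, the data moves into the reset queue.
        self.reset[file_id] = self.l2.pop(file_id)
        return data

cache = ReadaheadCache()
cache.prefetch("fileA", b"prefetched bytes")
assert cache.read("fileA") == b"prefetched bytes"
assert "fileA" in cache.reset and "fileA" not in cache.readahead
```

Keeping the three stages in separate queues is what lets each queue get its own aging strategy later on.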
- FIG. 1 is an implementation flowchart of a method for pre-reading data caching in an embodiment of the application
- FIG. 2 is a schematic diagram of a specific process of pre-reading data caching in an embodiment of the application
- FIG. 3 is a schematic structural diagram of a pre-reading data caching device in an embodiment of the application.
- Fig. 4 is a schematic structural diagram of a pre-reading data caching device in an embodiment of the application.
- an implementation flowchart of a pre-reading data caching method provided by an embodiment of this application, the method may include the following steps:
- a large number of files can be stored in the storage system.
- users or other systems have file reading requirements, they can send file reading requests to the storage system.
- whether the target file is a file to be pre-read can be determined from the read IO; if so, the target file can be pre-read to obtain its target pre-read data.
- the target file is any file in the storage system.
- the target pre-reading data can be stored in the pre-reading queue. Multiple pre-reading data can be stored in the pre-reading queue.
- the pre-reading queue can be expressed as a readahead queue.
- the pre-read queue in memory can be checked first to determine whether target pre-read data of the target file exists in it. If it exists, the operation of step S120 can continue. If it does not exist, this indicates that the target file may not have been pre-read before, or that its target pre-read data has already been aged out. When it is determined that no target pre-read data of the target file exists in the pre-read queue, the pre-read data cannot be used, and the relevant data of the target file must be located and read on the storage servers of the storage system.
- after a read instruction for the target file is received and it is determined that target pre-read data of the target file exists in the pre-read queue, the target file has already been pre-read; in this case, the target pre-read data can be moved from the pre-read queue into the second-level cache queue.
- the data stored in the pre-reading queue is the pre-read data obtained after pre-reading the file, and has not been actually read.
- the corresponding data in the pre-reading queue can be moved into the second-level cache queue. In this way, the pre-read data that has not been actually read and the pre-read data to be read can be stored in different queues, and differentiated by different queues, which is convenient for data management.
- after the target pre-read data has been moved from the pre-read queue into the second-level cache queue, it can be read in the second-level cache queue.
- multiple queues are set up to store data at different stages.
- the target pre-read data is read in the second-level cache queue, and if the reading is completed, the target pre-read data can be moved from the second-level cache queue to the reset queue.
- the reset queue can be expressed as a reset queue.
- the pre-reading queue, the second-level cache queue, and the reset queue share the cache space.
- the data in the queues needs to be aged to release cache space, reducing the used cache space and increasing the available cache space.
- the aging strategy based on different queues can be different.
- the invalidation priority of the pre-read queue is the lowest. That is, when the used cache space exceeds the space threshold, the data in the second-level cache queue and/or the reset queue is aged first; only after all the data in those two queues has been aged, if the used cache space still does not meet the set requirement, is the data in the pre-read queue aged. This protects the validity of pre-read data and improves overall pre-read efficiency and read performance.
- after a read instruction for the target file is received, if it is determined that target pre-read data of the target file exists in the pre-read queue, the target pre-read data is moved from the pre-read queue into the second-level cache queue, where it is read; after reading is completed, it is moved from the second-level cache queue into the reset queue, and the pre-read queue has the lowest invalidation priority. A multi-level cache is set up according to the extent to which the data may still be read, which protects the validity of the pre-read data and improves overall pre-read efficiency and read performance.
- the method may further include the following steps:
- the target pre-read data in the pre-reading queue and/or the second-level cache queue is moved into the reset queue.
- the target pre-read data is moved from the pre-read queue into the second-level cache queue and read in the second-level cache queue.
- the target file may be closed, making the reading of the target pre-read data in an unfinished state.
- the target pre-read data may still be stored in the read-ahead queue or the second-level cache queue, or partly in each.
- the target pre-read data in the read-ahead queue and/or the second-level cache queue can be moved into the reset queue.
- the method may further include the following steps:
- when the target pre-read data has been read and a write operation is to be performed on it, the target pre-read data is moved from the reset queue into the write queue.
- the target pre-read data can be written according to actual needs.
- the target pre-read data can be moved from the reset queue into the write queue, where the corresponding write operation is performed. This prevents the target pre-read data from being aged out when the data in the reset queue is aged.
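The move into the write queue can be sketched as follows. Names are illustrative; this is a sketch of the described step, not the patent's implementation:

```python
def move_to_write_queue(reset_queue, write_queue, file_id):
    """After the data has been read, a pending write moves it from the reset
    queue to the write queue so that reset-queue aging cannot evict it."""
    if file_id in reset_queue:
        write_queue[file_id] = reset_queue.pop(file_id)
        return True
    return False

reset_q, write_q = {"fileA": b"data"}, {}
assert move_to_write_queue(reset_q, write_q, "fileA")
assert "fileA" in write_q and "fileA" not in reset_q
```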
- the method may further include the following steps:
- the data in the reset queue, the second-level cache queue, and the pre-read queue is aged in accordance with the preset invalidation priority order.
- the invalidation priority order may be preset for the set of several queues, and the invalidation priority of the pre-reading queue is the lowest.
- a space threshold can be set. When the used cache space exceeds the space threshold, it is considered that the data in the queue needs to be invalidated to release the cache space.
- the space threshold can be set and adjusted according to actual conditions, for example, set to be the same size as the total cache space, or 90% of the total cache space.
- the pre-reading queue, the second-level cache queue, and the reset queue share the cache space.
- the data stored therein will occupy the cache space.
- the available cache space continues to decrease.
- the size of the used cache space can be monitored.
- the data in the reset queue, the second-level cache queue, and the pre-read queue can be aged in accordance with the preset invalidation priority order.
- the trim (eviction of invalid data) principle can be used for the aging.
- the pre-reading queue has the lowest invalidation priority.
- the order of invalidation priority from high to low can be: reset queue, second-level cache queue, and pre-read queue.
- each piece of data in the reset queue can be aged in turn; during the aging of the data in the reset queue, if the used cache space is less than or equal to the space threshold, the aging operation is stopped; otherwise, after all the data in the reset queue has been aged, each piece of data in the second-level cache queue is aged in turn; during the aging of the data in the second-level cache queue, if the used cache space is less than or equal to the space threshold, the aging operation is stopped; otherwise, after all the data in the second-level cache queue has been aged, each piece of data in the pre-read queue is aged in turn; during the aging of the data in the pre-read queue, if the used cache space is less than or equal to the space threshold, the aging operation is stopped.
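The priority-ordered trim loop described above can be sketched as follows. Each queue is assumed to be pre-ordered by its own aging strategy (front = first to evict); the function name and the sizes are invented for illustration:

```python
from collections import OrderedDict

def trim(queues, used, threshold, size_of):
    """Age the queues in invalidation-priority order (reset, then L2, then
    readahead), stopping as soon as used cache space falls to the threshold."""
    for q in queues:                         # highest invalidation priority first
        while q and used > threshold:
            _, item = q.popitem(last=False)  # evict from the front of this queue
            used -= size_of(item)
    return used

# Hypothetical example: four items of 4 units each, 16 units used, threshold 8.
reset = OrderedDict([("a", 4), ("b", 4)])
l2 = OrderedDict([("c", 4)])
readahead = OrderedDict([("d", 4)])
used = trim([reset, l2, readahead], used=16, threshold=8, size_of=lambda s: s)
assert used == 8
assert not reset and "c" in l2 and "d" in readahead  # readahead aged last
```

Note that the readahead queue is only touched after both higher-priority queues have been drained, which is exactly how the lowest invalidation priority protects pre-read data.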
- if the invalidation priority order is: reset queue, second-level cache queue, pre-read queue, then the reset queue has the highest invalidation priority, the second-level cache queue the next, and the pre-read queue the lowest.
- the cache space is continuously released, and the used cache space is constantly updated.
- the used cache space is less than or equal to the space threshold, it indicates that the currently updated used cache space is sufficient, and the aging processing operation can be stopped.
- each piece of data in the second-level cache queue is aged in turn.
- each piece of data in the second-level cache queue may be aged in descending order of storage duration. Data stored for only a short time may be being read or about to be read, and is more likely to be read; preferentially retaining it improves reading efficiency.
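The storage-duration ordering might look like the following sketch; the `stored_at` timestamp field is an assumption, not a name from the patent:

```python
def l2_aging_order(entries):
    """Order second-level-cache entries for aging: the entry with the largest
    storage duration (earliest assumed `stored_at` timestamp) is aged first,
    so recently stored data, which is likelier to be read, is retained."""
    return sorted(entries, key=lambda e: e["stored_at"])

entries = [{"id": "new", "stored_at": 300}, {"id": "old", "stored_at": 100}]
assert [e["id"] for e in l2_aging_order(entries)] == ["old", "new"]
```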
- the cache space is still being released continuously, and the used cache space is constantly updated.
- the used cache space is less than or equal to the space threshold, it indicates that the currently updated used cache space is sufficient, and the aging processing operation can be stopped.
- each piece of data in the pre-read queue is aged in turn.
- each piece of data in the pre-read queue may be aged in order of popularity from low to high, so as to preferentially retain hot data.
- the cache space is still being released continuously, and the used cache space is constantly updated.
- the popularity of data can be determined based on the number of times the data has been accessed, the distance between the accessed time and the current time, and so on.
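One possible heat score consistent with this description; the exact formula is an assumption, since the patent only says that access count and recency of access may be used:

```python
def heat(access_count, seconds_since_access, half_life=300.0):
    """Assumed heat score: more accesses raise the score, and the score
    halves for every `half_life` seconds since the last access."""
    return access_count * 0.5 ** (seconds_since_access / half_life)

assert heat(10, 0) == 10.0          # just accessed: full weight
assert heat(10, 300) == 5.0         # one half-life old: half weight
assert heat(4, 0) > heat(2, 0)      # more accesses -> hotter
```

Queues that age by heat would evict entries in ascending order of this score.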
- Fig. 2 is a schematic diagram of a specific implementation process of an embodiment of the application.
- the target pre-read data obtained by pre-reading the target file is stored in the pre-reading queue.
- the target pre-reading data is moved from the pre-reading queue to the secondary cache queue. That is, when the read service hits the pre-reading queue, the target pre-read data is moved into the second-level cache queue.
- the target pre-read data is moved into the reset queue.
- the target pre-reading data in the pre-reading queue and the second-level cache queue are moved into the reset queue.
- when the used cache space exceeds the space threshold, the trim principle is used to age the data in the queues.
- the data in the reset queue is aged first, according to the heat-based aging strategy.
- if the space threshold is still exceeded, the data in the second-level cache queue is then aged according to the time-based aging strategy.
- if the threshold is still exceeded after that, the data in the pre-read queue is finally aged according to the heat-based aging strategy.
- the target pre-read data of the target file is in the reset queue, the target pre-read data in the reset queue is moved into the pre-read queue.
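This reset-queue hit path can be sketched as follows (illustrative names, per the FIG. 2 flow):

```python
def on_read_hit_reset(file_id, reset_queue, readahead_queue):
    """If a later read finds the target file's pre-read data in the reset
    queue, move it back into the readahead queue (sketch of the FIG. 2 flow)."""
    if file_id in reset_queue:
        readahead_queue[file_id] = reset_queue.pop(file_id)
        return True
    return False

reset_q, ra_q = {"fileA": b"data"}, {}
assert on_read_hit_reset("fileA", reset_q, ra_q)
assert "fileA" in ra_q and "fileA" not in reset_q
```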
- the storage system applied in the embodiment of this application may be a distributed storage file system.
- for pre-read data, a three-level cache mechanism is set up according to the extent to which the data can be read: pre-read data that has not been read is placed in the pre-read queue.
- pre-read data being read is stored in the second-level cache queue, and pre-read data that has been read is stored in the reset queue.
- the pre-read data that has not been read is more likely to be read than the written data.
- setting the pre-read queue to the lowest invalidation priority protects pre-read data and improves adaptability to pre-read scenarios, especially mixed read-write business scenarios, improving read and pre-read performance.
- an embodiment of the present application also provides a read-ahead data caching apparatus; the apparatus described below and the read-ahead data caching method described above may be cross-referenced.
- the device may include the following modules:
- the read instruction receiving module 310, configured to receive a read instruction for the target file;
- the first data moving module 320, configured to move the target pre-read data from the pre-read queue into the second-level cache queue if it is determined that target pre-read data of the target file exists in the pre-read queue;
- the data reading module 330, configured to read the target pre-read data in the second-level cache queue;
- the second data moving module 340, configured to move the target pre-read data from the second-level cache queue into the reset queue after reading is completed;
- the pre-read queue has the lowest invalidation priority.
- after a read instruction for the target file is received, if it is determined that target pre-read data of the target file exists in the pre-read queue, the target pre-read data is moved from the pre-read queue into the second-level cache queue, where it is read; after reading is completed, it is moved from the second-level cache queue into the reset queue, and the pre-read queue has the lowest invalidation priority. A multi-level cache is set up according to the extent to which the data may still be read, which protects the validity of the pre-read data and improves overall pre-read efficiency and read performance.
- the apparatus may further include a third data moving module, configured to:
- after a read instruction for the target file is received, if the target file is detected to have been closed before reading completes, move the target pre-read data in the pre-read queue and/or the second-level cache queue into the reset queue.
- a fourth data moving module, configured to:
- a data aging module for:
- the data in the reset queue, the second-level cache queue, and the pre-read queue is aged in accordance with the preset invalidation priority order.
- the invalidation priority order from high to low is: reset queue, second-level cache queue, and pre-read queue.
- the data aging processing module is used to:
- the aging process operation is stopped;
- the aging process operation is stopped.
- the data aging processing module is used to:
- each data in the second-level cache queue is aged in sequence
- each data in the pre-reading queue is subjected to aging processing in turn.
- an embodiment of the present application also provides a pre-reading data caching device, including:
- a memory, configured to store a computer program;
- the processor is used to implement the steps of the pre-reading data caching method when the computer program is executed.
- the pre-reading data caching device may include a processor 10, a memory 11, a communication interface 12 and a communication bus 13.
- the processor 10, the memory 11, and the communication interface 12 all communicate with each other through the communication bus 13.
- the processor 10 may be a central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit, a digital signal processor, a field programmable gate array, or other programmable logic devices.
- the processor 10 can call a program stored in the memory 11, and specifically, the processor 10 can perform operations in the embodiment of the pre-reading data caching method.
- the memory 11 is used to store one or more programs, the programs may include program codes, and the program codes include computer operation instructions.
- the memory 11 stores at least programs for implementing the following functions:
- the pre-reading queue has the lowest invalidation priority.
- the memory 11 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as the file reading function and the queue storage function), and the data storage area may store data created during use, such as priority data and read state data.
- the memory 11 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device or other volatile solid-state storage devices.
- the communication interface 12 may be an interface of a communication module for connecting with other devices or systems.
- the structure shown in FIG. 4 does not constitute a limitation on the pre-reading data caching device in the embodiment of the present application.
- the read-ahead data caching device may include more or fewer components than shown in FIG. 4, or combine certain components.
- the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above read-ahead data caching method.
- the steps of the method or algorithm described in combination with the embodiments disclosed herein can be directly implemented by hardware, a software module executed by a processor, or a combination of the two.
- the software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
A read-ahead data caching method, apparatus, device, and storage medium. The method includes the following steps: receiving a read instruction for a target file; if it is determined that target pre-read data of the target file exists in a pre-read queue, moving the target pre-read data from the pre-read queue into a second-level cache queue; reading the target pre-read data in the second-level cache queue; after reading is completed, moving the target pre-read data from the second-level cache queue into a reset queue; wherein the invalidation priority of the pre-read queue is the lowest. Applying the technical solution provided by the embodiments of this application can protect the validity of pre-read data and improve overall pre-read efficiency and read performance.
Description
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on May 29, 2020, with application number 202010479026.4 and the invention title "Read-ahead data caching method, apparatus, device and storage medium", the entire contents of which are incorporated herein by reference.
This application relates to the field of computer application technology, and in particular to a read-ahead data caching method, apparatus, device, and storage medium.
With the rapid development of computer technology, storage systems have developed steadily and are applied more and more widely across industries. A storage system has the advantage of storing a large number of files, but it also suffers from slow file reading and loading. When reading a file in a storage system, the storage server holding the file must first be located, and the corresponding data then fetched from that server. If data is fetched only when an actual file read request arrives, then, limited by bandwidth, disk performance, and similar factors, data loading may be slow and read latency long.
To address this, file pre-reading is usually used to solve slow data loading and long read latency. After a file is pre-read, the pre-read data is loaded into memory, and subsequent read or write operations on the file then load data faster with lower read latency.
In the related art, pre-read data, written data, and read data share a single cache queue, and the data in the entire cache queue is aged by heat. Handled this way, pre-read data that has not yet been accessed is aged out first. In fact, however, pre-read data is often more likely to be accessed again later than written or already-read data. If pre-read data is always aged out first, it cannot be used when the corresponding file read demand arrives, which slows data loading, lengthens read latency, and hurts system performance.
Summary of the Invention
The purpose of this application is to provide a read-ahead data caching method, apparatus, device, and storage medium, so as to protect the validity of pre-read data and improve system performance.
To solve the above technical problem, this application provides the following technical solutions:
A read-ahead data caching method, including:
receiving a read instruction for a target file;
if it is determined that target pre-read data of the target file exists in a pre-read queue, moving the target pre-read data from the pre-read queue into a second-level cache queue;
reading the target pre-read data in the second-level cache queue;
after reading is completed, moving the target pre-read data from the second-level cache queue into a reset queue;
wherein the invalidation priority of the pre-read queue is the lowest.
In a specific embodiment of this application, after receiving the read instruction for the target file, the method further includes:
if reading has not completed and the target file is detected to have been closed, moving the target pre-read data in the pre-read queue and/or the second-level cache queue into the reset queue.
In a specific embodiment of this application, after moving the target pre-read data from the second-level cache queue into the reset queue, the method further includes:
when the target pre-read data has been read and a write operation is to be performed on it, moving the target pre-read data from the reset queue into a write queue.
In a specific embodiment of this application, the method further includes:
when it is detected that the used cache space exceeds a set space threshold, aging the data in the reset queue, the second-level cache queue, and the pre-read queue according to a preset invalidation priority order.
In a specific embodiment of this application, the invalidation priority order, from high to low, is: the reset queue, the second-level cache queue, the pre-read queue.
In a specific embodiment of this application, aging the data in the reset queue, the second-level cache queue, and the pre-read queue according to the preset invalidation priority order includes:
aging each piece of data in the reset queue in turn;
during the aging of the data in the reset queue, stopping the aging operation if the used cache space is less than or equal to the space threshold;
otherwise, after all the data in the reset queue has been aged, aging each piece of data in the second-level cache queue in turn;
during the aging of the data in the second-level cache queue, stopping the aging operation if the used cache space is less than or equal to the space threshold;
otherwise, after all the data in the second-level cache queue has been aged, aging each piece of data in the pre-read queue in turn;
during the aging of the data in the pre-read queue, stopping the aging operation if the used cache space is less than or equal to the space threshold.
In a specific embodiment of this application,
aging each piece of data in the reset queue in turn includes:
aging each piece of data in the reset queue in turn, in order of heat from low to high;
and/or,
aging each piece of data in the second-level cache queue in turn includes:
aging each piece of data in the second-level cache queue in turn, in descending order of storage duration;
and/or,
aging each piece of data in the pre-read queue in turn includes:
aging each piece of data in the pre-read queue in turn, in order of heat from low to high.
A read-ahead data caching apparatus, including:
a read instruction receiving module, configured to receive a read instruction for a target file;
a first data moving module, configured to move the target pre-read data from the pre-read queue into a second-level cache queue if it is determined that target pre-read data of the target file exists in the pre-read queue;
a data reading module, configured to read the target pre-read data in the second-level cache queue;
a second data moving module, configured to move the target pre-read data from the second-level cache queue into a reset queue after reading is completed;
wherein the invalidation priority of the pre-read queue is the lowest.
A read-ahead data caching device, including:
a memory, configured to store a computer program;
a processor, configured to implement the steps of any one of the above read-ahead data caching methods when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any one of the above read-ahead data caching methods.
Applying the technical solution provided by the embodiments of this application, after a read instruction for a target file is received, if it is determined that target pre-read data of the target file exists in the pre-read queue, the target pre-read data is moved from the pre-read queue into the second-level cache queue, where it is read; after reading is completed, the target pre-read data is moved from the second-level cache queue into the reset queue, and the pre-read queue has the lowest invalidation priority. A multi-level cache is set up according to the extent to which the data may still be read, protecting the validity of pre-read data and improving overall pre-read efficiency and read performance.
To explain the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is an implementation flowchart of a read-ahead data caching method in an embodiment of this application;
FIG. 2 is a schematic diagram of a specific read-ahead data caching process in an embodiment of this application;
FIG. 3 is a schematic structural diagram of a read-ahead data caching apparatus in an embodiment of this application;
FIG. 4 is a schematic structural diagram of a read-ahead data caching device in an embodiment of this application.
To help those skilled in the art better understand the solution of this application, this application is further described in detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Referring to FIG. 1, an implementation flowchart of a read-ahead data caching method provided by an embodiment of this application, the method may include the following steps:
S110: Receive a read instruction for a target file.
A storage system can store a large number of files. When users or other systems need to read a file, they can send a file read request to the storage system.
In practical applications, whether the target file is a file to be pre-read can be determined from the read IO; if so, the target file can be pre-read to obtain the target pre-read data of the target file. The target file is any file in the storage system. After the target pre-read data is obtained, it can be stored in the pre-read queue. Multiple pieces of pre-read data can be stored in the pre-read queue. The pre-read queue may be denoted the readahead queue.
When a read instruction for the target file is received, the pre-read queue in memory can be checked first to determine whether target pre-read data of the target file exists in it. If it exists, the operation of step S120 can continue. If it does not exist, this indicates that the target file may not have been pre-read before, or that its target pre-read data has already been aged out. When it is determined that no target pre-read data of the target file exists in the pre-read queue, the pre-read data cannot be used, and the relevant data of the target file must be located and read on the storage servers of the storage system.
S120: If it is determined that target pre-read data of the target file exists in the pre-read queue, move the target pre-read data from the pre-read queue into the second-level cache queue.
When a read instruction for the target file has been received and it is determined that target pre-read data of the target file exists in the pre-read queue, the target file has already been pre-read; in this case, the target pre-read data can be moved from the pre-read queue into the second-level cache queue.
In the embodiments of this application, the data stored in the pre-read queue is pre-read data obtained by pre-reading files, and has not yet actually been read. When a read operation hits the pre-read queue, the corresponding data in the pre-read queue can be moved into the second-level cache queue. In this way, pre-read data that has not actually been read and pre-read data about to be read are stored in different queues and distinguished by queue, which is convenient for data management.
In the second-level cache queue, heat is not updated on each triggered read; the queue as a whole is aged by time.
S130: read the target read-ahead data from the L2 cache queue.
After the target read-ahead data has been moved from the read-ahead queue into the L2 cache queue, it can be read there.
S140: after the read completes, move the target read-ahead data from the L2 cache queue into the reset queue.
The embodiments of the present application use multiple queues to store data at its different stages. When the read of the target read-ahead data in the L2 cache queue completes, the target read-ahead data can be moved from the L2 cache queue into the reset queue. The reset queue may be denoted reset.
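Steps S120 to S140 together describe a promote-on-read lifecycle across the three queues. A minimal sketch, assuming each queue can be modeled as a dict keyed by file identifier (the class and method names are illustrative, not the disclosed implementation):

```python
class ReadAheadCache:
    """Three-queue lifecycle: readahead -> L2 -> reset (steps S120-S140)."""

    def __init__(self):
        self.readahead = {}  # prefetched data, never actually read
        self.l2 = {}         # hit by a read request, being read
        self.reset = {}      # read completed

    def on_read_hit(self, file_id):
        # S120: a read hits the read-ahead queue; promote into L2.
        self.l2[file_id] = self.readahead.pop(file_id)

    def read(self, file_id):
        # S130: serve the read from the L2 cache queue.
        return self.l2[file_id]

    def on_read_done(self, file_id):
        # S140: the read completed; move the data into the reset queue.
        self.reset[file_id] = self.l2.pop(file_id)

cache = ReadAheadCache()
cache.readahead["f"] = b"data"
cache.on_read_hit("f")
assert cache.read("f") == b"data"
cache.on_read_done("f")
assert "f" in cache.reset and not cache.l2 and not cache.readahead
```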
The read-ahead queue, the L2 cache queue, and the reset queue share the cache space. When the used cache space exceeds a configured space threshold, data in the queues must be aged out to free cache space, reducing the used cache space and increasing the available cache space. Different queues may follow different aging policies. In the embodiments of the present application, the read-ahead queue has the lowest invalidation priority: when the used cache space exceeds the space threshold, the data in the L2 cache queue and/or the reset queue is aged first, and the data in the read-ahead queue is aged only if the used cache space still fails to meet the configured requirement after all the data in the L2 cache queue and the reset queue has been aged. This protects the validity of the read-ahead data and improves overall read-ahead efficiency and read performance.
With the method provided by the embodiments of the present application, after a read instruction for a target file is received, if the read-ahead queue is determined to contain the target read-ahead data of the target file, the target read-ahead data is moved from the read-ahead queue into the L2 cache queue and read there; once the read completes, it is moved from the L2 cache queue into the reset queue, and the read-ahead queue has the lowest invalidation priority. Setting up multiple cache levels according to how likely data is still to be read protects the validity of the read-ahead data and improves overall read-ahead efficiency and read performance.
In an embodiment of the present application, after step S110 of receiving the read instruction for the target file, the method may further include the following step:
if the read has not completed and the target file is detected to have been closed, moving the target read-ahead data in the read-ahead queue and/or the L2 cache queue into the reset queue.
In the embodiments of the present application, after the read instruction for the target file is received, the target read-ahead data is moved from the read-ahead queue into the L2 cache queue if the read-ahead queue is determined to contain it, and is then read in the L2 cache queue. During any of these stages the target file may be closed, leaving the read of the target read-ahead data incomplete. In that case, the target read-ahead data may still be stored in the read-ahead queue or the L2 cache queue, or partly in each, and the target read-ahead data in the read-ahead queue and/or the L2 cache queue can be moved into the reset queue.
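This close-before-completion rule can be sketched on the same dict-per-queue model (an illustrative assumption, not the disclosed implementation):

```python
def on_file_close(file_id, readahead, l2, reset, read_done):
    """If the target file closes before its read completes, move any of its
    read-ahead data still in the readahead and/or L2 queues into reset."""
    if read_done:
        return  # nothing to do: step S140 already moved the data
    for queue in (readahead, l2):
        if file_id in queue:
            reset[file_id] = queue.pop(file_id)

readahead, l2, reset = {"a": b"part1"}, {"b": b"part2"}, {}
on_file_close("a", readahead, l2, reset, read_done=False)
assert reset == {"a": b"part1"} and "a" not in readahead
```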
In an embodiment of the present application, after step S140 of moving the target read-ahead data from the L2 cache queue into the reset queue, the method may further include the following step:
if the target read-ahead data has been read and a write operation is to be performed on it, moving the target read-ahead data from the reset queue into a write queue.
In the embodiments of the present application, a write operation may be performed on the target read-ahead data as actually required. When the target read-ahead data has been read and a write operation is to be performed on it, the target read-ahead data can be moved from the reset queue into the write queue, where the write operation is carried out. This prevents the target read-ahead data from being aged out while the data in the reset queue is being aged.
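The move into the write queue can be sketched the same way (the `write_queue` dict is an assumed stand-in; the disclosure only names the queue):

```python
def on_write(file_id, reset, write_queue):
    """Before overwriting read-ahead data that has been read, move it from
    the reset queue into the write queue so that reset-queue aging cannot
    evict it mid-write."""
    if file_id in reset:
        write_queue[file_id] = reset.pop(file_id)

reset, writes = {"f": b"data"}, {}
on_write("f", reset, writes)
assert writes == {"f": b"data"} and not reset
```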
In an embodiment of the present application, the method may further include the following step:
when the used cache space is detected to exceed the configured space threshold, aging the data in the reset queue, the L2 cache queue, and the read-ahead queue according to a preset invalidation-priority order.
In the embodiments of the present application, an invalidation-priority order can be preset for the configured queues, with the read-ahead queue having the lowest invalidation priority.
A space threshold can also be configured. When the used cache space exceeds this threshold, it is considered necessary to invalidate data in the queues to free cache space. The space threshold can be set and adjusted as circumstances require, for example to the size of the total cache space, or to 90% of the total cache space.
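The threshold itself is a single configurable comparison; for example, using the 90% figure mentioned above (the byte values are arbitrary examples):

```python
TOTAL_CACHE = 1024 * 1024 * 1024          # total shared cache space (example: 1 GiB)
SPACE_THRESHOLD = int(TOTAL_CACHE * 0.9)  # e.g. 90% of total, per the text

def needs_aging(used_cache):
    # Aging is triggered once used cache space exceeds the threshold.
    return used_cache > SPACE_THRESHOLD

assert not needs_aging(SPACE_THRESHOLD)
assert needs_aging(SPACE_THRESHOLD + 1)
```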
The read-ahead queue, the L2 cache queue, and the reset queue share the cache space, and the data they hold occupies it; as more and more data is stored, the used cache space keeps growing. The size of the used cache space can be monitored. When the used cache space is detected to exceed the configured space threshold, the data in the reset queue, the L2 cache queue, and the read-ahead queue can be aged according to the preset invalidation-priority order. Specifically, the aging can follow a trim (eviction-on-invalidation) principle.
The read-ahead queue has the lowest invalidation priority. Specifically, the invalidation-priority order from high to low can be: the reset queue, the L2 cache queue, the read-ahead queue.
In a specific implementation of the present application, each piece of data in the reset queue can be aged in turn; during the aging of the data in the reset queue, the aging operation stops once the used cache space is less than or equal to the space threshold. Otherwise, after all the data in the reset queue has been aged, each piece of data in the L2 cache queue is aged in turn; during the aging of the data in the L2 cache queue, the aging operation stops once the used cache space is less than or equal to the space threshold. Otherwise, after all the data in the L2 cache queue has been aged, each piece of data in the read-ahead queue is aged in turn; during the aging of the data in the read-ahead queue, the aging operation stops once the used cache space is less than or equal to the space threshold.
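This three-stage aging can be sketched as one loop over the queues in invalidation-priority order, assuming each queue is modeled as a list of entry sizes already sorted in that queue's own aging order (heat ascending for the reset and read-ahead queues, residence time descending for the L2 cache queue):

```python
def trim(queues_in_priority_order, used, threshold):
    """Age queue after queue (reset, then L2 cache, then read-ahead) until
    used cache space drops to the threshold or below.

    Each queue is a list of entry sizes, pre-sorted in that queue's own
    aging order; aged entries are evicted from the front.
    """
    for queue in queues_in_priority_order:
        while queue:
            if used <= threshold:   # enough space freed: stop early
                return used
            used -= queue.pop(0)    # age out the next entry
    return used

reset, l2, readahead = [40, 10], [30], [20, 20]
used = trim([reset, l2, readahead], used=120, threshold=50)
assert used == 40             # evicted 40 + 10 from reset, then 30 from L2
assert readahead == [20, 20]  # lowest-priority read-ahead queue untouched
```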
An invalidation-priority order of reset queue, then L2 cache queue, then read-ahead queue means that the reset queue has the highest invalidation priority, the L2 cache queue the next highest, and the read-ahead queue the lowest.
First, each piece of data in the reset queue is aged in turn; specifically, in ascending order of heat, so that hotter data is preferentially retained. As aging proceeds, cache space is continually freed and the used cache space is continually updated.
During this process, if the used cache space becomes less than or equal to the space threshold, the updated used cache space is sufficient and the aging operation can stop.
Otherwise, after all the data in the reset queue has been aged, each piece of data in the L2 cache queue is aged in turn; specifically, in descending order of residence time. Data with a short residence time may be being read or about to be read and is therefore more likely to be accessed; preferentially retaining it improves read efficiency. As aging proceeds, cache space continues to be freed and the used cache space continues to be updated.
During this process, if the used cache space becomes less than or equal to the space threshold, the updated used cache space is sufficient and the aging operation can stop.
Otherwise, after all the data in the L2 cache queue has been aged, each piece of data in the read-ahead queue is aged in turn; specifically, in ascending order of heat, so that hotter data is preferentially retained. As aging proceeds, cache space continues to be freed and the used cache space continues to be updated.
During this process, once the used cache space is less than or equal to the space threshold, the aging operation stops.
In practice, the heat of a piece of data can be determined from, for example, the number of times it has been accessed and how far its last access time lies from the current time.
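The disclosure leaves the heat metric open; one illustrative formula (an assumption, not part of the patent) combines the access count with an exponential recency decay:

```python
import time

def heat(access_count, last_access_ts, now=None, half_life=300.0):
    """Illustrative heat score: more accesses and a more recent last access
    mean hotter data; half_life (seconds) controls how fast heat decays."""
    now = time.time() if now is None else now
    age = max(0.0, now - last_access_ts)
    return access_count * 0.5 ** (age / half_life)

# Recently accessed data outscores stale data; frequent beats rare.
assert heat(4, last_access_ts=0.0, now=0.0) == 4.0
assert heat(4, last_access_ts=0.0, now=300.0) == 2.0  # one half-life later
assert heat(4, 0.0, now=0.0) > heat(1, 0.0, now=0.0)
```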
Fig. 2 is a schematic diagram of a specific implementation process of an embodiment of the present application.
When it is determined that the target file should be read ahead, the target read-ahead data obtained by reading the target file ahead is stored in the read-ahead queue. When a read instruction for the target file is received, if the read-ahead queue is determined to contain the target read-ahead data, the target read-ahead data is moved from the read-ahead queue into the L2 cache queue; that is, when a read request hits the read-ahead queue, the target read-ahead data is moved into the L2 cache queue. When the read request finishes reading the target read-ahead data in the L2 cache queue, the target read-ahead data is moved into the reset queue. If the read has not completed and the target file is detected to have been closed, the target read-ahead data in the read-ahead queue and the L2 cache queue is moved into the reset queue. When the used cache space exceeds the space threshold, the data in the queues is aged using the trim principle: first the data in the reset queue is aged under a heat-based aging policy; if the space threshold is still exceeded, the data in the L2 cache queue is aged under a time-based aging policy; and if the threshold is still exceeded, finally the data in the read-ahead queue is aged under a heat-based aging policy.
In addition, when it is determined that the target file should be read ahead, if the target read-ahead data of the target file is in the reset queue, the target read-ahead data in the reset queue is moved into the read-ahead queue.
The storage system to which the embodiments of the present application apply may specifically be a distributed file storage system. For read-ahead data, a three-level cache mechanism is set up according to how likely the data is still to be read: read-ahead data that has not been read is stored in the read-ahead queue, read-ahead data being read in the L2 cache queue, and read-ahead data whose read has completed in the reset queue. Since read-ahead data that has not yet been read is usually more likely to be read than data whose write has completed or that has already been read, giving the read-ahead queue the lowest invalidation priority protects the read-ahead data and strengthens adaptability to read-ahead scenarios, especially mixed read-write workloads, improving read and read-ahead performance.
Corresponding to the above method embodiments, an embodiment of the present application further provides a read-ahead data caching apparatus; the read-ahead data caching apparatus described below and the read-ahead data caching method described above may be cross-referenced.
Referring to Fig. 3, the apparatus may include the following modules:
a read-instruction receiving module 310, configured to receive a read instruction for a target file;
a first data-moving module 320, configured to move target read-ahead data of the target file from the read-ahead queue into the L2 cache queue if it is determined that the read-ahead queue contains the target read-ahead data;
a data reading module 330, configured to read the target read-ahead data from the L2 cache queue;
a second data-moving module 340, configured to move the target read-ahead data from the L2 cache queue into the reset queue after the read completes;
wherein the read-ahead queue has the lowest invalidation priority.
With the apparatus provided by the embodiments of the present application, after a read instruction for a target file is received, if the read-ahead queue is determined to contain the target read-ahead data of the target file, the target read-ahead data is moved from the read-ahead queue into the L2 cache queue and read there; once the read completes, it is moved from the L2 cache queue into the reset queue, and the read-ahead queue has the lowest invalidation priority. Setting up multiple cache levels according to how likely data is still to be read protects the validity of the read-ahead data and improves overall read-ahead efficiency and read performance.
In a specific implementation of the present application, the apparatus further includes a third data-moving module, configured to:
after the read instruction for the target file is received, if the read has not completed and the target file is detected to have been closed, move the target read-ahead data in the read-ahead queue and/or the L2 cache queue into the reset queue.
In a specific implementation of the present application, the apparatus further includes a fourth data-moving module, configured to:
after the target read-ahead data is moved from the L2 cache queue into the reset queue, if the target read-ahead data has been read and a write operation is to be performed on it, move the target read-ahead data from the reset queue into the write queue.
In a specific implementation of the present application, the apparatus further includes a data aging module, configured to:
when the used cache space is detected to exceed the configured space threshold, age the data in the reset queue, the L2 cache queue, and the read-ahead queue according to the preset invalidation-priority order.
In a specific implementation of the present application, the invalidation-priority order from high to low is: the reset queue, the L2 cache queue, the read-ahead queue.
In a specific implementation of the present application, the data aging module is configured to:
age each piece of data in the reset queue in turn;
during the aging of the data in the reset queue, stop the aging operation once the used cache space is less than or equal to the space threshold;
otherwise, after all the data in the reset queue has been aged, age each piece of data in the L2 cache queue in turn;
during the aging of the data in the L2 cache queue, stop the aging operation once the used cache space is less than or equal to the space threshold;
otherwise, after all the data in the L2 cache queue has been aged, age each piece of data in the read-ahead queue in turn;
during the aging of the data in the read-ahead queue, stop the aging operation once the used cache space is less than or equal to the space threshold.
In a specific implementation of the present application, the data aging module is configured to:
age each piece of data in the reset queue in ascending order of heat;
and/or,
age each piece of data in the L2 cache queue in descending order of residence time;
and/or,
age each piece of data in the read-ahead queue in ascending order of heat.
Corresponding to the above method embodiments, an embodiment of the present application further provides a read-ahead data caching device, comprising:
a memory for storing a computer program;
a processor that, when executing the computer program, implements the steps of the read-ahead data caching method described above.
Fig. 4 is a schematic diagram of the composition of the read-ahead data caching device, which may include a processor 10, a memory 11, a communication interface 12, and a communication bus 13. The processor 10, the memory 11, and the communication interface 12 all communicate with one another via the communication bus 13.
In the embodiments of the present application, the processor 10 may be a central processing unit (CPU), an application-specific integrated circuit, a digital signal processor, a field-programmable gate array, another programmable logic device, or the like.
The processor 10 can invoke a program stored in the memory 11; specifically, the processor 10 can perform the operations in the embodiments of the read-ahead data caching method.
The memory 11 stores one or more programs, which may include program code comprising computer operation instructions. In the embodiments of the present application, the memory 11 stores at least a program implementing the following functions:
receiving a read instruction for a target file;
if it is determined that the read-ahead queue contains target read-ahead data of the target file, moving the target read-ahead data from the read-ahead queue into the L2 cache queue;
reading the target read-ahead data from the L2 cache queue;
after the read completes, moving the target read-ahead data from the L2 cache queue into the reset queue;
wherein the read-ahead queue has the lowest invalidation priority.
In one possible implementation, the memory 11 may include a program storage area and a data storage area. The program storage area can store the operating system and the application programs required by at least one function (such as a file reading function or a queue storage function); the data storage area can store data created during use, such as priority data and read-state data.
In addition, the memory 11 may include high-speed random-access memory and may also include non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module, used to connect to other devices or systems.
Of course, it should be noted that the structure shown in Fig. 4 does not limit the read-ahead data caching device of the embodiments of the present application; in practice, the read-ahead data caching device may include more or fewer components than shown in Fig. 4, or combine certain components.
Corresponding to the above method embodiments, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the read-ahead data caching method described above.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar the embodiments can be cross-referenced.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the composition and steps of the examples have been described above in general terms of their functions. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random-access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Specific examples have been used herein to explain the principles and implementations of the present application; the description of the above embodiments is intended only to help understand the technical solution of the application and its core idea. It should be pointed out that a person of ordinary skill in the art can make various improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the scope of protection of the claims of the present application.
Claims (10)
- A read-ahead data caching method, characterized by comprising: receiving a read instruction for a target file; if it is determined that a read-ahead queue contains target read-ahead data of the target file, moving the target read-ahead data from the read-ahead queue into an L2 cache queue; reading the target read-ahead data from the L2 cache queue; after the read completes, moving the target read-ahead data from the L2 cache queue into a reset queue; wherein the read-ahead queue has the lowest invalidation priority.
- The method according to claim 1, characterized in that, after the receiving of the read instruction for the target file, the method further comprises: if the read has not completed and the target file is detected to have been closed, moving the target read-ahead data in the read-ahead queue and/or the L2 cache queue into the reset queue.
- The method according to claim 1, characterized in that, after the moving of the target read-ahead data from the L2 cache queue into the reset queue, the method further comprises: if the target read-ahead data has been read and a write operation is to be performed on it, moving the target read-ahead data from the reset queue into a write queue.
- The method according to any one of claims 1 to 3, characterized by further comprising: when the used cache space is detected to exceed a configured space threshold, aging the data in the reset queue, the L2 cache queue, and the read-ahead queue according to a preset invalidation-priority order.
- The method according to claim 4, characterized in that the invalidation-priority order from high to low is: the reset queue, the L2 cache queue, the read-ahead queue.
- The method according to claim 5, characterized in that the aging of the data in the reset queue, the L2 cache queue, and the read-ahead queue according to the preset invalidation-priority order comprises: aging each piece of data in the reset queue in turn; during the aging of the data in the reset queue, stopping the aging operation once the used cache space is less than or equal to the space threshold; otherwise, after all the data in the reset queue has been aged, aging each piece of data in the L2 cache queue in turn; during the aging of the data in the L2 cache queue, stopping the aging operation once the used cache space is less than or equal to the space threshold; otherwise, after all the data in the L2 cache queue has been aged, aging each piece of data in the read-ahead queue in turn; during the aging of the data in the read-ahead queue, stopping the aging operation once the used cache space is less than or equal to the space threshold.
- The method according to claim 6, characterized in that the aging of each piece of data in the reset queue in turn comprises: aging each piece of data in the reset queue in ascending order of heat; and/or, the aging of each piece of data in the L2 cache queue in turn comprises: aging each piece of data in the L2 cache queue in descending order of residence time; and/or, the aging of each piece of data in the read-ahead queue in turn comprises: aging each piece of data in the read-ahead queue in ascending order of heat.
- A read-ahead data caching apparatus, characterized by comprising: a read-instruction receiving module, configured to receive a read instruction for a target file; a first data-moving module, configured to move target read-ahead data of the target file from a read-ahead queue into an L2 cache queue if it is determined that the read-ahead queue contains the target read-ahead data; a data reading module, configured to read the target read-ahead data from the L2 cache queue; a second data-moving module, configured to move the target read-ahead data from the L2 cache queue into a reset queue after the read completes; wherein the read-ahead queue has the lowest invalidation priority.
- A read-ahead data caching device, characterized by comprising: a memory for storing a computer program; and a processor that, when executing the computer program, implements the steps of the read-ahead data caching method according to any one of claims 1 to 7.
- A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, implements the steps of the read-ahead data caching method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/927,822 US11681623B1 (en) | 2020-05-29 | 2021-01-23 | Pre-read data caching method and apparatus, device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010479026.4A CN111723058B (zh) | 2020-05-29 | 2020-05-29 | Pre-read data caching method, apparatus, device and storage medium |
CN202010479026.4 | 2020-05-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021238260A1 (zh) | 2021-12-02 |
Family
ID=72565575
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/073442 WO2021238260A1 (zh) | Pre-read data caching method, apparatus, device and storage medium | 2020-05-29 | 2021-01-23 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11681623B1 (zh) |
CN (1) | CN111723058B (zh) |
WO (1) | WO2021238260A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114625805A (zh) * | 2022-05-16 | 2022-06-14 | 杭州时代银通软件股份有限公司 | Backtesting configuration method, apparatus, device and medium |
CN116795877A (zh) * | 2023-08-23 | 2023-09-22 | 本原数据(北京)信息技术有限公司 | Database read-ahead method and apparatus, computer device and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111723058B (zh) | 2020-05-29 | 2023-07-14 | 广东浪潮大数据研究有限公司 | Pre-read data caching method, apparatus, device and storage medium |
CN114442939B (zh) * | 2021-12-31 | 2023-08-29 | 苏州浪潮智能科技有限公司 | Read-ahead data queue processing method and electronic device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106164875A (zh) * | 2014-04-04 | 2016-11-23 | 高通股份有限公司 | Adaptive cache prefetching based on competing dedicated prefetch policies in dedicated cache sets to reduce cache pollution |
CN109478165A (zh) * | 2016-07-20 | 2019-03-15 | 超威半导体公司 | Selecting cache transfer policy for prefetched data based on cache test regions |
CN111723058A (zh) * | 2020-05-29 | 2020-09-29 | 广东浪潮大数据研究有限公司 | Pre-read data caching method, apparatus, device and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5983324A (en) * | 1996-03-28 | 1999-11-09 | Hitachi, Ltd. | Data prefetch control method for main storage cache for protecting prefetched data from replacement before utilization thereof |
US7523228B2 (en) * | 2006-09-18 | 2009-04-21 | International Business Machines Corporation | Method for performing a direct memory access block move in a direct memory access device |
CN102447610B (zh) * | 2010-10-14 | 2015-05-20 | 中兴通讯股份有限公司 | Method and device for realizing message buffer resource sharing |
US9639466B2 (en) * | 2012-10-30 | 2017-05-02 | Nvidia Corporation | Control mechanism for fine-tuned cache to backing-store synchronization |
CN105468305A (zh) | 2015-12-09 | 2016-04-06 | 浪潮(北京)电子信息产业有限公司 | Data caching method, apparatus and system |
CN109947720A (zh) * | 2019-04-12 | 2019-06-28 | 苏州浪潮智能科技有限公司 | File read-ahead method, apparatus, device and readable storage medium |
Application events:
- 2020-05-29: CN application CN202010479026.4A filed (granted as CN111723058B, status: active)
- 2021-01-23: US application US17/927,822 filed (granted as US11681623B1, status: active)
- 2021-01-23: PCT application PCT/CN2021/073442 filed (published as WO2021238260A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
US20230195629A1 (en) | 2023-06-22 |
US11681623B1 (en) | 2023-06-20 |
CN111723058B (zh) | 2023-07-14 |
CN111723058A (zh) | 2020-09-29 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21811979; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21811979; Country of ref document: EP; Kind code of ref document: A1 |