CN116795877B - Method and device for pre-reading database, computer equipment and storage medium - Google Patents

Method and device for pre-reading database, computer equipment and storage medium

Info

Publication number
CN116795877B
CN116795877B
Authority
CN
China
Prior art keywords
data
read
thread
reading
pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311062428.4A
Other languages
Chinese (zh)
Other versions
CN116795877A (en)
Inventor
赵金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Primitive Data Beijing Information Technology Co ltd
Original Assignee
Primitive Data Beijing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Primitive Data Beijing Information Technology Co ltd filed Critical Primitive Data Beijing Information Technology Co ltd
Priority to CN202311062428.4A priority Critical patent/CN116795877B/en
Publication of CN116795877A publication Critical patent/CN116795877A/en
Application granted granted Critical
Publication of CN116795877B publication Critical patent/CN116795877B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2453Query optimisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Abstract

The embodiment of the application provides a method and a device for pre-reading a database, computer equipment and a storage medium, belonging to the technical field of databases. The method comprises the following steps: obtaining IO thread pool data of a pre-reading thread pool, and determining IO load data according to the average queue depth data, IO average delay data and IO maximum delay data in the IO thread pool data; determining pre-read cache pool capacity data according to the IO load data and a preset cache pool capacity table; determining pre-read IO thread data of the pre-reading thread pool according to the IO load data and the pre-read cache pool capacity data; and receiving query request data, and performing pre-reading processing on the query request data according to the pre-read cache pool capacity data and the pre-read IO thread data to obtain pre-read cache data. The method and the device can solve the problems of serious IO blocking and pre-reading resource waste in the database, and effectively improve the pre-reading capability of the database.

Description

Method and device for pre-reading database, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of database technologies, and in particular, to a method and an apparatus for pre-reading a database, a computer device, and a storage medium.
Background
Database pre-reading (Database Prefetching) is a technique for optimizing database performance: by reading data from disk into memory in advance, it reduces subsequent disk access time and thereby speeds up data retrieval and querying. The read-ahead system of the database consists of a read-ahead Input/Output (IO) thread pool and IO engines. The IO thread pool contains a large number of IO threads responsible for IO reading; when the database pre-reading method in the related art performs pre-reading, these IO threads read data into the database's cache pool in advance in the background. In this way, when the main query thread reads data, the hit probability of the cache pool is improved, disk IO reading time is saved, and overall query performance is improved.
However, the related-art database pre-reading method has some drawbacks. For example, under different IO pressure scenarios, when the active IO threads in the database cannot meet the current demand, serious IO blocking easily occurs; conversely, when many invalid IO threads exist in the database, pre-reading resources in the database are easily wasted, and the pre-reading capability of the database remains low.
Disclosure of Invention
The main purpose of the embodiment of the application is to provide a method and a device for pre-reading a database, computer equipment and a storage medium, which can solve the problems of serious IO blocking, pre-reading resource waste and the like in the database and effectively improve the pre-reading capability of the database.
To achieve the above object, a first aspect of an embodiment of the present application proposes a method for pre-reading a database, the method including:
IO thread pool data of a read-ahead thread pool is obtained, wherein the IO thread pool data comprises average queue depth data, IO average time delay data and IO maximum time delay data;
determining IO load data according to the average queue depth data, the IO average delay data and the IO maximum delay data;
determining the capacity data of a pre-read cache pool according to the IO load data and a preset cache pool capacity table;
according to the IO load data and the capacity data of the pre-read cache pool, pre-read IO thread data of the pre-read thread pool are determined;
and receiving query request data, and performing pre-reading processing on the query request data according to the pre-reading cache pool capacity data and the pre-reading IO thread data to obtain pre-reading cache data, wherein the pre-reading cache data is used for storing the pre-reading data when the query request data is queried.
In some embodiments, the determining the IO load data according to the average queue depth data, the IO average latency data, and the IO maximum latency data includes:
acquiring a preset first load factor, a preset second load factor and a preset third load factor;
determining first load data according to the first load coefficient and the average queue depth data;
determining second load data according to the second load coefficient and the IO average delay data;
determining third load data according to the third load coefficient and the IO maximum time delay data;
and determining the IO load data according to the first load data, the second load data and the third load data.
In some embodiments, the determining the first load data from the first load factor and the average queue depth data comprises:
determining a queue depth function of the average queue depth data;
and performing queue depth calculation on the first load coefficient and the average queue depth data according to the queue depth function to obtain the first load data.
In some embodiments, the obtaining the IO thread pool data of the read-ahead thread pool includes:
Acquiring a thread pool scanning operator of the read-ahead thread pool;
determining a target IO engine according to the thread pool scanning operator;
and acquiring the IO thread pool data from the read-ahead thread pool according to the target IO engine.
In some embodiments, the obtaining the IO thread pool data from the read-ahead thread pool according to the target IO engine includes:
creating a load monitoring thread in the pre-reading thread pool;
and acquiring data of the read-ahead thread pool according to the load monitoring thread and the preset acquisition time data to obtain the IO thread pool data.
In some embodiments, the determining the pre-read cache pool capacity data according to the IO load data and the preset cache pool capacity table includes:
determining pre-reading capacity data according to the IO load data;
performing cache pool capacity matching on the pre-reading capacity data according to the preset cache pool capacity table, and determining cache pool capacity matching data;
and determining the pre-read cache pool capacity data according to the cache pool capacity matching data and preset pre-read maximum cache pool capacity data.
In some embodiments, the determining the read-ahead IO thread data of the read-ahead thread pool according to the IO load data and the read-ahead buffer pool capacity data includes:
acquiring a thread weight coefficient and a cache pool weight coefficient;
performing capacity calculation according to the cache pool weight coefficient and the pre-read cache pool capacity data to obtain target cache pool capacity data;
determining target IO thread data according to the IO load data and the target cache pool capacity data;
and calculating the number of threads according to the target IO thread data and the thread weight coefficient to obtain the pre-read IO thread data.
To achieve the above object, a second aspect of the embodiments of the present application proposes a pre-reading device for a database, the device comprising:
the data acquisition module is used for acquiring IO thread pool data of the read-ahead thread pool, wherein the IO thread pool data comprises average queue depth data, IO average time delay data and IO maximum time delay data;
the load data determining module is used for determining IO load data according to the average queue depth data, the IO average delay data and the IO maximum delay data;
the buffer pool capacity determining module is used for determining the pre-read buffer pool capacity data according to the IO load data and a preset buffer pool capacity table;
the thread data determining module is used for determining read-ahead IO thread data of the read-ahead thread pool according to the IO load data and the read-ahead cache pool capacity data;
The pre-reading processing module is used for receiving query request data, pre-reading the query request data according to the pre-reading cache pool capacity data and the pre-reading IO thread data to obtain pre-reading cache data, and the pre-reading cache data is used for storing the pre-reading data when the query request data is queried.
To achieve the above object, a third aspect of the embodiments of the present application proposes a computer device, including:
at least one memory;
at least one processor;
at least one computer program;
the at least one computer program is stored in the at least one memory, and the at least one processor executes the at least one computer program to implement the method of pre-reading a database as described in the first aspect above.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program for causing a computer to execute the method for pre-reading a database according to the first aspect.
The embodiment of the application provides a method and a device for pre-reading a database, computer equipment and a storage medium. First, IO thread pool data of a pre-reading thread pool is obtained, where the IO thread pool data includes average queue depth data, IO average delay data and IO maximum delay data. IO load data is determined according to the average queue depth data, the IO average delay data and the IO maximum delay data. Then, pre-read cache pool capacity data is determined according to the IO load data and a preset cache pool capacity table, and pre-read IO thread data of the pre-reading thread pool is determined according to the IO load data and the pre-read cache pool capacity data. Thereafter, query request data is received and pre-read according to the pre-read cache pool capacity data and the pre-read IO thread data to obtain pre-read cache data, where the pre-read cache data is used for storing the pre-read data when the query request data is queried. The method and the device can solve the problems of serious IO blocking and pre-reading resource waste in the database, and effectively improve the pre-reading capability of the database.
Drawings
FIG. 1 is a flowchart of a method for pre-reading a database according to an embodiment of the present application;
fig. 2 is a flowchart of step S110 in fig. 1;
fig. 3 is a flowchart of step S230 in fig. 2;
fig. 4 is a flowchart of step S120 in fig. 1;
fig. 5 is a flowchart of step S420 in fig. 4;
fig. 6 is a flowchart of step S130 in fig. 1;
fig. 7 is a flowchart of step S140 in fig. 1;
FIG. 8 is a schematic structural diagram of a pre-reading device of a database according to an embodiment of the present disclosure;
fig. 9 is a schematic hardware structure of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in the device diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the block division in the device or the order in the flowchart. The terms "first", "second" and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several nouns referred to in this application are parsed:
database pre-reading (Database Prefetching): is a technique for optimizing database performance by reading data from disk into memory in advance and storing it in a read-ahead cache pool. Thus, when the actual inquiry or the inquiry request comes, the method can immediately respond, and the subsequent disk access time is reduced, so that the speed of data inquiry and inquiry is increased. In database systems, data is typically stored on disk and read from disk to memory for processing as needed. However, frequent disk read operations may result in longer response times due to slower disk access speeds. To reduce this delay, database pre-reading techniques are introduced.
IO engine of the read-ahead system: a software component or module for managing and performing read-ahead operations. It is responsible for processing read-ahead requests, interacting with the underlying storage device, reading data from disk, and loading the pre-read data into memory for query and operation. Storage engines typically use caching and prefetching techniques to improve data access performance.
Direct IO (DIO): a mode in which an application program accesses disk data directly. DIO does not pass through the kernel buffer, i.e., it bypasses the kernel buffer, and the application manages its own IO buffers.
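For instance, a minimal Linux-only sketch of opening a file for direct IO in Python (the path is hypothetical, and real O_DIRECT reads additionally require block-aligned buffers, e.g. allocated via mmap):

```python
import os

path = "/tmp/dio_example.bin"          # hypothetical file, for illustration only
with open(path, "wb") as f:
    f.write(b"\0" * 4096)              # write one 4K block through the page cache

# O_DIRECT bypasses the kernel page cache: subsequent reads on this descriptor
# go straight to disk, leaving buffer management to the application.
fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
os.close(fd)
```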
Libaio (Linux asynchronous input/output): is a library that implements asynchronous I/O operations on a Linux system. It provides a set of functions and data structures that enable programs to perform file read and write operations in an asynchronous manner.
Sequential scan operator: threads in a thread pool process IO requests one after another in a fixed order, each thread handling one IO request in turn until all requests are processed. This approach guarantees the ordering of requests and suits scenarios that must be processed in request order, such as sequentially reading the contents of a file.
Random scan operator: each thread in the thread pool randomly selects a pending IO request from the task queue, removes it from the queue, and processes it. This approach improves concurrency and suits scenarios in which IO requests are relatively independent and need not be processed in strict order.
The read-ahead system of the database consists of a read-ahead Input/Output (IO) thread pool and IO engines. The IO thread pool contains a large number of IO threads responsible for IO reading; when the database pre-reading method in the related art performs pre-reading, these IO threads read data into the database's cache pool in advance in the background. In this way, when the main query thread reads data, the hit probability of the cache pool is improved, disk IO reading time is saved, and overall query performance is improved.
However, the related-art database pre-reading method has some drawbacks. For example, under different IO pressure scenarios, when the active IO threads in the database cannot meet the current demand, serious IO blocking easily occurs; conversely, when many invalid IO threads exist in the database, pre-reading resources in the database are easily wasted, and the pre-reading capability of the database remains low.
Based on the above, the embodiment of the application provides a method and a device for pre-reading a database, a computer device and a storage medium, which can solve the problems of serious IO blocking, pre-reading resource waste and the like in the database and effectively improve the pre-reading capability of the database.
The method for pre-reading a database provided by the embodiment of the application can be applied to a terminal, a server side, and software running in the terminal or the server side. In some embodiments, the terminal may be a smart phone, a tablet, a notebook, a desktop, etc.; the server side may be configured as an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms; the software may be an application that implements the method for pre-reading a database, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network personal computers (Personal Computer, PCs), minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Referring to fig. 1, fig. 1 is an optional flowchart of a method for pre-reading a database according to an embodiment of the present application, where the method in fig. 1 may specifically include, but is not limited to, steps S110 to S150, and these five steps are described in detail below in conjunction with fig. 1.
Step S110, IO thread pool data of a read-ahead thread pool is obtained, wherein the IO thread pool data comprises average queue depth data, IO average time delay data and IO maximum time delay data;
step S120, determining IO load data according to the average queue depth data, the IO average delay data and the IO maximum delay data;
step S130, according to IO load data and a preset cache pool capacity table, determining the pre-read cache pool capacity data;
step S140, pre-read IO thread data of a pre-read thread pool is determined according to the IO load data and the pre-read cache pool capacity data;
step S150, receiving query request data, and performing pre-reading processing on the query request data according to the pre-reading cache pool capacity data and the pre-reading IO thread data to obtain pre-reading cache data, wherein the pre-reading cache data is used for storing the pre-reading data when the query request data is queried.
It will be appreciated that in steps S110 to S150 of some embodiments, IO thread pool data of the pre-reading thread pool is first obtained, where the IO thread pool data includes average queue depth data, IO average delay data, and IO maximum delay data. IO load data is determined according to the average queue depth data, the IO average delay data, and the IO maximum delay data. Then, the pre-read cache pool capacity data is determined according to the IO load data and the preset cache pool capacity table, and the pre-read IO thread data of the pre-reading thread pool is determined according to the IO load data and the pre-read cache pool capacity data. Thereafter, query request data is received and pre-read according to the pre-read cache pool capacity data and the pre-read IO thread data to obtain pre-read cache data, where the pre-read cache data is used for storing the pre-read data when the query request data is queried. The method and the device can solve the problems of serious IO blocking and pre-reading resource waste in the database, and effectively improve the pre-reading capability of the database.
In step S110 of some embodiments, the read-ahead thread pool is a thread pool in the read-ahead system for processing read-ahead requests. The pre-read system may accelerate subsequent data access operations by reading the data in advance and caching it in memory. The read-ahead thread pool includes a plurality of read-ahead IO threads. The read-ahead IO threads in the thread pool are responsible for reading data from a storage medium (e.g., disk) and caching the data into memory. Therefore, in the subsequent data access, the data can be directly obtained from the memory without performing disk access again, so that the data reading speed is increased. The IO thread pool data refers to related data collected for the read-ahead thread pool.
It should be noted that each pre-read IO thread corresponds to one IO queue. An IO Queue (Input/Output Queue) refers to a Queue used in a computer system to manage Input/Output requests. When a computer system exchanges data with an external device, it is often necessary to perform an input or output operation, such as reading a file from a disk or writing data to a network connection. The queue depth refers to the number of IO read-ahead messages existing in the current IO queue, for example, the IO queue depth is 100, that is, 100 IO read-ahead messages exist in the current IO queue; the current IO queue has 80 IO read messages, then the current queue depth is 80. The average Queue Depth data (Average IO Queue Depth, io_queue_depth) refers to the average number of IO read-ahead messages for all IO queues in the read-ahead thread pool at a certain preset time. For example, the current read-ahead thread pool includes two IO queues, where the first queue has 10 IO read-ahead messages at a preset time and the second queue has 30 IO read-ahead messages at the same preset time, then the average queue depth data is (10+30)/2=20.
Note that, the IO average delay data (Average IO Latency, io_avg_latency) refers to a time delay required for averaging when the computer system performs an input/output operation. The calculation method of the IO average delay data generally adds the delay time of all IO operations and divides the delay time by the total IO operation times. The delay time may be a time for waiting for the device to respond after the IO request is issued, or may be a time from the device to completion of the IO operation, which is not particularly limited herein.
Note that the IO maximum delay data (IO_Max_Latency) refers to the time taken by the longest single IO operation in a set of IO operations; it represents the most time-consuming IO delay in the read-ahead system. By recording the timestamp of each IO operation over a period of time, the longest recorded IO operation can be found, and its duration is the IO maximum delay data.
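For illustration, the three sampled metrics can be gathered as in the following sketch (the names are illustrative, not identifiers from this application); the queue-depth aggregation matches the (10 + 30) / 2 = 20 example above:

```python
from dataclasses import dataclass

@dataclass
class IOThreadPoolStats:
    io_queue_depth: float   # average pending read-ahead messages per IO queue
    io_avg_latency: float   # mean latency over the sampled IO operations
    io_max_latency: float   # slowest single IO operation in the sampling window

def sample_stats(queue_depths, latencies):
    """Aggregate raw per-queue depths and per-IO latencies into pool-level stats."""
    return IOThreadPoolStats(
        io_queue_depth=sum(queue_depths) / len(queue_depths),  # e.g. (10 + 30) / 2 = 20
        io_avg_latency=sum(latencies) / len(latencies),
        io_max_latency=max(latencies),
    )
```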
Referring to fig. 2, fig. 2 is an optional flowchart of step S110 provided in the embodiment of the present application, and in some embodiments of the present application, step S110 specifically includes, but is not limited to, steps S210 to S230, and these three steps are described in detail below in connection with fig. 2.
Step S210, obtaining a thread pool scanning operator of a read-ahead thread pool;
step S220, determining a target IO engine according to the thread pool scanning operator;
step S230, IO thread pool data is obtained from the pre-read thread pool according to the target IO engine.
The read-ahead method of the related art generally adopts a single type of IO engine for IO processing under different service scenarios. For example, for the service scenario of a sequential scan operator, the Linux general-purpose Libaio asynchronous library is typically used for IO processing. Because Libaio must use DIO mode, it cannot take advantage of the file system cache and reads directly from disk, resulting in lower IO access efficiency for the read-ahead system.
In step S210 of some embodiments, in order to improve the overall capability of the pre-reading system, the pre-reading method provided by the embodiments of the present application can support diversification of IO engines, that is, can select an appropriate pre-reading IO engine according to different operator service scenarios. Specifically, a thread pool scanning operator of a current read-ahead thread pool is acquired first. The thread pool scanning operator is used for representing an operator service scene of the current pre-read thread pool, such as a sequential scanning operator, a random scanning operator and the like.
In steps S220 through S230 of some embodiments, a target IO engine is determined from the thread pool scan operator. The target IO engine corresponds to an engine library, where each engine library is a set of implementation functions (such as read, write, and the like). A mapping relation exists between thread pool scan operators and their corresponding target IO engines. For example, when the thread pool scan operator is a sequential scan operator, the corresponding target IO engine is the Batch Read synchronous library engine: under sequential-scan IO, the file system prefetches IO into the kernel cache, and the Batch Read engine can access the kernel-cache data directly, which is more efficient. (The Libaio asynchronous library uses direct IO mode, cannot access kernel-cache data, and can only read from disk, so its IO access is slower in this scenario.) When the thread pool scan operator is a random scan operator, the corresponding target IO engine is the Libaio asynchronous library engine: because the kernel cache hit rate of a random scan operator is low, most IO must be read directly from disk, and since Libaio supports multi-queue IO, its disk reading efficiency is much higher than that of the synchronous engine. Thereafter, IO thread pool data is obtained from the read-ahead thread pool according to the target IO engine for subsequent pre-reading processing.
It should be noted that, the IO engine of the read-ahead system refers to a software component or module for managing and performing the read-ahead operation. It is responsible for processing read-ahead requests, interacting with underlying storage devices, and loading read-ahead data into a read-ahead thread pool.
According to the embodiment of the application, the proper pre-reading IO engine can be selected according to different operator business scenes, and the diversification of the IO engine can be supported, so that the whole capability of the pre-reading system is improved.
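As an illustration of this operator-to-engine mapping, a minimal sketch (the operator and engine names are hypothetical placeholders, not identifiers from this application):

```python
def select_io_engine(scan_operator: str) -> str:
    """Pick a read-ahead IO engine for an operator service scenario.

    Sequential scans hit kernel-cache pages prefetched by the file system, so a
    synchronous batch-read engine is chosen; random scans mostly miss the kernel
    cache and read from disk, where a multi-queue asynchronous engine is faster.
    """
    engine_by_operator = {
        "sequential_scan": "batch_read_sync",  # reads prefetched kernel-cache pages
        "random_scan": "libaio_async",         # direct IO, multi-queue disk reads
    }
    try:
        return engine_by_operator[scan_operator]
    except KeyError:
        raise ValueError(f"no IO engine registered for operator {scan_operator!r}")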
Referring to fig. 3, fig. 3 is an optional flowchart of step S230 provided in the embodiment of the present application, and in some embodiments of the present application, step S230 specifically includes, but is not limited to, steps S310 to S320, and these two steps are described in detail below in conjunction with fig. 3.
Step S310, creating a load monitoring thread in the pre-reading thread pool;
step S320, data acquisition is carried out on the pre-read thread pool according to the load monitoring thread and the preset acquisition time data, and IO thread pool data are obtained.
In step S310 of some embodiments, when dynamically adjusting the read-ahead capability of the read-ahead system, the IO thread pool data of the current pre-reading thread pool needs to be acquired first, so that dynamic adjustment can be performed according to the acquired data. Specifically, the embodiment of the present application creates a load monitoring thread responsible for controlling the prefetching capability of the prefetch thread pool, which may be denoted Prefetch_Control_Thread. The Prefetch_Control_Thread is a preloaded thread.
In step S320 of some embodiments, the maximum number of threads and the pre-read maximum buffer pool capacity data may be obtained according to the load monitoring thread. In addition, according to the embodiment of the application, the IO thread pool data of the pre-reading thread pool can be periodically obtained from the load monitoring thread according to the preset acquisition time data, so that the pre-reading capability of the corresponding IO load is determined. For example, the preset acquisition time data is 10 minutes, 30 minutes, 1 hour, or the like, and is not particularly limited herein.
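A minimal sketch of such a periodically sampling monitor, assuming hypothetical pool hooks (`pool.closed`, `pool.collect_stats()`) and using Python's standard threading module:

```python
import threading
import time

def start_load_monitor(pool, interval_seconds, on_sample):
    """Spawn a Prefetch_Control_Thread-style monitor that periodically samples
    the read-ahead thread pool and hands the stats to a callback that will
    recompute the pool's pre-reading capability."""
    def run():
        while not pool.closed:                 # hypothetical shutdown flag
            on_sample(pool.collect_stats())    # hypothetical sampling hook -> IOThreadPoolStats
            time.sleep(interval_seconds)       # e.g. 600, 1800 or 3600 seconds
    t = threading.Thread(target=run, name="Prefetch_Control_Thread", daemon=True)
    t.start()
    return t
```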
In step S120 of some embodiments, after determining the average Queue Depth data io_queue_depth, the IO average delay data io_avg_latency, and the IO maximum delay data io_max_latency, the embodiments of the present application may determine the IO load data according to a preset IO load calculation manner, so as to adjust the pre-reading capability of the pre-reading thread pool in real time according to the current IO load situation.
Referring to fig. 4, fig. 4 is an optional flowchart of step S120 provided in the embodiment of the present application, and in some embodiments of the present application, step S120 specifically includes, but is not limited to, steps S410 to S450, and these five steps are described in detail below in connection with fig. 4.
Step S410, a preset first load factor, a preset second load factor and a preset third load factor are obtained;
Step S420, determining first load data according to the first load coefficient and the average queue depth data;
step S430, determining second load data according to the second load coefficient and the IO average time delay data;
step S440, determining third load data according to the third load coefficient and the IO maximum time delay data;
in step S450, the IO load data is determined according to the first load data, the second load data, and the third load data.
In steps S410 to S450 of some embodiments, the average queue depth data IO_Queue_Depth, the IO average delay data IO_Avg_Latency, and the IO maximum delay data IO_Max_Latency are combined according to preset load calculation functions to determine the IO load data, denoted IO_Load. First load data is determined from the first load coefficient A and IO_Queue_Depth; second load data is determined from the second load coefficient B and IO_Avg_Latency; third load data is determined from the third load coefficient C and IO_Max_Latency. The IO load data is then determined according to the following formula (1):

IO_Load = Load_1 + Load_2 + Load_3 (1)

where Load_1, Load_2, and Load_3 denote the first, second, and third load data, respectively. The first load coefficient A represents the weight of IO_Queue_Depth's influence on IO_Load; the second load coefficient B represents the weight of IO_Avg_Latency's influence on IO_Load; and the third load coefficient C represents the weight of IO_Max_Latency's influence on IO_Load.

Note that the first load coefficient A, the second load coefficient B, and the third load coefficient C may be flexibly adjusted according to actual needs; the sum of A, B, and C may be 1, or integer values may be set according to importance, which is not particularly limited herein.
Referring to fig. 5, fig. 5 is an optional flowchart of step S420 provided in an embodiment of the present application, and in some embodiments of the present application, step S420 specifically includes, but is not limited to, steps S510 to S520, which are described in detail below in conjunction with fig. 5.
Step S510, determining a queue depth function of average queue depth data;
and step S520, performing queue depth calculation on the first load coefficient and the average queue depth data according to the queue depth function to obtain first load data.
In steps S510 and S520 of some embodiments, the queue depth function is chosen according to how strongly IO_Queue_Depth should influence IO_Load. For example, when the influence of IO_Queue_Depth on IO_Load is moderate, the queue depth function may be determined as shown in formula (2) below (in which IO_Queue_Depth is squared), and queue depth calculation is performed on the first load coefficient and the average queue depth data according to this function to obtain the first load data. When the influence of IO_Queue_Depth on IO_Load is higher, the queue depth function may be determined as shown in formula (3) below (in which IO_Queue_Depth is cubed), and the calculation is performed accordingly:

Load_1 = A × IO_Queue_Depth² (2)

Load_1 = A × IO_Queue_Depth³ (3)
It should be noted that, in the embodiment of the present application, the corresponding functions may be determined according to the influences of the average Queue Depth data io_queue_depth, the IO average delay data io_avg_latency, and the IO maximum delay data io_max_latency on the IO load in practice.
Similarly, for the second load data, the average delay function is chosen according to how strongly IO_Avg_Latency should influence IO_Load. For example, when the influence of IO_Avg_Latency on IO_Load is moderate, the average delay function may be determined as shown in formula (4) below, and average delay calculation is performed on the second load coefficient B and IO_Avg_Latency according to this function to obtain the second load data. When the influence of IO_Avg_Latency on IO_Load is higher, the average delay function may be determined as shown in formula (5) below (in which IO_Avg_Latency is squared), and the calculation is performed accordingly:

Load_2 = B × IO_Avg_Latency (4)

Load_2 = B × IO_Avg_Latency² (5)
For the third load data, the maximum delay function is chosen according to how strongly IO_Max_Latency should influence IO_Load. For example, when the influence of IO_Max_Latency on IO_Load is moderate, the maximum delay function may be determined as shown in formula (6) below, and maximum delay calculation is performed on the third load coefficient C and IO_Max_Latency according to this function to obtain the third load data. When the influence of IO_Max_Latency on IO_Load is higher, the maximum delay function may be determined as shown in formula (7) below (in which IO_Max_Latency is squared), and the calculation is performed accordingly:

Load_3 = C × IO_Max_Latency (6)

Load_3 = C × IO_Max_Latency² (7)
Illustratively, in a specific embodiment, the queue depth most directly reflects the current IO backlog, i.e., the current IO load. Therefore, when more weight is placed on the average queue depth data IO_Queue_Depth, the IO load data may be calculated as shown in the following formula (8):

IO_Load = A × IO_Queue_Depth² + B × IO_Avg_Latency + C × IO_Max_Latency (8)
In practical applications, specific definitions of the first load data, the second load data, and the third load data are not limited, and are not described herein.
According to the embodiments of the present application, dynamically adjusting the coefficients and functions corresponding to the average queue depth data, the IO average delay data, and the IO maximum delay data improves the flexibility of the pre-reading capability of the read-ahead system.
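To make this family of load formulas concrete, a small sketch reusing the IOThreadPoolStats sketch above; the exponent parameters are an illustrative way to express the choice among formulas (2) to (7), and the defaults reproduce formula (8):

```python
def io_load(stats, a, b, c, depth_exp=2, avg_exp=1, max_exp=1):
    """Compute IO_Load per formulas (1)-(8): a weighted sum of the three sampled
    metrics, each raised to an exponent expressing how strongly it influences
    the load (1 = linear, 2 = squared, 3 = cubed)."""
    return (a * stats.io_queue_depth ** depth_exp
            + b * stats.io_avg_latency ** avg_exp
            + c * stats.io_max_latency ** max_exp)

# Formula (8), emphasizing queue depth via the squared default:
# io_load(IOThreadPoolStats(20, 200, 500), a=100, b=10, c=4)
#   == 100 * 20**2 + 10 * 200 + 4 * 500 == 44000
```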
In step S130 of some embodiments, after determining the IO load data, the embodiments of the present application may determine corresponding buffer pool capacity data from a preset buffer pool capacity table according to the IO load data.
Referring to fig. 6, fig. 6 is an optional flowchart of step S130 provided in the embodiment of the present application, and in some embodiments of the present application, step S130 specifically includes, but is not limited to, steps S610 to S630, and these three steps are described in detail below in connection with fig. 6.
Step S610, pre-reading capacity data is determined according to IO load data;
step S620, carrying out buffer pool capacity matching on the pre-read capacity data according to a preset buffer pool capacity table, and determining buffer pool capacity matching data;
step S630, the pre-read cache pool capacity data is determined according to the cache pool capacity matching data and the preset pre-read maximum cache pool capacity data.
In step S610 of some embodiments, the pre-reading capability data refers to the pre-reading capability controllable by the current pre-reading thread pool, denoted Prefetch_Capacity. Since the IO load determines the prefetching capability of the current thread pool, the embodiment of the application takes the value of the IO load data as the pre-reading capability data, namely IO_Load = Prefetch_Capacity.
In step S620 of some embodiments, the preset buffer pool capacity table stores buffer pool capacity data corresponding to different ranges, and the buffer pool capacity matching interval corresponding to the pre-reading capacity data is determined according to the maximum value of the pre-reading capacity through table lookup, that is, the buffer pool capacity matching data is determined. The cache pool capacity matching data refers to a matching interval corresponding to the calculated pre-reading capacity data.
Illustratively, the embodiments of the present application measure cache pool capacity in 8-kilobyte (8K) pages and set three intervals. Let the maximum pre-reading capability be Max_Prefetch_Capacity. When Prefetch_Capacity < Max_Prefetch_Capacity/3 (the calculated pre-reading capability data is below one third of the maximum), the corresponding cache pool capacity (number of 8K pages) is the pre-read maximum cache pool capacity data/3. When Max_Prefetch_Capacity/3 < Prefetch_Capacity < Max_Prefetch_Capacity × 2/3 (between one third and two thirds of the maximum), the corresponding cache pool capacity is the pre-read maximum cache pool capacity data × 2/3. When Max_Prefetch_Capacity × 2/3 < Prefetch_Capacity < Max_Prefetch_Capacity (above two thirds of the maximum), the corresponding cache pool capacity is the full pre-read maximum cache pool capacity data.
For example, when the maximum pre-reading capability is 100000 and the calculated pre-reading capability data is 45000, then 100000/3 < 45000 < 100000 × 2/3, and the cache pool capacity matching data is the interval Max_Prefetch_Capacity/3 < Prefetch_Capacity < Max_Prefetch_Capacity × 2/3.
In step S630 of some embodiments, after the cache pool capacity matching data is determined, the pre-read cache pool capacity data, denoted Buff_Pool_Size, is determined according to the cache pool capacity matching data and the preset pre-read maximum cache pool capacity data. For example, when the cache pool capacity matching data indicates that the pre-reading capability data falls in the interval Max_Prefetch_Capacity/3 < Prefetch_Capacity < Max_Prefetch_Capacity × 2/3, and the pre-read maximum cache pool capacity data is 30000, then the pre-read cache pool capacity data = pre-read maximum cache pool capacity data × 2/3, i.e., Buff_Pool_Size = 30000 × 2/3 = 20000.
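A sketch of the three-tier table lookup under the interval settings above (the function name is illustrative):

```python
import math

def prefetch_buffer_pool_pages(prefetch_capacity, max_capacity, max_pool_pages):
    """Look up the read-ahead cache pool size (in 8K pages) from the three-tier
    capacity table: below 1/3 of max capacity -> 1/3 of the maximum pool,
    below 2/3 -> 2/3 of the maximum pool, otherwise the full maximum pool."""
    if prefetch_capacity < max_capacity / 3:
        return math.floor(max_pool_pages / 3)
    if prefetch_capacity < max_capacity * 2 / 3:
        return math.floor(max_pool_pages * 2 / 3)
    return max_pool_pages

# prefetch_buffer_pool_pages(45000, 100000, 30000) -> 20000 pages of 8K each
```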
In step S140 of some embodiments, since the IO load data equals the pre-reading capability data Prefetch_Capacity, Prefetch_Capacity relates the pre-read IO thread data (Thread_Cnt) to the pre-read cache pool capacity data Buff_Pool_Size. Therefore, the pre-read IO thread data Thread_Cnt of the pre-reading thread pool can be determined from the IO load data and the pre-read cache pool capacity data Buff_Pool_Size. The pre-read IO thread data refers to the maximum number of threads in the pre-reading thread pool.
With the pre-read IO thread data and the pre-read cache pool capacity data so set, the read-ahead system can maintain a stable amount of controlled resources and stable IO processing capacity even under heavy load.
Referring to fig. 7, fig. 7 is an optional flowchart of step S140 provided in the embodiment of the present application, and in some embodiments of the present application, step S140 specifically includes, but is not limited to, steps S710 to S740, and these four steps are described in detail below in connection with fig. 7.
Step S710, obtaining a thread weight coefficient and a cache pool weight coefficient;
step S720, performing capacity calculation according to the cache pool weight coefficient and the pre-read cache pool capacity data to obtain target cache pool capacity data;
step S730, determining target IO thread data according to the IO load data and the target cache pool capacity data;
step S740, calculating the number of threads according to the target IO thread data and the thread weight coefficient to obtain the pre-read IO thread data.
In step S710 of some embodiments, the relation between the pre-reading capability data Prefetch_Capacity, the pre-read IO thread data Thread_Cnt, and the pre-read cache pool capacity data Buff_Pool_Size is given by the following formula (9):

Prefetch_Capacity = t × Thread_Cnt + p × Buff_Pool_Size (9)

where t denotes the thread weight coefficient, i.e., the degree to which the pre-read IO thread data Thread_Cnt influences the pre-reading capability data, and p denotes the cache pool weight coefficient, i.e., the degree to which the pre-read cache pool capacity data Buff_Pool_Size influences the pre-reading capability data.

Note that the thread weight coefficient and the cache pool weight coefficient may sum to 1, or may take other values according to actual needs, which is not limited herein. For example, the thread weight coefficient is 0.3 and the cache pool weight coefficient is set to 0.7; alternatively, the thread weight coefficient is 1000 and the cache pool weight coefficient is set to 2.
In step S720 of some embodiments, capacity calculation is performed on the cache pool weight coefficient and the pre-read cache pool capacity data to obtain the target cache pool capacity data. For example, if the cache pool weight coefficient is 2 and the pre-read cache pool capacity data is 20000, the target cache pool capacity data is 2 × 20000 = 40000.
In step S730 of some embodiments, since the IO load data = Prefetch_Capacity, the target IO thread data can be determined by subtracting the target cache pool capacity data from the IO load data, as follows from formula (9). For example, if the IO load data is 44000 and the target cache pool capacity data is 40000, the target IO thread data is 44000 − 40000 = 4000.
In step S740 of some embodiments, since the target thread data is the product of the thread weight coefficient and the pre-read IO thread data, the thread number is calculated according to the target IO thread data and the thread weight coefficient, so as to obtain the pre-read IO thread data. For example, if the target IO thread data is 4000 and the thread weight coefficient is 1000, the pre-read IO thread data=4000/1000=4.
It should be noted that when the computed pre-read IO thread data is fractional, its integer part plus 1 (i.e., the value rounded up) is taken as the final pre-read IO thread data.
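A sketch of the thread-count calculation, combining formula (9) with the round-up rule (the helper name is illustrative):

```python
import math

def prefetch_thread_count(io_load_value, pool_pages, t, p):
    """Solve formula (9), Prefetch_Capacity = t * Thread_Cnt + p * Buff_Pool_Size,
    for Thread_Cnt, taking Prefetch_Capacity equal to IO_Load. A fractional
    result is rounded up, matching the 'integer part plus 1' rule above."""
    return math.ceil((io_load_value - p * pool_pages) / t)

# prefetch_thread_count(44000, 20000, t=1000, p=2) -> (44000 - 40000) / 1000 = 4
```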
In step S150 of some embodiments, after the pre-read IO thread data and the pre-read cache pool capacity data are determined, pre-read processing at the time of data query may be performed according to the two parameters. Specifically, query request data is received, the query request data is subjected to pre-reading processing according to the capacity data of the pre-reading cache pool and the pre-reading IO thread data, and pre-reading cache data is obtained and used for storing the pre-reading data when the query request data is queried.
In practical application, surplus IO threads beyond the pre-read IO thread data are temporarily parked in a pending queue, and are taken back off the queue when the load rises and thread resources need to be expanded. In addition, during pre-reading, the current usage of the pre-read cache pool is recorded, and no further pages are requested once the theoretical usage (i.e., the pre-read cache pool capacity data) is reached.
It should be noted that the Pending queue is a data structure for storing tasks or events waiting to be processed. It is commonly used in asynchronous programming and can help manage and control the order and concurrency of execution of tasks.
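A structural sketch of parking surplus threads on a pending queue (the class and field names are illustrative, not from this application):

```python
from collections import deque

class PrefetchThreadPool:
    """Threads beyond the computed Thread_Cnt move to a pending queue and are
    reactivated when a later resize raises the target."""
    def __init__(self, threads):
        self.active = list(threads)
        self.pending = deque()

    def resize(self, target_count):
        while len(self.active) > target_count:          # shrink: park surplus threads
            self.pending.append(self.active.pop())
        while len(self.active) < target_count and self.pending:
            self.active.append(self.pending.popleft())  # grow: reuse parked threads
```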
Illustratively, combining formula (8), formula (9), and the interval settings with 8-kilobyte (8K) cache pool pages: let A be 100, B be 10, C be 4, t be 1000, and p be 2; let the maximum pre-reading capability be 100000 and the pre-read maximum cache pool capacity data be 30000 (8K pages). The currently sampled IO_Queue_Depth is 20, IO_Avg_Latency is 200, and IO_Max_Latency is 500. The IO load data is determined as IO_Load = 100 × 20² + 10 × 200 + 4 × 500 = 44000. Then IO_Load = Prefetch_Capacity (the pre-reading capability data). From Prefetch_Capacity, since 100000/3 < 44000 < 100000 × 2/3, the pre-read cache pool capacity data = pre-read maximum cache pool capacity data × 2/3, i.e., Buff_Pool_Size = 30000 × 2/3 = 20000. Thereafter, the pre-read IO thread data is determined according to formula (9): Thread_Cnt = (Prefetch_Capacity − p × Buff_Pool_Size)/t = (44000 − 2 × 20000)/1000 = 4. Therefore, under this IO load, 4 pre-read IO threads and 20000 × 8K of cache pool capacity are needed.
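Tying the sketches above together, the worked example can be checked numerically (all helper names come from the earlier illustrative sketches):

```python
stats = IOThreadPoolStats(io_queue_depth=20, io_avg_latency=200, io_max_latency=500)
load = io_load(stats, a=100, b=10, c=4)                     # 44000
pages = prefetch_buffer_pool_pages(load, 100000, 30000)     # 20000 x 8K pages
threads = prefetch_thread_count(load, pages, t=1000, p=2)   # 4 read-ahead threads
assert (load, pages, threads) == (44000, 20000, 4)
```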
According to the method and the device, the pre-reading capability of the pre-reading thread pool can be dynamically adjusted according to the IO load data: by sampling the IO thread pool data under the current IO load and weighting the average queue depth data, the IO average delay data, and the IO maximum delay data, the pre-reading capability of the read-ahead system is finely controlled. This solves the problems of serious IO blocking and performance jitter under high-pressure IO scenarios, and of wasted pre-reading resources under low-pressure IO scenarios. The embodiments of the application therefore improve IO processing efficiency, keep the performance of the read-ahead system stable without consuming excessive resources, and effectively improve the pre-reading capability of the database.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a database pre-reading device provided in an embodiment of the present application, where the device may implement the database pre-reading method according to the foregoing embodiment, and the device includes a data obtaining module 810, a load data determining module 820, a cache pool capacity determining module 830, a thread data determining module 840, and a pre-reading processing module 850.
The data obtaining module 810 is configured to obtain IO thread pool data of the read-ahead thread pool, where the IO thread pool data includes average queue depth data, IO average delay data, and IO maximum delay data;
the load data determining module 820 is configured to determine IO load data according to the average queue depth data, the IO average delay data, and the IO maximum delay data;
the buffer pool capacity determining module 830 determines pre-read buffer pool capacity data according to the IO load data and a preset buffer pool capacity table;
the thread data determining module 840 determines pre-read IO thread data of the pre-read thread pool according to the IO load data and the pre-read cache pool capacity data;
the pre-reading processing module 850 is configured to receive the query request data, perform pre-reading processing on the query request data according to the capacity data of the pre-reading buffer pool and the pre-reading IO thread data, and obtain pre-reading buffer data, where the pre-reading buffer data is used to store the pre-reading data when the query request data is queried.
It should be noted that, the pre-reading device of the database in the embodiment of the present application is used to implement the pre-reading method of the database in the embodiment of the present application, and the pre-reading device of the database in the embodiment of the present application corresponds to the pre-reading method of the database, and the specific processing procedure refers to the pre-reading method of the database and is not repeated herein.
The embodiment of the application also provides a computer device, which comprises: at least one memory, at least one processor, at least one computer program stored in the at least one memory, the at least one processor executing the at least one computer program to implement a method of pre-reading a database of any of the above embodiments. The computer equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of a computer device according to another embodiment, the computer device includes:
the processor 910 may be implemented by a general purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solutions provided in the embodiments of the present application;
The memory 920 may be implemented in the form of a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 920 may store an operating system and other application programs. When the technical solutions provided in the embodiments of the present application are implemented in software or firmware, the relevant program code is stored in the memory 920 and invoked by the processor 910 to execute the method for pre-reading a database of the embodiments of the present application;
an input/output interface 930 for inputting and outputting information;
the communication interface 940 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.), or may implement communication in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 950 for transferring information between components of the device (e.g., processor 910, memory 920, input/output interface 930, and communication interface 940);
wherein processor 910, memory 920, input/output interface 930, and communication interface 940 implement communication connections among each other within the device via a bus 950.
The present application also provides a computer-readable storage medium storing a computer program for causing a computer to execute the method of pre-reading a database in the above embodiments.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are intended to describe the technical solutions of the embodiments of the present application more clearly and do not limit the technical solutions provided by the embodiments of the present application; as those skilled in the art will appreciate, with the evolution of technology and the emergence of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not limit the embodiments of the present application, and that implementations may include more or fewer steps than shown, combine certain steps, or use different steps.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: only A is present, only B is present, or both A and B are present, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the above-described division of units is merely a logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including multiple instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing a program, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
Preferred embodiments of the present application are described above with reference to the accompanying drawings, which does not thereby limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A method of pre-reading a database, the method comprising:
IO thread pool data of a read-ahead thread pool is obtained, wherein the IO thread pool data comprises average queue depth data, IO average delay data, and IO maximum delay data; the average queue depth data refers to the average number of IO read-ahead messages across all IO queues in the read-ahead thread pool at a preset moment;
determining IO load data according to the average queue depth data, the IO average delay data and the IO maximum delay data;
determining pre-read cache pool capacity data according to the IO load data and a preset cache pool capacity table;
determining pre-read IO thread data of the read-ahead thread pool according to the IO load data and the pre-read cache pool capacity data;
and receiving query request data, and performing pre-reading processing on the query request data according to the pre-read cache pool capacity data and the pre-read IO thread data to obtain pre-read cache data, wherein the pre-read cache data is used for storing the pre-read data when the query request data is queried.
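To make the claimed flow concrete, the following is a minimal, non-authoritative Python sketch wiring the five steps of claim 1 together. Every name, coefficient, table entry, and formula in it is an assumption made for illustration; the patent discloses no code.

```python
# Hypothetical end-to-end sketch of claim 1; all constants and formulas are
# illustrative assumptions, not the patented implementation.
from dataclasses import dataclass

@dataclass
class PoolStats:
    avg_queue_depth: float  # avg read-ahead messages per IO queue at a preset moment
    avg_delay_ms: float     # IO average delay data
    max_delay_ms: float     # IO maximum delay data

def pre_read_query(query: str, stats: PoolStats) -> dict:
    # Step: fold the three metrics into a single IO load figure.
    load = 0.5 * stats.avg_queue_depth + 0.3 * stats.avg_delay_ms + 0.2 * stats.max_delay_ms
    # Step: map the load onto a preset cache pool capacity table (load cap -> pages).
    capacity_table = [(5.0, 128), (15.0, 512), (float("inf"), 2048)]
    capacity = next(cap for limit, cap in capacity_table if load <= limit)
    # Step: size the pre-read IO thread count from load and capacity.
    io_threads = max(1, int(load * capacity) // 1024)
    # Step: the pre-read cache that would hold data prefetched for the query.
    return {"query": query, "cache_pages": capacity, "io_threads": io_threads}

print(pre_read_query("SELECT * FROM t", PoolStats(8.0, 2.5, 10.0)))
```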
2. The method of claim 1, wherein the determining IO load data according to the average queue depth data, the IO average delay data, and the IO maximum delay data comprises:
acquiring a preset first load coefficient, a preset second load coefficient, and a preset third load coefficient;
determining first load data according to the first load coefficient and the average queue depth data;
determining second load data according to the second load coefficient and the IO average delay data;
determining third load data according to the third load coefficient and the IO maximum delay data;
and determining the IO load data according to the first load data, the second load data and the third load data.
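Purely as one illustrative reading of claim 2, the three preset coefficients can be applied as weights and the weighted terms summed; the coefficient values and the summation rule below are assumptions, since the claim leaves the combination function open.

```python
def io_load(avg_queue_depth: float, avg_delay_ms: float, max_delay_ms: float,
            k1: float = 0.5, k2: float = 0.3, k3: float = 0.2) -> float:
    # k1..k3 stand in for the preset first/second/third load coefficients.
    first_load = k1 * avg_queue_depth   # first load data
    second_load = k2 * avg_delay_ms     # second load data
    third_load = k3 * max_delay_ms      # third load data
    return first_load + second_load + third_load  # assumed combination: a plain sum

print(io_load(8.0, 2.5, 10.0))  # -> 6.75
```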
3. The method of claim 2, wherein the determining first load data according to the first load coefficient and the average queue depth data comprises:
determining a queue depth function of the average queue depth data;
and performing queue depth calculation on the first load coefficient and the average queue depth data according to the queue depth function to obtain the first load data.
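The claim does not specify the queue depth function's shape; a saturating function is one plausible (assumed) choice, so that very deep queues do not dominate the load figure. The sketch below is hypothetical throughout.

```python
import math

def queue_depth_function(depth: float, max_depth: float = 32.0) -> float:
    # Assumed shape: normalize and saturate the raw average queue depth.
    return math.tanh(depth / max_depth)

def first_load(k1: float, avg_queue_depth: float) -> float:
    # "Queue depth calculation" on the coefficient and the averaged depth.
    return k1 * queue_depth_function(avg_queue_depth)

print(first_load(0.5, 8.0))  # -> ~0.122
```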
4. The method of claim 1, wherein the obtaining IO thread pool data for the read-ahead thread pool comprises:
acquiring a thread pool scanning operator of the read-ahead thread pool;
determining a target IO engine according to the thread pool scanning operator;
and acquiring the IO thread pool data from the read-ahead thread pool according to the target IO engine.
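One way to picture claim 4 is a lookup that maps the thread pool's scanning operator to an IO engine; the operator names and engine choices below are invented for illustration only and do not come from the patent.

```python
# Hypothetical operator -> IO engine registry.
IO_ENGINES = {"seq_scan": "libaio", "index_scan": "io_uring", "default": "psync"}

def target_io_engine(scan_operator: str) -> str:
    return IO_ENGINES.get(scan_operator, IO_ENGINES["default"])

print(target_io_engine("seq_scan"))  # -> libaio
```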
5. The method of claim 4, wherein the obtaining the IO thread pool data from the read-ahead thread pool according to the target IO engine comprises:
creating a load monitoring thread in the read-ahead thread pool;
and acquiring data from the read-ahead thread pool according to the load monitoring thread and preset acquisition time data to obtain the IO thread pool data.
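A sketch of the monitoring arrangement in claim 5, under the assumption that the load monitor is a background thread sampling the pool's metrics on a preset period; the sampling callable and the interval are placeholders.

```python
import threading
import time

def start_load_monitor(sample_pool, interval_s: float = 1.0) -> threading.Event:
    # sample_pool is an assumed callable returning (avg_depth, avg_delay_ms, max_delay_ms).
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            depth, avg_ms, max_ms = sample_pool()
            print(f"depth={depth} avg={avg_ms}ms max={max_ms}ms")
            stop.wait(interval_s)  # preset acquisition time data

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() to end sampling

stop = start_load_monitor(lambda: (8.0, 2.5, 10.0), interval_s=0.2)
time.sleep(0.5)
stop.set()
```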
6. The method of claim 1, wherein the determining the pre-read cache pool capacity data according to the IO load data and the preset cache pool capacity table comprises:
determining pre-read capacity data according to the IO load data;
performing cache pool capacity matching on the pre-read capacity data according to the preset cache pool capacity table, and determining cache pool capacity matching data;
and determining the pre-read cache pool capacity data according to the cache pool capacity matching data and the preset maximum pre-read cache pool capacity data.
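Claim 6 reads naturally as a table lookup followed by a clamp to the preset maximum; the table contents and the clamping rule in this sketch are assumptions.

```python
def match_cache_capacity(io_load: float,
                         capacity_table: list[tuple[float, int]],
                         max_capacity: int) -> int:
    # Pick the first row whose load threshold covers io_load, then clamp to
    # the preset maximum pre-read cache pool capacity.
    matched = capacity_table[-1][1]
    for threshold, capacity in sorted(capacity_table):
        if io_load <= threshold:
            matched = capacity
            break
    return min(matched, max_capacity)

table = [(5.0, 128), (15.0, 512), (30.0, 2048)]  # assumed (load threshold, pages)
print(match_cache_capacity(6.75, table, max_capacity=1024))  # -> 512
```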
7. The method of claim 1, wherein the determining pre-read IO thread data of the read-ahead thread pool according to the IO load data and the pre-read cache pool capacity data comprises:
acquiring a thread weight coefficient and a cache pool weight coefficient;
performing capacity calculation according to the cache pool weight coefficient and the pre-read cache pool capacity data to obtain target cache pool capacity data;
determining target IO thread data according to the IO load data and the target cache pool capacity data;
and calculating the number of threads according to the target IO thread data and the thread weight coefficient to obtain the pre-read IO thread data.
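An assumed arithmetic for claim 7: weight the cache pool capacity, derive a target thread figure from the load and the weighted capacity, then scale by the thread weight coefficient. All formulas and bounds here are illustrative.

```python
def read_ahead_threads(io_load: float, cache_capacity: int,
                       thread_weight: float = 0.25,
                       pool_weight: float = 0.5,
                       max_threads: int = 64) -> int:
    target_capacity = pool_weight * cache_capacity       # target cache pool capacity data
    target_threads = io_load * target_capacity / 1024.0  # target IO thread data
    # Thread-number calculation on the target threads and the thread weight.
    return max(1, min(max_threads, round(thread_weight * target_threads)))

print(read_ahead_threads(io_load=6.75, cache_capacity=512))  # -> 1
```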
8. A pre-reading apparatus for a database, the apparatus comprising:
the data acquisition module is used for acquiring IO thread pool data of the read-ahead thread pool, wherein the IO thread pool data comprises average queue depth data, IO average delay data, and IO maximum delay data; the average queue depth data refers to the average number of IO read-ahead messages across all IO queues in the read-ahead thread pool at a preset moment;
the load data determining module is used for determining IO load data according to the average queue depth data, the IO average delay data and the IO maximum delay data;
the cache pool capacity determining module is used for determining the pre-read cache pool capacity data according to the IO load data and a preset cache pool capacity table;
the thread data determining module is used for determining pre-read IO thread data of the read-ahead thread pool according to the IO load data and the pre-read cache pool capacity data;
and the pre-reading processing module is used for receiving query request data and performing pre-reading processing on the query request data according to the pre-read cache pool capacity data and the pre-read IO thread data to obtain pre-read cache data, wherein the pre-read cache data is used for storing the pre-read data when the query request data is queried.
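To show how the five modules of claim 8 might compose, here is a plumbing-only sketch; each lambda stands in for a module whose real logic the claim leaves to the description, so every piece below is a placeholder.

```python
# Hypothetical module wiring for the pre-reading apparatus; placeholder logic only.
class PreReadDevice:
    def __init__(self, acquire, load_fn, capacity_fn, threads_fn, serve):
        self.acquire = acquire          # data acquisition module
        self.load_fn = load_fn          # load data determining module
        self.capacity_fn = capacity_fn  # cache pool capacity determining module
        self.threads_fn = threads_fn    # thread data determining module
        self.serve = serve              # pre-reading processing module

    def handle(self, query):
        stats = self.acquire()
        load = self.load_fn(stats)
        capacity = self.capacity_fn(load)
        threads = self.threads_fn(load, capacity)
        return self.serve(query, capacity, threads)

dev = PreReadDevice(
    acquire=lambda: (8.0, 2.5, 10.0),
    load_fn=lambda s: 0.5 * s[0] + 0.3 * s[1] + 0.2 * s[2],
    capacity_fn=lambda load: 512 if load > 5 else 128,
    threads_fn=lambda load, cap: max(1, int(load * cap) // 1024),
    serve=lambda q, cap, t: {"query": q, "cache_pages": cap, "io_threads": t},
)
print(dev.handle("SELECT * FROM t"))
```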
9. A computer device, comprising:
at least one memory;
at least one processor;
at least one computer program;
the at least one computer program is stored in the at least one memory, the at least one processor executing the at least one computer program to implement:
the method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program for causing a computer to execute:
The method of any one of claims 1 to 7.
CN202311062428.4A 2023-08-23 2023-08-23 Method and device for pre-reading database, computer equipment and storage medium Active CN116795877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311062428.4A CN116795877B (en) 2023-08-23 2023-08-23 Method and device for pre-reading database, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116795877A CN116795877A (en) 2023-09-22
CN116795877B (en) 2023-12-19

Family

ID=88038720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311062428.4A Active CN116795877B (en) 2023-08-23 2023-08-23 Method and device for pre-reading database, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116795877B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8549274B2 (en) * 2010-04-14 2013-10-01 Jade Quantum Technologies, Inc. Distributive cache accessing device and method for accelerating to boot remote diskless computers

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508638A (en) * 2011-09-27 2012-06-20 华为技术有限公司 Data pre-fetching method and device for non-uniform memory access
CN111723058A (en) * 2020-05-29 2020-09-29 广东浪潮大数据研究有限公司 Pre-read data caching method, device, equipment and storage medium
WO2021238260A1 (en) * 2020-05-29 2021-12-02 广东浪潮智慧计算技术有限公司 Pre-read data caching method and apparatus, device, and storage medium
CN112749013A (en) * 2021-01-19 2021-05-04 广州虎牙科技有限公司 Thread load detection method and device, electronic equipment and storage medium
US11163606B1 (en) * 2021-01-21 2021-11-02 Sailpoint Technologies, Inc. Systems and methods for thread management to optimize resource utilization in a distributed computing environment
CN113760191A (en) * 2021-08-31 2021-12-07 荣耀终端有限公司 Data reading method, data reading apparatus, storage medium, and program product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant