CN113094368A - System and method for improving cache access hit rate - Google Patents


Info

Publication number
CN113094368A
CN113094368A (application CN202110392024.6A)
Authority
CN
China
Prior art keywords
query
bitmap
neural network
index
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110392024.6A
Other languages
Chinese (zh)
Other versions
CN113094368B (en)
Inventor
乔少杰
杨国平
宋海权
韩楠
李勇
闵圣捷
王伟业
孙科
袁犁
张浩东
范勇强
甘戈
冉先进
魏军林
余华
元昌安
黄发良
覃晓
郑皎凌
张永清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanwang Yunlian Chengdu Technology Co ltd
Chengdu University of Information Technology
Original Assignee
Hanwang Yunlian Chengdu Technology Co ltd
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanwang Yunlian Chengdu Technology Co ltd and Chengdu University of Information Technology
Priority to CN202110392024.6A
Publication of CN113094368A
Application granted
Publication of CN113094368B
Legal status: Active

Classifications

    • G06F16/2282: Tablespace storage structures; management thereof
    • G06F16/24552: Database cache management
    • G06F16/24578: Query processing with adaptation to user needs using ranking
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/08: Neural network learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a system and a method for improving the cache access hit rate. By deploying a DDQN model, the invention improves the access hit rate of the cache region, makes better use of the cache region, and improves query efficiency. The DDQN model can learn from experience: multiple queries can be placed in the query set storage table and scheduled, more experience can be obtained from previously executed queries, and the scheduling strategy can be improved accordingly. The invention effectively captures the state of the cache region and the data access pattern, makes better use of the cache region, and improves query scheduling decisions. The DDQN model can adapt to queries that have not been executed before, and the query scheduling strategy quickly adapts to new query templates, producing a significant effect and improving resource-sharing efficiency.

Description

System and method for improving cache access hit rate
Technical Field
The invention belongs to the field of artificial intelligence and databases, and particularly relates to a system and a method for improving cache access hit rate.
Background
Query scheduling is an important and challenging task in modern database systems. It can have a significant impact on query performance and resource utilization, but it may require consideration of many factors, such as cached data sets, available resources (e.g., memory), per-query performance goals, query priorities, or inter-query dependencies (e.g., related data access patterns).
A database stores table data and indexes in the cache in the form of pages; in some cases (when preprocessing is used) it also caches query plans, but it does not cache specific query results. What is cached is the data page, which contains contiguous data, i.e., not only the data being queried. Traditional caching methods are implemented with rule-based algorithms, but in current big-data scenarios, query traffic is large in scale and growing rapidly, and diverse, complex queries pose severe challenges to these methods. With AI techniques, a database system can learn features by itself, such as the state information of the whole buffer region, the characteristics of query statements, and workload information of the service, far more accurately than traditional caching methods, yielding a higher cache hit rate.
Existing query scheduling strategies cannot effectively improve the cache hit rate; an improper query execution order is likely to cause cache invalidation, forcing I/O operations and incurring a large performance loss. Assuming the database cache has a fixed size, the model can find an optimal order so that the current query statement reuses, as much as possible, the data pages already loaded into the cache by the I/O of the previous query statement, thereby reducing the database's I/O operations. The main objective of the present invention is to reduce I/O, i.e., to increase the hit rate of data pages in the cache region, because I/O consumption has a great influence on database performance.
In summary, in order to increase the hit rate of the cache, generate some effective execution plans, and have better scheduling capability for various complex queries, it is necessary to design a method for increasing the hit rate of the cache access.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a system and a method for improving the cache access hit rate, solving the problem that the database cache hit rate is low in the prior art.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a system for improving cache access hit rate comprises a query storage table module, a query feature extractor, a DDQN model and a buffer pool feature extractor;
the query storage table module is connected with the query feature extractor and is used for acquiring a query request submitted by a user and storing the query request into the chained queue; the query feature extractor is connected with the DDQN model and used for converting query information acquired by the query set storage table module into a feature vector and compressing the feature vector into a first bitmap; the DDQN model is connected with the buffer pool feature extractor and is used for receiving the first bitmap input by the query feature extractor and the bitmap state input by the buffer pool feature extractor and executing the query; the buffer pool feature extractor is used for converting the state of the database buffer pool into a second bitmap.
Further, the database is used for storing data tables; the buffer pool of the database comprises data blocks arranged in n rows by m columns, and each data table corresponds to one row of data blocks; the data blocks are used for caching data; a query request comprises queries of the data blocks corresponding to a plurality of basic relations; each basic relation corresponds to one data table and comprises the states of the data blocks queried in that table; each data block has a corresponding index block, and all the index blocks form an index table.
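By way of a non-limiting illustration, the n-row by m-column buffer-pool layout described above can be sketched as a small bitmap structure (the class and method names below are illustrative, not taken from the invention):

```python
# Sketch of the buffer-pool bitmap described above (assumed shapes).
# Each data table / basic relation occupies one row of m data blocks;
# a cell is 1 when the corresponding block is cached, 0 otherwise.

class BufferPoolBitmap:
    def __init__(self, n_tables, m_blocks):
        # n rows (one per data table) x m columns of data blocks
        self.bits = [[0] * m_blocks for _ in range(n_tables)]

    def mark_cached(self, table, block):
        # record that a data block of this table is resident in the cache
        self.bits[table][block] = 1

    def row(self, table):
        # the row vector for one basic relation
        return self.bits[table]

pool = BufferPoolBitmap(n_tables=3, m_blocks=4)
pool.mark_cached(0, 1)
pool.mark_cached(0, 3)
print(pool.row(0))  # [0, 1, 0, 1]
```

A cell set to 1 indicates that the corresponding data block is resident in the cache; the second bitmap of the buffer pool feature extractor is exactly such a matrix of cells.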
The invention has the beneficial effects that:
(1) the invention provides a system for improving cache access hit rate, which improves the access hit rate of a cache region by setting a DDQN model, can better utilize the cache region and improve the query efficiency.
(2) The DDQN model provided by the invention can learn experience, and a plurality of queries can be put into the query set storage table and scheduled; more experience is gained from past executed queries to improve scheduling policies.
A method for improving the hit rate of cache access comprises the following steps:
s1, establishing a chained query queue through the query storage table module, performing query request enqueuing operation, and sequentially storing query requests in the chained query queue to the query storage table module;
s2, scanning the basic relationship contained in each query request through a query feature extractor, representing the basic relationship as a feature vector, marking the data block to be accessed by the feature vector as 1, marking the data block not to be accessed by the feature vector as 0, and constructing a first bitmap;
s3, scanning an index table, acquiring the access probability of the data block and the access probability of the index block, sequentially selecting dequeued query requests according to the access probabilities of the data block and the index block, and taking the dequeued query requests as candidate query requests;
s4, transmitting the first bitmap corresponding to the candidate query request to the DDQN model;
s5, converting the state of the database buffer pool into a second bitmap, constructing a bitmap state according to the characteristics of the second bitmap, and transmitting the bitmap state to the DDQN model;
s6, selecting candidate query requests to query through the DDQN model according to the first bitmap and the bitmap states, and completing the improvement process of the cache access hit rate.
Further, the specific method for establishing the chained query queue through the query storage table module in step S1 and performing the query enqueuing operation includes:
s1.1, establishing a chained query queue through a query storage table module, and initializing the query storage table module to be empty;
s1.2, collecting a query request of a user to a chained query queue;
s1.3, setting a subsequent tail pointer r pointing to a queue tail node, and executing enqueuing operation on a query request pointed by the tail pointer r, wherein the tail pointer r points to the next query request;
s1.4, according to the method in the step S1.3, the inquiry requests in the chain inquiry queue are sequentially enqueued;
after the query enqueue is completed, the tail pointer r points to the head of the queue of the enqueued query request for executing the dequeue operation.
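The enqueue and dequeue operations of steps S1.1 to S1.4 can be sketched as a singly linked queue with a tail pointer (a minimal illustration; the identifiers are not taken from the invention):

```python
# Sketch of the chained (linked) query queue of step S1; node and
# attribute names are illustrative.

class _Node:
    def __init__(self, query):
        self.query = query
        self.next = None

class ChainedQueryQueue:
    def __init__(self):
        # S1.1: the queue is initialized empty
        self.head = None
        self.tail = None  # the tail pointer "r" of step S1.3

    def enqueue(self, query):
        # S1.3: link the new request at the tail; the tail pointer advances
        node = _Node(query)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        # remove and return the request at the head of the queue
        if self.head is None:
            raise IndexError("queue is empty")
        node = self.head
        self.head = node.next
        if self.head is None:
            self.tail = None
        return node.query

q = ChainedQueryQueue()
for sql in ["SELECT * FROM t1", "SELECT * FROM t2"]:
    q.enqueue(sql)
print(q.dequeue())  # SELECT * FROM t1
```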
Further, the step S3 is specifically:
s3.1, scanning an index table to perform index operation according to the basic relationship corresponding to the query request, and selecting a row related to the basic relationship;
s3.2, taking the ratio of the selected row in all the rows as the access probability of the data block corresponding to the basic relationship;
s3.3, taking the ratio of the index blocks related to the basic relationship in all the index blocks as the access probability of the index blocks corresponding to the basic relationship;
s3.4, traversing all query requests in the query storage table module, and acquiring the data block access probability and the index block access probability corresponding to each query request;
s3.5, dequeuing the query request with the maximum access probability of the data block and the index block;
and S3.6, repeating the step S3.5, sequentially selecting the dequeued query requests, and taking the dequeued query requests as candidate query requests.
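Steps S3.1 to S3.6 can be illustrated as follows; combining the two probabilities by summation is an assumption, since the text only states that the query with the maximum data-block and index-block access probabilities is dequeued first:

```python
# Sketch of steps S3.1-S3.6: rank queued queries by their data-block and
# index-block access probabilities and dequeue the highest first. Summing
# the two probabilities as the ranking score is an assumption.

def data_block_probability(selected_rows, total_rows):
    # S3.2: share of the selected rows among all rows of the relation
    return selected_rows / total_rows

def index_block_probability(related_index_blocks, total_index_blocks):
    # S3.3: share of the index blocks related to the basic relation
    return related_index_blocks / total_index_blocks

def select_candidates(queries):
    # queries: list of (name, selected_rows, total_rows,
    #                   related_index_blocks, total_index_blocks)
    scored = []
    for name, sr, tr, ri, ti in queries:  # S3.4: traverse all requests
        score = data_block_probability(sr, tr) + index_block_probability(ri, ti)
        scored.append((score, name))
    # S3.5/S3.6: repeatedly dequeue the query with the largest probability
    return [name for _, name in sorted(scored, reverse=True)]

order = select_candidates([
    ("q1", 10, 100, 2, 10),   # 0.1 + 0.2 = 0.3
    ("q2", 50, 100, 5, 10),   # 0.5 + 0.5 = 1.0
])
print(order)  # ['q2', 'q1']
```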
Further, the specific method for converting the buffer pool state into the second bitmap in step S5 and constructing the bitmap state according to the features of the second bitmap includes:
s5.1, converting the buffer pool state into a second bitmap, wherein each row of data blocks of the second bitmap is used as a basic relation;
S5.2, creating a two-tuple <x, y> and indexing the rows and columns of the second bitmap by x and y, where x = 1, 2, ..., X and y = 1, 2, ..., Y, X denoting the total number of rows and Y denoting the total number of columns;
S5.3, representing a data block marked as 1 in the second bitmap as <x, y> = 1, and a data block marked as 0 in the second bitmap as <x, y> = 0;
S5.4, on the basis of step S5.3, constructing the bitmap state State_{x,y} from the features of the second bitmap as:

State_{x,y} = |M_x|

wherein N_x represents the row vector corresponding to the basic relation x, M_x represents the quotient of the row vector N_x divided by its number of nonzero cells, and |·| represents the vector modulus operation.
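Under the symbol definitions above, the bitmap-state computation of step S5.4 can be sketched as follows (the original formula is given only as an image, so reading it as |M_x| with M_x equal to N_x divided by its nonzero-cell count is an assumption):

```python
# Sketch of step S5.4: for each basic relation x, M_x is the row vector
# N_x divided by its count of nonzero cells, and the state value is the
# modulus |M_x|. This reading of the image-only formula is an assumption.
import math

def bitmap_state(second_bitmap):
    states = []
    for n_x in second_bitmap:              # N_x: row vector of relation x
        nnz = sum(1 for c in n_x if c != 0)
        if nnz == 0:
            states.append(0.0)             # empty row: no cached blocks
            continue
        m_x = [c / nnz for c in n_x]       # M_x = N_x / (nonzero cells)
        states.append(math.sqrt(sum(c * c for c in m_x)))  # |M_x|
    return states

print(bitmap_state([[1, 1, 0, 0], [0, 0, 0, 0]]))  # ~ [0.7071, 0.0]
```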
Further, the step S6 includes specifically:
s6.1, receiving a first bitmap and a bitmap state through a DDQN model;
S6.2, setting the target of the DDQN model as the maximum sum of rewards R found by the search, and constructing the cache scheduling strategy of the DDQN model as a function Q^π(S_t, A_t), where Q^π(S_t, A_t) represents a deep neural network, S_t represents the state, and A_t represents an action;
S6.3, constructing a Q neural network to be updated and a target Q neural network, and fixing the target as Q^π'(S_{t+1}, π(S_{t+1})) + r_t, where S_{t+1} represents the state of the target Q neural network, π(S_{t+1}) represents the action of the target Q neural network, r_t represents the reward value obtained by executing the query, and π represents the execution function of the Q neural network;
S6.4, fitting Q^π'(S_{t+1}, π(S_{t+1})) + r_t with the Q neural network to be updated, repeating the training N times;
S6.5, after the N rounds of training, covering the parameters of the target Q neural network with the parameters of the Q neural network to be updated, and obtaining the updated target Q neural network as:

Q^π'(S_t, A_t) = r_t + Q^π'(S'_{t+1}, argmax_A Q^π(S'_{t+1}, A))

wherein Q^π'(S_t, A_t) represents the updated target Q neural network, Q^π' on the right-hand side represents the target Q neural network, and argmax_A Q^π(S'_{t+1}, A) represents passing the buffer state S'_{t+1} and the candidate actions A into the Q neural network to be updated and selecting, through the argmax function, the action A with the largest value; the action A represents a first bitmap, and the buffer state S'_{t+1} represents the buffer-pool bitmap state;
and S6.6, executing the obtained query request corresponding to the action A, and completing the cache access hit rate improving process.
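The double-Q update of steps S6.2 to S6.5 can be illustrated with a minimal tabular stand-in for the two networks (the toy states, actions, rewards, and the discount factor gamma are assumptions; the invention uses deep neural networks over the bitmap states):

```python
# Minimal tabular sketch of the double-DQN target of steps S6.2-S6.5.
# Dictionaries stand in for the two neural networks; real use replaces
# them with the Q network to be updated and the frozen target Q network.

def ddqn_target(q_online, q_target, next_state, reward, gamma=0.9):
    # Action selection uses the online (to-be-updated) network ...
    best_a = max(q_online[next_state], key=q_online[next_state].get)
    # ... while evaluation uses the frozen target network; separating
    # selection from evaluation is what avoids DQN's over-estimation.
    return reward + gamma * q_target[next_state][best_a]

q_online = {"s1": {"a": 1.0, "b": 2.0}}   # picks action "b"
q_target = {"s1": {"a": 0.5, "b": 1.5}}   # evaluates "b" as 1.5
print(ddqn_target(q_online, q_target, "s1", reward=1.0))  # 1.0 + 0.9 * 1.5
```

Every N training rounds, the target network's parameters would be overwritten with the online network's parameters, as described in step S6.5.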
The invention has the beneficial effects that:
(1) when more queries are scheduled, the DDQN model can improve the hit rate of the cache region, effectively captures the state of the cache region and the data access mode, better utilizes the cache region and improves the query decision arrangement.
(2) The DDQN model in the invention can adapt to the query which is not executed, and the query scheduling strategy can quickly adapt to a new query template, thereby generating obvious effect and improving the resource sharing efficiency.
(3) The method does not cause over-estimation. Over-estimation means that the estimated Q-value function is larger than the true Q-value function; its root is the Q-value maximization operation in DQN (Deep Q-Network). The method effectively avoids this problem by using different functions for the selection of the action (query) and the evaluation of the action (query), improving the cache access hit rate.
Drawings
Fig. 1 is a schematic diagram of a system for increasing a cache access hit rate according to the present invention.
Fig. 2 is a flowchart of a method for increasing a cache access hit rate according to the present invention.
FIG. 3 is a diagram illustrating a DDQN model according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept falls within the scope of protection.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a system for improving the cache access hit rate includes a query storage table module, a query feature extractor, a DDQN model and a buffer pool feature extractor;
the query storage table module is connected with the query feature extractor and is used for acquiring a query request submitted by a user and storing the query request into the chained queue; the query feature extractor is connected with the DDQN model and used for converting query information acquired by the query set storage table module into a feature vector and compressing the feature vector into a first bitmap; the DDQN model is connected with the buffer pool feature extractor and is used for receiving the first bitmap input by the query feature extractor and the bitmap state input by the buffer pool feature extractor and executing the query; the buffer pool feature extractor is used for converting the state of the database buffer pool into a second bitmap.
The database is used for storing data tables; the buffer pool of the database comprises data blocks arranged in n rows by m columns, and each data table corresponds to one row of data blocks; the data blocks are used for caching data; a query request comprises queries of the data blocks corresponding to a plurality of basic relations; each basic relation corresponds to one data table and comprises the states of the data blocks queried in that table; each data block has a corresponding index block, and all the index blocks form an index table.
The invention has the beneficial effects that:
(1) the invention provides a system for improving cache access hit rate, which improves the access hit rate of a cache region by setting a DDQN model, can better utilize the cache region and improve the query efficiency.
(2) The DDQN model provided by the invention can learn experience, and a plurality of queries can be put into the query set storage table and scheduled; more experience is gained from past executed queries to improve scheduling policies.
As shown in fig. 2, a method for increasing a cache access hit rate includes the following steps:
s1, establishing a chained query queue through the query storage table module, performing query request enqueuing operation, and sequentially storing query requests in the chained query queue to the query storage table module;
s2, scanning the basic relationship contained in each query request through a query feature extractor, representing the basic relationship as a feature vector, marking the data block to be accessed by the feature vector as 1, marking the data block not to be accessed by the feature vector as 0, and constructing a first bitmap;
s3, scanning an index table, acquiring the access probability of the data block and the access probability of the index block, sequentially selecting dequeued query requests according to the access probabilities of the data block and the index block, and taking the dequeued query requests as candidate query requests;
s4, transmitting the first bitmap corresponding to the candidate query request to the DDQN model;
s5, converting the state of the database buffer pool into a second bitmap, constructing a bitmap state according to the characteristics of the second bitmap, and transmitting the bitmap state to the DDQN model;
s6, selecting candidate query requests to query through the DDQN model according to the first bitmap and the bitmap states, and completing the improvement process of the cache access hit rate.
In step S1, the specific method for establishing the chained query queue through the query storage table module and performing the query enqueuing operation includes:
s1.1, establishing a chained query queue through a query storage table module, and initializing the query storage table module to be empty;
s1.2, collecting a query request of a user to a chained query queue;
s1.3, setting a subsequent tail pointer r pointing to a queue tail node, and executing enqueuing operation on a query request pointed by the tail pointer r, wherein the tail pointer r points to the next query request;
s1.4, according to the method in the step S1.3, the inquiry requests in the chain inquiry queue are sequentially enqueued;
after the query enqueue is completed, the tail pointer r points to the head of the queue of the enqueued query request for executing the dequeue operation.
The step S3 specifically includes:
s3.1, scanning an index table to perform index operation according to the basic relationship corresponding to the query request, and selecting a row related to the basic relationship;
s3.2, taking the ratio of the selected row in all the rows as the access probability of the data block corresponding to the basic relationship;
s3.3, taking the ratio of the index blocks related to the basic relationship in all the index blocks as the access probability of the index blocks corresponding to the basic relationship;
s3.4, traversing all query requests in the query storage table module, and acquiring the data block access probability and the index block access probability corresponding to each query request;
s3.5, dequeuing the query request with the maximum access probability of the data block and the index block;
and S3.6, repeating the step S3.5, sequentially selecting the dequeued query requests, and taking the dequeued query requests as candidate query requests.
In step S5, the buffer pool state is converted into a second bitmap, and a specific method for constructing a bitmap state by using the features of the second bitmap includes:
s5.1, converting the buffer pool state into a second bitmap, wherein each row of data blocks of the second bitmap is used as a basic relation;
S5.2, creating a two-tuple <x, y> and indexing the rows and columns of the second bitmap by x and y, where x = 1, 2, ..., X and y = 1, 2, ..., Y, X denoting the total number of rows and Y denoting the total number of columns;
S5.3, representing a data block marked as 1 in the second bitmap as <x, y> = 1, and a data block marked as 0 in the second bitmap as <x, y> = 0;
S5.4, on the basis of step S5.3, constructing the bitmap state State_{x,y} from the features of the second bitmap as:

State_{x,y} = |M_x|

wherein N_x represents the row vector corresponding to the basic relation x, M_x represents the quotient of the row vector N_x divided by its number of nonzero cells, and |·| represents the vector modulus operation.
The step S6 includes the specific steps of:
s6.1, receiving a first bitmap and a bitmap state through a DDQN model;
S6.2, setting the target of the DDQN model as the maximum sum of rewards R found by the search, and constructing the cache scheduling strategy of the DDQN model as a function Q^π(S_t, A_t), where Q^π(S_t, A_t) represents a deep neural network, S_t represents the state, and A_t represents an action;
S6.3, constructing a Q neural network to be updated and a target Q neural network, and fixing the target as Q^π'(S_{t+1}, π(S_{t+1})) + r_t, where S_{t+1} represents the state of the target Q neural network, π(S_{t+1}) represents the action of the target Q neural network, r_t represents the reward value obtained by executing the query, and π represents the execution function of the Q neural network;
S6.4, fitting Q^π'(S_{t+1}, π(S_{t+1})) + r_t with the Q neural network to be updated, repeating the training N times;
As shown in fig. 3, S6.5, after the N rounds of training, covering the parameters of the target Q neural network with the parameters of the Q neural network to be updated, and obtaining the updated target Q neural network as:

Q^π'(S_t, A_t) = r_t + Q^π'(S'_{t+1}, argmax_A Q^π(S'_{t+1}, A))

wherein Q^π'(S_t, A_t) represents the updated target Q neural network, Q^π' on the right-hand side represents the target Q neural network, and argmax_A Q^π(S'_{t+1}, A) represents passing the buffer state S'_{t+1} and the candidate actions A into the Q neural network to be updated and selecting, through the argmax function, the action A with the largest value; the action A represents a first bitmap, and the buffer state S'_{t+1} represents the buffer-pool bitmap state;
and S6.6, executing the obtained query request corresponding to the action A, and completing the cache access hit rate improving process.
The invention has the beneficial effects that:
(1) when more queries are scheduled, the DDQN model can improve the hit rate of the cache region, effectively captures the state of the cache region and the data access mode, better utilizes the cache region and improves the query decision arrangement;
(2) the DDQN model in the invention can adapt to the query which is not executed, and the query scheduling strategy can quickly adapt to a new query template, thereby generating obvious effect and improving the resource sharing efficiency;
(3) The method does not cause over-estimation. Over-estimation means that the estimated Q-value function is larger than the true Q-value function; its root is the Q-value maximization operation in DQN (Deep Q-Network). The method effectively avoids this problem by using different functions for the selection of the action (query) and the evaluation of the action (query), improving the cache access hit rate.

Claims (7)

1. A system for improving the cache access hit rate is characterized by comprising a query storage table module, a query feature extractor, a DDQN model and a buffer pool feature extractor;
the query storage table module is connected with the query feature extractor and is used for acquiring a query request submitted by a user and storing the query request into the chained queue; the query feature extractor is connected with the DDQN model and used for converting query information acquired by the query set storage table module into a feature vector and compressing the feature vector into a first bitmap; the DDQN model is connected with the buffer pool feature extractor and is used for receiving the first bitmap input by the query feature extractor and the bitmap state input by the buffer pool feature extractor and executing the query; the buffer pool feature extractor is used for converting the state of the database buffer pool into a second bitmap.
2. The system for improving cache access hit rate according to claim 1, wherein the database is configured to store a data table; the buffer pool of the database comprises m columns of data blocks multiplied by n rows, and each data table comprises one row of data blocks; the data block is used for caching data; the query request comprises queries of data blocks corresponding to a plurality of basic relations; each basic relation corresponds to a data table and comprises the state of the data block inquired by the corresponding data table; the data blocks are provided with corresponding index blocks, and all the index blocks form an index table.
3. A method for improving cache access hit rate is characterized by comprising the following steps:
s1, establishing a chained query queue through the query storage table module, performing query request enqueuing operation, and sequentially storing query requests in the chained query queue to the query storage table module;
s2, scanning the basic relationship contained in each query request through a query feature extractor, representing the basic relationship as a feature vector, marking the data block to be accessed by the feature vector as 1, marking the data block not to be accessed by the feature vector as 0, and constructing a first bitmap;
s3, scanning an index table, acquiring the access probability of the data block and the access probability of the index block, sequentially selecting dequeued query requests according to the access probabilities of the data block and the index block, and taking the dequeued query requests as candidate query requests;
s4, transmitting the first bitmap corresponding to the candidate query request to the DDQN model;
s5, converting the state of the database buffer pool into a second bitmap, constructing a bitmap state according to the characteristics of the second bitmap, and transmitting the bitmap state to the DDQN model;
s6, selecting candidate query requests to query through the DDQN model according to the first bitmap and the bitmap states, and completing the improvement process of the cache access hit rate.
4. The method according to claim 3, wherein the step S1 of establishing the chained query queue through the query storage table module and performing the query enqueuing operation specifically comprises:
S1.1, establishing a chained query queue through the query storage table module, and initializing the query storage table module to be empty;
S1.2, collecting query requests of users into the chained query queue;
S1.3, setting a tail pointer r pointing to the queue tail node, executing the enqueuing operation on the query request pointed to by the tail pointer r, whereupon the tail pointer r points to the next query request;
S1.4, according to the method of step S1.3, sequentially enqueuing the query requests in the chained query queue;
after the query enqueuing is completed, the tail pointer r points to the head of the queue of enqueued query requests, for executing the dequeue operation.
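The chained query queue of steps S1.1–S1.4 is, in effect, a FIFO linked queue: requests enter at the tail and leave from the head. A minimal sketch (class and method names are illustrative, not from the patent):

```python
from collections import deque

# Minimal sketch of S1's chained query queue. A deque stands in for the
# linked-node structure; the tail pointer of S1.3 corresponds to appending
# at the right end, and dequeuing happens at the left end (queue head).
class QueryQueue:
    def __init__(self):
        self._q = deque()          # initialized empty (S1.1)

    def enqueue(self, query):
        self._q.append(query)      # new node becomes the queue tail (S1.3)

    def dequeue(self):
        return self._q.popleft()   # oldest request leaves first (FIFO)

q = QueryQueue()
for req in ["q1", "q2", "q3"]:    # collect user query requests (S1.2, S1.4)
    q.enqueue(req)
first = q.dequeue()
```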
5. The method according to claim 4, wherein the step S3 specifically comprises:
S3.1, scanning the index table to perform an index operation according to the basic relation corresponding to the query request, and selecting the rows related to the basic relation;
S3.2, taking the proportion of the selected rows among all rows as the access probability of the data blocks corresponding to the basic relation;
S3.3, taking the proportion of the index blocks related to the basic relation among all index blocks as the access probability of the index blocks corresponding to the basic relation;
S3.4, traversing all query requests in the query storage table module, and acquiring the data block access probability and the index block access probability corresponding to each query request;
S3.5, dequeuing the query request with the largest data block and index block access probabilities;
S3.6, repeating step S3.5, sequentially selecting the dequeued query requests, and taking the dequeued query requests as candidate query requests.
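Steps S3.2–S3.6 amount to scoring each queued query by two ratios and dequeuing in descending score order. A hedged sketch — the claims do not say how the two probabilities are combined, so the simple sum below is an assumption, as are all the per-query statistics:

```python
# Sketch of S3: rank queued queries by data-block and index-block access
# probability. Combining the two probabilities by summation is an assumption;
# the statistics per query are illustrative.

def access_probabilities(selected_rows, total_rows, related_idx, total_idx):
    p_data = selected_rows / total_rows    # S3.2: selected rows / all rows
    p_index = related_idx / total_idx      # S3.3: related index blocks / all
    return p_data, p_index

queries = {  # per query: (selected rows, total rows, related idx blocks, total idx blocks)
    "q1": (50, 100, 2, 10),
    "q2": (90, 100, 8, 10),
    "q3": (10, 100, 1, 10),
}

def score(name):
    p_data, p_index = access_probabilities(*queries[name])
    return p_data + p_index               # assumed combination rule

# S3.5-S3.6: repeatedly dequeue the highest-probability query.
candidates = sorted(queries, key=score, reverse=True)
```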
6. The method according to claim 5, wherein the step S5 of converting the buffer pool state into the second bitmap and constructing the bitmap state from the characteristics of the second bitmap specifically comprises:
S5.1, converting the buffer pool state into a second bitmap, each row of data blocks of the second bitmap corresponding to one basic relation;
S5.2, creating a binary tuple, and denoting the rows and columns of the second bitmap as x and y, where x = 1, 2, ..., X and y = 1, 2, ..., Y, with X denoting the total number of rows and Y denoting the total number of columns;
S5.3, representing a data block marked as 1 in the second bitmap as <x, y> = 1, and a data block marked as 0 in the second bitmap as <x, y> = 0;
S5.4, on the basis of step S5.3, constructing the bitmap state State1_{x,y} from the characteristics of the second bitmap as:

[formula rendered as an image in the original publication; it defines State1_{x,y} in terms of N_x, M_x and the vector modulus]

wherein N_x represents the row vector corresponding to basic relation x, M_x represents the quotient of the row vector corresponding to basic relation x divided by its number of nonzero cells, and | · | represents the vector modulus operation.
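The exact State1_{x,y} formula is only available as an image in the original publication, but the quantities it is built from are all defined in the claim text. A sketch of just those per-row quantities, under that caveat:

```python
import math

# Sketch of the per-relation quantities named in S5.4. The exact formula
# combining them into State1_{x,y} appears only as an image in the original
# and is NOT reproduced here; this shows only N_x (the row vector for basic
# relation x), M_x (that row divided by its count of nonzero cells), and
# the vector modulus |N_x|.

def row_quantities(second_bitmap, x):
    n_x = second_bitmap[x]                           # N_x
    nonzero = sum(1 for v in n_x if v != 0)
    m_x = [v / nonzero for v in n_x] if nonzero else [0.0] * len(n_x)  # M_x
    modulus = math.sqrt(sum(v * v for v in n_x))     # |N_x|
    return n_x, m_x, modulus

second_bitmap = [[1, 0, 1, 0],   # relation 0: blocks 0 and 2 cached
                 [0, 0, 0, 0]]   # relation 1: nothing cached
n0, m0, norm0 = row_quantities(second_bitmap, 0)
```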
7. The method according to claim 6, wherein the step S6 specifically comprises:
S6.1, receiving the first bitmap and the bitmap state through the DDQN model;
S6.2, setting the objective of the DDQN model as maximizing the sum R of discovered rewards, and constructing the cache scheduling policy of the DDQN model as a function Q_π(S_t, A_t), the function Q_π(S_t, A_t) representing a deep neural network, S_t representing the state, and A_t representing the action;
S6.3, constructing a Q neural network to be updated and a target Q neural network, and fixing the target of the target Q neural network as Q_π'(S_{t+1}, π(S_{t+1})) + r_t, wherein S_{t+1} represents the state of the target Q neural network, π(S_{t+1}) represents the action of the target Q neural network, r_t represents the reward value obtained by executing the query, and π represents the execution function of the Q neural network;
S6.4, fitting Q_π'(S_{t+1}, π(S_{t+1})) + r_t with the Q neural network to be updated, repeating the training N times;
S6.5, overwriting the parameters of the target Q neural network with the parameters of the Q neural network to be updated after the N training iterations, obtaining the updated target Q neural network as:

[update formula rendered as an image in the original publication]

wherein Q_π'(S_t, A_t) represents the updated target Q neural network,

[expression rendered as an image in the original publication]

represents the target Q neural network, and argmax_A Q_π(S'_{t+1}, A) represents passing the buffer state S'_{t+1} and the actions A into the Q neural network to be updated and selecting, via the argmax function, the action A that maximizes the Q_π' value; the action A represents a first bitmap, and the buffer state S'_{t+1} represents the buffer pool bitmap state;
S6.6, executing the query request corresponding to the obtained action A, completing the process of improving the cache access hit rate.
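Steps S6.3–S6.5 describe the standard Double-DQN decoupling: the network being updated picks the action via argmax, and the fixed target network evaluates it, plus the reward r_t. A minimal sketch with dictionaries standing in for the two Q neural networks (states, actions, and values are illustrative):

```python
# Sketch of the Double-DQN target from S6.3-S6.5. Plain dicts stand in for
# the Q neural network to be updated (q_online) and the target Q neural
# network (q_target); all states/actions/values are illustrative.

def ddqn_target(q_online, q_target, s_next, actions, reward):
    # Q network to be updated selects the action: argmax_A Q_pi(S'_{t+1}, A)
    best_a = max(actions, key=lambda a: q_online[(s_next, a)])
    # Fixed target network evaluates it: Q_pi'(S_{t+1}, pi(S_{t+1})) + r_t
    return reward + q_target[(s_next, best_a)]

q_online = {("s1", "a0"): 0.2, ("s1", "a1"): 0.9}   # a1 looks best online
q_target = {("s1", "a0"): 0.5, ("s1", "a1"): 0.4}   # target evaluates a1
y = ddqn_target(q_online, q_target, "s1", ["a0", "a1"], reward=1.0)
```

Using two networks this way reduces the overestimation bias a single network suffers when it both selects and evaluates the maximizing action.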
CN202110392024.6A 2021-04-13 2021-04-13 System and method for improving cache access hit rate Active CN113094368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110392024.6A CN113094368B (en) 2021-04-13 2021-04-13 System and method for improving cache access hit rate


Publications (2)

Publication Number Publication Date
CN113094368A true CN113094368A (en) 2021-07-09
CN113094368B CN113094368B (en) 2022-08-05

Family

ID=76677839


Country Status (1)

Country Link
CN (1) CN113094368B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294912A (en) * 2013-05-23 2013-09-11 南京邮电大学 Cache optimization method aiming at mobile equipment and based on predication
CN104834607A (en) * 2015-05-19 2015-08-12 华中科技大学 Method for improving distributed cache hit rate and reducing solid state disk wear
CN105740352A (en) * 2016-01-26 2016-07-06 华中电网有限公司 Historical data service system used for smart power grid dispatching control system
CN107247675A (en) * 2017-05-31 2017-10-13 华中科技大学 A kind of caching system of selection and system based on classification prediction
CN107832401A (en) * 2017-11-01 2018-03-23 郑州云海信息技术有限公司 Database data access method, system, device and computer-readable recording medium
US20180307984A1 (en) * 2017-04-24 2018-10-25 Intel Corporation Dynamic distributed training of machine learning models
CN108932288A (en) * 2018-05-22 2018-12-04 广东技术师范学院 A kind of mass small documents caching method based on Hadoop
CN109831806A (en) * 2019-03-06 2019-05-31 西安电子科技大学 The base station of intensive scene User oriented priority cooperates with caching method
US20190213393A1 (en) * 2018-01-10 2019-07-11 International Business Machines Corporation Automated facial recognition detection
CN110062357A (en) * 2019-03-20 2019-07-26 重庆邮电大学 A kind of D2D ancillary equipment caching system and caching method based on intensified learning
CN110245095A (en) * 2019-06-20 2019-09-17 华中科技大学 A kind of solid-state disk cache optimization method and system based on data block map
CN110290510A (en) * 2019-05-07 2019-09-27 天津大学 Support the edge cooperation caching method under the hierarchical wireless networks of D2D communication
CN110389909A (en) * 2018-04-16 2019-10-29 三星电子株式会社 Use the system and method for the performance of deep neural network optimization solid state drive
US20200134420A1 (en) * 2018-10-25 2020-04-30 Shawn Spooner Machine-based prediction of visitation caused by viewing
CN111352419A (en) * 2020-02-25 2020-06-30 山东大学 Path planning method and system for updating experience playback cache based on time sequence difference


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIE YAN ET AL.: "Distributed Edge Caching with Content Recommendation in Fog-RANs Via Deep Reinforcement Learning", 《2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS》 *
陈镜伊: "带有社交属性的边缘缓存策略研究", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant