CN111625550A - System and method for caching precompiled statements based on EsgynDB database - Google Patents
- Publication number
- CN111625550A (application number CN202010410996.9A)
- Authority
- CN
- China
- Prior art keywords
- statement
- precompiled
- queue
- linked list
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/2433 — Querying; Query formulation; Query languages
- G06F12/123 — Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
- G06F16/24539 — Query rewriting; Transformation using cached or materialised query results
- G06F16/24552 — Database cache management
Abstract
The invention discloses a precompiled statement cache system and method based on the EsgynDB database. The system comprises an Add cache module that records the enqueue time of each precompiled statement object when it is enqueued. The Add cache module also inserts the precompiled statement object at the middle of the LRU linked list and moves it to the head only after it is hit a second time; existing statement objects in the queue are scanned starting from the tail of the LRU linked list. Stale statement objects can therefore be evicted from the queue in time, and the time spent scanning the linked-list queue is reduced. Inserting precompiled statement objects at the middle of the LRU linked list prevents sporadic statements from being added directly to the head and squeezing possibly frequently used statement objects toward the tail, which solves the cache-pollution problem of the LRU algorithm.
Description
Technical Field
The invention relates to precompiled statement caching, and in particular to a precompiled statement cache system and method based on the EsgynDB database.
Background
Popular databases on the market, such as Oracle and MySQL, provide their own precompiled statement cache pool, or provide the function indirectly through third-party components such as DBCP. The typical implementation maintains a fixed-length queue at the client: compiled statement information is cached in the queue, and when the queue is full and a new precompiled statement is added, an existing statement is evicted. The queue usually employs a least-recently-used replacement policy, i.e. an LRU (Least Recently Used) queue. The idea of the algorithm is that when an existing entry must be replaced, the object that has gone unused the longest is evicted. The basic data structure is a doubly linked list: new data is inserted at the head of the list, data is moved to the head on a cache hit (i.e., when the cached data is accessed), and the tail of the list is discarded when the list is full. A plain linked list, however, can only be traversed node by node, so membership cannot be decided in O(1) time. A hash table is therefore added for optimization: with the LRU capacity preset, the tail of the doubly linked list can be evicted in O(1) time when the cache is full; on every insertion or access, a new node can be added at the head, or an existing node moved there, in O(1) time; and a HashMap decides in constant time whether a key exists, without traversing the list.
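The hash-map-plus-doubly-linked-list structure described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; `OrderedDict` bundles the hash map and the linked list in one structure, so both `get` and `put` run in O(1):

```python
from collections import OrderedDict

class LRUCache:
    """Classic LRU cache: hash map for O(1) lookup, linked order for eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # front of the dict plays the role of the list head

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key, last=False)  # cache hit: move node to the head
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key, last=False)  # new/updated data goes to the head
        if len(self.data) > self.capacity:
            self.data.popitem(last=True)        # list full: discard the tail
```

Both membership test and promotion avoid any traversal of the list, which is exactly the optimization the hash table buys.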
Two problems remain. First, after SQL execution, a statement whose HitCount is 0 is not evicted from the cache merely because its position is near the tail; actually evicting such a statement can take a long time, and it cannot be reclaimed by a young GC. Second, the plain LRU algorithm suffers from "cache pollution". LRU is effective mainly when hot statements exist, but sporadic, periodic batch operations can lower the LRU hit rate: sporadic statements are added directly to the head of the linked list even though in practice they are called only once within a short period, and they instead "squeeze" possibly frequently used statements toward the back of the queue. Further research and improvement on the database caching method are therefore needed.
Disclosure of Invention
One objective of the present invention is to provide a precompiled statement cache system and method based on the EsgynDB database, so as to solve the prior-art problems that a statement object with a HitCount of 0 that has stayed in the cache too long cannot be evicted in time and occupies cache resources, and that the LRU algorithm is prone to "cache pollution".
In order to solve the technical problems, the invention adopts the following technical scheme:
The invention provides a precompiled statement cache system based on the EsgynDB database. The system comprises an Add cache module that records the enqueue time of the current precompiled statement object when it is enqueued. The Add cache module is further configured to insert the precompiled statement object at the middle position of the LRU linked list and to move the statement object to the head of the LRU linked list after it is hit a second time. The existing statement objects in the queue are scanned from the tail of the LRU linked list, and a statement object whose HitCount is 0 and whose time in the queue exceeds a predefined time threshold is evicted from the queue.
In another aspect, the present invention provides a method corresponding to the above system, comprising the following steps:
when a precompiled statement object is enqueued, recording the enqueue time of the current precompiled statement object;
scanning the existing statement objects in the queue from the tail of the LRU linked list, and evicting from the queue a statement object whose HitCount is 0 and whose time in the queue exceeds a predefined time threshold;
inserting the precompiled statement object at the middle position of the LRU linked list, and moving the statement object to the head of the LRU linked list after it is hit a second time.
In a preferred further technical scheme, the time from enqueue to the present of a statement object evicted from the queue is greater than the predefined time threshold.
In a further technical scheme, a precompiled statement object enters the cache pool of the EsgynDB database when it is enqueued.
Compared with the prior art, the invention has the following beneficial effects: by recording the enqueue time of each precompiled statement object and scanning from the tail of the LRU linked list for statement objects with a HitCount of 0, stale statement objects can be evicted from the queue in time and the time spent scanning the linked-list queue is reduced. Inserting precompiled statement objects at the middle of the LRU linked list prevents sporadic statements from being added directly to the head and squeezing possibly frequently used statement objects toward the back of the list, which solves the cache-pollution problem of the LRU algorithm.
Drawings
FIG. 1 is a diagram illustrating an LRU linked list queue in one embodiment of the invention.
Detailed Description
The invention is further elucidated with reference to the drawing.
The invention belongs to the technical field of distributed databases. In the database field, to implement a query an SQL statement must be parsed from string text into a structure a computer can process (usually a tree). Under high concurrency with repeated SQL statements, parsing can consume considerable CPU and time. To speed up queries, the database therefore usually stores the compiled statement at the server side; when a precompiled statement is used, the client obtains the information related to the statement directly from the server over the network without recompiling, which avoids compiling the same SQL statement multiple times and greatly improves execution efficiency.
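As an illustration of the precompile-once, execute-many idea (using Python's built-in sqlite3 rather than the EsgynDB client, which is not shown in this document), a connection keeps an internal cache of compiled statements whose size is set by the `cached_statements` parameter, so a parameterized statement is parsed once and reused:

```python
import sqlite3

# sqlite3 caches compiled statements per connection; cached_statements
# sets the cache size. The EsgynDB client-side pool plays an analogous role.
conn = sqlite3.connect(":memory:", cached_statements=128)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

sql = "INSERT INTO t (name) VALUES (?)"   # one parameterized statement...
for name in ("alice", "bob", "carol"):
    conn.execute(sql, (name,))            # ...compiled once, executed many times

rows = conn.execute("SELECT name FROM t ORDER BY id").fetchall()
print([r[0] for r in rows])  # → ['alice', 'bob', 'carol']
```

The repeated `execute` calls hit the statement cache instead of re-parsing the SQL text, which is the saving the precompiled-statement mechanism targets.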
However, when the performance requirements on the database are extremely high, this scheme can still be improved: fetching the information of a precompiled statement still incurs network IO each time. The invention therefore provides a precompiled statement cache pool that stores compiled statements at the client, reducing network time overhead and improving database performance.
In view of the above, one embodiment of the present invention is a precompiled statement caching method based on the EsgynDB database, which generally proceeds according to the following steps:
and step S1, recording the enqueue time of the current precompiled statement object when the precompiled statement object is enqueued. When the precompiled statement objects are listed, the precompiled statement objects enter a cache pool of an EsgynDB database.
Step S2, scanning the existing statement objects in the queue from the tail of the LRU linked list, and eliminating the statement objects out of the queue when the statement objects with HitCount of 0 in the queue are found and the existing time in the queue exceeds the predefined time threshold. In this step, the time from enqueuing to the current time of the deselected statement object is greater than the predefined time threshold.
And step S3, inserting the precompiled statement object from the middle position of the LRU linked list, and moving the statement object to the head of the LRU linked list after the statement object is hit for the second time.
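Steps S1 to S3 can be sketched as follows. This is a simplified illustration under assumptions of mine, not the patent's implementation: a plain Python list stands in for the doubly linked list (so operations are not O(1)), and the names `MidInsertLRU`, `add`, `get` and `sweep` are hypothetical.

```python
import time

class MidInsertLRU:
    """Sketch of the patent's scheme: mid insert, promote on second hit,
    tail scan evicting aged HitCount==0 entries."""

    def __init__(self, capacity, age_threshold):
        self.capacity = capacity            # fixed queue length
        self.age_threshold = age_threshold  # seconds before HitCount==0 entries age out
        self.order = []                     # keys; index 0 is the head, -1 the tail
        self.meta = {}                      # key -> [statement, hit_count, enqueue_time]

    def add(self, key, statement, now=None):
        # S1: record the enqueue time; S3: insert at the middle, not the head
        now = time.monotonic() if now is None else now
        if len(self.order) >= self.capacity:
            del self.meta[self.order.pop()]          # queue full: drop the tail entry
        self.meta[key] = [statement, 0, now]
        self.order.insert(len(self.order) // 2, key)

    def get(self, key):
        # A hit is counted; only from the second hit on is the entry promoted
        if key not in self.meta:
            return None
        rec = self.meta[key]
        rec[1] += 1
        if rec[1] >= 2:
            self.order.remove(key)
            self.order.insert(0, key)                # move to the head
        return rec[0]

    def sweep(self, now=None):
        # S2: scan from the tail, evicting HitCount==0 entries past the age threshold
        now = time.monotonic() if now is None else now
        for key in reversed(self.order[:]):
            statement, hits, enqueued = self.meta[key]
            if hits == 0 and now - enqueued > self.age_threshold:
                self.order.remove(key)
                del self.meta[key]
```

Because sporadic statements enter at the middle and are never hit again, the tail scan removes them before they can displace hot entries near the head.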
As shown in FIG. 1, head denotes the head of the doubly linked list and tail its tail. When a precompiled statement object is enqueued, the current time is recorded first; the statement object is then inserted into the LRU linked-list queue at a position between head and tail, waits for a further hit, and only then is moved to the head. This realizes the prior-art LRU-K algorithm in a different way, without maintaining a second queue. For comparison, the prior-art LRU-K algorithm works as follows:
The K in LRU-K denotes the number of recent uses, so plain LRU can be regarded as LRU-1. The core idea of LRU-K is to extend the criterion "used once recently" to "used K times recently". However, the LRU-K algorithm requires maintaining an additional queue that records the access history of all cached data: an item is placed in the cache only once it has been accessed K times, and on eviction LRU-K removes the item whose Kth-most-recent access lies furthest in the past.
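A minimal LRU-2 sketch of the scheme just described, with the extra history queue the passage mentions: an entry enters the cache only on its second access, and eviction removes the entry whose second-most-recent access is oldest. Class and method names are illustrative, not from the patent.

```python
from collections import defaultdict, deque

class LRU2:
    """LRU-K with K=2: admission after 2 accesses, eviction by 2nd-most-recent
    access time. Simplified: history entries are never garbage-collected."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.history = defaultdict(deque)  # key -> its last (up to 2) access times
        self.cache = {}                    # only keys accessed >= 2 times live here
        self.clock = 0                     # logical time, bumped per access

    def access(self, key, value):
        """Record an access; return True if the key is now cached."""
        self.clock += 1
        h = self.history[key]
        h.append(self.clock)
        if len(h) > 2:
            h.popleft()                    # keep only the 2 most recent accesses
        if len(h) >= 2:                    # K = 2 reached: admit to the cache
            if key not in self.cache and len(self.cache) >= self.capacity:
                # evict the cached key whose 2nd-most-recent access is oldest
                victim = min(self.cache, key=lambda k: self.history[k][0])
                del self.cache[victim]
            self.cache[key] = value
        return key in self.cache
```

The patent's mid-insert variant obtains a similar "prove yourself twice" effect without this second bookkeeping queue.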
Another embodiment of the present invention is a precompiled statement cache system based on the EsgynDB database. Its core is an Add cache module, which can be implemented as a computer-program module. The Add cache module serves as a virtual module that records the enqueue time of the current precompiled statement object when it is enqueued; on enqueue, the precompiled statement object enters the cache pool of the EsgynDB database.
Corresponding to the above method, the Add cache module is further configured to insert the precompiled statement object at the middle position of the LRU linked list and to move the statement object to the head of the LRU linked list after it is hit a second time, and to scan the existing statement objects in the queue from the tail of the LRU linked list, evicting from the queue a statement object whose HitCount is 0 and whose time in the queue exceeds a predefined time threshold. As before, the time from enqueue to the present of an evicted statement object is greater than the predefined time threshold.
In addition to the foregoing, it should be noted that reference throughout this specification to "one embodiment," "another embodiment," "an embodiment," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described generally throughout this application. The appearances of the same phrase in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the scope of the invention to effect such feature, structure, or characteristic in connection with other embodiments.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More specifically, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, other uses will also be apparent to those skilled in the art.
Claims (6)
1. A precompiled statement cache method based on the EsgynDB database, characterized by comprising the following steps:
when a precompiled statement object is enqueued, recording the enqueue time of the current precompiled statement object;
scanning the existing statement objects in the queue from the tail of the LRU linked list, and evicting from the queue a statement object whose HitCount is 0 and whose time in the queue exceeds a predefined time threshold;
inserting the precompiled statement object at the middle position of the LRU linked list, and moving the statement object to the head of the LRU linked list after it is hit a second time.
2. The EsgynDB database-based precompiled statement caching method of claim 1, wherein: the time from enqueue to the present of a statement object evicted from the queue is greater than the predefined time threshold.
3. The EsgynDB database-based precompiled statement caching method of claim 1, wherein: a precompiled statement object enters the cache pool of the EsgynDB database when it is enqueued.
4. A precompiled statement cache system based on the EsgynDB database, characterized in that: the system comprises an Add cache module, wherein the Add cache module is used for recording the enqueue time of the current precompiled statement object when the precompiled statement object is enqueued;
the Add cache module is further used for inserting the precompiled statement object at the middle position of the LRU linked list, and moving the statement object to the head of the LRU linked list after it is hit a second time; and for scanning the existing statement objects in the queue from the tail of the LRU linked list, and evicting from the queue a statement object whose HitCount is 0 and whose time in the queue exceeds a predefined time threshold.
5. The EsgynDB database-based precompiled statement cache system of claim 4, wherein: the time from enqueue to the present of a statement object evicted from the queue is greater than the predefined time threshold.
6. The EsgynDB database-based precompiled statement cache system of claim 4, wherein: a precompiled statement object enters the cache pool of the EsgynDB database when it is enqueued.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010410996.9A CN111625550A (en) | 2020-05-15 | 2020-05-15 | System and method for caching precompiled statements based on EsgynDB database |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111625550A true CN111625550A (en) | 2020-09-04 |
Family
ID=72258020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010410996.9A Pending CN111625550A (en) | 2020-05-15 | 2020-05-15 | System and method for caching precompiled statements based on EsgynDB database |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111625550A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112487029A (en) * | 2020-11-11 | 2021-03-12 | 杭州电魂网络科技股份有限公司 | Progressive cache elimination method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030088739A1 (en) * | 2001-11-02 | 2003-05-08 | John Wilkes | Adaptive data insertion for caching |
US20030236780A1 (en) * | 2002-05-10 | 2003-12-25 | Oracle International Corporation | Method and system for implementing dynamic cache of database cursors |
US7024406B1 (en) * | 2001-06-01 | 2006-04-04 | Oracle International Corporation | Driver and method for processing a database statement |
CN108255477A (en) * | 2018-01-15 | 2018-07-06 | 贵州易鲸捷信息技术有限公司 | A kind of method and system by SQL compiler simulative optimization database performances |
Non-Patent Citations (4)
Title |
---|
张恒瑞 et al.: "In-depth analysis of the cache replacement algorithms LRU and 2Q", 《现代计算机(专业版)》, no. 04, 5 February 2017 |
张舜 et al.: "A cache replacement algorithm based on access-graph optimization", 《计算机应用与软件》, no. 09, 15 September 2010, pages 52-54 |
Anonymous: "A new-generation enterprise-grade high-concurrency mixed-workload big data solution: the Esgyn converged distributed database", 互联网天地, no. 06, pages 58-59 |
马福: "Esgyn and Zaloni cooperate to enhance data lake functionality", 《计算机与网络》, no. 01, 12 January 2018, pages 78-79 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108319654B (en) | Computing system, cold and hot data separation method and device, and computer readable storage medium | |
US8010766B2 (en) | Increasing buffer locality during multiple table access operations | |
CN100543750C (en) | A kind of matrix type data caching method and device of using based on WEB | |
CN109949156B (en) | Data processing method and server of blockchain contracts | |
WO2009000173A1 (en) | Searching method, searching system and searching server | |
Lin et al. | Frame-sliced signature files | |
CN111506604B (en) | Method, apparatus and computer program product for accessing data | |
CN111752804B (en) | Database cache system based on database log scanning | |
CN111625550A (en) | System and method for caching precompiled statements based on EsgynDB database | |
CN111177090A (en) | Client caching method and system based on sub-model optimization algorithm | |
US10241927B2 (en) | Linked-list-based method and device for application caching management | |
CN110413689B (en) | Multi-node data synchronization method and device for memory database | |
CN115982162A (en) | Message forwarding table query method and electronic equipment | |
US9928274B2 (en) | Dynamically adjust duplicate skipping method for increased performance | |
US7937541B2 (en) | Speed selective table scan operation | |
CN112395440A (en) | Caching method, efficient image semantic retrieval method and system | |
CN108628540A (en) | Data storage device and method | |
CN103365897A (en) | Fragment caching method supporting Bigtable data model | |
AU2016277745A1 (en) | Linked-list-based method and device for application caching management | |
CN113139002A (en) | Hot spot data caching method based on Redis | |
CN102479213B (en) | Data buffering method and device | |
CN111209308B (en) | Method and device for optimizing distributed cache | |
CN113177031B (en) | Processing method and device for database shared cache, electronic equipment and medium | |
CN110334251B (en) | Element sequence generation method for effectively solving rehash conflict | |
CN116383250A (en) | Method for generating cache and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200904 |