CN114860723A - Method, storage medium and device for processing shared memory buffer pool of database - Google Patents

Info

Publication number
CN114860723A
CN114860723A (application CN202210451345.3A)
Authority
CN
China
Prior art keywords
buffer pool
fixed
tree index
page
database
Prior art date
Legal status
Pending
Application number
CN202210451345.3A
Other languages
Chinese (zh)
Inventor
冷建全
孙文奇
Current Assignee
Beijing Kingbase Information Technologies Co Ltd
Original Assignee
Beijing Kingbase Information Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingbase Information Technologies Co Ltd filed Critical Beijing Kingbase Information Technologies Co Ltd
Priority to CN202210451345.3A
Publication of CN114860723A

Classifications

    • G06F16/2246 Trees, e.g. B+ trees (G06F16/22 Indexing; data structures and storage structures therefor)
    • G06F16/24552 Database cache management (G06F16/2455 Query execution)
    • G06F9/544 Buffers; shared memory; pipes (G06F9/54 Interprogram communication)


Abstract

The invention provides a processing method, storage medium, and device for a shared memory buffer pool of a database. The method comprises the following steps: acquiring an unpin instruction, where the unpin instruction instructs the fixed buffer pool to release the fixed pages of a specified B-tree index, and the fixed buffer pool is a buffer pool allocated in advance in the database's memory cache space, independent of the common buffer pool; acquiring the access state of the B-tree index; waiting for all accesses to the B-tree index to finish; and setting the fixed buffer pool flag of the B-tree index to the released state, and moving the fixed pages of the B-tree index from the fixed buffer pool to the common buffer pool. The scheme of the invention ensures that the pages of an unpinned B-tree index participate normally in common-buffer-pool replacement, without affecting the database's normal access responses.

Description

Method, storage medium and device for processing shared memory buffer pool of database
Technical Field
The present invention relates to database technology, and in particular to a method, a storage medium, and a device for processing the shared memory buffer pool of a B-tree-indexed database.
Background
The B-tree index is the most commonly used index in databases; it supports equality and range queries on orderable data, and using the index greatly increases query speed. By keeping the data ordered in a B-tree, the B-tree index completes data lookup, sequential access, insertion, and deletion in logarithmic time.
The B-tree is a self-balancing n-way search tree that generalizes the binary search tree. Unlike a self-balancing binary search tree, the B-tree is optimized for reading and writing large blocks of data, and it reduces the intermediate steps needed to locate a record, thereby increasing access speed.
In a B-tree, an internal (non-leaf) node may have a variable number of children within a predefined range. When data is inserted into or removed from a node, its number of children changes, and internal nodes may be merged or split so that each node stays within the preset range. Because the number of children is allowed to vary within a range, the B-tree does not need to rebalance as frequently as other self-balancing search trees. A B-tree is kept balanced by requiring all leaf nodes to be at the same depth, and the depth of the whole tree grows slowly as data is added.
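The logarithmic-depth property can be checked with a quick calculation; the fanout and key counts below are illustrative choices, not figures from the patent:

```python
def btree_min_depth(num_keys: int, fanout: int) -> int:
    """Smallest depth at which a B-tree whose internal nodes have
    up to `fanout` children can reach num_keys leaf entries."""
    depth, reachable = 1, fanout
    while reachable < num_keys:
        reachable *= fanout
        depth += 1
    return depth

# With a fanout of 100 (plausible for 8 KB pages), one hundred
# million keys fit in a tree only four levels deep.
print(btree_min_depth(100_000_000, 100))  # 4
```

This is why a handful of shallow pages (root plus its children) sit on the path of essentially every lookup.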
In addition, there are variants of the original B-tree, such as the B+ tree and the B* tree, which can be regarded as B-trees in the broad sense and are also commonly used as database indexes.
An index built on a B-tree search structure can greatly accelerate data lookup in the database. However, every search in the B-tree proceeds level by level from the root node down to the node storing the key, which means that access to the root node can become a system bottleneck.
When the database accesses a page, it must update the page control structure, chiefly the reference counter, which records whether the page currently has referencers and whether it was referenced recently, informing buffer replacement decisions. This overhead is usually small, but as concurrency grows, the cost of concurrent updates to the page control structure rises sharply and becomes non-negligible. For a B-tree index, updating the control structures of the root node and other shallow node pages is therefore expensive. Yet simply not updating these page control structures causes other problems, such as disturbing memory buffer replacement.
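The contention described above can be sketched as follows; the class and field names are hypothetical, chosen only to illustrate why every access to a hot root page pays for a locked counter update:

```python
import threading

class PageHeader:
    """Minimal sketch of a page control structure: the reference
    counter is updated under a lock on every access, which becomes
    a hot spot when many workers hit the same root page."""
    def __init__(self):
        self._lock = threading.Lock()
        self.ref_count = 0       # current referencers
        self.usage_count = 0     # recent-use hint for replacement

    def pin(self):
        with self._lock:
            self.ref_count += 1
            self.usage_count += 1

    def unpin(self):
        with self._lock:
            self.ref_count -= 1

root = PageHeader()
workers = [threading.Thread(target=lambda: (root.pin(), root.unpin()))
           for _ in range(8)]
for t in workers: t.start()
for t in workers: t.join()
print(root.ref_count, root.usage_count)  # 0 8
```

Every one of the eight accesses serialized on the same lock; that serialization, not the counter arithmetic itself, is the overhead the patent targets.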
Disclosure of Invention
An object of the present invention is to provide a method that avoids the overhead growth caused by concurrent updates to the page control structures of a B-tree index.
A further object of the invention is to improve the overall performance of the database.
In particular, the invention provides a method for processing the shared memory buffer pool of a B-tree-indexed database, comprising the following steps:
acquiring an unpin instruction, where the unpin instruction instructs the fixed buffer pool to release the fixed pages of a specified B-tree index, and the fixed buffer pool is a buffer pool allocated in advance in the database's memory cache space, independent of the common buffer pool;
acquiring the access state of the B-tree index;
waiting for all accesses to the B-tree index to finish;
and setting the fixed buffer pool flag of the B-tree index to the released state, and moving the fixed pages of the B-tree index from the fixed buffer pool to the common buffer pool.
Optionally, the process of waiting for all accesses to the specified B-tree index to finish further includes:
applying for and holding an exclusive relation lock on the B-tree index;
using the exclusive relation lock to suspend responses to new access requests to the specified B-tree index.
Optionally, after the step of moving the fixed page of the B-tree index from the fixed buffer pool to the normal buffer pool, the method further includes:
the exclusive relational lock is released to resume access to the specified B-tree index in response.
Optionally, after the step of restoring access to the specified B-tree index, the method further includes:
the updating of the reference counter of the FixedPage is resumed so that the FixedPage participates in the buffer replacement of the ordinary buffer pool.
Optionally, after the step of obtaining the unfixed instruction, the method further includes:
searching a fixed page in the fixed buffer pool to determine whether the fixed page is stored in the fixed buffer pool;
and if so, executing the step of acquiring the access state of the B-tree index.
Optionally, in the case where the fixed page is not stored in the fixed buffer pool, the state in which the fixed page participates in replacement in the common buffer pool is maintained.
Optionally, after the step of moving the fixed page of the B-tree index from the fixed buffer pool to the normal buffer pool, the method further includes:
and refreshing the residual space of the fixed buffer pool for storing the new fixed page.
Optionally, before the step of obtaining the unfixed instruction, the method further includes:
acquiring a buffer pool setting instruction;
and executing the flow of moving the fixed node page of the appointed B-tree index into the fixed buffer pool according to the buffer pool setting instruction.
According to another aspect of the present invention, there is also provided a machine-readable storage medium having stored thereon a machine-executable program which, when executed by a processor, implements the method of shared memory buffer pool handling for a B-tree index database of any of the above.
According to yet another aspect of the present invention, there is also provided a computer device comprising a memory, a processor, and a machine-executable program stored on the memory and running on the processor, and the processor, when executing the machine-executable program, implements the shared memory buffer pool processing method of the B-tree index database of any of the above.
The processing method of the shared memory buffer pool of the B-tree-indexed database pins the node pages of the to-be-fixed nodes of the specified B-tree index (generally the root node and shallow child nodes) into a fixed buffer pool allocated in the memory cache space, independent of the common buffer pool. When the B-tree index needs to be unpinned, the fixed pages of the B-tree index are moved from the fixed buffer pool to the common buffer pool only after all accesses to the index issued before the unpin have finished. The access control structures (reference counters) of these pages therefore reflect the real number of accessors once the pin is removed, so the unpinned pages participate normally in common-buffer-pool replacement and the database's normal access responses are unaffected.
Furthermore, the processing method of the shared memory buffer pool of the B-tree-indexed database sets the node pages of a B-tree index via a buffer pool setting instruction, enabling flexible configuration; with correct settings by the database user, the overall performance of the database can be improved effectively.
Furthermore, the processing method of the shared memory buffer pool of the B-tree-indexed database optimizes the flow of moving the node pages of a B-tree index into the fixed buffer pool. Once a node page has been moved into the fixed buffer pool, it no longer participates in common page replacement, avoiding the overhead growth caused by concurrent updates to the page control structures of the B-tree index and greatly improving the performance of database index queries under high concurrency.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the invention will be described in detail hereinafter, by way of illustration and not limitation, with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flow chart diagram of a method for processing a shared memory buffer pool of a B-tree index database according to one embodiment of the invention;
FIG. 2 is a diagram illustrating a database shared memory in a method for processing a buffer pool of a shared memory of a B-tree index database according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention to perform node page fixing;
FIG. 4 is a flowchart illustrating a method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention for accessing a page in a fixed buffer pool;
FIG. 5 is a flowchart illustrating a method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention to perform node page dismissal;
FIG. 6 is a schematic diagram of a machine-readable storage medium according to one embodiment of the invention; and
FIG. 7 is a schematic diagram of a computer device according to one embodiment of the invention.
Detailed Description
Fig. 1 is a schematic flowchart of a processing method of a buffer pool of shared memory of a B-tree index database according to an embodiment of the present invention, and fig. 2 is a schematic diagram of a shared memory of a database in the processing method of the buffer pool of shared memory of the B-tree index database according to an embodiment of the present invention. The method for processing the database shared memory buffer pool generally comprises the following steps:
step S102, acquiring a fixing release instruction; the unpin instruction is used for instructing the fixed buffer pool to release the fixed page of the specified B-tree index. The fixed buffer pool is a buffer pool which is developed in advance in the memory cache space of the database and is independent of the common buffer pool. The pages stored in the fixed buffer pool do not participate in the normal replacement of the common buffer pool, and the reference counter does not need to be updated.
The unpin instruction may be expressed in SQL (Structured Query Language) and contains information identifying the B-tree index to be specified, such as an identifier.
Step S102 may include: parsing the unpin instruction; and determining the specified B-tree index from the B-tree index information in the parse result. The root node of the specified B-tree index and the child nodes whose depth is below a set value (typically the branch nodes at depth 2) are the fixed nodes and are allowed to use the fixed buffer pool.
Step S104, acquiring the access state of the B-tree index; that is, determining, before the unpin takes effect, whether any process is accessing the B-tree index to be unpinned.
Step S106, waiting for all accesses to the B-tree index to finish. During the wait, an exclusive relation lock on the B-tree index may be applied for and held; the exclusive relation lock is used to suspend responses to new access requests to the specified B-tree index.
Step S108, the fixed buffer pool mark of the B-tree index is set to be in a release state, and the fixed page of the B-tree index is moved from the fixed buffer pool to the common buffer pool.
After step S108, the exclusive relation lock may be released to resume responding to accesses to the specified B-tree index. Thereafter, updates to the reference counters of the fixed pages are resumed so that those pages participate in buffer replacement in the common buffer pool. Through these steps, the access control structures (reference counters) of the formerly fixed pages reflect the real number of accessors after the pin is removed, so the unpinned pages participate normally in common-buffer-pool replacement and the database's normal access responses are unaffected.
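Taken together, the unpin flow of steps S102–S108 can be sketched as follows. All names and data structures here are illustrative assumptions, not the patent's actual implementation:

```python
import threading
from dataclasses import dataclass, field

@dataclass
class Page:
    page_no: int
    fixed: bool = False     # set while the page lives in the fixed pool
    ref_count: int = 0

@dataclass
class Index:
    name: str
    fixed_flag: bool = True            # "uses fixed buffer pool" mark
    lock: threading.Lock = field(default_factory=threading.Lock)

def unpin_index(index, fixed_pool, normal_pool):
    """Sketch of the unpin flow: take the exclusive relation lock
    (blocking new accesses, waiting out old ones), clear the index's
    fixed buffer pool flag, then move its fixed pages into the
    common pool, where reference counting resumes."""
    with index.lock:                       # exclusive relation lock
        index.fixed_flag = False           # flag: released state
        for page in fixed_pool.pop(index.name, []):
            page.fixed = False             # counter updates resume
            normal_pool.setdefault(index.name, []).append(page)

fixed_pool = {"idx_a": [Page(1, fixed=True), Page(2, fixed=True)]}
normal_pool = {}
idx = Index("idx_a")
unpin_index(idx, fixed_pool, normal_pool)
print(idx.fixed_flag, len(normal_pool["idx_a"]))  # False 2
```

After the call, both pages carry `fixed=False` and sit in the common pool, so replacement sees them again.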
Considering that the specified B-tree index's node pages may not yet have been moved into the fixed buffer pool, after the step of acquiring the unpin instruction the method may further include: searching the fixed buffer pool to determine whether the fixed pages are stored there; and, if so, executing the step of acquiring the access state of the B-tree index. If the fixed pages are not stored in the fixed buffer pool, the pages simply remain in the state of participating in replacement in the common buffer pool.
After the step of moving the fixed pages of the B-tree index from the fixed buffer pool to the common buffer pool, the size of the remaining space of the fixed buffer pool may also be refreshed so that new fixed pages can be stored.
In the database of this embodiment, a common buffer pool and a fixed buffer pool are created in advance in the buffer space of the shared memory. The common buffer pool performs common replacement of the page by using a conventional buffer replacement algorithm, for example, a replacement algorithm such as LRU (Least Recently Used), LFU (Least Frequently Used), OPT (OPTimal page replacement), and the like. The fixed buffer pool is used for buffering the page specified by the database user. The fixed buffer pool and the common buffer pool are independent, and after the node page is moved into the fixed buffer pool, the node page does not participate in common page replacement any more, so that the overhead increase caused by the concurrent updating of the page control structure of the B-tree index is avoided, and the performance of database index query under a high-concurrency scene is greatly improved.
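A minimal sketch of the common buffer pool's replacement behavior, using LRU as the paragraph above mentions. This is illustrative only (a real engine would use locked buffer descriptors and often a clock-sweep variant); the point is that pages moved into the fixed pool never enter this structure and so are never eviction victims:

```python
from collections import OrderedDict

class LRUBufferPool:
    """Toy LRU replacement for the common buffer pool.
    Capacity is in pages."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_no -> page data

    def access(self, page_no, load):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)     # mark recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the LRU victim
            self.pages[page_no] = load(page_no)
        return self.pages[page_no]

pool = LRUBufferPool(2)
for n in (1, 2, 1, 3):                 # page 2 becomes the LRU victim
    pool.access(n, lambda n: f"page-{n}")
print(list(pool.pages))  # [1, 3]
```

Accessing page 1 again before loading page 3 is what saves it; page 2, least recently used, is evicted.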
The size of the fixed buffer pool can be configured through a configuration file, independently of the common buffer pool. Before the node pages of the fixed nodes are moved into the fixed buffer pool, they are cached in the shared-memory cache space of the database; that is, the common buffer pool completes the caching and provides the cache service.
The fixed buffer pool of the embodiment is used for storing the pages of the nodes to be fixed of the B-tree index, the pages are accessed at high frequency and controllable in size, and excessive consumption of the fixed buffer pool is avoided. In addition, the excessive unexpected occupation of the system memory by an overlarge fixed buffer pool can be avoided, and the operation of the database is facilitated. The nodes to be fixed may be those nodes with a smaller tree depth, such as a root node and a bifurcation node with a depth of 2. This part of the nodes is fixed in the buffer pool, rather than fixing all pages. Because the number of the child nodes of each node in the B-tree has a strict upper limit, the total size of the node is completely controllable, and the risk of exhaustion of a fixed buffer pool is avoided.
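The "completely controllable total size" claim can be made concrete with a small bound; the fanout, fixed depth, and page size below are assumed values, not figures from the patent:

```python
def max_fixed_pages(fanout: int, fixed_depth: int = 2) -> int:
    """Upper bound on the number of pages pinned when fixing all
    nodes of depth <= fixed_depth in a B-tree whose nodes have at
    most `fanout` children."""
    return sum(fanout ** d for d in range(fixed_depth))

# Root plus its direct children, fanout 200, 8 KB pages:
pages = max_fixed_pages(200)   # 1 + 200 = 201 pages
print(pages * 8, "KB")         # about 1.6 MB of fixed buffer
```

Because the fanout has a strict upper limit, this bound holds no matter how many keys the index stores, which is why the fixed pool cannot be exhausted by a single index's shallow nodes.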
The database can establish mapping from the disk page number to the specific position of the buffer pool through a hash table, the specific position of the buffer pool comprises the buffer pool and the offset relative to the base address of the buffer pool, and the specific position of the page in the buffer pool can be determined through the base address of the buffer pool and the offset.
The head of the page in the buffer pool can be provided with storage bits for recording the relevant state of the page, and the recorded information can comprise whether the page exists in the fixed buffer pool, a reference counter, whether the page can be replaced and cleaned, and the like.
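The hash-table mapping and the page-header bits described in the two paragraphs above might be modeled like this; all field names and the pool identifiers are hypothetical:

```python
from dataclasses import dataclass

FIXED_POOL, NORMAL_POOL = 0, 1   # illustrative pool identifiers

@dataclass
class BufferTag:
    pool: int        # which buffer pool the page lives in
    offset: int      # offset from that pool's base address

@dataclass
class PageState:
    in_fixed_pool: bool = False  # header bit: lives in fixed pool
    ref_count: int = 0           # header field: reference counter
    replaceable: bool = True     # header bit: may be evicted/cleaned

# Hash table from disk page number to buffer location; a real
# engine would use a partitioned, lock-protected table.
page_table: dict[int, BufferTag] = {}
page_table[1042] = BufferTag(FIXED_POOL, offset=3 * 8192)

tag = page_table[1042]
print(tag.pool == FIXED_POOL, tag.offset)  # True 24576
```

Resolving a page is then one hash lookup plus one base-address-plus-offset computation, regardless of which pool holds it.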
The disk hosting the database may store a data file 21, a log file 22, and a configuration file 23, and a common buffer pool 11, a fixed buffer pool 12, and other shared memory regions may be allocated in the shared memory 10 of the database. As is well known to those skilled in the art, disk read speed is significantly slower than memory read speed. The database of this embodiment allocates a common buffer pool 11 and a fixed buffer pool 12 in the shared memory 10. The common buffer pool 11 serves the normal caching of pages, while the fixed buffer pool 12 pins the node pages of the to-be-fixed nodes of the specified B-tree index according to the database user's settings.
Before the step of acquiring the unpin instruction, the method may further include a process of pinning the fixed node pages of the B-tree index into the fixed buffer pool: acquiring a buffer pool setting instruction; and, according to the buffer pool setting instruction, executing the flow of moving the fixed node pages of the specified B-tree index into the fixed buffer pool.
The buffer pool setting instruction may also use SQL (Structured Query Language) and contain information, such as an identifier, of the B-tree index that needs to be specified.
The process of moving the fixed node page of the designated B-tree index into the fixed buffer pool according to the buffer pool setting instruction may include: acquiring a buffer pool setting instruction; determining the appointed B-tree index and a node to be fixed of the appointed B-tree index according to the buffer pool setting instruction; setting a fixed buffer pool mark of the B-tree index to be in a fixed state; and after receiving the access to the appointed B-tree index, executing the process of moving the node page of the node to be fixed into a fixed buffer pool, wherein the fixed buffer pool is a buffer pool which is developed in advance in a memory cache space of the database and is independent of a common buffer pool.
Determining the specified B-tree index and its to-be-fixed nodes according to the buffer pool setting instruction may include: parsing the buffer pool setting instruction; determining the specified B-tree index from the B-tree index information in the parse result; and taking the root node of the specified B-tree index and the child nodes whose depth is below a set value (typically the branch nodes at depth 2) as the nodes to be fixed.
The fixed buffer pool flag of the B-tree index is modified to the fixed state, i.e., the flag is set to identify that the B-tree index has been designated as an index to be fixed, so whether a B-tree index has been designated can be determined from its fixed buffer pool flag. The flag may be a single bit: when set to 1, it indicates that the B-tree index is designated; when set to 0, it indicates that the B-tree index is not designated.
The process of moving the node page of the node to be fixed into the fixed buffer pool may include: caching the node page to a common buffer pool; obtaining the size of the residual space of the fixed buffer pool; judging whether the residual space is enough to store the node page or not; and if so, moving the node page into a fixed buffer pool.
If the remaining space is not sufficient to store the node page, the node page may be stored in the common buffer pool, and its reference counter continues to be updated. That is, when the node page is not stored in the fixed buffer pool, the common buffer pool caches it and it participates in the common buffer pool's buffer replacement.
After step S108 above moves the fixed pages of another B-tree index from the fixed buffer pool to the common buffer pool, the remaining-space size of the fixed buffer pool is refreshed and new fixed pages can be stored. That is, after a node page has been stored in the common buffer pool, once an unpin event occurs in the fixed buffer pool, the step of judging whether the remaining space is sufficient to store the node page can be executed again, and when it is sufficient the node page is moved into the fixed buffer pool.
After the step of moving the node page into the fixed buffer pool, the method may further include: setting a fixed mark of a node page to be in a fixed state; and stopping updating the reference counter of the node page. The reference counter of the node page in the fixed buffer pool does not need to be updated, so that the overhead rise caused by the concurrent update of the page control structure of the B-tree index can be avoided.
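The pin path just described, including the insufficient-space fallback, can be sketched as follows (function and field names are illustrative):

```python
def try_pin_page(page, fixed_pool, capacity):
    """Sketch of the pin path: the page is assumed to be cached in
    the common pool already; if the fixed pool has room, the page
    is moved in, its fixed flag is set, and reference-counter
    updates stop. Otherwise it stays in the common pool and keeps
    normal counter bookkeeping."""
    if len(fixed_pool) >= capacity:
        return False                 # fallback: stays in common pool
    fixed_pool.append(page)
    page["fixed"] = True             # flag: fixed state
    return True

fixed_pool, capacity = [], 2
pages = [{"no": n, "fixed": False} for n in range(3)]
results = [try_pin_page(p, fixed_pool, capacity) for p in pages]
print(results)  # [True, True, False]
```

The third page is the one that, per the text above, waits in the common buffer pool until an unpin event frees fixed-pool space.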
The process of handling access of the designated B-tree index after performing the step of the flow of moving the node page of the node to be pinned into the pinned buffer pool may include: obtaining the access to the appointed B-tree index and determining an access target page; judging whether the fixed mark of the access target page is in a fixed state or not; if so, ignoring the processing of the reference counter of the access target page, and reading the access target page from the fixed buffer pool; if not, updating the reference counter of the access target page, and reading the access target page from the common buffer pool.
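A sketch of that access-dispatch logic, with plain dictionaries standing in for the two pools (an assumed representation, not the patent's):

```python
def read_page(page_no, fixed_pool, normal_pool):
    """Dispatch an access: a fixed page is served from the fixed
    pool with no reference-counter update; otherwise the common
    pool serves it and the counter is bumped as usual."""
    page = fixed_pool.get(page_no)
    if page is not None and page["fixed"]:
        return page                     # refcount deliberately skipped
    page = normal_pool[page_no]
    page["ref_count"] += 1              # normal replacement bookkeeping
    return page

fixed_pool = {1: {"fixed": True, "ref_count": 0}}
normal_pool = {2: {"fixed": False, "ref_count": 0}}
read_page(1, fixed_pool, normal_pool)
read_page(2, fixed_pool, normal_pool)
print(fixed_pool[1]["ref_count"], normal_pool[2]["ref_count"])  # 0 1
```

The fixed page's counter stays untouched however many times it is read, which is exactly the contention the fixed pool removes.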
Using the method of the above embodiment, a database user may designate a B-tree index as using the fixed buffer pool, with the fixed range defined as the high-frequency node pages of that index, for example the root node (depth 1) and the branch nodes at depth 2. All pages within the fixed range, and pages later created by splits that fall within the range, are moved into the fixed buffer pool unless its capacity cannot meet the storage requirement. Pages not yet moved in due to insufficient capacity remain temporarily in the common buffer pool. For pages that have been pinned, updates to their reference counters stop and they are excluded from replacement.
Pages to be unpinned in the fixed buffer pool are released by the following process: acquiring an unpin instruction; determining the B-tree index to be unpinned according to the unpin instruction; and executing the flow of moving that B-tree index's pages out of the fixed buffer pool to release fixed buffer pool space.
In the method of this embodiment, correctly handling the control structure (i.e., the reference counter) of the node page is particularly critical: the reference counter must return to a consistent state when the pin is released. First, for a page about to be pinned, each process that accessed the page before it was pinned must record whether the page was in the fixed state at read time; if it was not, the accessing process must update the reference counter normally, otherwise a reference-count leak occurs. Second, for a page about to be unpinned, all processes that accessed the page before the unpin must be guaranteed to have finished correctly before the unpin proceeds. To this end, a coarse-grained lock (the exclusive relation lock) may be used: the unpinning session waits, via the relation lock held in exclusive mode, for all current accessing processes to finish before starting the unpin flow. Thus, once the page is unpinned, its reference counter equals the real number of accessors at that moment, and the page can participate normally in replacement.
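The bookkeeping invariant described here — record the fixed state at read time, and only decrement a counter that was actually incremented — can be sketched as follows. This is one consistent interpretation of the text, not the patent's exact rule:

```python
def begin_access(page):
    """Record whether the page was in the fixed state at read time;
    only an access to an unfixed page increments the counter."""
    was_fixed = page["fixed"]
    if not was_fixed:
        page["ref_count"] += 1
    return was_fixed

def end_access(page, was_fixed):
    # Symmetric bookkeeping: decrement only if this access
    # incremented, so the counter never leaks across a pin/unpin.
    if not was_fixed:
        page["ref_count"] -= 1

page = {"fixed": False, "ref_count": 0}
tok = begin_access(page)     # counted: page was unfixed at read time
end_access(page, tok)        # counter returns to the true value

pinned = {"fixed": True, "ref_count": 0}
tok2 = begin_access(pinned)  # not counted: page already fixed
end_access(pinned, tok2)
print(page["ref_count"], pinned["ref_count"])  # 0 0
```

With the exclusive relation lock ensuring no access straddles the unpin itself, this symmetry is what makes the counter equal the real accessor count afterward.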
Fig. 3 is a schematic flow chart of node page fixing performed by the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention, where the flow of fixing a node page of a B-tree index may include:
step S302, obtaining a buffer pool setting instruction input by a database user, wherein the buffer pool setting instruction is used for appointing a B-tree index of the database to use a fixed buffer pool;
step S304, the database parses the buffer pool setting instruction and determines the B-tree index to be designated;
step S306, determining the root node of the designated B-tree index and the nodes within the set depth (typically depth 2);
step S308, opening the appointed B-tree index, and setting the fixed buffer pool mark of the B-tree index to be in a fixed state;
step S310, obtaining the access to the database, and determining whether the access is the first access after the B-tree index is specified;
step S312, reading the node page to be fixed of the B-tree index into a common buffer pool of a shared memory; attempting to fix the node page to be fixed into the fixed buffer pool by performing steps S314 to S318;
step S314, judging whether the size of the residual space of the fixed buffer pool is enough to store the node page, if not, storing the node page to be fixed in a common buffer pool;
step S316, if there is sufficient remaining space, moving the node page into the fixed buffer pool;
step S318, the fixed flag is set to be in a fixed state, and updating of the reference counter of the node page is stopped.
Through the above steps, a database user can give the small, frequently accessed pages of a database's B-tree indexes greater stickiness in the memory buffer, avoiding the overhead growth caused by concurrent updates to the page control structures of the B-tree index and greatly improving the performance of database index queries under high concurrency.
Fig. 4 is a schematic flowchart of a page access to a fixed buffer pool by a processing method for a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention, where the access process may include:
step S402, obtaining the access to the appointed B-tree index and determining the access target page;
step S404, judging whether the fixed mark of the access target page is in a fixed state;
step S406, when the fixed flag of the access target page is in the fixed state, skipping the processing of the reference counter of the access target page;
step S408, reading an access target page from the fixed buffer pool, and executing access response operation;
step S410, under the condition that the fixed mark of the access target page is in an unfixed state, reading the access target page from the common buffer pool, and executing access response operation; updating a reference counter of the access target page, and accumulating times;
in step S412, after the access to the access target page is finished, if the access target page fixed flag is still in the unfixed state, the reference counter of the access target page is updated, and the number of times is decreased by one.
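Steps S402 to S412 can be sketched as a single access routine. This is a minimal sketch assuming dict-based pools and a page record with `fixed` and `ref_count` fields; all names are illustrative, not the patent's data structures:

```python
def access_page(page, fixed_pool, common_pool):
    """Access one target page of a B-tree index (sketch of steps S402-S412)."""
    # S404/S406: if the fixed flag is set, the reference counter is ignored.
    if page["fixed"]:
        return fixed_pool[page["id"]]          # S408: read from the fixed pool
    # S410: unfixed page; count the access, then read from the common pool.
    page["ref_count"] += 1
    try:
        return common_pool[page["id"]]
    finally:
        # S412: after the access ends, decrement only if still unfixed.
        if not page["fixed"]:
            page["ref_count"] -= 1
```

Because a fixed page never touches its reference counter, concurrent readers of a hot index page no longer contend on that shared counter.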
FIG. 5 is a schematic flowchart of node page unfixing in a processing method for a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention. The process of unfixing the node pages of a B-tree index stored in a fixed buffer pool may include:
step S502, acquiring an unfix instruction, the unfix instruction being used for instructing the fixed buffer pool to release the fixed pages of a specified B-tree index;
step S504, the database parsing the unfix instruction and determining the B-tree index to be unfixed;
step S506, searching the fixed buffer pool for the fixed pages to determine whether they are stored in the fixed buffer pool; for example, this can be determined from the fixed flag of each node page: if the fixed flag is in the fixed state, the page is stored in the fixed buffer pool, and if the fixed flag is in the unfixed state, the page is still stored in the common buffer pool;
step S508, in the case that the fixed pages are stored in the fixed buffer pool, judging whether any access process is accessing the B-tree index;
step S510, if accesses exist, applying for and holding an exclusive relation lock on the B-tree index, using the exclusive relation lock to suspend new accesses to the specified B-tree index, and waiting for all existing accesses to end;
step S512, setting the fixed buffer pool flag of the B-tree index to the released state;
step S514, moving the fixed pages of the B-tree index from the fixed buffer pool to the common buffer pool, and setting their fixed flags to the unfixed state;
step S516, releasing the exclusive relation lock to resume access to the specified B-tree index;
step S518, resuming updates to the reference counters of the formerly fixed pages, so that they again participate in buffer replacement in the common buffer pool.
In step S520, in the case that the fixed pages are not stored in the fixed buffer pool, the fixed buffer pool flag of the B-tree index is set to the released state directly, and the unfixing process then ends.
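The unfixing flow of FIG. 5 can be sketched as below. `threading.Lock` stands in for the database's exclusive relation lock, and all class and field names are assumptions for illustration, not the patented implementation:

```python
import threading

class IndexBufferState:
    """Buffer-pool state for one B-tree index (illustrative sketch of S502-S520)."""
    def __init__(self):
        self.relation_lock = threading.Lock()   # coarse-grained exclusive relation lock
        self.fixed_pool = {}                    # page_id -> page record
        self.common_pool = {}
        self.fixed_pool_flag = "active"

    def unfix_all(self):
        # S506/S520: nothing is pinned, so just mark the pool released.
        if not self.fixed_pool:
            self.fixed_pool_flag = "released"
            return
        # S510: the lock suspends new accesses and waits for existing ones.
        with self.relation_lock:
            self.fixed_pool_flag = "released"     # S512
            for page_id, page in list(self.fixed_pool.items()):
                page["fixed"] = False             # S514: clear the fixed flag and
                self.common_pool[page_id] = page  # move the page to the common pool
            self.fixed_pool.clear()
        # S516: the lock is released on leaving the with-block; accesses resume,
        # and reference counting (S518) applies to the moved pages again.
```

Holding the coarse-grained lock for the whole move keeps the transition atomic with respect to readers: no access can observe a page that is in neither pool.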
The embodiment also provides a machine-readable storage medium and a computer device. Fig. 6 is a schematic diagram of a machine-readable storage medium 40 according to one embodiment of the invention, and fig. 7 is a schematic diagram of a computer device 50 according to one embodiment of the invention.
The machine-readable storage medium 40 has stored thereon a machine-executable program 41, and when the machine-executable program 41 is executed by a processor, the method for processing the shared memory buffer pool of the B-tree index database according to any of the embodiments described above is implemented.
The computer device 50 may include a memory 520, a processor 510, and a machine-executable program 41 that is stored on the memory 520 and runnable on the processor 510, and the processor 510 implements the shared memory buffer pool processing method of the B-tree index database of any of the above embodiments when executing the machine-executable program 41.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any machine-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a machine-readable storage medium 40 can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium 40 may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system.
The computer device 50 may be, for example, a server, a desktop computer, a notebook computer, a tablet computer, or a smartphone. In some examples, computer device 50 may be a cloud computing node. Computer device 50 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer device 50 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
The computer device 50 may include a processor 510 adapted to execute stored instructions and a memory 520 that provides temporary storage for those instructions during execution. Processor 510 may be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. Memory 520 may include Random Access Memory (RAM), read only memory, flash memory, or any other suitable storage system.
Processor 510 may be connected by a system interconnect (e.g., PCI-Express, etc.) to an I/O interface (input/output interface) suitable for connecting computer device 50 to one or more I/O devices (input/output devices). The I/O devices may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others. The I/O devices may be built-in components of the computing device 50 or may be devices that are externally connected to the computing device.
The processor 510 may also be linked through a system interconnect to a display interface suitable for connecting the computer device 50 to a display device. The display device may include a display screen as a built-in component of the computer device 50. The display device may also include a computer monitor, television, or projector, etc. externally connected to the computer device 50. In addition, a Network Interface Controller (NIC) may be adapted to connect computer device 50 to a network via a system interconnect. In some embodiments, the NIC may use any suitable interface or protocol (such as an internet small computer system interface, etc.) to transfer data. The network may be a cellular network, a radio network, a Wide Area Network (WAN), a Local Area Network (LAN), the internet, or the like. The remote device may be connected to the computing device through a network.
The flowcharts provided by this embodiment are not intended to indicate that the operations of the method are to be performed in any particular order, or that all the operations of the method are included in each case. Further, the method may include additional operations. Additional variations on the above-described method are possible within the scope of the technical ideas provided by the method of this embodiment.
According to the scheme of this embodiment, updates to the reference counter are avoided by adopting the fixed buffer pool technique, which reduces the overhead of concurrent conflict detection; by selecting a suitable range of pages for the fixed buffer pool, its impact on memory capacity is limited. The fixed-buffer-pool release flow allows a page whose reference counter was not maintained to return to normal operation after being unfixed: on one hand, accesses in progress when a page enters the fixed state are handled through the internally recorded reference state; on the other hand, exit from the fixed state is handled by a coarse-grained lock (the exclusive relation lock).
In tests of the scheme of this embodiment, access and lookup performance on the same B-tree index is greatly improved under extremely high concurrency. Because databases typically use a large number of B-tree indexes, and most workloads contain hot-spot index queries, the scheme of this embodiment yields a marked performance improvement on most real workloads and on mainstream benchmarks (such as TPC-C).
Thus, it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been illustrated and described in detail herein, many other variations or modifications consistent with the principles of the invention may be directly determined or derived from the disclosure of the present invention without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.

Claims (10)

1. A processing method for a shared memory buffer pool of a B-tree index database comprises the following steps:
acquiring an unfix instruction, the unfix instruction being used for instructing a fixed buffer pool to release the fixed pages of a specified B-tree index, wherein the fixed buffer pool is a buffer pool allocated in advance in the memory cache space of the database and independent of a common buffer pool;
acquiring the access state of the B-tree index;
waiting for all accesses to the B-tree index to end;
setting a fixed buffer pool flag of the B-tree index to a released state, and moving the fixed pages of the B-tree index from the fixed buffer pool to the common buffer pool.
2. The method of claim 1, wherein the step of waiting for all accesses to the B-tree index to end further comprises:
applying for an exclusive relationship lock holding the B-tree index;
suspending new accesses to the specified B-tree index by means of the exclusive relation lock.
3. The method of claim 2, wherein the step of moving the pinned pages of the B-tree index from the fixed buffer pool to the normal buffer pool further comprises:
releasing the exclusive relation lock to resume access to the specified B-tree index.
4. The method of claim 3, wherein after the step of resuming access to the designated B-tree index, further comprising:
resuming updating of the reference counter of the fixed page, so that the fixed page participates in buffer replacement of the common buffer pool.
5. The method of claim 1, wherein after the step of obtaining the unpin command, the method further comprises:
searching the fixed page in the fixed buffer pool to determine whether the fixed page is stored in the fixed buffer pool;
and if so, executing the step of acquiring the access state of the B-tree index.
6. The method for processing a shared memory buffer pool of a B-tree index database according to claim 1, wherein,
in the case that the fixed page is not stored in the fixed buffer pool, the fixed page is kept participating in replacement in the common buffer pool.
7. The method of claim 1, wherein the step of moving the pinned pages of the B-tree index from the fixed buffer pool to the normal buffer pool is followed by the step of:
refreshing the remaining space of the fixed buffer pool for storing new fixed pages.
8. The method of claim 1, wherein, before the step of acquiring the unfix instruction, the method further comprises:
acquiring a buffer pool setting instruction;
executing, according to the buffer pool setting instruction, the flow of moving the node pages to be fixed of the specified B-tree index into the fixed buffer pool.
9. A machine-readable storage medium having stored thereon a machine-executable program which, when executed by a processor, implements the method of shared memory buffer pool handling for a B-tree index database according to any of claims 1 to 8.
10. A computer device comprising a memory, a processor, and a machine-executable program stored on the memory and runnable on the processor, wherein the processor implements the shared memory buffer pool processing method of the B-tree index database of any of claims 1 to 8 when executing the machine-executable program.
CN202210451345.3A 2022-04-26 2022-04-26 Method, storage medium and device for processing shared memory buffer pool of database Pending CN114860723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210451345.3A CN114860723A (en) 2022-04-26 2022-04-26 Method, storage medium and device for processing shared memory buffer pool of database

Publications (1)

Publication Number Publication Date
CN114860723A true CN114860723A (en) 2022-08-05

Family

ID=82633642

Country Status (1)

Country Link
CN (1) CN114860723A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 100102 201, 2 / F, 101, No. 5 building, No. 7 Rongda Road, Chaoyang District, Beijing

Applicant after: China Electronics Technology Group Jincang (Beijing) Technology Co.,Ltd.

Address before: 100102 201, 2 / F, 101, No. 5 building, No. 7 Rongda Road, Chaoyang District, Beijing

Applicant before: BEIJING KINGBASE INFORMATION TECHNOLOGIES Inc.

Country or region before: China