CN114791913A - Method, storage medium and device for processing a shared memory buffer pool of a database


Info

Publication number
CN114791913A
CN114791913A
Authority
CN
China
Prior art keywords
buffer pool
fixed
node
page
tree index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210465944.0A
Other languages
Chinese (zh)
Inventor
冷建全
孙文奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingbase Information Technologies Co Ltd
Original Assignee
Beijing Kingbase Information Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingbase Information Technologies Co Ltd filed Critical Beijing Kingbase Information Technologies Co Ltd
Priority to CN202210465944.0A priority Critical patent/CN114791913A/en
Publication of CN114791913A publication Critical patent/CN114791913A/en
Pending legal-status Critical Current

Classifications

    • G06F16/2246 — Physics; Computing; Electric digital data processing; Information retrieval of structured data; Indexing; Indexing structures; Trees, e.g. B+ trees
    • G06F16/24552 — Physics; Computing; Electric digital data processing; Information retrieval of structured data; Querying; Query processing; Query execution; Database cache management


Abstract

The invention provides a processing method, a storage medium, and a device for a shared memory buffer pool of a database. The method comprises the following steps: acquiring a buffer pool setting instruction; determining, according to the buffer pool setting instruction, the designated B-tree index and the nodes of that index to be fixed; setting the fixed-buffer-pool flag of the B-tree index to the fixed state; and, after receiving an access to the designated B-tree index, executing the process of moving the node pages of the nodes to be fixed into a fixed buffer pool, the fixed buffer pool being a buffer pool allocated in advance in the memory cache space of the database and independent of the ordinary buffer pool. The scheme of the invention avoids the overhead growth caused by concurrent updates to the page control structures of the B-tree index and greatly improves the performance of database index queries under high concurrency.

Description

Method, storage medium and device for processing shared memory buffer pool of database
Technical Field
The present invention relates to database technologies, and in particular, to a method, a storage medium, and a device for processing a shared memory buffer pool of a B-tree index database.
Background
The B-tree index is the most commonly used index in databases. It supports equality and range queries on orderable data and greatly increases query speed. By keeping the data ordered and organized in a B-tree, it allows lookup, sequential access, insertion, and deletion to complete in logarithmic time.
The B-tree is a generalization of the self-balancing binary search tree to a self-balancing n-ary search tree. Unlike a binary search tree, the B-tree is optimized for bulk read and write operations and reduces the number of intermediate steps needed to locate a record, thereby increasing access speed.
In a B-tree, an internal (non-leaf) node may have a variable number of children within a predefined range. When data is inserted into or removed from a node, the number of its children changes, and internal nodes may be merged or split to stay within the preset range. Because the number of children is allowed to vary within a range, the B-tree does not need to rebalance as frequently as other self-balancing search trees. A B-tree is kept balanced by requiring all leaf nodes to be at the same depth, and the depth of the whole tree grows slowly as data is added.
In addition, there are variants of the original B-tree, such as the B+ tree and the B* tree, which can also be regarded as B-trees in a broad sense and are likewise commonly used as database indexes.
The search structure built by a B-tree index can greatly accelerate data lookup in the database. However, every search in the B-tree proceeds layer by layer from the root node down to the node storing the key value, which means that access to the root node can become a system bottleneck.
When the database accesses a page, it must update the page control structure, chiefly the reference counter. The reference counter records whether the page currently has referencers and whether it has been referenced recently, and it is used for buffer replacement decisions. This overhead is usually small, but as concurrency increases, the cost of concurrent updates to the page control structure rises sharply and becomes non-negligible. For a B-tree index, therefore, updating the control structure of the root node or of shallow node pages also incurs a large overhead. Simply not updating these page control structures, however, would cause other problems, such as disturbing memory buffer replacement.
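As an illustration, the per-page control structure described above might look like the following minimal Python sketch; the class and field names here are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PageControl:
    """Hypothetical per-page control structure kept in the buffer pool.

    ref_count tracks the current referencers of the page; usage_count
    records recent references, for the buffer replacement decision."""
    page_no: int
    ref_count: int = 0
    usage_count: int = 0

    def on_access(self):
        # Every access on the ordinary path must update both counters;
        # under high concurrency this is the overhead the patent targets.
        self.ref_count += 1
        self.usage_count += 1

    def on_release(self):
        self.ref_count -= 1

pc = PageControl(page_no=7)
pc.on_access()
pc.on_access()
pc.on_release()
print(pc.ref_count, pc.usage_count)  # 1 2
```

Skipping these updates for hot pages is exactly what the fixed buffer pool makes safe, since fixed pages never compete in replacement.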
Disclosure of Invention
It is an object of the present invention to provide a method that avoids the overhead growth caused by concurrent updates to the page control structure of a B-tree index.
It is a further object of the invention to improve the overall performance of the database.
In particular, the invention provides a method for processing a shared memory buffer pool of a B-tree index database, comprising the following steps:
acquiring a buffer pool setting instruction;
determining, according to the buffer pool setting instruction, the designated B-tree index and the nodes of that index to be fixed;
setting the fixed-buffer-pool flag of the B-tree index to the fixed state; and
after receiving an access to the designated B-tree index, executing the process of moving the node pages of the nodes to be fixed into a fixed buffer pool, the fixed buffer pool being a buffer pool allocated in advance in the memory cache space of the database and independent of the ordinary buffer pool.
Optionally, the step of determining the designated B-tree index and its nodes to be fixed according to the buffer pool setting instruction comprises:
parsing the buffer pool setting instruction;
determining the designated B-tree index from the B-tree index information in the parse result; and
taking the root node of the designated B-tree index and the child nodes whose depth is smaller than a set value as the nodes to be fixed.
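The selection of nodes to be fixed can be sketched as a breadth-first walk that stops below the configured depth; the node layout and names here are assumptions for illustration only:

```python
class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def collect_nodes_to_fix(root, max_depth=2):
    """Return the root (depth 1) and every node of depth <= max_depth,
    breadth-first, mirroring 'depth smaller than a set value'."""
    result, frontier, depth = [], [root], 1
    while frontier and depth <= max_depth:
        result.extend(frontier)
        frontier = [c for n in frontier for c in n.children]
        depth += 1
    return result

leaves = [Node(f"leaf{i}") for i in range(4)]
root = Node("root", [Node("fork0", leaves[:2]), Node("fork1", leaves[2:])])
print([n.name for n in collect_nodes_to_fix(root)])  # ['root', 'fork0', 'fork1']
```

With max_depth=2 only the root and the depth-2 fork nodes qualify; the leaves stay subject to ordinary replacement.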
Optionally, the process of moving a node page of a node to be fixed into the fixed buffer pool comprises:
caching the node page in the ordinary buffer pool;
obtaining the size of the remaining space of the fixed buffer pool;
judging whether the remaining space is sufficient to store the node page; and
if so, moving the node page into the fixed buffer pool.
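A minimal sketch of that move-in attempt, assuming for simplicity that capacity is counted in pages rather than bytes (all names here are illustrative):

```python
class FixedBufferPool:
    """Toy fixed buffer pool; capacity is counted in whole pages."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = {}                      # page_no -> page content

    def remaining(self):
        return self.capacity - len(self.pages)

    def try_move_in(self, page_no, page):
        """Move the page in only if enough space remains; otherwise the
        caller keeps it in the ordinary buffer pool."""
        if self.remaining() < 1:
            return False
        self.pages[page_no] = page
        return True

pool = FixedBufferPool(capacity_pages=1)
print(pool.try_move_in(1, b"root page"))   # True
print(pool.try_move_in(2, b"fork page"))   # False: stays in ordinary pool
```

The False branch corresponds to the fallback described below, where the page remains in the ordinary pool until an unfixing event frees space.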
Optionally, after the step of moving the node page into the fixed buffer pool, the method further comprises:
setting the fixed flag of the node page to the fixed state; and
stopping updates to the reference counter of the node page.
Optionally, when the remaining space is insufficient to store the node page, the method further comprises: storing the node page in the ordinary buffer pool and continuing to update its reference counter.
Optionally, after the step of storing the node page in the ordinary buffer pool, the method further comprises: after an unfixing event occurs in the fixed buffer pool, re-executing the step of judging whether the remaining space is sufficient to store the node page.
Optionally, after executing the process of moving the node pages of the nodes to be fixed into the fixed buffer pool, the method further comprises:
obtaining an access to the designated B-tree index and determining the access target page;
judging whether the fixed flag of the access target page is in the fixed state;
if so, skipping the handling of the reference counter of the access target page and reading the page from the fixed buffer pool; and
if not, updating the reference counter of the access target page and reading the page from the ordinary buffer pool.
Optionally, the method for processing a shared memory buffer pool of a B-tree index database further comprises:
acquiring an unfixing instruction;
determining, according to the unfixing instruction, the B-tree index to be unfixed; and
executing the process of moving the B-tree index to be unfixed out of the fixed buffer pool, so as to release space in the fixed buffer pool.
According to another aspect of the present invention, there is also provided a machine-readable storage medium having stored thereon a machine-executable program which, when executed by a processor, implements the method for processing a shared memory buffer pool of a B-tree index database according to any of the above.
According to yet another aspect of the present invention, there is also provided a computer device comprising a memory, a processor, and a machine-executable program stored on the memory and executable on the processor, wherein the processor, when executing the machine-executable program, implements the method for processing a shared memory buffer pool of a B-tree index database according to any of the above.
The method for processing a shared memory buffer pool of a B-tree index database fixes the node pages of the nodes to be fixed in the designated B-tree index (generally the root node and shallow child nodes) into a fixed buffer pool allocated in the memory buffer space and independent of the ordinary buffer pool. Because the two pools are independent, once a node page is moved into the fixed buffer pool it no longer participates in ordinary page replacement. This avoids the overhead growth caused by concurrent updates to the page control structures of the B-tree index and greatly improves index query performance under high concurrency.
Furthermore, the method configures the node pages of a B-tree index through a buffer pool setting instruction, enabling flexible configuration; with correct settings by the database user, the overall performance of the database can be effectively improved.
Furthermore, the method optimizes the process of moving the node pages of a B-tree index into the fixed buffer pool, marking the fixed-buffer-pool flag of the B-tree index and the fixed flags of its node pages, which facilitates operations such as caching the node pages and responding to access requests.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the invention will be described in detail hereinafter by way of example and not by way of limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily to scale. In the drawings:
FIG. 1 is a schematic flowchart of a method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the database shared memory in the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of the node-page fixing process in the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of an access to a page in the fixed buffer pool in the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of the node-page unfixing process in the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a machine-readable storage medium according to one embodiment of the invention; and
FIG. 7 is a schematic diagram of a computer device according to one embodiment of the invention.
Detailed Description
Fig. 1 is a schematic flowchart of a method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention, and Fig. 2 is a schematic diagram of the database shared memory in that method. The method for processing the database's shared memory buffer pool generally comprises the following steps:
step S102: obtaining a buffer pool setting instruction;
step S104: determining, according to the buffer pool setting instruction, the designated B-tree index and the nodes of that index to be fixed;
step S106: setting the fixed-buffer-pool flag of the B-tree index to the fixed state;
step S108: after receiving an access to the designated B-tree index, executing the process of moving the node pages of the nodes to be fixed into a fixed buffer pool, the fixed buffer pool being a buffer pool allocated in advance in the memory cache space of the database and independent of the ordinary buffer pool.
In the database of this embodiment, an ordinary buffer pool and a fixed buffer pool are created in advance in the buffer space of the shared memory. The ordinary buffer pool performs ordinary page replacement using a conventional buffer replacement algorithm, for example LRU (Least Recently Used), LFU (Least Frequently Used), or OPT (OPTimal page replacement). The fixed buffer pool is used for buffering the pages designated by the database user. The two pools are independent: once a node page is moved into the fixed buffer pool, it no longer participates in ordinary page replacement, which avoids the overhead growth caused by concurrent updates to the page control structures of the B-tree index and greatly improves index query performance under high concurrency.
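For the ordinary buffer pool, one of the conventional replacement algorithms named above (LRU) can be sketched minimally as follows; the class and method names are illustrative, not from the patent:

```python
from collections import OrderedDict

class OrdinaryPool:
    """Tiny LRU-replaced buffer pool. Fixed pages never live here, so
    they never participate in this replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()           # page_no -> content, LRU first

    def access(self, page_no, load):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)  # mark as most recently used
            return self.pages[page_no]
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # evict least recently used
        self.pages[page_no] = load(page_no)
        return self.pages[page_no]

pool = OrdinaryPool(capacity=2)
pool.access(1, lambda n: f"page{n}")
pool.access(2, lambda n: f"page{n}")
pool.access(1, lambda n: f"page{n}")         # page 1 becomes most recent
pool.access(3, lambda n: f"page{n}")         # evicts page 2, the LRU victim
print(sorted(pool.pages))                    # [1, 3]
```

Every `access` here touches bookkeeping state; that is the per-access cost the fixed buffer pool removes for hot B-tree node pages.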
The size of the fixed buffer pool can be configured through a configuration file, independently of the ordinary buffer pool. Before a node page of a node to be fixed is moved into the fixed buffer pool, it is first cached in the cache space of the database's shared memory; that is, the ordinary buffer pool completes the caching and provides the caching service.
The fixed buffer pool of this embodiment stores the pages of the nodes to be fixed of a B-tree index; these pages are accessed at high frequency and their total size is controllable, which avoids excessive consumption of the fixed buffer pool. It also prevents an oversized fixed buffer pool from unexpectedly occupying too much system memory, which benefits database operation. The nodes to be fixed may be the nodes with small tree depth, such as the root node and the fork nodes of depth 2. Only this part of the nodes is fixed in the buffer pool, rather than all pages. Because the number of children of each node in a B-tree has a strict upper bound, the total size of these nodes is fully controllable, and there is no risk of exhausting the fixed buffer pool.
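That bound is easy to compute: fixing all nodes of depth at most d in a tree whose nodes have at most f children requires at most 1 + f + ... + f^(d-1) pages. A quick arithmetic sketch (the fanout value is illustrative):

```python
def max_fixed_pages(max_fanout, fix_depth):
    """Upper bound on pages fixed when all nodes of depth <= fix_depth
    are fixed in a B-tree whose nodes have at most max_fanout children:
    1 + f + f**2 + ... + f**(fix_depth - 1)."""
    return sum(max_fanout ** d for d in range(fix_depth))

# Fixing the root (depth 1) plus the depth-2 fork nodes, fanout 200:
print(max_fixed_pages(200, 2))  # 201
```

So even with a generous fanout, fixing the root and depth-2 nodes pins only a few hundred pages, which is why the fixed pool's size stays controllable.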
The database can establish a mapping from a disk page number to a specific position in a buffer pool through a hash table. The specific position comprises the buffer pool in which the page resides and the offset relative to that pool's base address; the page's location in the buffer pool is then determined from the pool base address plus the offset.
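The hash-table lookup just described might be sketched as follows; the page size, table contents, and pool identifiers are assumptions for illustration:

```python
PAGE_SIZE = 8192                  # illustrative page size in bytes

# Disk page number -> (pool_id, slot). "fixed" and "ordinary" stand in
# for the two buffer pools; slot is the page's index within its pool.
page_table = {101: ("fixed", 0), 205: ("ordinary", 3)}

def locate(page_no):
    """Resolve a disk page number to its pool and the byte offset from
    that pool's base address."""
    pool_id, slot = page_table[page_no]
    return pool_id, slot * PAGE_SIZE

print(locate(205))  # ('ordinary', 24576)
```

Because the pool identifier is part of the mapped value, a single lookup tells the database both which pool holds the page and where in that pool it sits.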
The page header in the buffer pool may provide storage bits for recording the page's state; the recorded information may include whether the page is in the fixed buffer pool, the reference counter, and whether the page may be replaced and cleaned.
The disk on which the database resides can store a data file 21, a log file 22, and a configuration file 23, while an ordinary buffer pool 11, a fixed buffer pool 12, and other shared structures can be allocated in the shared memory 10 of the database. As is well known to those skilled in the art, the read speed of a disk is significantly slower than that of memory. The database of this embodiment allocates an ordinary buffer pool 11 and a fixed buffer pool 12 in the shared memory 10. The ordinary buffer pool 11 serves the ordinary caching of pages, while the fixed buffer pool 12 allows the node pages of the nodes to be fixed of a designated B-tree index to be cached in a fixed manner according to the database user's settings.
The buffer pool setting instruction may be expressed in SQL (Structured Query Language) and contains information, such as an identifier, of the B-tree index to be designated.
Step S104 may include: parsing the buffer pool setting instruction; determining the designated B-tree index from the B-tree index information in the parse result; and taking the root node of the designated B-tree index and the child nodes whose depth is smaller than a set value (generally the fork nodes of depth 2) as the nodes to be fixed.
In step S106, the fixed-buffer-pool flag of the B-tree index is set to the fixed state; that is, the flag identifies the B-tree index as one that needs to be fixed, so whether a B-tree index has been designated can be determined from its fixed-buffer-pool flag. The flag may be a single bit: set to 1, it indicates that the B-tree index is designated; set to 0, it indicates that the B-tree index is not designated.
The process of moving a node page of a node to be fixed into the fixed buffer pool in step S108 may include: caching the node page in the ordinary buffer pool; obtaining the size of the remaining space of the fixed buffer pool; judging whether the remaining space is sufficient to store the node page; and if so, moving the node page into the fixed buffer pool.
If the remaining space is insufficient to store the node page, the page can be kept in the ordinary buffer pool and its reference counter continues to be updated. That is, while a node page is not stored in the fixed buffer pool, it is cached by the ordinary buffer pool and participates in that pool's buffer replacement.
After the step of moving the node page into the fixed buffer pool, the method may further include: setting the fixed flag of the node page to the fixed state, and stopping updates to its reference counter. Because the reference counters of node pages in the fixed buffer pool need not be updated, the overhead growth caused by concurrent updates to the page control structures of the B-tree index is avoided.
After the step of keeping the node page in the ordinary buffer pool, the method further includes: after an unfixing event occurs in the fixed buffer pool, re-executing the step of judging whether the remaining space is sufficient to store the node page, and moving the node page into the fixed buffer pool once the space suffices.
After the step of moving the node pages of the nodes to be fixed into the fixed buffer pool, the process of handling an access to the designated B-tree index may include: obtaining the access to the designated B-tree index and determining the access target page; judging whether the fixed flag of the access target page is in the fixed state; if so, skipping the handling of the page's reference counter and reading the page from the fixed buffer pool; if not, updating the page's reference counter and reading the page from the ordinary buffer pool.
Using the method of this embodiment, a database user may designate a B-tree index as using the fixed buffer pool, with the fixed range defined as the high-frequency node pages of that index, for example the root node (depth 1) and the fork nodes of depth 2. All pages within the fixed range, and subsequently created pages that fall within the range, are moved into the fixed buffer pool unless its capacity cannot meet the storage requirement. Pages not yet moved into the fixed buffer pool for lack of space remain temporarily in the ordinary buffer pool. For pages that have been fixed, updates to their reference counters stop and replacement is skipped.
Pages in the fixed buffer pool are released through the following process: acquiring an unfixing instruction; determining, according to the unfixing instruction, the B-tree index to be unfixed; and executing the process of moving the B-tree index to be unfixed out of the fixed buffer pool, so as to release space in the fixed buffer pool.
In the method of this embodiment, handling the control structure of a node page (i.e., its reference counter) is especially critical: when a page is unfixed, its reference counter must return to a consistent state. First, for a page about to be fixed, every process that accessed the page before fixing must record whether the page was in the fixed state at the time of reading; if it was not, the accessing process must update the reference counter normally, otherwise a reference-count leak occurs. Second, for a page about to be unfixed, all processes that accessed it before unfixing must be guaranteed to have finished. For this purpose a coarse-grained lock (an exclusive relational lock) may be used: the unfixing session acquires the relational lock in exclusive mode and thus waits for all current accessing processes to finish before the unfixing flow begins. In this way, after the page is unfixed, its reference counter equals the real number of accessors at that moment, and the page can participate in replacement normally.
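One way to sketch this unfixing protocol, with a plain mutex standing in for the exclusive relational lock; the synchronization is deliberately simplified to a single-threaded illustration, and all names are hypothetical:

```python
import threading

class FixedPageState:
    """Simplified unfixing protocol: the exclusive lock keeps new
    accesses out while in-flight accesses drain, so the reference
    counter is consistent once the page rejoins the ordinary pool."""
    def __init__(self):
        self.rel_lock = threading.Lock()  # stand-in for the relational lock
        self.fixed = True
        self.ref_count = 0                # frozen while the page is fixed

    def unfix(self, active_accessors):
        with self.rel_lock:               # exclusive: no new accesses start
            # The real system waits here until all current accesses end;
            # this sketch only asserts that precondition.
            assert active_accessors == 0, "wait for current accesses to end"
            self.fixed = False            # page rejoins ordinary replacement

state = FixedPageState()
state.unfix(active_accessors=0)
print(state.fixed)  # False
```

A production implementation would block on the lock rather than assert, but the invariant is the same: unfixing completes only after the accessor count has drained to the true value.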
Fig. 3 is a schematic flowchart of the node-page fixing process performed by the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention. The flow for fixing the node pages of a B-tree index may include:
step S302: obtaining a buffer pool setting instruction input by the database user, the instruction designating a B-tree index of the database to use the fixed buffer pool;
step S304: the database parses the buffer pool setting instruction and determines the B-tree index to be designated;
step S306: determining the root node of the designated B-tree index and the nodes within the set depth (typically depth 2);
step S308: opening the designated B-tree index, and setting the fixed-buffer-pool flag of the B-tree index to the fixed state;
step S310: obtaining an access to the database, and determining whether it is the first access after the B-tree index was designated;
step S312: reading the node pages to be fixed of the B-tree index into the ordinary buffer pool of the shared memory, then attempting to fix them into the fixed buffer pool by performing steps S314 to S318;
step S314: judging whether the remaining space of the fixed buffer pool is sufficient to store the node page, and if not, keeping the node page in the ordinary buffer pool;
step S316: if there is sufficient remaining space, moving the node page into the fixed buffer pool;
step S318: setting the fixed flag to the fixed state, and stopping updates to the node page's reference counter.
Through the above steps, a database user can give the small, frequently accessed pages of a B-tree index a stronger hold on the memory buffer, which avoids the overhead growth caused by concurrent updates to the page control structures of the B-tree index and greatly improves index query performance under high concurrency.
Fig. 4 is a schematic flowchart of an access to a page in the fixed buffer pool under the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention. The access process may include:
step S402: obtaining an access to the designated B-tree index and determining the access target page;
step S404: judging whether the fixed flag of the access target page is in the fixed state;
step S406: if the fixed flag of the access target page is in the fixed state, skipping the handling of the page's reference counter;
step S408: reading the access target page from the fixed buffer pool and executing the access response operation;
step S410: if the fixed flag of the access target page is in the unfixed state, reading the page from the ordinary buffer pool, executing the access response operation, and updating the page's reference counter by incrementing the count;
step S412: after the access to the target page ends, if the page's fixed flag is still in the unfixed state, updating the reference counter by decrementing the count.
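The access flow of steps S402 to S412 can be sketched as follows; the control-structure fields and pool representations are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Ctrl:
    fixed: bool = False
    ref_count: int = 0

def access_page(page_no, ctrl, fixed_pool, ordinary_pool):
    """Fixed pages skip reference-counter handling entirely; unfixed
    pages increment on entry and decrement on exit if still unfixed."""
    c = ctrl[page_no]
    if c.fixed:
        return fixed_pool[page_no]   # S406/S408: no counter work at all
    c.ref_count += 1                 # S410: count this accessor
    page = ordinary_pool[page_no]
    if not c.fixed:                  # S412: page may have been fixed
        c.ref_count -= 1             # meanwhile; only then decrement
    return page

ctrl = {1: Ctrl(fixed=True), 2: Ctrl()}
print(access_page(1, ctrl, {1: "root"}, {}))                     # root
print(access_page(2, ctrl, {}, {2: "leaf"}), ctrl[2].ref_count)  # leaf 0
```

The re-check before the decrement mirrors step S412's condition that the flag must still be in the unfixed state when the access ends.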
Fig. 5 is a schematic flowchart of the node-page unfixing process performed by the method for processing a shared memory buffer pool of a B-tree index database according to an embodiment of the present invention. The process of unfixing the node pages of a B-tree index stored in the fixed buffer pool may include:
step S502: obtaining an unfixing instruction, the instruction directing the fixed buffer pool to release the fixed pages of a designated B-tree index;
step S504: the database parses the unfixing instruction and determines the B-tree index to be unfixed;
step S506: searching the fixed buffer pool to determine whether the fixed pages are stored there, for example by checking the fixed flag of each node page: if the flag is in the fixed state, the page is in the fixed buffer pool; if it is in the unfixed state, the page is still in the ordinary buffer pool;
step S508: once it is determined whether the fixed pages are stored in the fixed buffer pool, judging whether any process is currently accessing the B-tree index;
step S510: if there are accesses, acquiring an exclusive relational lock on the B-tree index, using it to suspend responses to new accesses to the designated B-tree index, and waiting for all current accesses to end;
step S512: setting the fixed-buffer-pool flag of the B-tree index to the released state;
step S514: moving the fixed pages of the B-tree index from the fixed buffer pool into the ordinary buffer pool, and setting their fixed flags to the unfixed state;
step S516: releasing the exclusive relational lock so as to resume responding to accesses to the designated B-tree index;
step S518: resuming updates to the reference counters of the formerly fixed pages, so that they participate in the buffer replacement of the ordinary buffer pool;
step S520: if the fixed pages were not stored in the fixed buffer pool, directly setting the fixed-buffer-pool flag of the B-tree index to the released state and ending the unfixing process.
The embodiment also provides a machine-readable storage medium and a computer device. Fig. 6 is a schematic diagram of a machine-readable storage medium 40 according to one embodiment of the invention, and fig. 7 is a schematic diagram of a computer device 50 according to one embodiment of the invention.
The machine-readable storage medium 40 has stored thereon a machine-executable program 41, and when the machine-executable program 41 is executed by a processor, the method for processing the shared memory buffer pool of the B-tree index database according to any of the embodiments described above is implemented.
The computer device 50 may include a memory 520, a processor 510, and a machine-executable program 41 stored on the memory 520 and executable on the processor 510; when the processor 510 executes the machine-executable program 41, the method for processing a shared memory buffer pool of a B-tree index database according to any of the above embodiments is implemented.
It should be noted that the logic and/or steps shown in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any machine-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, the machine-readable storage medium 40 can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the machine-readable storage medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The machine-readable storage medium 40 may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system.
The computer device 50 may be, for example, a server, a desktop computer, a notebook computer, a tablet computer, or a smartphone. In some examples, computer device 50 may be a cloud computing node. Computer device 50 may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer device 50 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
The computer device 50 may include a processor 510 adapted to execute stored instructions and a memory 520 that provides temporary storage for instructions during execution. The processor 510 may be a single-core processor, a multi-core processor, a computing cluster, or any number of other configurations. The memory 520 may include random access memory (RAM), read-only memory, flash memory, or any other suitable storage system.
The processor 510 may be connected by a system interconnect (e.g., PCI-Express, etc.) to an I/O interface (input/output interface) suitable for connecting the computer device 50 to one or more I/O devices (input/output devices). The I/O devices may include, for example, a keyboard and a pointing device, where the pointing device may include a touchpad or a touchscreen, among others. The I/O devices may be built-in components of the computer device 50, or may be devices externally connected to the computer device 50.
The processor 510 may also be linked by the system interconnect to a display interface suitable for connecting the computer device 50 to a display device. The display device may include a display screen as a built-in component of the computer device 50, or a computer monitor, television, projector, or the like externally connected to the computer device 50. In addition, a network interface controller (NIC) may be adapted to connect the computer device 50 to a network via the system interconnect. In some embodiments, the NIC may transfer data using any suitable interface or protocol (such as the internet small computer system interface, etc.). The network may be a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), or the internet, among others. A remote device may be connected to the computer device 50 through the network.
The flowcharts provided in this embodiment are not intended to indicate that the operations of the method must be performed in any particular order, or that all of the operations of the method must be included in every case. Furthermore, the method may include additional operations. Additional variations on the above-described method are possible within the scope of the technical idea of this embodiment.
According to the scheme of this embodiment, updates of the reference counter are avoided by adopting the fixed buffer pool technique, which reduces the overhead of concurrency conflict detection; and by selecting a suitable range of pages to place in the fixed buffer pool, the impact of the fixed buffer pool on memory capacity is limited. Through the unfixing flow, a page whose reference counter was suspended can return to normal operation once its fixed status is released: on one hand, accesses made while a page enters the fixed state are handled through the internally recorded reference state; on the other hand, exit from the fixed state is handled by a coarse-grained lock (the exclusive relational lock).
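The access-path behavior summarized above (skip the reference counter for fixed pages, update it for ordinary pages) can be sketched as follows. This is an illustrative sketch under assumed data structures, with hypothetical names (`read_page`, `pinned_pool`, etc.), not the patented implementation.

```python
def read_page(page_id, pages, pinned_pool, ordinary_pool):
    """Sketch of the fixed-vs-ordinary read path; names hypothetical."""
    page = pages[page_id]
    if page["pinned"]:
        # Fixed path: the reference counter is not touched at all, so
        # concurrent readers do not contend on an atomic counter update.
        return pinned_pool[page_id]
    # Ordinary path: bump the reference counter so the buffer replacement
    # policy will not evict the page while it is being read.
    page["refcount"] += 1
    try:
        return ordinary_pool[page_id]
    finally:
        page["refcount"] -= 1
```

The point of the design is that the fixed branch performs no shared-memory write at all, which is what removes the concurrency-conflict-detection overhead under high contention.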
Testing of the scheme of this embodiment shows that access and lookup performance on the same B-tree index is greatly improved under extremely high concurrency. Because databases typically make heavy use of B-tree indexes and most workloads involve hot-index queries, the scheme of this embodiment can achieve a significant performance improvement on most real workloads and on mainstream benchmarks (such as TPC-C).
Thus, it should be appreciated by those skilled in the art that, while a number of exemplary embodiments of the invention have been illustrated and described in detail herein, many other variations or modifications consistent with the principles of the invention may be directly determined or derived from the disclosure of the present invention without departing from its spirit and scope. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.

Claims (10)

1. A processing method for a shared memory buffer pool of a B-tree index database comprises the following steps:
acquiring a buffer pool setting instruction;
determining the designated B-tree index and a node to be fixed of the designated B-tree index according to the buffer pool setting instruction;
setting a fixed buffer pool mark of the B-tree index to be in a fixed state;
and after receiving an access to the designated B-tree index, executing a flow of moving the node page of the node to be fixed into a fixed buffer pool, wherein the fixed buffer pool is a buffer pool which is opened in advance in a memory buffer space of the database and is independent of a common buffer pool.
2. The method of claim 1, wherein the determining the designated B-tree index and the node to be fixed of the designated B-tree index according to the buffer pool setting instruction comprises:
analyzing the buffer pool setting instruction;
determining the designated B-tree index according to B-tree index information in an analysis result;
and taking the root node of the designated B-tree index and child nodes whose depth is smaller than a set value as the nodes to be fixed.
3. The method for processing the buffer pool of the shared memory of the B-tree index database according to claim 1, wherein the step of moving the node page of the node to be fixed into the fixed buffer pool comprises:
caching the node page to the common buffer pool;
obtaining the size of the residual space of the fixed buffer pool;
judging whether the residual space is enough to store the node page or not;
and if so, moving the node page into the fixed buffer pool.
4. The method of claim 3, wherein after the step of moving the node page into the fixed buffer pool, the method further comprises:
setting the fixed mark of the node page to be in a fixed state;
and stopping updating the reference counter of the node page.
5. The method of claim 3, wherein if the remaining space is not sufficient to store the node page, further comprising:
and storing the node page in the common buffer pool, and maintaining the reference counter of the node page to be updated.
6. The method of claim 5, wherein after the step of storing the node page in the normal buffer pool, the method further comprises:
and after the unfixed event occurs in the fixed buffer pool, re-executing the step of judging whether the residual space is enough for storing the node page.
7. The method for processing the buffer pool of the shared memory in the B-tree index database according to claim 3, wherein after the step of moving the node page of the node to be pinned into the fixed buffer pool, the method further comprises:
obtaining an access to the designated B-tree index, and determining an access target page;
judging whether the fixed mark of the access target page is in a fixed state or not;
if so, ignoring the processing of the reference counter of the access target page, and reading the access target page from the fixed buffer pool;
if not, updating the reference counter of the access target page, and reading the access target page from the common buffer pool.
8. The method of claim 1, further comprising:
acquiring an unfixing instruction;
determining the B-tree index to be unfixed according to the unfixing instruction;
and executing a flow of moving the B-tree index to be unfixed out of the fixed buffer pool, so as to release space in the fixed buffer pool.
9. A machine-readable storage medium having stored thereon a machine-executable program which, when executed by a processor, implements the method of processing shared memory buffer pools of a B-tree index database in accordance with any one of claims 1 to 8.
10. A computer device comprising a memory, a processor, and a machine-executable program stored on the memory and running on the processor, and the processor when executing the machine-executable program implements the shared memory buffer pool processing method of the B-tree index database of any of claims 1 to 8.
CN202210465944.0A 2022-04-26 2022-04-26 Method, storage medium and device for processing shared memory buffer pool of database Pending CN114791913A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210465944.0A CN114791913A (en) 2022-04-26 2022-04-26 Method, storage medium and device for processing shared memory buffer pool of database


Publications (1)

Publication Number Publication Date
CN114791913A true CN114791913A (en) 2022-07-26

Family

ID=82462084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210465944.0A Pending CN114791913A (en) 2022-04-26 2022-04-26 Method, storage medium and device for processing shared memory buffer pool of database

Country Status (1)

Country Link
CN (1) CN114791913A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050192935A1 (en) * 2004-02-03 2005-09-01 Oracle International Corporation Method and apparatus for efficient runtime memory access in a database
CN101339538A (en) * 2007-07-04 2009-01-07 三星电子株式会社 Data tree storage methods, systems and computer program products using page structure
US20100306222A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Cache-friendly b-tree accelerator
CN102654863A (en) * 2011-03-02 2012-09-05 华北计算机系统工程研究所 Real-time database history data organizational management method
CN102819586A (en) * 2012-07-31 2012-12-12 北京网康科技有限公司 Uniform Resource Locator (URL) classifying method and equipment based on cache
US8375012B1 (en) * 2011-08-10 2013-02-12 Hewlett-Packard Development Company, L.P. Computer indexes with multiple representations
US20150134709A1 (en) * 2013-11-08 2015-05-14 Samsung Electronics Co., Ltd. Hybrid buffer management scheme for immutable pages
CN106599040A (en) * 2016-11-07 2017-04-26 中国科学院软件研究所 Layered indexing method and search method for cloud storage
CN108763508A (en) * 2018-05-30 2018-11-06 中兴通讯股份有限公司 Data page access method, storage engines and computer readable storage medium
CN110162525A (en) * 2019-04-17 2019-08-23 平安科技(深圳)有限公司 Read/write conflict solution, device and storage medium based on B+ tree
CN110489425A (en) * 2019-08-26 2019-11-22 上海达梦数据库有限公司 A kind of data access method, device, equipment and storage medium
CN112579612A (en) * 2020-12-31 2021-03-30 厦门市美亚柏科信息股份有限公司 Database index table record analysis method and device, computing equipment and storage medium
CN113392089A (en) * 2021-06-25 2021-09-14 瀚高基础软件股份有限公司 Database index optimization method and readable storage medium
CN113918535A (en) * 2020-07-08 2022-01-11 腾讯科技(深圳)有限公司 Data reading method, device, equipment and storage medium
CN114282074A (en) * 2022-03-04 2022-04-05 阿里云计算有限公司 Database operation method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination