CN115757438B - Index node processing method and device of database, computer equipment and medium - Google Patents


Info

Publication number: CN115757438B
Application number: CN202310015649.XA
Authority: CN (China)
Prior art keywords: node, hot, index, page, information
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN115757438A
Inventors: 郝宇 (Hao Yu), 金毅 (Jin Yi)
Current and original assignee: Primitive Data Beijing Information Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Primitive Data Beijing Information Technology Co., Ltd.
Priority: CN202310015649.XA (the priority date is an assumption and is not a legal conclusion)
Publication of application CN115757438A, followed by grant and publication of CN115757438B

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, DB Structures and FS Structures Therefor

Abstract

The embodiments of the present application provide a method and apparatus for processing the index nodes of a database, together with a computer device and a medium, in the technical field of data storage. The method comprises the following steps: acquiring data-write information for a memory page, the data-write information including page information and a data-write count; screening candidate index nodes from preset original index nodes according to the page information; classifying the candidate index nodes into hot index nodes and cold index nodes according to the data-write count and a preset write-count threshold; performing node-splitting processing on the hot index nodes to obtain a first target node; and performing node-merging processing on the cold index nodes to obtain a second target node. The embodiments of the present application can reduce node contention and improve space utilization, thereby improving the system performance of the database.

Description

Index node processing method and device of database, computer equipment and medium
Technical Field
The present application relates to the field of data storage technologies, and in particular to a method and apparatus for processing the index nodes of a database, a computer device, and a medium.
Background
At present, database storage structures are mainly based on the B-tree index structure, which is well suited to SSD-optimized storage engines. However, the conventional B-tree index structure cannot resolve the latch contention that arises on the processor: tuples in frequently updated nodes repeatedly contend for the same latch, which degrades the system performance of the database. How to resolve latch contention for tuples in frequently updated nodes has therefore become an urgent technical problem.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a method and apparatus for processing the index nodes of a database, a computer device, and a medium, which aim to reduce node contention and improve space utilization, thereby improving the system performance of the database.
To achieve the above object, a first aspect of the embodiments of the present application provides a method for processing the index nodes of a database, the method including:
acquiring data-write information for a memory page, the data-write information including page information and a data-write count;
screening candidate index nodes from preset original index nodes according to the page information;
classifying the candidate index nodes according to the data-write count and a preset write-count threshold to obtain hot index nodes and cold index nodes;
performing node-splitting processing on the hot index nodes to obtain a first target node; and
performing node-merging processing on the cold index nodes to obtain a second target node.
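The classification step at the heart of the method above can be sketched minimally as follows. All names (`classify_nodes`, `WRITE_THRESHOLD`) and the per-node write-count mapping are illustrative assumptions, not taken from the patent:

```python
WRITE_THRESHOLD = 100  # hypothetical preset write-count threshold

def classify_nodes(candidates, write_counts, threshold=WRITE_THRESHOLD):
    """Partition candidate index nodes into hot and cold by data-write count.

    `write_counts` maps a node to the number of data writes observed for
    its backing memory page(s) over one observation period (illustrative).
    """
    hot, cold = [], []
    for node in candidates:
        if write_counts.get(node, 0) >= threshold:
            hot.append(node)    # frequently updated: candidate for splitting
        else:
            cold.append(node)   # rarely updated: candidate for merging
    return hot, cold
```

Under this sketch, the hot list feeds the node-splitting step and the cold list feeds the node-merging step.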
In some embodiments, acquiring the data-write information of the memory page includes:
collecting, at a preset time interval, the state of a data flag bit in a preset page table entry to obtain flag-bit state information, where the state of the data flag bit characterizes the update state of the memory page;
counting state updates over a preset period according to the flag-bit state information to obtain the data-write count, where the preset period comprises at least two preset time intervals; and
acquiring the flag-bit information of the data flag bit in the page table entry to obtain the page information.
In some embodiments, before acquiring the data-write information of the memory page, the method further includes constructing the page table entries, specifically:
acquiring page data of the memory page at a preset time interval, the page data including page information and page-update state information; and
setting a data flag bit in a preset data table according to the page information, and setting the state of the data flag bit according to the page-update state information, to obtain the page table entries.
In some embodiments, performing node-splitting processing on a hot index node to obtain the first target node includes:
obtaining the tuples in the hot index node as hot candidate tuples;
performing contention-conflict analysis on the hot candidate tuples to obtain contention-conflict information;
dividing the hot candidate tuples into first hot tuples and second hot tuples according to the contention-conflict information;
splitting the hot index node into a first split node and a second split node; and
storing the first hot tuples in the first split node and the second hot tuples in the second split node to obtain the first target node.
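A minimal sketch of this splitting step, assuming the contention-conflict information is reduced to a per-tuple contention score (an illustrative simplification; the patent does not specify the scoring). High-contention tuples are spread across the two split nodes so that they no longer contend inside a single node:

```python
def split_hot_node(tuples, contention):
    """Divide a hot node's tuples into two groups for the two split nodes.

    `contention` maps each tuple to a contention score (hypothetical).
    Tuples are ranked by score and dealt round-robin, so the most
    contended tuples end up in different split nodes.
    """
    ranked = sorted(tuples, key=lambda t: contention.get(t, 0), reverse=True)
    first, second = [], []
    for i, t in enumerate(ranked):
        (first if i % 2 == 0 else second).append(t)
    return first, second  # contents of the first and second split nodes
```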
In some embodiments, performing node-merging processing on the cold index nodes to obtain the second target node includes:
obtaining the tuples in the cold index nodes as cold candidate tuples;
calculating the remaining space of the cold index nodes to obtain the remaining memory space;
merging the cold index nodes according to the remaining memory space to obtain a merged node; and
storing the cold candidate tuples in the merged node according to the remaining memory space to obtain the second target node.
In some embodiments, merging the cold index nodes according to the remaining memory space to obtain the merged node includes:
summing the remaining memory space to obtain a remaining-space total;
screening selected index nodes from the cold index nodes according to the remaining-space total and a preset node memory space; and
merging the selected index nodes to obtain the merged node.
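A minimal sketch of this selection-and-merge step, under the simplifying assumptions that a node is a list of tuples and its remaining space is capacity minus occupancy; `NODE_CAPACITY` and all names are hypothetical, not from the patent:

```python
NODE_CAPACITY = 16  # hypothetical preset node memory space, in tuples

def select_mergeable(cold_nodes, capacity=NODE_CAPACITY):
    """Greedily screen cold nodes whose combined occupancy fits one node."""
    selected, used = [], 0
    for node in sorted(cold_nodes, key=len):  # emptiest nodes first
        if used + len(node) <= capacity:
            selected.append(node)
            used += len(node)
    return selected

def merge_nodes(nodes):
    """Combine the selected nodes' tuples into a single merged node."""
    merged = []
    for n in nodes:
        merged.extend(n)
    return merged
```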
In some embodiments, after classifying the candidate index nodes according to the data-write count and the preset write-count threshold to obtain the hot index nodes and the cold index nodes, the method further includes:
screening target pages from the memory pages according to the cold index nodes; and
storing the target pages in a preset swap area.
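A minimal sketch of this swap-out step, assuming a hypothetical `node_pages` mapping from index nodes to the memory pages that back them:

```python
def pages_to_swap(cold_nodes, node_pages):
    """Screen the memory pages backing cold index nodes as swap candidates.

    `node_pages` maps a node to the set of pages it occupies (illustrative);
    the returned pages would then be written to the preset swap area.
    """
    target = set()
    for node in cold_nodes:
        target |= node_pages.get(node, set())
    return target
```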
To achieve the above object, a second aspect of the embodiments of the present application provides an index-node processing apparatus for a database, the apparatus including:
an information acquisition module for acquiring data-write information of a memory page, the data-write information including page information and a data-write count;
a node screening module for screening candidate index nodes from preset original index nodes according to the page information;
a node classification module for classifying the candidate index nodes according to the data-write count and a preset write-count threshold to obtain hot index nodes and cold index nodes;
a node splitting module for performing node-splitting processing on the hot index nodes to obtain a first target node; and
a node merging module for performing node-merging processing on the cold index nodes to obtain a second target node.
To achieve the above object, a third aspect of the embodiments of the present application proposes a computer device, the computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the method according to the first aspect described above when executing the computer program.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method of the first aspect.
According to the method and apparatus for processing the index nodes of a database, the computer device, and the medium provided by the embodiments of the present application, the data-write count and page information of each memory page are acquired; candidate index nodes are screened from the original index nodes according to the page information; the candidate index nodes are divided into cold index nodes and hot index nodes by comparing the data-write count with the write-count threshold; the hot index nodes are split into the first target node; and the cold index nodes are merged into the second target node. Unnecessary contention on the nodes is thereby reduced, improving concurrency, while the space management of the B-tree index structure is also taken into account to improve node space utilization, thereby improving the system performance of the database.
Drawings
FIG. 1 is a flowchart of a method for processing the index nodes of a database according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for processing the index nodes of a database according to another embodiment of the present application;
FIG. 3 is a system flow diagram of a method for processing the index nodes of a database according to an embodiment of the present application;
FIG. 4 is a flowchart of step S101 in FIG. 1;
FIG. 5 is a flowchart of step S104 in FIG. 1;
FIG. 6 is a schematic diagram of hot-node splitting in a method for processing the index nodes of a database according to an embodiment of the present application;
FIG. 7 is a flowchart of step S105 in FIG. 1;
FIG. 8 is a schematic diagram of cold-index-node merging in a method for processing the index nodes of a database according to an embodiment of the present application;
FIG. 9 is a flowchart of step S703 in FIG. 7;
FIG. 10 is a flowchart of a method for processing the index nodes of a database according to another embodiment of the present application;
FIG. 11 is a block diagram of an index-node processing apparatus for a database according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional modules are divided in the device diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a different module division than in the device diagrams, or in a different order than in the flowcharts. The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several terms used in this application are explained:
B-tree: the B-tree is a common index structure; using it markedly reduces the intermediate steps needed to locate a record, speeding up access. The "B" is generally taken as an abbreviation of "Balance". The structure is widely used for database indexing and has high overall efficiency. In an order-m B-tree, each node has at most m child nodes, and the number of keys and children per node is bounded: the root node has at least 2 child nodes and holds 1 to m-1 keys; a non-root node has at least ⌈m/2⌉ child nodes (⌈ ⌉ denotes rounding up) and holds ⌈m/2⌉-1 to m-1 keys.
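The key-count bounds above can be written out as a small helper; the function name is illustrative:

```python
import math

def btree_key_bounds(m, is_root):
    """Key-count bounds for a node of an order-m B-tree.

    The root holds 1 to m-1 keys; a non-root node holds
    ceil(m/2)-1 to m-1 keys.
    """
    lo = 1 if is_root else math.ceil(m / 2) - 1
    return lo, m - 1
```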
Memory management unit (Memory Management Unit, MMU): the memory management unit is also referred to as a paged memory management unit (PMMU). It is a piece of computer hardware responsible for handling memory access requests from the central processing unit (CPU). Its functions include virtual-to-physical address translation (i.e., virtual memory management), memory protection, and CPU cache control, and, in simpler computer architectures, bus arbitration and bank switching (especially on 8-bit systems).
Page Table Entry (PTE): page table entries are the elements of the paging mechanism, pointed to by PDEs (page directory entries); each physical page has one page table entry. When a process accesses its own memory, the operating system is responsible for mapping the virtual addresses generated by the program to the physical memory actually used. The operating system stores the virtual-to-physical mappings in a page table; each mapping is called a page table entry.
Index: in a relational database, an index is a separate physical storage structure that orders the values of one or more columns of a table: it is a collection of those column values together with a corresponding list of logical pointers to the data pages that physically contain them. An index plays the role of a book's table of contents: the needed content can be found quickly by its page number in the contents.
Latch (Latch): latches are pulse level sensitive memory cell circuits that change state under the action of a particular input pulse level. Latching is to temporarily store a signal to maintain a certain level state. The latch has the main functions of buffering, finishing the asynchronous problem of a high-speed controller and a slow peripheral, solving the driving problem, and finally solving the problem that an I/O port can output and input.
Thread (thread): a thread is the smallest unit that an operating system can perform operational scheduling. It is included in the process and is the actual unit of operation in the process. One thread refers to a single sequential control flow in a process, and multiple threads can be concurrent in a process, each thread executing different tasks in parallel. Lightweight processes (lightweight processes) are also called in Unix System V and SunOS, but lightweight processes refer more to kernel threads (kernel threads) and user threads (user threads) are called threads.
The B-tree, as a common index structure with high overall efficiency, is widely used for database indexing to support fast queries and updates of the data in database tables; as a balanced index structure with many nodes, it is also well suited to SSD-optimized storage engines. Nevertheless, the classical B-tree index structure scales poorly on the CPU in the related art, because it cannot handle the latch contention that may occur there. If the frequently updated nodes of a B-tree index could be detected efficiently, they could be split in time to disperse contention and achieve efficient contention management; however, the detection techniques for frequently updated nodes in the related art mainly add computation and memory-access overhead to the database, which degrades its system performance. In the related art, methods for reducing latch contention fall into three main categories: first, partitioning the data and assigning transactions to suitable cores for serial execution, such as static data-partitioning methods; second, latch-free indexes, such as the KISS-tree and the Bw-tree; third, dividing hardware time intervals into an index phase and an operation phase to reduce the likelihood of contention. These approaches have corresponding drawbacks. First, frequently updated nodes cannot be detected directly but are predicted by sampling algorithms with relatively low success probability, so the predictions carry large errors. Second, traditional static partitioning cannot choose an effective core for cross-partition transactions and cannot handle dynamically changing hot-data partitions. Third, in existing latch-free indexes, threads do not contend on the nodes themselves but at the head of the delta-record list, which leads to similar scalability problems.
Based on this, the embodiments of the present application provide a method and apparatus for processing the index nodes of a database, a computer device, and a medium. By acquiring the data-write count of each memory page and classifying candidate index nodes against a preset write-count threshold, the candidate index nodes are divided into hot index nodes and cold index nodes. The hot index nodes are split to obtain a first target node, reducing the frequency of accesses to any one node; the cold index nodes are merged to obtain a second target node, improving space utilization without adding contention burden. Hot index nodes are thus detected without extra computation or memory-access overhead, and the approach applies broadly to tree-structured database indexes. Classifying candidate index nodes by data-write count against the write-count threshold accurately quantifies lock contention and accurately predicts the nodes where it will occur; splitting the hot index nodes on the basis of contention conflicts avoids large-scale unnecessary lock contention, improves the space utilization of the B-tree index structure, and improves the system performance of the database.
The method and apparatus for processing an index node of a database, a computer device and a medium provided in the embodiments of the present application are specifically described through the following embodiments, and the method for processing an index node in the embodiments of the present application is described first.
The embodiments of the present application can acquire and process the relevant data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
The embodiments of the present application provide a method for processing the index nodes of a database, in the technical field of data storage. The method may be applied to a terminal, to a server side, or to software running in a terminal or server side. In some embodiments, the terminal may be a smartphone, tablet, notebook, desktop computer, or the like; the server side may be configured as an independent physical server, as a server cluster or distributed system composed of multiple physical servers, or as a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms; the software may be an application implementing the index-node processing method of the database, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In the embodiments of the present application, when related processing is required according to user information, user behavior data, user history data, user location information, and other data related to user identity or characteristics, permission or consent of the user is obtained first, and the collection, use, processing, and the like of the data comply with related laws and regulations and standards of related countries and regions. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the user, the independent permission or independent consent of the user is acquired through a popup window or a jump to a confirmation page or the like, and after the independent permission or independent consent of the user is explicitly acquired, necessary user related data for enabling the embodiment of the application to normally operate is acquired.
Fig. 1 is an optional flowchart of a method for processing an inode of a database according to an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S101 to S105.
Step S101: acquire data-write information of a memory page, the data-write information including page information and a data-write count;
Step S102: screen candidate index nodes from preset original index nodes according to the page information;
Step S103: classify the candidate index nodes according to the data-write count and a preset write-count threshold to obtain hot index nodes and cold index nodes;
Step S104: perform node-splitting processing on the hot index nodes to obtain a first target node;
Step S105: perform node-merging processing on the cold index nodes to obtain a second target node.
In steps S101 to S105 of the embodiments of the present application, the page information and data-write count of each memory page are acquired, and candidate index nodes are screened from the original index nodes according to the page information; that is, the candidate index nodes are the nodes subjected to this round of cold/hot classification. The candidate index nodes are then classified according to the data-write count and the preset write-count threshold into hot index nodes and cold index nodes, realizing efficient, low-cost detection of hot index nodes without adding computation or memory accesses for the detection, while accurately quantifying latch contention and accurately predicting the candidate nodes where latch contention occurs. The first target node is obtained by splitting the hot index nodes, and the second target node by merging the cold index nodes; splitting the hot index nodes avoids large-scale unnecessary latch contention, reducing the frequency of node accesses, improving the space utilization of the B-tree index structure, and improving the system performance of the database.
In some embodiments, prior to step S101, the method further comprises constructing the page table entries. A page table entry is a virtual-to-physical address mapping stored by the operating system in the page table, each mapping being one page table entry. The CPU memory management unit sets a data flag bit in the page table entry for each memory page, so the data flag bits in the page table entries characterize the state of each memory page, and the page-update state of a memory page can be determined from its data flag bit.
Referring to fig. 2, constructing the page table entries may include, but is not limited to, steps S201 to S202:
Step S201: acquire page data of the memory page at a preset time interval, the page data including page information and page-update state information;
Step S202: set a data flag bit in a preset data table according to the page information, and set the state of the data flag bit according to the page-update state information, to obtain the page table entry.
In step S201 of some embodiments, the page information and page-update state information of a memory page are acquired at a preset time interval, so that the update state of the memory page is known from the page-update information. The page-update state information includes accessed-state information, which characterizes that the memory page has been accessed, and modified-state information, which characterizes that the memory page has been modified.
In step S202 of some embodiments, data flag bits are set in a preset data table according to the page information, with each piece of page information corresponding to the flag-bit information of one data flag bit; that is, the number of data flag bits is set according to the number of memory pages, and the data flag of each flag bit is set according to its page. The state of each data flag bit is then set according to the page-update state information to obtain the page table entries. When the state of a memory page is needed, only the data flag bit in the page table entry has to be read. The state of the data flag bit is reset according to the page-update state information at each preset time interval.
For example, referring to fig. 3, which is a system flowchart of the index-node processing method of a database, the data flag bits include Page0, Page1, Page2, Page3, Page4, and Page5, each corresponding to one memory page. After the data flag bits are constructed, their states are set according to the page-update state information at each preset time interval. The page-update state of the page table entry is thus known from the states of the data flag bits, enabling accesses to the database to be monitored with almost no overhead on its normal operation. When a memory page is accessed or modified, the corresponding data flag bit in the page table entry is modified accordingly; that is, the state of the data flag bit changes, and the memory page is thereby characterized by its data flag bit.
In steps S201 to S202 of the embodiments of the present application, the page information and page-update state information of each memory page are acquired at a preset time interval; the data flag bits in the data table are set according to the page information and their states according to the page-update state information, yielding the page table entries. Since each memory page is characterized by a data flag bit and its update state by the flag bit's state, obtaining the update state of the memory pages does not require querying the state of every page: reading the flag-bit states in the page table entries suffices, which makes computing the data-write count of a memory page much simpler.
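The write-count calculation described above can be sketched as follows, assuming the flag is reset after each sampling interval so that each set flag observed counts as one write; the names and the reset assumption are illustrative, not from the patent:

```python
def count_writes(samples):
    """Derive per-page data-write counts over one preset period.

    `samples` is a list of flag-bit snapshots, one dict {page: flag} per
    preset time interval (a period comprises at least two intervals).
    A set flag counts as one write, after which the flag is assumed to
    be reset before the next interval.
    """
    counts = {}
    for snapshot in samples:
        for page, flag in snapshot.items():
            if flag:
                counts[page] = counts.get(page, 0) + 1
    return counts
```

The resulting counts would then be compared against the preset write-count threshold to classify the index nodes backed by each page as hot or cold.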
Referring to fig. 4, in some embodiments, step S101 may include, but is not limited to, steps S401 to S403:
step S401, acquiring the state information of the data flag bits in a preset paging entry at a preset time interval to obtain flag bit state information; wherein the state information of a data flag bit is used to represent the update status of a memory page;
step S402, calculating the number of state updates according to a preset period and the flag bit state information to obtain the number of data-writing times; wherein the preset period comprises at least two preset time intervals;
step S403, obtaining the flag bit information of the data flag bits in the page table entry to obtain the page information.
In step S401 of some embodiments, the CPU memory management unit (MMU) provides efficient virtual memory management, which can be leveraged to monitor access to the database: the operating system stores the mapping from virtual addresses to physical addresses in a page table, each mapping being called a page table entry. The update status of each memory page is therefore recorded through the page table entry, and a data flag bit is set on the page table entry to represent the update status of that memory page. The state information of the flag bits in the page table entry is collected at the preset time interval to obtain the flag bit state information, i.e. the update status of the memory pages, so the update status is easy to obtain. Because the state information of the data flag bits is collected at a fixed interval, the method is not limited by data type: it applies to both static and dynamic data scenarios and can dynamically track changing hot index data. Frequently updated index nodes in the B-tree index structure, i.e. the highly contended hot index nodes, are thus detected efficiently at extremely low cost, so that node splitting can be performed on them to avoid high contention.
In step S402 of some embodiments, since the state information of a data flag bit changes over time, whether the memory page corresponding to that flag bit is frequently updated must be judged from the number of state changes within the preset period. The number of state updates is therefore calculated from the preset period and the flag bit state information, i.e. the number of times the flag bit state changed is counted, to obtain the number of data-writing times. The state information of the data flag bits in the paging entry is traversed at each preset time interval to obtain the flag bit state information, and the flag bits are reset during the traversal so that the update count can be refreshed at the next interval. In other words, the number of times each memory page was accessed or modified is tallied, which keeps the update-count calculation simple.
It should be noted that, since the preset period comprises at least two preset time intervals, collecting the flag bit state information within one preset period yields at least two pieces of flag bit state information, each corresponding to one memory page. The flag bit state information comprises access state information and modification state information, so the access and modification counts of each data flag bit are tallied. For example, suppose the preset period comprises 5 preset time intervals, and over those 5 collections: flag bit A was accessed 1 time and modified 2 times; flag bit B was accessed 0 times and modified 1 time; flag bit C was accessed 1 time and modified 1 time; and flag bit D was accessed 1 time and modified 3 times. Summing the state updates of each data flag bit then gives data-writing times of 3 for flag bit A, 1 for flag bit B, 2 for flag bit C, and 4 for flag bit D.
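The tally in the example above can be reproduced with a short sketch (the per-flag-bit counts are taken from the example; the function name is our own): each access and each modification counts as one state update, and their sum is the number of data-writing times.

```python
def data_write_times(access_counts, modify_counts):
    """Each state change (access or modification) of a data flag bit
    counts as one update; their sum is the data-writing count."""
    return {bit: access_counts[bit] + modify_counts[bit]
            for bit in access_counts}


# Counts collected over one preset period of 5 time intervals (example above).
accesses = {"A": 1, "B": 0, "C": 1, "D": 1}
modifies = {"A": 2, "B": 1, "C": 1, "D": 3}
print(data_write_times(accesses, modifies))  # {'A': 3, 'B': 1, 'C': 2, 'D': 4}
```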
In step S403 of some embodiments, the flag bit information of the data flag bits in the page table entry is acquired; since the flag bit information corresponds to the page information of the memory pages, it is used directly as the page information, which makes the page information easy to obtain.
In steps S401 to S403 illustrated in the embodiment of the present application, the flag bit state information is obtained by reading the state information of the data flag bits in the page table entry at the preset time interval, i.e. the state information of each memory page is obtained, and the number of data-writing times is obtained by counting state updates over the preset period, i.e. by counting how many times the flag bit state changed within the period. Meanwhile, the flag bit information of the data flag bits in the page table entry is acquired to obtain the page information, so the page information and the number of data-writing times together form the data-writing information of the memory page. The update frequency of each memory page, and hence of the candidate index node corresponding to that page, can then be judged from its data-writing times and page information.
In step S102 of some embodiments, the candidate index nodes are screened out from the original index nodes according to the page information; that is, the original index nodes corresponding to the page information, which represent the memory pages, are taken as the candidate index nodes.
In step S103 of some embodiments, referring to fig. 3, the B-tree hot/cold point detection module classifies the candidate index nodes into hot and cold to output hot index nodes and cold index nodes. After the number of data-writing times is calculated, a writing-times threshold is preset and the number of data-writing times is compared against it: if the number of data-writing times is greater than the threshold, the candidate index node is classified as a hot index node; if it is smaller than the threshold, the candidate index node is classified as a cold index node. In other words, if a memory page is accessed and modified more often than the writing-times threshold, its corresponding candidate index node is judged to be a hot index node; otherwise it is judged to be a cold index node. By monitoring, based on the MMU, how often each memory page is accessed and modified, the hot index nodes of the B-tree index structure are detected efficiently at very low cost, and the candidate nodes are divided into hot and cold index nodes.
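The classification step can be sketched as a simple threshold comparison (a hypothetical illustration; `classify`, the node names and the sample counts are ours, and ties are treated as cold since the patent only specifies the strictly-greater and strictly-smaller cases):

```python
def classify(candidates, write_times, threshold):
    """Candidate index nodes whose data-writing times exceed the preset
    writing-times threshold are hot; the rest are treated as cold."""
    hot = [n for n in candidates if write_times[n] > threshold]
    cold = [n for n in candidates if write_times[n] <= threshold]
    return hot, cold


write_times = {"node1": 3, "node2": 1, "node3": 2, "node4": 4}
hot, cold = classify(["node1", "node2", "node3", "node4"], write_times, threshold=2)
print(hot)   # ['node1', 'node4']
print(cold)  # ['node2', 'node3']
```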
Referring to fig. 5, in some embodiments, step S104 may include, but is not limited to, steps S501 to S505:
Step S501, obtaining tuples in hot index nodes to obtain hot candidate tuples;
step S502, performing competition conflict analysis on the hot candidate tuples to obtain competition conflict information;
step S503, dividing the hot candidate tuples into a first hot tuple and a second hot tuple according to the competition conflict information;
step S504, performing node splitting on the hot index node to obtain a first split node and a second split node;
in step S505, the first hot tuple is stored in the first split node, and the second hot tuple is stored in the second split node, so as to obtain the first target node.
In step S501 of some embodiments, where the hot index node includes at least one tuple, the tuples in the hot index node are obtained to obtain hot candidate tuples.
In step S502 of some embodiments, contention conflict analysis is performed on the hot candidate tuples: it is determined whether the frequently updated tuples are one and the same, in order to decide whether the hot index node is a highly contended node suffering unnecessary contention. If the frequently updated tuples are not the same tuple, the hot index node can be split to avoid node contention. The contention conflict information is obtained from this check on whether the frequently updated tuples are identical.
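A minimal sketch of the contention check (the function name, threshold parameter and update-count representation are our assumptions): a hot node suffers splittable contention only when at least two different tuples in it are frequently updated.

```python
def contention_conflict(update_counts, hot_threshold):
    """Return the frequently updated tuples if they are *different*
    tuples (splittable contention), or None if at most one tuple is hot."""
    hot_tuples = [t for t, c in update_counts.items() if c > hot_threshold]
    return hot_tuples if len(hot_tuples) >= 2 else None


# Threads frequently update tuples A and C of the same hot index node.
print(contention_conflict({"A": 5, "B": 0, "C": 4, "D": 1}, hot_threshold=2))
# ['A', 'C']  -> different tuples, so the node can be split
```

If all frequent updates target the same tuple, splitting would not help, which is why the check returns no conflict in that case.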
In step S503 of some embodiments, the hot candidate tuples are divided into a first hot tuple and a second hot tuple according to the contention conflict information; that is, after two frequently updated hot candidate tuples are found, the hot candidate tuples are divided according to the positions of those two tuples, so that after the split the frequently updated hot candidate tuples are stored in different split nodes, each an independent node protected by its own latch.
It should be noted that the two contending hot candidate tuples are determined according to the contention conflict information, and the hot candidate tuples on the hot index node are divided with the two frequently updated tuples as the reference: the middle position between the two frequently updated hot candidate tuples is found, and the hot candidate tuples are split at that position. For example, referring to fig. 6, the hot index node in fig. 6 contains hot candidate tuples A, B, C and D, and thread 1 and thread 2 frequently update hot candidate tuples A and C, thus contending unnecessarily for the latch of the hot index node. To reduce contention, the split point is placed at the midpoint between the two frequently updated hot candidate tuples A and C, i.e. between hot candidate tuples B and C, so that A and B form the first hot tuple and C and D form the second hot tuple.
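The midpoint split of fig. 6 can be sketched as follows (a hypothetical illustration; the cut-index formula is ours, chosen so that the two contended tuples always land in different nodes):

```python
def split_hot_node(tuples, contended):
    """Split a hot index node between the two frequently updated tuples,
    so each ends up in its own node protected by its own latch."""
    i, j = sorted(tuples.index(t) for t in contended)
    cut = (i + j) // 2 + 1   # first index of the second node; i < cut <= j
    return tuples[:cut], tuples[cut:]


first, second = split_hot_node(["A", "B", "C", "D"], ("A", "C"))
print(first, second)  # ['A', 'B'] ['C', 'D']
```

Because `i < cut <= j` always holds, one contended tuple goes to the first split node and the other to the second, which is exactly what removes the latch contention.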
In step S504 of some embodiments, the hot index node is split to form two nodes, i.e., a first split node and a second split node, so that two frequently updated hot candidate tuples are separated by the first split node and the second split node.
In step S505 of some embodiments, the frequently updated tuples are split into different independent nodes by storing a first hot tuple in a first split node and a second hot tuple in a second split node to reduce contention conflicts on the nodes.
It should be noted that, by constructing the first split node and the second split node, storing the first hot tuple A, B in the first split node and the second hot tuple C, D in the second split node, contention over the frequently updated hot candidate tuples A and C is reduced. The hot index nodes of the B-tree are thus detected at extremely low cost, and node-splitting measures are taken on the highly contended hot index nodes to avoid high contention, thereby improving concurrency efficiently and flexibly.
In steps S501 to S505 illustrated in the embodiment of the present application, the hot candidate tuples in the hot index node are acquired and it is judged whether the frequently updated hot candidate tuples are the same. If they are different, contention conflict information is obtained, the hot candidate tuples are divided into a first hot tuple and a second hot tuple according to that information, the hot index node is split into a first split node and a second split node, the first hot tuple is stored in the first split node, and the second hot tuple is stored in the second split node. The contending tuples are thereby separated to obtain the first target node, avoiding high contention and improving concurrency efficiently and flexibly.
Referring to fig. 7, in some embodiments, step S105 may include, but is not limited to, steps S701 to S704:
step S701, obtaining the tuples in the cold index nodes to obtain cold candidate tuples;
step S702, calculating the remaining space of the cold index nodes to obtain the remaining memory space;
step S703, merging the cold index nodes according to the remaining memory space to obtain a merged node;
and step S704, combining and storing the cold candidate tuples into the merged node according to the remaining memory space to obtain a second target node.
In step S701 of some embodiments, after the cold index nodes are detected, i.e. the cold index nodes that can be merged are found, the cold index nodes are merged so as to free up empty nodes, thereby increasing the space utilization of the B-tree index structure and improving database system performance. The tuples in the cold index nodes are therefore acquired as cold candidate tuples, which can later be fused into one node so that nodes are saved.
In step S702 of some embodiments, the remaining space of the cold index nodes is calculated to obtain the remaining memory space, i.e. the free space of each cold index node is calculated, so as to determine from the remaining memory space which cold index nodes can be merged.
In step S703 of some embodiments, the merged node is obtained by screening out, according to the remaining memory space, the cold index nodes that can be merged, and then merging them. The cold index nodes that can be merged are adjacent to one another.
In step S704 of some embodiments, the cold candidate tuples are combined according to the remaining memory space and stored into the merged node. The merged-away cold index node, i.e. the middle cold index node, is deleted, and the cold candidate tuples removed from it are stored into an adjacent cold index node; if the remaining memory space of that adjacent node is insufficient, part of the cold candidate tuples are stored into another cold index node, so that all cold candidate tuples of the deleted cold index node are preserved.
For example, referring to fig. 8, suppose the cold index nodes are m, n and o, and merging them yields new nodes n' and o', improving the space utilization of the B-tree index structure. Starting from the rightmost of the found cold index nodes, the two rightmost nodes are selected, n and o in this example, and as many tuples as possible under node n are moved to node o (whose remaining memory space is 70%). Then the cold candidate tuples under the left-hand cold index node m are moved. Since the remaining space of cold index node n (now 30%, up from 20%) can accommodate all the cold candidate tuples of cold index node m, the merge operation finishes after this move; merging cold index nodes m and n leaves an empty node, and the now-redundant separator 2 is deleted from the parent node.
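The right-to-left merge of fig. 8 can be sketched like this (an illustrative simplification with list-valued nodes and a uniform tuple capacity; a real B-tree merge would also update the separators in the parent node):

```python
def merge_cold_nodes(nodes, capacity):
    """Move tuples from each cold node into its right neighbour while
    room remains, working right to left; emptied nodes are dropped
    (their separators would be removed from the parent node)."""
    nodes = [list(n) for n in nodes]
    for i in range(len(nodes) - 2, -1, -1):
        src, dst = nodes[i], nodes[i + 1]
        while src and len(dst) < capacity:
            dst.insert(0, src.pop())  # move rightmost tuple, keeping key order
    return [n for n in nodes if n]    # empty nodes are freed


# Cold nodes m, n, o: o absorbs n's tuple, then n absorbs m's tuple.
print(merge_cold_nodes([["m1"], ["n1"], ["o1", "o2"]], capacity=4))
# [['m1'], ['n1', 'o1', 'o2']]  -> one node has been freed
```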
In steps S701 to S704 illustrated in the embodiment of the present application, the remaining memory space is obtained by calculating the remaining space of each cold index node, the cold index nodes are merged into a merged node according to the remaining memory space, and the cold candidate tuples are stored into the merged node according to the remaining memory space, achieving node merging of the cold index nodes to obtain the second target node, thereby increasing the space utilization of the B-tree index structure and improving database system performance.
Referring to fig. 9, in some embodiments, step S703 may include, but is not limited to, steps S901 to S903:
step S901, summing the remaining memory spaces to obtain a remaining-space total;
step S902, screening out selected index nodes from the cold index nodes according to the remaining-space total and a preset node memory space;
step S903, merging the selected index nodes to obtain the merged node.
In steps S901 and S902 of some embodiments, the remaining memory spaces of several adjacent cold index nodes are summed to obtain the remaining-space total. If the remaining-space total is greater than the memory space of one node, the adjacent cold index nodes can be merged, which tidies up the free space and yields a new empty node; those adjacent cold index nodes are therefore chosen as the selected index nodes.
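The selection criterion above amounts to a one-line check (the function name is ours; free space is expressed as a fraction of one node's memory space):

```python
def can_free_a_node(free_fractions, node_space=1.0):
    """Adjacent cold index nodes can be merged when their combined free
    space is greater than one whole node's memory space."""
    return sum(free_fractions) > node_space


print(can_free_a_node([0.7, 0.3, 0.2]))  # True: 1.2 > 1.0, one node can be freed
print(can_free_a_node([0.4, 0.3]))       # False: 0.7 of a node is not enough
```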
In step S903 of some embodiments, the selected index nodes are merged by deleting the middlemost of the selected index nodes and distributing its cold candidate tuples to the selected index nodes on either side, thereby obtaining the merged node.
In steps S901 to S903 illustrated in the embodiment of the present application, the remaining-space total is obtained by summing the remaining memory spaces of adjacent cold index nodes; the cold index nodes whose remaining-space total exceeds the memory space of one node are taken as the selected index nodes, and the selected index nodes are merged to obtain the merged node. Merging the cold index nodes in this way increases the space utilization of the B-tree index structure and improves database system performance.
Referring to fig. 10, in some embodiments, after step S103, the method for processing the index node of the database further includes, but is not limited to, steps S1001 to S1002:
step S1001, screening out target pages from the memory pages according to the cold index nodes;
step S1002, storing the target pages in a preset swap area.
In steps S1001 to S1002 illustrated in the embodiment of the present application, after the division into hot index nodes and cold index nodes, which memory pages are frequently used is determined from the hot and cold index nodes, and the target pages are screened out from the memory pages according to the cold index nodes, i.e. the memory pages corresponding to the cold index nodes are taken as the target pages. The target pages are then written into a preset swap area, so that frequently updated memory pages are not swapped out and only the cold memory pages are stored in the swap area.
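A minimal sketch of the target-page selection (the node-to-page mapping and all names are our own illustration): the cold index nodes are mapped back to their memory pages, and only those pages are candidates for the swap area.

```python
def pages_to_swap(page_of_node, cold_nodes):
    """Memory pages backing cold index nodes are the target pages that
    may safely be written to the swap area; hot pages stay in memory."""
    return sorted({page_of_node[n] for n in cold_nodes})


page_of_node = {"node1": 0, "node2": 3, "node3": 5}  # node -> memory page (assumed)
print(pages_to_swap(page_of_node, ["node2", "node3"]))  # [3, 5]
```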
According to the embodiment of the application, the page information and page update status information of each memory page are obtained at preset time intervals, the data flag bits on the data entry are set according to the page information, and the state information of the data flag bits is set according to the page update status information, so as to obtain the page table entry. Meanwhile, the flag bit state information, i.e. the state information of each memory page, is obtained by reading the state information of the data flag bits in the page table entry at the preset time interval; the number of data-writing times is calculated from the preset period and the flag bit state information; and the flag bit information of the data flag bits in the page table entry is acquired to obtain the page information. Candidate index nodes are screened out from the original index nodes according to the page information. The number of data-writing times is compared with a preset writing-times threshold: a candidate index node whose data-writing times exceed the threshold is classified as a hot index node, and one whose data-writing times fall below the threshold is classified as a cold index node. The hot candidate tuples in the hot index node are acquired and it is judged whether the frequently updated hot candidate tuples are the same; if not, contention conflict information is obtained, the hot candidate tuples are divided into a first hot tuple and a second hot tuple according to that information, the hot index node is split into a first split node and a second split node, the first hot tuple is stored in the first split node, and the second hot tuple is stored in the second split node, so that the conflicting tuples are separated to obtain the first target node.
The remaining space of each cold index node is calculated to obtain the remaining memory space, the cold index nodes are merged into a merged node according to the remaining memory space, and the cold candidate tuples are stored into the merged node according to the remaining memory space, achieving node merging of the cold index nodes to obtain the second target node, thereby increasing the space utilization of the B-tree index structure and improving database system performance.
Referring to fig. 11, an embodiment of the present application further provides an apparatus for processing an inode of a database, which may implement the method for processing an inode of a database, where the apparatus includes:
the information acquisition module 1101 is configured to acquire data writing information of a memory page; wherein the data writing information includes: page information and data writing times;
the node screening module 1102 is configured to screen candidate index nodes from preset original index nodes according to page information;
the node classification module 1103 is configured to classify candidate index nodes according to the number of data writing times and a preset threshold of writing times, so as to obtain hot index nodes and cold index nodes;
the node splitting module 1104 is configured to perform node splitting processing on the hot index node to obtain a first target node;
and the node merging module 1105 is configured to perform node merging processing on the cold index nodes to obtain a second target node.
The specific implementation manner of the node processing device of the database is basically the same as the specific embodiment of the node processing method of the database, and will not be described herein.
The embodiment of the application also provides computer equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the index node processing method of the database when executing the computer program. The computer equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 12, fig. 12 illustrates a hardware structure of a computer device according to another embodiment, where the computer device includes:
the processor 1201 may be implemented by a general purpose CPU (central processing unit), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solutions provided by the embodiments of the present application;
memory 1202 may be implemented in the form of read-only memory (Read-Only Memory, ROM), static storage, dynamic storage, or random access memory (Random Access Memory, RAM). Memory 1202 may store an operating system and other application programs; when the technical solutions provided in the embodiments of the present application are implemented by software or firmware, the relevant program codes are stored in memory 1202, and the processor 1201 invokes them to execute the inode processing method for the database of the embodiments of the present application;
an input/output interface 1203 for implementing information input and output;
the communication interface 1204 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.), or may implement communication in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
A bus 1205 for transferring information between various components of the device such as the processor 1201, memory 1202, input/output interface 1203, and communication interface 1204;
wherein the processor 1201, the memory 1202, the input/output interface 1203 and the communication interface 1204 enable communication connection between each other inside the device via a bus 1205.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the method for processing the index nodes of the database when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the method, the apparatus, the computer device and the medium for processing the index nodes of a database, the number of data-writing times and the page information of the memory pages are acquired, candidate index nodes are screened out from the original index nodes according to the page information, the number of data-writing times is compared with the writing-times threshold, and the candidate index nodes are divided into cold index nodes and hot index nodes. The hot index nodes are split to obtain a first target node, and the cold index nodes are merged to obtain a second target node. Unnecessary contention on the nodes is thereby reduced to improve concurrency, while the space management of the B-tree index structure is also taken into account to improve node space utilization, thus improving database system performance.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not constitute limitations of the embodiments of the present application, and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes any medium capable of storing a program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application are described above with reference to the accompanying drawings, but they do not thereby limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of those claims.

Claims (9)

1. A method for processing an index node of a database, the method comprising:
acquiring data writing information of a memory page; wherein the data writing information includes: page information and data writing times;
screening candidate index nodes from preset original index nodes according to the page information;
classifying the candidate index nodes according to the data writing times and a preset writing times threshold value to obtain hot index nodes and cold index nodes;
and performing node splitting processing on the hot index node to obtain a first target node, wherein the method specifically comprises the following steps:
obtaining tuples in the hot index node to obtain hot candidate tuples;
performing contention conflict analysis on the hot candidate tuples to obtain contention conflict information;
dividing the hot candidate tuples into a first hot tuple and a second hot tuple according to the contention conflict information;
performing node splitting on the hot index node to obtain a first split node and a second split node;
storing the first hot tuple into the first split node, and storing the second hot tuple into the second split node to obtain the first target node;
and performing node merging processing on the cold index node to obtain a second target node.
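The hot/cold classification and hot-node split described in claim 1 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the write-count threshold, the `contention` score per tuple, and the 0.5 cutoff are all assumptions introduced for the example.

```python
# Illustrative sketch (assumed names and thresholds): classify candidate index
# nodes as hot or cold by data-write count, then split a hot node's tuples into
# two groups by a hypothetical per-tuple contention score.
WRITE_THRESHOLD = 100  # assumed "preset writing times threshold"

def classify_nodes(candidates):
    """Partition candidate index nodes into hot and cold by write count."""
    hot = [n for n in candidates if n["writes"] >= WRITE_THRESHOLD]
    cold = [n for n in candidates if n["writes"] < WRITE_THRESHOLD]
    return hot, cold

def split_hot_node(node):
    """Split a hot node: high-contention tuples go to one split node and the
    rest to the other, so concurrent writers collide on a node less often."""
    first, second = [], []
    for tup in node["tuples"]:
        (first if tup["contention"] >= 0.5 else second).append(tup)
    return {"tuples": first}, {"tuples": second}
```

Separating frequently contended tuples from the rest is what lets the two resulting nodes serve as the "first target node" with reduced write contention.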
2. The method of claim 1, wherein the obtaining the data write information of the memory page comprises:
acquiring state information of a data flag bit in a preset paging table entry at a preset time interval to obtain flag bit state information; wherein the state information of the data flag bit is used for representing the update state of the memory page;
calculating the state update times according to a preset period and the flag bit state information to obtain the data writing times; wherein the preset period comprises at least two preset time intervals;
and acquiring flag bit information of the data flag bit in the paging table entry to obtain the page information.
3. The method of claim 2, wherein prior to the acquiring the data writing information of the memory page, the method further comprises:
constructing the paging table entry, which specifically comprises the following steps:
acquiring page data of the memory page at a preset time interval; wherein the page data includes: page information and page update state information;
and setting a data flag bit in a preset data table according to the page information, and setting state information of the data flag bit according to the page update state information, to obtain the paging table entry.
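Claims 2 and 3 together describe sampling a per-page flag bit at a fixed interval and counting set samples per period. A minimal sketch, in which the entry layout, the function names, and the four-interval period are all assumptions for illustration:

```python
# Illustrative sketch (assumed names): build a paging-table entry per memory
# page with a data flag bit, sample the flag at a fixed interval, and count how
# many samples within a period saw the page updated.
SAMPLES_PER_PERIOD = 4  # assumed: one "preset period" = 4 sampling intervals

def build_entry(page_id):
    """Construct a paging-table entry for one memory page (claim 3)."""
    return {"page": page_id, "flag": 0, "samples": []}

def sample(entry, page_was_updated):
    """At each interval, record the flag state, then clear it (claim 2)."""
    entry["flag"] = 1 if page_was_updated else 0
    entry["samples"].append(entry["flag"])
    entry["flag"] = 0

def write_count(entry):
    """Data-write count over the last period = set samples in the window."""
    return sum(entry["samples"][-SAMPLES_PER_PERIOD:])
```

Because the flag is cleared after each sample, each set sample corresponds to at least one write during that interval, giving a cheap per-period write estimate.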
4. The method according to any one of claims 1 to 3, wherein the performing node merging processing on the cold index node to obtain a second target node comprises:
obtaining tuples in the cold index node to obtain cold candidate tuples;
performing residual space calculation on the cold index node to obtain a memory residual space;
merging the cold index nodes according to the memory residual space to obtain a merged node;
and storing the cold candidate tuples into the merged node according to the memory residual space to obtain the second target node.
5. The method of claim 4, wherein the merging the cold index nodes according to the memory residual space to obtain a merged node comprises:
summing the memory residual space to obtain a residual space sum;
screening a selected index node from the cold index nodes according to the residual space sum and a preset node memory space;
and merging the selected index nodes to obtain the merged node.
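The selection-and-merge of claims 4 and 5 can be sketched as a greedy pass over the cold nodes. This is an assumption-laden illustration, not the claimed implementation: measuring node size in tuple counts, the capacity constant, and the greedy order are all choices made for the example.

```python
# Illustrative greedy sketch: select cold index nodes whose combined tuples
# still fit within an assumed per-node capacity, then merge their tuples into
# a single merged node (the "second target node").
NODE_CAPACITY = 8  # assumed "preset node memory space", in tuples per node

def select_and_merge(cold_nodes):
    """Greedily select cold nodes while the merged tuple count stays within
    capacity, then store all selected tuples in one merged node."""
    selected, used = [], 0
    for node in cold_nodes:
        size = len(node["tuples"])
        if used + size <= NODE_CAPACITY:
            selected.append(node)
            used += size
    merged = {"tuples": [t for n in selected for t in n["tuples"]]}
    return merged, selected
```

Packing several sparsely filled cold nodes into one node is what reclaims the memory their unused residual space was holding.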
6. The method according to any one of claims 1 to 3, wherein after classifying the candidate index nodes according to the data writing times and the preset writing times threshold to obtain the hot index node and the cold index node, the method further comprises:
screening a target page from the memory pages according to the cold index node;
and storing the target page into a preset swap area.
7. An inode processing apparatus for a database, the apparatus comprising:
the information acquisition module is used for acquiring data writing information of the memory page; wherein the data writing information includes: page information and data writing times;
the node screening module is used for screening candidate index nodes from preset original index nodes according to the page information;
the node classification module is used for classifying the candidate index nodes according to the data writing times and a preset writing times threshold value to obtain hot index nodes and cold index nodes;
the node splitting module, configured to perform node splitting processing on the hot index node to obtain a first target node, which specifically includes:
obtaining tuples in the hot index node to obtain hot candidate tuples;
performing contention conflict analysis on the hot candidate tuples to obtain contention conflict information;
dividing the hot candidate tuples into a first hot tuple and a second hot tuple according to the contention conflict information;
performing node splitting on the hot index node to obtain a first split node and a second split node;
storing the first hot tuple into the first split node, and storing the second hot tuple into the second split node to obtain the first target node;
and the node merging module is used for performing node merging processing on the cold index node to obtain a second target node.
8. A computer device, comprising a memory storing a computer program and a processor, wherein the processor, when executing the computer program, implements the index node processing method of a database according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the index node processing method of a database according to any one of claims 1 to 6.
CN202310015649.XA 2023-01-06 2023-01-06 Index node processing method and device of database, computer equipment and medium Active CN115757438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310015649.XA CN115757438B (en) 2023-01-06 2023-01-06 Index node processing method and device of database, computer equipment and medium


Publications (2)

Publication Number Publication Date
CN115757438A (en) 2023-03-07
CN115757438B (en) 2023-05-12

Family

ID=85348265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310015649.XA Active CN115757438B (en) 2023-01-06 2023-01-06 Index node processing method and device of database, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN115757438B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799679A (en) * 2012-07-24 2012-11-28 河海大学 Hadoop-based massive spatial data indexing updating system and method

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US8712984B2 (en) * 2010-03-04 2014-04-29 Microsoft Corporation Buffer pool extension for database server
CN102523256B (en) * 2011-11-30 2014-07-30 华为技术有限公司 Content management method, device and system
US10210191B2 (en) * 2014-03-20 2019-02-19 International Business Machines Corporation Accelerated access to objects in an object store implemented utilizing a file storage system
CN112099908A (en) * 2020-08-27 2020-12-18 腾讯科技(深圳)有限公司 Virtual machine live migration method and device and computer equipment
CN113590612A (en) * 2021-07-13 2021-11-02 华中科技大学 Construction method and operation method of DRAM-NVM (dynamic random Access memory-non volatile memory) hybrid index structure


Also Published As

Publication number Publication date
CN115757438A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN109739849B (en) Data-driven network sensitive information mining and early warning platform
Ding et al. Tsunami: A learned multi-dimensional index for correlated data and skewed workloads
CN105630409B (en) Dual data storage using in-memory array and on-disk page structure
US9710517B2 (en) Data record compression with progressive and/or selective decomposition
EP3131021A1 (en) Hybrid data storage system and method and program for storing hybrid data
US8229916B2 (en) Method for massively parallel multi-core text indexing
EP2874073A1 (en) System, apparatus, program and method for data aggregation
Xiao et al. Efficient top-(k, l) range query processing for uncertain data based on multicore architectures
Li et al. ASLM: Adaptive single layer model for learned index
US11327985B2 (en) System and method for subset searching and associated search operators
US9323798B2 (en) Storing a key value to a deleted row based on key range density
CN106294815B (en) A kind of clustering method and device of URL
US11080196B2 (en) Pattern-aware prefetching using parallel log-structured file system
Sekhar et al. Optimized focused web crawler with natural language processing based relevance measure in bioinformatics web sources
WO2023143095A1 (en) Method and system for data query
CN108052535B (en) Visual feature parallel rapid matching method and system based on multiprocessor platform
Ma et al. FILM: a fully learned index for larger-than-memory databases
Jalili et al. Next generation indexing for genomic intervals
US10558636B2 (en) Index page with latch-free access
Deng et al. Information re-finding by context: A brain memory inspired approach
CN115757438B (en) Index node processing method and device of database, computer equipment and medium
Elmeiligy et al. An efficient parallel indexing structure for multi-dimensional big data using spark
Petrov Algorithms behind modern storage systems
Huang et al. Pisa: An index for aggregating big time series data
Kvet Database Block Management using Master Index

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant