CN117312004A - Database access method and device - Google Patents

Database access method and device

Info

Publication number
CN117312004A
Authority
CN
China
Prior art keywords
memory
data
database
cache
cache data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210722775.4A
Other languages
Chinese (zh)
Inventor
许瀚
陈晓雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Huawei Technology Co Ltd
Original Assignee
Chengdu Huawei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Huawei Technology Co Ltd
Priority to CN202210722775.4A
Publication of CN117312004A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/544 Buffers; Shared memory; Pipes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 Relational databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a database access method applied to a server on which a database is deployed, where the server includes a network card and the network card supports an RDMA function. The method includes: when cache data stored in the Cache layer is updated, atomically updating the index structure corresponding to the cache data through the RDMA atomic operation interface; the Cache layer is located in the shared memory, and cache data shared by a plurality of processes related to the database is stored in the Cache layer. According to this database access method, the cache data shared by the processes related to the database is moved into the shared memory, and when the cache data is updated only the shared cache data in the shared memory is updated, which avoids frequent memory operations and the possibility of dirty reads.

Description

Database access method and device
Technical Field
The present disclosure relates to the field of database technologies, and in particular, to a method and an apparatus for accessing a database.
Background
PostgreSQL is a very powerful free-software object-relational database management system (ORDBMS). PostgreSQL supports most of the SQL standard and provides many other modern features, such as complex queries, foreign keys, triggers, views, transactional integrity, and multi-version concurrency control. PostgreSQL can also be extended in many ways, for example by adding new data types, functions, operators, aggregate functions, indexing methods, and procedural languages.
As shown in fig. 1, in a conventional database, cache (Cache) data is usually stored in the private memory space of each process; for example, a frequently accessed system table is cached in the private memory space of each process. When the system table cached by one process is updated (for example, that process performs a write operation on the system table), the other processes need to be notified (for example, through a pipe, a shared message queue, or shared memory) to synchronously update the cached system table, so as to keep the caches of the different processes consistent. As a result, an in-process cache update involves multiple memory operations, memory operations are frequent, and dirty reads are possible.
Disclosure of Invention
The embodiments of the application provide a database access method and device in which, after a cache miss occurs during a data read, the data is read without an extra copy, which reduces the time consumed by the read and improves the performance of the database system.
In a first aspect, the present application provides a method for accessing a database, applied to a server on which the database is deployed, where the server includes a network card and the network card supports an RDMA function. The method includes: when cache data stored in the Cache layer is updated, atomically updating the index structure corresponding to the cache data through the RDMA atomic operation interface; the Cache layer is located in the shared memory, and cache data shared by a plurality of processes related to the database is stored in the Cache layer.
According to this database access method, the cache data shared by the plurality of processes related to the database is moved into the shared memory, and when the cache data is updated only the shared cache data in the shared memory is updated, which avoids frequent memory operations and the possibility of dirty reads.
In one possible implementation, the index structure includes a logical pointer and an index address; the logical pointer points to a first memory space in which the index address is stored, and the index address points to a second memory space that stores the updated cache data or a descriptor of the updated cache data, the descriptor including the length and the first address of the updated cache data;
the atomically updating, through the RDMA atomic operation interface, the index structure corresponding to the cache data includes:
atomically updating, through the RDMA atomic operation interface, the index address in the index structure corresponding to the cache data.
Optionally, the descriptor may further include data attribute information; the data attribute may be any data type, for example float or int, and the user may select an appropriate attribute as needed.
In another possible implementation, the index address points to a start position of a storage space of the updated cache data;
when the updated cache data is of a fixed size, the second memory space stores the updated cache data;
and when the updated cache data is of a non-fixed size, the second memory space stores descriptors of the updated cache data.
In another possible implementation, the cache data is table data in the database whose access frequency is greater than or equal to a preset frequency.
In another possible implementation, the shared memory is located in a memory layer, and the storage medium of the memory layer includes a dynamic random access memory and a persistent memory type medium.
In another possible implementation, the method further includes:
in response to a read operation request from a target process, performing a read operation;
if the target data is not read in either the Cache layer or the memory layer, reading the target data from the storage layer, where the storage medium of the storage layer is a persistent memory medium;
and copying the target data to a memory space corresponding to the target process.
In another possible implementation, the database is a relational database.
In a second aspect, the present application further provides a database access device, applied to a server on which the database is deployed, where the server includes a network card and the network card supports an RDMA function, the device including:
an updating module, configured to atomically update, through the RDMA atomic operation interface, the index structure corresponding to the cache data when cache data stored in the Cache layer is updated;
the Cache layer is located in the shared memory, and Cache data shared by a plurality of processes related to the database are stored in the Cache layer.
In one possible implementation, the index structure includes a logical pointer and an index address; the logical pointer points to a first memory space in which the index address is stored, and the index address points to a second memory space that stores the updated cache data or a descriptor of the updated cache data, the descriptor including the length and the first address of the updated cache data;
the updating module is specifically configured to:
atomically update, through the RDMA atomic operation interface, the index address in the index structure corresponding to the cache data.
In another possible implementation, the index address points to a start position of a storage space of the updated cache data;
when the updated cache data is of a fixed size, the second memory space stores the updated cache data;
and when the updated cache data is of a non-fixed size, the second memory space stores descriptors of the updated cache data.
In another possible implementation, the cache data is table data in the database whose access frequency is greater than or equal to a preset frequency.
In another possible implementation, the shared memory is located in a memory layer, and the storage medium of the memory layer includes a dynamic random access memory and a persistent memory type medium.
In another possible implementation, the device further includes:
a response module, configured to respond to a read operation request from a target process and perform a read operation;
a reading module, configured to read the target data from the storage layer if the target data is not read in either the Cache layer or the memory layer, where the storage medium of the storage layer is a persistent memory medium;
and to copy the target data to a memory space corresponding to the target process.
In another possible implementation, the database is a relational database.
In a third aspect, the present application provides a computing device comprising a memory having executable code stored therein and a processor executing the executable code to implement the method provided in the first aspect of the present application.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method provided in the first aspect of the present application.
In a fifth aspect, the present application provides a computer program or computer program product comprising instructions which, when executed, implement the method provided in the first aspect of the present application.
Drawings
FIG. 1 is a schematic diagram of a Cache of a conventional database being placed in a private memory space of a process;
FIG. 2 is a schematic diagram of a shared Cache of a database provided in an embodiment of the present application being placed in a shared memory;
FIG. 3 illustrates an application scenario diagram of a database system according to an embodiment of the present application;
FIG. 4 is a block diagram of a server to which the data access method provided in the embodiments of the present application may be applied;
FIG. 5 is a flowchart of a method for accessing a database according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the index structure of a prior-art database and the index structure of an embodiment of the present application;
FIG. 7 shows the processing flow of the database of the embodiments of the present application when the shared Cache is updated;
FIG. 8 illustrates an index structure diagram of a database according to an embodiment of the present application;
FIG. 9 shows a schematic diagram of a database reading data;
FIG. 10 is a flowchart of a method for accessing a database according to an embodiment of the present application;
FIG. 11 and FIG. 12 show the data reading process of a database in the related art and the data reading process of a database of an embodiment of the present application, respectively;
FIG. 13 is a schematic structural diagram of a database access device according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application are described in further detail below with reference to the accompanying drawings and embodiments.
To solve the problem shown in fig. 1, the embodiments of the present application provide a database in which the caches of all processes are moved into a shared memory (see fig. 2). When a process related to the database needs to access cache data, it directly accesses the shared Cache in the shared memory, and when the Cache is updated, only the shared Cache is updated; no inter-process communication or per-process update is required.
FIG. 3 illustrates an application scenario of a database system according to an embodiment of the present application. As shown in fig. 3, a database user may connect to the server 100 of the present application through a database client 200. A database is deployed in the server 100, and the server 100 includes a processor 11, a memory 12, and a persistent memory medium 13. The processor 11 processes requests from users and persists data to the persistent memory medium 13. The processor 11 may be deployed as a single server process or as multiple processes that share memory space or have separate memory spaces. The processor 11 is responsible for functions such as space management, data organization, record access, and data caching for the data tables. After receiving a data table access request, it interprets and optimizes the request statement, performs operations such as insertion, deletion, update, and query on the data table records, and stores the record data on the persistent memory medium 13.
FIG. 4 is a schematic diagram of a server to which the data access method according to the embodiments of the present application may be applied. As shown in fig. 4, the server 100 includes a processor 11, and a background thread pool can be set up on the processor; the threads in the thread pool are the execution carriers that process user requests in the database, and they can also serve as the execution bodies of processes. The server 100 further includes a memory 12 and a persistent memory-type medium 13, where the persistent memory-type medium 13 is a byte-level storage medium with byte-addressing characteristics, such as a phase-change memory (PCM).
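As an illustration of this byte-addressing characteristic, the sketch below maps a persistent-memory region into the process address space so that it can be read and written directly with loads and stores. The mmap-based interface, the file path argument, and the fixed mapping size are assumptions and are not prescribed by this application.

```c
#include <stddef.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hedged example: expose a byte-addressable persistent memory medium (e.g. a
 * PCM region backed by a DAX-capable file) to the database process, so that
 * subsequent reads can address it at byte granularity. */
static void *map_persistent_region(const char *path, size_t size)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;

    void *base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);   /* byte-addressable after mapping */
    close(fd);                              /* the mapping stays valid after close */
    return base == MAP_FAILED ? NULL : base;
}
```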
FIG. 5 is a flowchart of a method for accessing a database according to an embodiment of the present application. The method can be applied to the server shown in fig. 4, where the server includes a network card and the network card supports a remote direct memory access (RDMA) function; for example, the network card may be a CX5 network card, an 1822 network card, or the like. As shown in fig. 5, the database access method provided in the embodiments of the present application includes at least steps S501 to S502.
In step S501, when the stored Cache data in the Cache layer is updated, the index structure corresponding to the Cache data is updated atomically through the atomic operation interface of RDMA.
The Cache layer is located in the shared memory, and Cache data shared by a plurality of processes related to the database are stored in the Cache layer.
The cache data is table data in the database whose access frequency is greater than or equal to a preset frequency; that is, the cache data is usually a frequently accessed system table or a common table view.
When a process operates on the shared cache data (such as a system table and/or a common table view) in the shared memory, for example when the process performs a write operation on the system table or the common table view, and thereby causes a shared cache update, the index structure corresponding to the shared cache data needs to be updated atomically so that the possibility of dirty reads is avoided.
For this purpose, a new index structure is designed, as shown in fig. 6. In the original index structure, the hash tables corresponding to the data tables (e.g. Table1, Table2, Table3 … Table n) point directly to the memory spaces storing the data. In this embodiment, the index structure instead stores index addresses that point to the memory spaces storing the data (which may be referred to as first memory spaces), and when data needs to be acquired, the index address is accessed and the data is obtained after 1-2 hops.
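For illustration only, the following C sketch shows one possible layout of such an index: each hash slot holds a single 64-bit index address leading to the space that stores the data, so a lookup takes 1-2 hops and an update only needs to replace one 8-byte word. The struct names, the bucket count, and the lookup helper are assumptions and are not definitions taken from this application.

```c
#include <stdint.h>

/* Illustrative index layout (names and sizes are assumptions): each hash
 * slot stores a 64-bit index address pointing to the memory space that holds
 * the cached data (or, for variable-size data, its descriptor). */
#define CACHE_INDEX_BUCKETS 4096

typedef struct {
    volatile uint64_t index_addr;   /* 0 means empty; replaced only via RDMA atomics */
} cache_index_slot_t;

typedef struct {
    cache_index_slot_t slots[CACHE_INDEX_BUCKETS];   /* one hash table per data table */
} cache_index_t;

/* Resolve a hashed key in 1-2 hops: hash table slot -> index address -> data space. */
static inline void *cache_lookup(cache_index_t *idx, uint64_t hash)
{
    uint64_t addr = idx->slots[hash % CACHE_INDEX_BUCKETS].index_addr;
    return addr ? (void *)(uintptr_t)addr : NULL;    /* NULL indicates a Cache miss */
}
```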
When the index of the shared cache needs to be updated, an RDMA atomic operation is used to update the index address, which minimizes the locking range of the index during the update, improves performance in multi-writer scenarios, avoids dirty reads of the data, and ensures strong consistency of the data.
FIG. 7 shows the processing flow of the database when the shared Cache is updated. As shown in fig. 7, when an update occurs to the shared Cache in the shared memory, the following steps are executed (a code sketch of this flow follows the steps):
s701, applying for a space A in a shared memory;
s702, writing data to be cached into a space A;
s703, updating the corresponding position of the hash table indexed by the Cache by using RDMA, and writing the 64-bit address of the space A into the position.
Optionally, the storage medium of the memory layer includes dynamic random access memory (DRAM) and a persistent memory medium; that is, the DRAM and the persistent memory medium together form a larger memory space. This accommodates the additional memory occupied by the extra level of index addresses required by the new index structure, while at the same time providing a larger Cache capacity and reducing the probability of a Cache miss.
Alternatively, the persistent memory medium may be a phase-change memory (PCM), a storage class memory (SCM), a ferroelectric RAM (FRAM), a magnetoresistive RAM (MRAM), or the like.
In one example, the index address in the new index structure points to the start position of the storage space of the updated cache data. When the updated cache data is of a fixed size, the second memory space stores the updated cache data; and when the updated cache data is of a non-fixed size, the second memory space stores a descriptor of the updated cache data, where the descriptor includes the length and the first address of the updated cache data.
Optionally, the descriptor may further include data attribute information; the data attribute may be any data type, for example float or int, and the user may select an appropriate attribute as needed.
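A hedged sketch of such a descriptor, and of how a reader might copy variable-size cache data out through it, is shown below; the field names, the attribute field, and the helper function are assumptions used only for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative descriptor for non-fixed-size cache data: the second memory
 * space stores the data's length and first address, plus an optional
 * attribute tag. Field names are assumptions, not the application's. */
typedef struct {
    uint64_t first_addr;   /* first address of the updated cache data */
    uint32_t length;       /* length of the updated cache data in bytes */
    uint32_t attr;         /* optional data attribute, e.g. a float/int tag */
} cache_descriptor_t;

/* Copy a variable-size cached value out through its descriptor. */
static size_t read_via_descriptor(const cache_descriptor_t *d,
                                  void *dst, size_t dst_cap)
{
    size_t n = d->length < dst_cap ? d->length : dst_cap;
    memcpy(dst, (const void *)(uintptr_t)d->first_addr, n);
    return n;
}
```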
Note that the embodiments of the present application do not limit the index type; in addition to a hash table, the index may be a tree, a KV structure, or the like (see fig. 8).
The index address is also not limited to 64 bits; it may have another width, which depends on the width supported by RDMA atomic operations, for example 32 bits, or 128 bits that may be supported in the future. This is not limited in this application, and a suitable index address width may be selected according to the actual situation.
It should be explained that in the embodiments of the present application the Cache is a software-level concept: to achieve faster read and write speeds, some frequently accessed data tables are stored in memory in advance. When data needs to be read, it is first looked up in the Cache layer; if the Cache hits, the data is returned directly, and if the Cache misses, the data is read from the persistent memory medium. The Cache mentioned in the embodiments of the present application therefore differs from the Cache understood at the hardware level: the cache medium of a CPU is generally built from SRAM, which is faster than DRAM, and the algorithm for selecting cache data is generally a general-purpose algorithm such as LRU (least recently used). The Cache medium in the embodiments of the present application uses DRAM or a persistent memory medium, and the data-selection algorithm of the Cache can be customized by the user.
According to the database access method provided above, the caches of all processes are moved into the shared memory (see fig. 2); when a process related to the database needs to access cache data, it accesses the shared Cache directly in the shared memory, and when the Cache is updated only the shared Cache is updated, so no inter-process communication or per-process update is needed. Meanwhile, a new index structure is designed: when the index of the shared cache needs to be updated, an RDMA atomic operation is used to update the index address, which minimizes the locking range of the index during the update, improves performance in multi-writer scenarios, avoids dirty reads of the data, and ensures strong consistency of the data.
The above-mentioned access method of the database may be applied to any type of database, such as a hierarchical database, a network-type database, a relational database, and the like.
FIG. 9 shows a schematic diagram of a database reading data. As shown in fig. 9, when a tuple is accessed and the Cache hits, the target tuple data is read and returned to the accessing process (see the right side of the dotted line in fig. 9); when the Cache misses, the target tuple data must first be copied from the underlying storage into memory, read at fine granularity in memory, and then returned to the accessing process (see the left side of the dotted line in fig. 9). That is, in most existing databases, when a Cache miss occurs during a data read, the target tuple data must be copied through the storage layer into the memory layer, which increases the time consumed by the read and affects the I/O performance of the database.
In order to solve the above problem, the method for accessing a database according to the embodiment of the present application further includes the following steps (see fig. 10):
in step S1001, a read operation is performed in response to a read operation request of the target process.
The target process sends a read operation request to a processor of the server, the processor executes the read operation to read the data, and the read data is returned to the target process to complete the data reading of the target process.
The read operation request of the target process carries an identifier of the target data, which enables the processor to identify the target data.
It will be appreciated that the target process and the target data are simply the process and the data that the processor needs to handle; for example, the target process is the process that issues the read operation request that the processor needs to process, and the target data is the data that the target process needs to read.
The target data may be various types of data, such as system table data or a general table view.
In step S1002, if the target data is not found in either the Cache layer or the memory layer, the target data is read from the storage layer, where the storage medium of the storage layer is a persistent memory medium, for example PCM, SCM, FRAM, or MRAM.
The processor executes the read operation by first reading in the Cache; when the target data is not found in the Cache (that is, a Cache miss occurs), the read continues in memory, and when the target data is not found in memory either, the target data is read from the storage layer.
The storage layer is the persistent medium layer in which all data of the database is stored; its medium is a persistent memory medium (such as PCM), and by exploiting the byte-addressing characteristic of PCM, the target data to be read can be read directly from the storage layer at fine granularity (for example, byte granularity).
In step S1003, the target data is copied to the memory space corresponding to the target process, so that the target process can read and write the target data.
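The read path of steps S1001 to S1003 can be sketched in C as follows. The key type and the three lookup helpers standing in for the Cache layer, the memory layer, and the byte-addressable storage layer are assumptions; the point illustrated is that a miss in the first two layers is served directly from the storage layer and the data is copied only once, into the target process's own memory space.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed key type and lookup helpers; they stand in for the Cache layer,
 * the memory layer, and a byte-addressable storage layer (e.g. mapped PCM). */
typedef struct { uint64_t id; size_t len; } data_key_t;

extern const void *cache_layer_lookup(const data_key_t *key);   /* shared-memory Cache */
extern const void *memory_layer_lookup(const data_key_t *key);  /* DRAM + persistent memory */
extern const void *storage_layer_locate(const data_key_t *key); /* byte-addressable PCM */

/* Read target data on behalf of a process; dst is the process's own memory space. */
static int read_target_data(const data_key_t *key, void *dst)
{
    const void *src = cache_layer_lookup(key);   /* try the Cache layer first */
    if (src == NULL)
        src = memory_layer_lookup(key);          /* then the memory layer */
    if (src == NULL)
        src = storage_layer_locate(key);         /* S1002: miss in both layers, read the
                                                    storage layer at byte granularity */
    if (src == NULL)
        return -1;                               /* target data not found */

    memcpy(dst, src, key->len);                  /* S1003: single copy into the target
                                                    process's memory space */
    return 0;
}
```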
Fig. 11 and 12 show a data reading process of a database in the related art and a data reading process of a database of an embodiment of the present application, respectively.
By contrast, existing databases use block storage devices such as disks and flash memory as the persistent medium of the storage layer. A disk or flash device reads and writes data in units of pages, so the read/write granularity is one page each time; that is, the minimum granularity an existing database can read from the storage layer is one data block. For example, if the block size is 4 KB, the minimum amount of data the database can read from the storage layer is 4 KB, and when reading data the database must copy one or more 4 KB blocks into memory and then search the memory for the target data at fine granularity. In the embodiments of the present application, the storage medium of the storage layer is a persistent memory medium (for example, PCM). PCM has byte-addressing characteristics similar to memory, can satisfy reads at the granularity the database requires (similar to the byte granularity of DRAM), and allows the target data to be read without copying data from the storage layer into a cache.
Preferably, the database is a relational database, for example a PostgreSQL database or a PostgreSQL-like database; it is easy to understand that a PostgreSQL-like database is a database that uses PostgreSQL as its kernel or that is developed on the basis of PostgreSQL, such as a PostgreSQL PRO database or an openGauss database.
According to the database access method provided above, the copy from the underlying storage layer to the memory layer is eliminated: the underlying storage layer uses a persistent memory medium (for example, PCM), and by exploiting the byte-granularity addressing capability of PCM, the target data can be read at byte granularity and the result returned directly, without copying into memory. Eliminating this copy increases the I/O speed of the database and at the same time streamlines the software stack.
The database access method thus changes the data reading mechanism after a Cache miss: relying on the characteristics of the medium, the data is read directly without copying. For example, after a Cache miss during a read, the byte-addressing capability of the PCM storage medium is used to return the data directly to the upper layer, without copying it into a shared buffer pool first, which removes one data copy.
Based on the same concept as the foregoing embodiments of the database access method, the embodiments of the present application further provide a database access apparatus 1300, where the database access apparatus 1300 includes units or modules for implementing each step of the database access method shown in fig. 5-12.
Fig. 13 is a schematic structural diagram of a database access device according to an embodiment of the present application. The apparatus is applied to the server shown in fig. 4, and as shown in fig. 13, the database accessing apparatus 1300 at least includes:
an updating module 1301, configured to, when stored Cache data in the Cache layer is updated, perform atomic update on an index structure corresponding to the Cache data through an atomic operation interface of RDMA;
the Cache layer is located in the shared memory, and Cache data shared by a plurality of processes related to the database are stored in the Cache layer.
In one possible implementation, the index structure includes a logical pointer and an index address; the logical pointer points to a first memory space in which the index address is stored, and the index address points to a second memory space that stores the updated cache data or a descriptor of the updated cache data, the descriptor including the length and the first address of the updated cache data;
the updating module 1301 is specifically configured to:
atomically update, through the RDMA atomic operation interface, the index address in the index structure corresponding to the cache data.
In another possible implementation, the index address points to a start position of a storage space of the updated cache data;
when the updated cache data is of a fixed size, the second memory space stores the updated cache data;
and when the updated cache data is of a non-fixed size, the second memory space stores descriptors of the updated cache data.
In another possible implementation, the cache data is table data in the database whose access frequency is greater than or equal to a preset frequency.
In another possible implementation, the shared memory is located in a memory layer, and the storage medium of the memory layer includes a dynamic random access memory and a persistent memory type medium.
In another possible implementation, the apparatus further includes:
a response module 1302, configured to respond to a read operation request of the target process and perform a read operation;
the reading module 1303 is configured to read the target data from the storage layer if the target data is not found in either the Cache layer or the memory layer, where the storage medium of the storage layer is a persistent memory medium;
and copying the target data to a memory space corresponding to the target process.
In another possible implementation, the database is a relational database.
The database access apparatus 1300 according to the embodiments of the present application may correspondingly perform the methods described in the embodiments of the present application, and the above and other operations and/or functions of the modules in the database access apparatus 1300 are respectively intended to implement the corresponding flows of the methods in fig. 5-12; for brevity, details are not repeated here.
Fig. 14 is a schematic structural diagram of a computing device according to an embodiment of the present application.
As shown in fig. 14, the computing device 1400 includes at least one processor 1401, a memory 1402, and a communication interface 1403. The processor 1401, the memory 1402, and the communication interface 1403 are communicatively connected, and the communication connection may be implemented by a wired (e.g., bus) method or may be implemented by a wireless method. The communication interface 1403 is used for receiving data sent by other devices; the memory 1402 stores computer instructions that the processor 1401 executes to perform the database access method in the foregoing method embodiment.
It should be appreciated that in the embodiments of the present application, the processor 1401 may be a central processing unit (CPU), and the processor 1401 may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or any conventional processor or the like.
The memory 1402 may include read only memory and random access memory, and provides instructions and data to the processor 1401. Memory 1402 may also include nonvolatile random access memory.
The memory 1402 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), flash memory, or persistent memory such as phase-change memory (PCM), storage class memory (SCM), ferroelectric RAM (FRAM), magnetoresistive RAM (MRAM), etc. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
It should be appreciated that the computing device 1400 according to the embodiments of the present application may perform the method shown in fig. 5-12 in the embodiments of the present application, and the detailed description of the implementation of the method is referred to above, which is not repeated herein for brevity.
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the above-mentioned method of accessing a database to be implemented.
Embodiments of the present application provide a chip comprising at least one processor and an interface through which the at least one processor determines program instructions or data; the at least one processor is configured to execute the program instructions to implement the above-mentioned method of accessing a database.
Embodiments of the present application provide a computer program or computer program product comprising instructions which, when executed, cause a computer to perform the above-mentioned method of accessing a database.
Those of ordinary skill would further appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as beyond the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present application and are not intended to limit the scope of protection of the present application.

Claims (16)

1. A method for accessing a database, the method being applied to a server deployed with the database, the server including a network card, the network card supporting RDMA functions, the method comprising:
when the stored Cache data in the Cache layer is updated, the index structure corresponding to the Cache data is subjected to atomic updating through an atomic operation interface of RDMA;
the Cache layer is located in the shared memory, and Cache data shared by a plurality of processes related to the database are stored in the Cache layer.
2. The method of claim 1, wherein the index structure comprises a logical pointer and an index address, the logical pointer pointing to a first memory space, the first memory space storing the index address, the index address pointing to a second memory space storing the updated cache data or a descriptor of the updated cache data, the descriptor comprising a length and a first address of the updated cache data;
the atomic updating of the index structure corresponding to the cache data through the atomic operation interface of RDMA includes:
and carrying out atomic updating on the index address in the index structure corresponding to the cache data through an atomic operation interface of RDMA.
3. The method of claim 2, wherein the index address points to a starting location of a storage space of the updated cache data;
when the updated cache data is of a fixed size, the second memory space stores the updated cache data;
and when the updated cache data is of a non-fixed size, the second memory space stores descriptors of the updated cache data.
4. A method according to any one of claims 1 to 3, wherein the cached data is table data in the database having a frequency of access greater than or equal to a predetermined frequency.
5. The method of any of claims 1-4, wherein the shared memory is located in a memory layer, and wherein the storage medium of the memory layer comprises dynamic random access memory and persistent memory type media.
6. The method of any one of claims 1-5, further comprising:
responding to a read operation request of a target process, and executing read operation;
if the target data are not read in the Cache layer and the memory layer, the target data are read in the storage layer, and the storage medium of the storage layer is a persistent memory medium;
and copying the target data to a memory space corresponding to the target process.
7. The method of claim 6, wherein the database is a relational database.
8. An access device for a database, applied to a server deployed with the database, the server including a network card, the network card supporting RDMA functions, comprising:
the updating module is used for carrying out atomic updating on the index structure corresponding to the Cache data through an atomic operation interface of RDMA when the stored Cache data in the Cache layer is updated;
the Cache layer is located in the shared memory, and Cache data shared by a plurality of processes related to the database are stored in the Cache layer.
9. The apparatus of claim 8, wherein the index structure comprises a logical pointer and an index address, the logical pointer pointing to a first memory space, the first memory space storing the index address, the index address pointing to a second memory space storing the updated cache data or a descriptor of the updated cache data, the descriptor comprising a length and a first address of the updated cache data;
the updating module is specifically configured to:
and carrying out atomic updating on the index address in the index structure corresponding to the cache data through an atomic operation interface of RDMA.
10. The apparatus of claim 9, wherein the index address points to a starting location of a storage space of the updated cache data;
when the updated cache data is of a fixed size, the second memory space stores the updated cache data;
and when the updated cache data is of a non-fixed size, the second memory space stores descriptors of the updated cache data.
11. The apparatus according to any one of claims 8-10, wherein the cached data is table data in the database having a frequency of access greater than or equal to a preset frequency.
12. The apparatus of any of claims 8-11, wherein the shared memory is located in a memory layer, and wherein the storage medium of the memory layer comprises dynamic random access memory and persistent memory type media.
13. The apparatus according to any one of claims 8-12, further comprising:
the response module is used for responding to the read operation request of the target process and executing the read operation;
the reading module is used for reading the target data from the storage layer if the target data are not read in the Cache layer and the memory layer, and the storage medium of the storage layer is a persistent memory medium;
and copying the target data to a memory space corresponding to the target process.
14. The apparatus of claim 13, wherein the database is a relational database.
15. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, the processor executing the executable code to implement the method of any of claims 1-7.
16. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
Priority Applications (1)

Application Number: CN202210722775.4A
Priority Date: 2022-06-24
Filing Date: 2022-06-24
Title: Database access method and device
Status: Pending

Publications (1)

Publication Number: CN117312004A (en)
Publication Date: 2023-12-29

Family

ID=89235949

Country Status (1)

Country: CN
Publication: CN117312004A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination