CN115878301A - Acceleration framework, acceleration method and equipment for database network load performance - Google Patents


Info

Publication number
CN115878301A
CN115878301A (application CN202111136877.XA)
Authority
CN
China
Prior art keywords
database
data
network
protocol stack
user mode
Prior art date
Legal status
Pending
Application number
CN202111136877.XA
Other languages
Chinese (zh)
Inventor
梁家琦
吕温
钟舟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111136877.XA
Priority to PCT/CN2022/121232
Publication of CN115878301A

Classifications

    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F9/48 Program initiating; program switching, e.g. by interrupt
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/54 Interprogram communication
    (All classes fall under G06F, electric digital data processing, within G PHYSICS / G06 COMPUTING; CALCULATING OR COUNTING.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application discloses an acceleration framework, an acceleration method, and a device for database network load performance. The framework comprises a user-mode network protocol stack, a database network architecture, and a database multithreading architecture, where the database network architecture contains a database network thread and the database multithreading architecture contains a database service thread. The user-mode network protocol stack receives initial data from the network card device and parses it through its TCP/IP protocol stack to obtain first data; the database network thread acquires the first data; and the database multithreading architecture reads the first data through a communication-control transceiving interface between itself and the database network architecture so as to execute the corresponding service. By replacing the kernel-mode network protocol stack with a user-mode network protocol stack, the method achieves an operating-system kernel bypass; as a pure-software technique it does not depend on novel network hardware and remains easy to control. Moreover, the acceleration framework decouples the database from the user-mode network protocol stack, so that it can cope with the high concurrency of the user-mode network.

Description

Acceleration framework, acceleration method and equipment for database network load performance
Technical Field
The present application relates to the field of databases, and in particular, to an acceleration framework, an acceleration method, and an acceleration apparatus for database network load performance.
Background
A database (DB) is an organized, sharable collection of large amounts of structured data stored long-term inside a computer. A database system (DBS) is a system composed of computer software, hardware, and data resources that stores large amounts of associated data in an organized, dynamic way and facilitates access by multiple users. The main factors affecting database network load performance include network load latency, central processing unit (CPU) utilization, disk input/output (I/O), memory utilization efficiency, and database kernel technology. Current techniques for accelerating database network load performance mainly include: mining kernel technology inside the database at the software layer, and combining software with hardware such as the CPU and novel storage media.
Communication between network protocols and databases is an important factor affecting database system performance. Existing approaches to improving database system performance from this angle are: 1) the high-performance Libeasy network framework proposed by Alibaba OceanBase (a high-performance distributed database system supporting massive data), which accelerates on-line transaction processing (OLTP) load performance; the framework is built on Libev, an event-driven model over the kernel-mode network protocol stack, and uses coroutines to manage task scheduling. 2) Alibaba PolarDB (a cloud-native relational database developed in-house by Alibaba Cloud) builds a database kernel engine on remote direct memory access (RDMA), a novel hardware technology: local memory is written directly into the memory address of another machine through RDMA, and the intermediate communication-protocol encoding/decoding and retransmission mechanisms are completed by the RDMA network card without CPU involvement.
However, approach 1 causes frequent switching between user mode and kernel mode and multiple memory copies of kernel-mode protocol-stack data, which wastes system resources and adds network load latency, thereby losing database performance. Approach 2, although it uses RDMA to bypass the operating system (OS) kernel and accelerate load performance, depends on RDMA network card hardware and belongs to a novel hardware technology: in practical applications it requires end-to-end physical hardware cooperation, so its flexibility and universality are poor; meanwhile, at the software level, implementing the RDMA protocol requires a large amount of complex adaptation and modification of the application-layer database kernel to guarantee usability.
Disclosure of Invention
The embodiments of the present application provide an acceleration framework, an acceleration method, and a device for database network load performance. The framework replaces the kernel-mode network protocol stack with a user-mode network protocol stack to bypass the operating-system kernel. Moreover, the framework decouples the database from the user-mode network protocol stack so as to cope with the high concurrency of the user-mode network, and decouples the service and communication of the traditional database, reducing system overhead.
Based on this, the embodiment of the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application provides an acceleration framework for database network load performance, usable in the database field. The acceleration framework runs on a computer device composed of hardware and software, where the software mainly includes an operating system and a database. The framework receives and transmits data between a client (that is, a peer device distinct from the computer device) and the computer device through the network card device, and provides database services such as insert, delete, update, and query using the user-mode network protocol stack and the database software. The acceleration framework includes: a user-mode network protocol stack, a database network architecture (also called a database network communication framework), and a database multithreading architecture, where the database network architecture includes at least one database network thread, the database multithreading architecture includes at least one database service thread, the database multithreading architecture is connected to the database network architecture through a communication-control transceiving interface, and both the database network architecture and the database multithreading architecture are contained in the database kernel.
When a peer device sends data (which may be called initial data) to the computer device through the network card device, the user-mode network protocol stack receives the initial data (e.g., one or more initial data packets sent by the peer device) from the network card device and parses it through its TCP/IP protocol stack to obtain first data. The at least one database network thread in the database network architecture acquires the first data and instructs the database multithreading architecture to read the first data from the database network architecture. The database multithreading architecture reads the first data through the communication-control transceiving interface between itself and the database network architecture, so as to execute the service (which may be called a first service) corresponding to the first data in the database.
In the above embodiment, the acceleration framework replaces the kernel-mode network protocol stack with a user-mode network protocol stack; since the user-mode network protocol stack runs in the user mode of the operating system, an operating-system kernel bypass is achieved. The framework also decouples the database from the user-mode network protocol stack, so it can cope with the high concurrency of the user-mode network. In addition, the interaction between the database multithreading architecture and the database network threads decouples the service and communication of the traditional database (the database network thread and the database service thread act as producer and consumer for each other), reducing system overhead.
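The producer-consumer decoupling between a database network thread and a database service thread can be sketched as follows. This is a minimal illustration under assumed names (data_pool, network_thread, service_thread), not the patent's actual implementation:

```python
import queue
import threading

# Hypothetical sketch: the database network thread (producer) hands parsed
# packets to the database service thread (consumer) through a shared queue,
# so communication and business logic never block each other.

data_pool = queue.Queue()   # stands in for the data exchanged over the interface
results = []

def network_thread():
    # Producer: pretend these are "first data" packets parsed by the
    # user-mode TCP/IP protocol stack.
    for pkt in ["INSERT", "SELECT", "UPDATE"]:
        data_pool.put(pkt)
    data_pool.put(None)     # sentinel: no more data

def service_thread():
    # Consumer: reads first data and executes the corresponding service.
    while True:
        pkt = data_pool.get()
        if pkt is None:
            break
        results.append(f"executed {pkt}")

t1 = threading.Thread(target=network_thread)
t2 = threading.Thread(target=service_thread)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # ['executed INSERT', 'executed SELECT', 'executed UPDATE']
```

Because the queue absorbs bursts, neither thread waits on the other's pace, which is the overhead reduction the decoupling aims at.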
In a possible implementation manner of the first aspect, the acceleration framework may further include a user-mode network configuration module, where the user-mode network configuration module is configured to configure the user-mode network protocol stack by creating a daemon process.
In the above embodiment of the present application, the acceleration framework may further include a user mode network configuration module, which is responsible for enabling a current operating system of the computer device to have a capability of a user mode network protocol stack, so as to implement automatic deployment of the user mode network protocol stack, and the configuration process is more convenient.
In a possible implementation of the first aspect, the user-mode network configuration module is specifically configured to perform at least one of the following configuration operations: setting a data plane development kit (DPDK) user-mode driver, setting huge-page memory (hugepages), setting a timed task, setting a kernel NIC interface (KNI), and setting the control permissions of user-mode components. The timed task manages the configuration environment of the user-mode network protocol stack and guarantees the high availability of the database while the user-mode network protocol stack is in use.
The foregoing embodiment specifies which aspects the configuration operations performed by the user-mode network configuration module can include, providing operability and flexibility.
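As an illustration of how such a configuration daemon might sequence these steps, here is a hedged sketch; the step names and the injected apply_step callback are hypothetical, and no real system is touched:

```python
# Hypothetical sketch of the user-mode network configuration module: a daemon
# that walks through the configuration steps listed above. Step names and the
# callback are illustrative assumptions, not the patent's actual tooling.

CONFIG_STEPS = [
    ("dpdk_driver", "bind the NIC to the DPDK user-mode driver"),
    ("hugepages",   "reserve huge-page memory for the protocol stack"),
    ("timer_task",  "install a timed task that re-checks the environment"),
    ("kni",         "create the kernel NIC interface (KNI) device"),
    ("permissions", "grant user-mode components their control permissions"),
]

def configure(apply_step):
    """Run every configuration step and report which ones succeeded.

    `apply_step` is injected so the daemon logic can be exercised without
    touching a real system.
    """
    done = []
    for name, description in CONFIG_STEPS:
        if apply_step(name):
            done.append(name)
    return done

# Dry run: pretend every step succeeds.
applied = configure(lambda name: True)
print(applied)   # ['dpdk_driver', 'hugepages', 'timer_task', 'kni', 'permissions']
```

The timed task in the list is the piece that would re-run such checks periodically, matching the high-availability role described above.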
In a possible implementation manner of the first aspect, after the database multithreading architecture executes an upper-layer service (which may be referred to as a second service) of the database on the computer device, the obtained data may be referred to as second data, and the database multithreading architecture may be further configured to send the second data to the database network architecture through a communication control transceiver interface between the database multithreading architecture and the database network architecture. At least one database network thread in the database network architecture is configured to send the second data further to the user mode network protocol stack.
In the above embodiments of the present application, an interaction process between a database multithreading architecture and a database network thread in a database network architecture is specifically set forth, so that service and communication of a traditional database are decoupled (the database network thread and the database service thread are consumers and producers for each other), thereby reducing system overhead.
In a possible implementation of the first aspect, the user-mode network protocol stack includes a user-mode process (also referred to as the Ltran process) and a network protocol stack component (also referred to as the dynamic library lstack). In this case, the user-mode network protocol stack is specifically configured to: start the user-mode process in user space, receive the initial data sent by the network card device through the user-mode process, and store the initial data in a shared memory; then parse the initial data in the shared memory through the network protocol stack component based on the TCP/IP protocol stack to obtain the first data (that is, parse the initial data into a packet format the computer device can recognize), the first data remaining in the shared memory.
In the foregoing embodiment of the present application, the user mode network protocol stack may further include a user mode process and a network protocol stack component, where the user mode process and the network protocol stack component share a memory, and both of them perform message interaction in a memory sharing manner, including a TCP/IP protocol stack data parsing process, and have realizability.
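The shared-memory handoff between the user-mode process and the protocol stack component can be illustrated with a toy sketch; the 2-byte "header" and the packet contents are invented for the example, and Python's multiprocessing.shared_memory merely stands in for the patent's shared memory:

```python
from multiprocessing import shared_memory

# Hypothetical sketch of the shared-memory handoff: the user-mode process
# ("Ltran" side) writes raw NIC bytes into shared memory, and the protocol
# stack component ("lstack" side) parses them in place, so no extra copy
# crosses the boundary between the two.

shm = shared_memory.SharedMemory(create=True, size=64)
try:
    # "Ltran" side: deposit the initial data received from the NIC.
    raw = b"\x45\x00" + b"GET key1"   # toy bytes standing in for a packet
    shm.buf[:len(raw)] = raw

    # "lstack" side: parse the initial data in the shared memory into
    # first data (here, just strip the toy 2-byte header).
    first_data = bytes(shm.buf[2:2 + 8])
    print(first_data)                 # b'GET key1'
finally:
    shm.close()
    shm.unlink()
```

In the real design both sides would be separate processes attaching to the same segment; the single-process version above only shows the zero-copy parse-in-place idea.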
In a possible implementation of the first aspect, the user-mode network protocol stack includes a user-mode process (also referred to as the Ltran process) and a network protocol stack component (also referred to as the dynamic library lstack). In this case, when data is written to the database on the computer device, the database network thread in the communication pool is specifically configured to send the second data to the network protocol stack component, and the network protocol stack component is specifically configured to store the second data in the shared memory.
In the foregoing embodiment of the present application, it is specifically stated that the memory shared by the user mode process and the network protocol stack component may also be used to store second data obtained after processing an upper layer service of a database, so that the network card device reads the second data from the shared memory, and the method has wide applicability.
In a possible implementation of the first aspect, the database network architecture may further include a data-sharing buffer, which may be referred to as a data resource pool; that is, data resource pooling is implemented by creating the data-sharing buffer, and the data resource pool is used to store the first data from the user-mode network protocol stack.
In the above embodiments of the present application, by creating the data sharing buffer, resource sharing and buffer area multiplexing of network protocol stack data (i.e. first data) in a multiple concurrent scenario are implemented, and overhead of data copying and resource creation is reduced.
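A data resource pool of reusable buffers might be sketched as follows; the BufferPool class and its sizes are illustrative assumptions, showing only why reuse avoids per-request allocation:

```python
# Hypothetical sketch of the data resource pool: pre-allocated buffers are
# reused across connections instead of being allocated per request, cutting
# allocation and copy overhead under high concurrency.

class BufferPool:
    def __init__(self, count, size):
        # Allocate all buffers up front.
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self):
        # Reuse a pre-allocated buffer when one is free;
        # fall back to a fresh allocation only when exhausted.
        return self._free.pop() if self._free else bytearray(0)

    def release(self, buf):
        self._free.append(buf)

pool = BufferPool(count=4, size=2048)
buf = pool.acquire()
buf[:5] = b"hello"    # fill with "first data" from the protocol stack
pool.release(buf)      # returned to the pool for the next connection
reused = pool.acquire()
print(reused is buf)   # True: the very same buffer object was reused
```

The identity check at the end is the point: under load, the same memory serves many connections, which is the "buffer area multiplexing" described above.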
In a possible implementation of the first aspect, the database network architecture may further include a data-sharing buffer, which may be referred to as a data resource pool; that is, the data-sharing buffer is created to implement data resource pooling. The data resource pool performs packet aggregation and/or batch transceiving on the data of the user-mode network protocol stack to implement dynamic flow control and capacity scaling (expansion and reduction). In particular, the data resource pool is configured to store the second data from the database multithreading architecture.
In the above embodiment of the present application, by creating the data sharing buffer, resource sharing and cache area multiplexing of database service data (i.e. second data) in a multiple concurrent scenario are implemented, and overhead of data copying and resource creation is reduced.
In a possible implementation of the first aspect, when the database network architecture further includes a data resource pool, the database network thread in the communication pool is configured to place the first data read from the user-mode network protocol stack into the data resource pool and instruct the database multithreading architecture to read the first data from the data resource pool, completing the data interaction with the user-mode network protocol stack.
In the foregoing embodiment, the first data acquired by the database network thread is kept in the data resource pool, and the database service thread likewise fetches the first data from the data resource pool, implementing the database's flow control.
A second aspect of the embodiments of the present application provides a method for accelerating database network load performance, the method including: when a peer device sends data (which may be called initial data) to the computer device through the network card device, the computer device receives, through the user-mode network protocol stack, the initial data sent from the network card device, and parses the initial data through the TCP/IP protocol stack in the user-mode network protocol stack to obtain first data. After the user-mode network protocol stack obtains the first data, the computer device may acquire the first data from the user-mode network protocol stack through a database network thread in a communication pool (the communication pool is composed of at least one database network thread, each of which is responsible for packet control processing and packet transceiving processing); for example, the database network thread may acquire the first data from the user-mode network protocol stack in polling mode (or another mode, such as a periodic check or a wake-up check), and then instruct a database service thread in a database multithreading architecture (which includes at least one database service thread) to read the first data from the database network architecture, where the communication pool, and hence the database network thread, belongs to the database network architecture.
Finally, the computer device reads the first data through the database multithreading architecture via the communication-control transceiving interface between the database multithreading architecture and the database network architecture, and executes, according to the first data, the first service corresponding to the first data in the database.
In the above embodiments, since the user-mode network protocol stack runs in the user mode of the operating system, an operating-system kernel bypass is achieved. In addition, in the traditional run-to-completion (RTC) communication model, the database service and the network reside in the same thread, whereas the user-mode network capability is an independent process and data resource in user space, making RTC unavailable in user mode; the acceleration method provided here therefore decouples the database's service from its network. Furthermore, to cope with the high concurrency of the network and fit the database's multithreading architecture, a communication pool is used in the database network architecture to improve resource reuse and reduce system overhead.
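The polling-mode acquisition by a communication-pool network thread can be sketched like this; make_stack is a toy stand-in for the user-mode protocol stack, not a real API:

```python
# Hypothetical sketch of a communication-pool network thread polling the
# user-mode protocol stack: a non-blocking check is repeated until data
# arrives, instead of sleeping inside a kernel syscall.

def make_stack(packets):
    """Toy stand-in for the user-mode stack: returns None until data is ready."""
    pending = list(packets)
    def poll():
        return pending.pop(0) if pending else None
    return poll

poll = make_stack(["first_data_1", "first_data_2"])

received = []
for _ in range(10):          # bounded polling loop for the example
    pkt = poll()
    if pkt is not None:
        received.append(pkt)
print(received)              # ['first_data_1', 'first_data_2']
```

A real polling thread would spin (or back off) indefinitely; the bounded loop here only demonstrates the non-blocking check-and-collect pattern.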
In a possible implementation manner of the second aspect, the user-mode network protocol stack is configured by a user-mode network configuration module deployed on the computer device by creating a daemon process.
In the above embodiment of the present application, the acceleration framework may further include a user mode network configuration module, which is responsible for enabling a current operating system of the computer device to have a capability of a user mode network protocol stack, so as to implement automatic deployment of the user mode network protocol stack, and the configuration process is more convenient.
In a possible implementation of the second aspect, the daemon process performs at least one of the following configuration operations: setting a data plane development kit (DPDK) user-mode driver, setting huge-page memory, setting a timed task, setting a kernel NIC interface (KNI), and setting the control permissions of user-mode components. The timed task manages the configuration environment of the user-mode network protocol stack and guarantees the high availability of the database while the user-mode network protocol stack is in use.
The above embodiment specifies which aspects the daemon process's configuration operations can include, providing operability and flexibility.
In a possible implementation manner of the second aspect, the acceleration method may further include: after the database multithreading architecture executes the upper-layer service (which may be referred to as a second service) of the database on the computer device, the obtained data may be referred to as second data, and then, the computer device may send the second data to the database network architecture through the database multithreading architecture via a communication control transceiving interface between the database multithreading architecture and the database network architecture, and further send the second data to the user mode network protocol stack through a database network thread in a communication pool in the database network architecture.
In the above embodiments of the present application, an interaction process between a database multithreading architecture and a database network thread in a database network architecture is described when data is written to a database on a computer device, and service and communication of a traditional database are decoupled (the database network thread and the database service thread are consumers and producers each other), so that system overhead is reduced.
In a possible implementation of the second aspect, the user-mode network protocol stack may further include a user-mode process (also referred to as the Ltran process) and a network protocol stack component (also referred to as the dynamic library lstack). The user-mode process and the network protocol stack component share a memory and exchange messages through it. Specifically, the computer device receives the initial data sent by the network card device through the user-mode process and stores it in the memory shared by the user-mode process and the network protocol stack component; the computer device then parses the initial data in the shared memory through the network protocol stack component based on the TCP/IP protocol stack to obtain the first data (that is, parses the initial data into a packet format the computer device can recognize), the first data remaining in the shared memory.
In the foregoing embodiment of the present application, the user mode network protocol stack may further include a user mode process and a network protocol stack component, where the user mode process and the network protocol stack component share a memory, and both of them perform message interaction in a memory sharing manner, including a TCP/IP protocol stack data parsing process, and have realizability.
In a possible implementation manner of the second aspect, in a case that the user mode network protocol stack further includes a user mode process and a network protocol stack component, and the user mode process and the network protocol stack component share a memory, a manner that the computer device sends the second data to the user mode network protocol stack through the database network thread may specifically be: and the computer equipment sends the second data to the network protocol stack component through the database network thread, and then the network protocol stack component stores the received second data in the shared memory.
In the foregoing embodiment of the present application, it is specifically stated that the memory shared by the user mode process and the network protocol stack component may also be used to store second data obtained after processing an upper layer service of a database, so that the network card device reads the second data from the shared memory, and the method has wide applicability.
In a possible implementation of the second aspect, the database network architecture may further include, in addition to the communication pool, a data-sharing buffer, which may be referred to as a data resource pool; that is, data resource pooling is implemented by creating the data-sharing buffer. The data resource pool performs packet aggregation and/or batch transceiving on the data of the user-mode network protocol stack to implement dynamic flow control and capacity scaling (expansion and reduction). Specifically, after the computer device acquires the first data from the user-mode network protocol stack through the communication pool (for example, a database network thread in the communication pool may acquire the first data in polling mode, or in another mode such as a periodic check or a wake-up check), it deposits the first data in the data resource pool.
In the above embodiments of the present application, resource sharing and buffer area multiplexing of network protocol stack data (i.e. first data) in a multiple concurrent scenario are implemented through the created data sharing buffer, so that overheads of data copying and resource creation are reduced.
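Packet aggregation and batch transceiving can be illustrated with a small batching helper; the function name and batch size are assumptions for the sketch, showing only how per-packet handoff cost is amortized:

```python
# Hypothetical sketch of packet aggregation in the data resource pool:
# small packets are coalesced and handed over in batches, amortizing
# per-packet handoff cost (one lever of dynamic flow control).

def batch(packets, max_batch):
    """Group packets into batches of at most `max_batch` items each."""
    out = []
    for i in range(0, len(packets), max_batch):
        out.append(packets[i:i + max_batch])
    return out

batches = batch(["p1", "p2", "p3", "p4", "p5"], max_batch=2)
print(batches)   # [['p1', 'p2'], ['p3', 'p4'], ['p5']]
```

Adjusting max_batch up or down is what would let the pool trade latency for throughput, which is the "dynamic flow control" described above.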
In a possible implementation manner of the second aspect, in a case that the database network architecture may further include a data sharing buffer (i.e., a data resource pool), after the computer device sends the second data to the database network architecture through the database multithreading architecture via the communication control transceiver interface, the acceleration method may further include: the computer device stores the second data in a data resource pool through the database network thread.
In the above embodiment of the present application, resource sharing and cache area multiplexing of database service data (i.e. second data) in a multiple concurrent scenario are implemented through the created data sharing buffer, so that overheads of data copying and resource creation are reduced.
A third aspect of the embodiments of the present application provides a computer device having the function of implementing the method of the second aspect or any of its possible implementations. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
A fourth aspect of the embodiments of the present application provides a computer device, which may include a memory, a processor, and a bus system, where the memory is used to store a program, and the processor is used to call the program stored in the memory to execute the method of any one of the second aspect or the possible implementation manner of the second aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing instructions that, when executed on a computer, enable the computer to perform the method of the second aspect or any of its possible implementations.
A sixth aspect of embodiments of the present application provides a computer program which, when run on a computer, causes the computer to perform the method of the second aspect or any one of the possible implementations of the second aspect.
A seventh aspect of the embodiments of the present application provides a chip, where the chip includes at least one processor and at least one interface circuit, the interface circuit is coupled to the processor, the at least one interface circuit is configured to perform a transceiving function and send an instruction to the at least one processor, and the at least one processor is configured to execute a computer program or an instruction, and has a function of implementing the method according to any one of the above-mentioned second aspect or any one of the above-mentioned second aspect, where the function may be implemented by hardware, software, or a combination of hardware and software, and the hardware or software includes one or more modules corresponding to the above-mentioned function. In addition, the interface circuit is used for communicating with other modules outside the chip.
Drawings
FIG. 1 is a schematic diagram of a system architecture of the Libeasy network framework;
FIG. 2 is a schematic diagram of an implementation architecture of the libev-based Reactor model of the Libeasy network framework;
FIG. 3 is a schematic diagram of the Libeasy network framework thread sharing model;
FIG. 4 is a schematic diagram of a system architecture in which an RDMA-based database kernel engine accelerates database load performance;
FIG. 5 is a schematic diagram of an acceleration framework for database network load performance according to an embodiment of the present application;
FIG. 6 is another schematic diagram of an acceleration framework for database network load performance according to an embodiment of the present application;
FIG. 7 is a system diagram of an acceleration framework according to an embodiment of the present application;
FIG. 8 is a diagram illustrating interaction between a database multithreading architecture and a database network architecture according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of a method for accelerating database network load performance according to an embodiment of the present application;
FIG. 10 is a flowchart of a core implementation of a method for accelerating database network load performance according to an embodiment of the present application;
FIG. 11 is a flowchart of an implementation of a method for accelerating database network load performance according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application;
FIG. 13 is another schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide an acceleration framework, an acceleration method, and a device for database network load performance. The acceleration framework uses a user mode network protocol stack in place of the kernel mode network protocol stack, thereby bypassing the operating system kernel. Moreover, the framework decouples the database from the user mode protocol stack so as to cope with the high concurrency of the user mode network; the service and the communication of a traditional database are also decoupled, reducing system overhead.
The terms "first," "second," and the like in the description and claims of this application and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the present application involve a good deal of database-related knowledge. To better understand the solutions of the embodiments, the following first introduces related terms and concepts that the embodiments may involve. It should be understood that these explanations of concepts may be constrained by the specific circumstances of the embodiments, but this does not mean that the application is limited to those specific circumstances; the specific circumstances may vary from embodiment to embodiment and are not limited here.
(1) Database (database, DB)
A database is a repository that organizes, stores, and manages data according to a data structure; in essence it is a file system that stores data in a specific format. A user can add, modify, delete, and query the data in the database. For a database, especially a relational database such as Oracle, SQL Server, or DB2, the stored data is mainly structured data with a regular format, generally stored in row-column form.
(2) Database system (database system, DBS)
A database system is a system composed of computer software, hardware, and data resources that enable organized, dynamic storage of large amounts of associated data, and facilitate access by multiple users.
(3) Online transaction processing (OLTP)
OLTP is a typical database application; an OLTP system is a highly transactional system dominated by a frequent, large number of small transactions. In such systems, a single database often processes more than a few thousand transactions per second, and query statements may execute tens of thousands of times per second. OLTP is therefore also known as a transaction-oriented processing system: its essential characteristic is that the customer's raw data can be transmitted immediately to the computing center for processing, with the processing results returned in a very short time. Its greatest advantage is that incoming data can be processed instantly and answered in a timely manner. Typical OLTP systems include e-commerce, banking, and stock-trading systems. OLTP processing is the responsibility of the database engine.
An important performance measure of OLTP is system performance, embodied as real-time response time (RT), i.e., the time the computer device needs to respond to a request after the user enters data at a terminal.
(4) User mode (user mode) and kernel mode (kernel mode)
The operating system requires two CPU states: one called the user mode and the other called the kernel mode. The kernel mode runs operating system programs, while the user mode runs user programs.
The kernel mode and the user mode are two running levels of the operating system. When a program runs at privilege ring 3, it is said to run in user mode: this is the lowest privilege level, the level at which ordinary user processes run, and most programs that users face directly run in user mode. Conversely, when a program runs at privilege ring 0, it is said to run in kernel mode. Programs running in user mode cannot directly access the kernel data structures and programs of the operating system. When a user executes a program in the system, most of the time is spent in user mode; the process switches to kernel mode when it needs the operating system's help to perform tasks that it has no privilege or capability to do itself.
The main differences between the user mode and the kernel mode are: in user mode, the memory space and objects a process can access are limited, and the processor it occupies can be preempted; a process executing in kernel mode can access all memory space and objects, and the processor it occupies cannot be preempted.
Generally, the following three cases cause a switch from user mode to kernel mode:
a. system call
This is the way a user-mode process actively requests a switch to kernel mode: through a system call, the user-mode process applies to use service routines provided by the operating system to complete its work. The core of the system call mechanism is an interrupt the operating system opens specifically for users, such as the int 80h interrupt on Linux.
b. Exceptions
When an unforeseen exception occurs while the CPU is executing a program running in user mode, the currently running process switches to the kernel program that handles the exception, thereby entering kernel mode; a page fault is one example.
c. Interrupts for peripheral devices
When a peripheral device completes an operation requested by the user, it sends a corresponding interrupt signal to the CPU. The CPU then suspends the next instruction it was about to execute and instead executes the handler corresponding to the interrupt signal. If the instruction previously being executed belonged to a user-mode program, this naturally involves a switch from user mode to kernel mode. For example, when a hard disk read/write operation completes, the system switches to the interrupt handler for hard disk reads/writes to execute the subsequent operations.
The above are the three main ways the operating system switches from user mode to kernel mode at run time; the system call can be regarded as actively initiated by the user process, whereas exceptions and peripheral device interrupts are passive.
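The cost of the mode switches described above can be made concrete with a small timing experiment. The sketch below, a toy illustration rather than a benchmark, compares a pure user-mode operation against a call to os.getppid(), which traps into the kernel on every invocation; the loop counts and exact timings are arbitrary.

```python
import os
import time

def time_op(op, n=50_000):
    """Return average seconds per call of op(), measured over n calls."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    return (time.perf_counter() - start) / n

# A pure user-mode operation: integer arithmetic, no kernel entry.
user_mode_cost = time_op(lambda: 1 + 1)

# A system call: os.getppid() enters the kernel on every call
# (unlike getpid(), it is not cached by the C library).
syscall_cost = time_op(os.getppid)

print(f"user-mode op : {user_mode_cost * 1e9:8.1f} ns/call")
print(f"system call  : {syscall_cost * 1e9:8.1f} ns/call")
```

On a typical Linux machine the system call is markedly slower per invocation, which is the overhead the user mode protocol stack seeks to avoid on the data path.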
(5) User mode network protocol stack
The user mode network protocol may also be referred to as a user mode network protocol stack. As noted above, the kernel mode and the user mode are two running levels of the operating system. When a task (i.e., a process) executes a system call and is trapped in operating system kernel code, the process is in the kernel running state, i.e., kernel mode. When the process executes the user's own code, it is in the user running state, i.e., user mode. A conventional transmission control protocol/internet protocol (TCP/IP) protocol stack runs in kernel mode; a user mode network protocol stack runs the TCP/IP protocol stack in the user mode of the operating system.
In addition, before introducing the embodiments of the present application, several common ways of accelerating database network system performance are briefly described, to facilitate understanding of the embodiments below.
Mode 1: accelerating OLTP database load performance through a high-performance Libeasy network framework
In this mode, the high-performance Libeasy network framework was proposed by Alibaba's OceanBase (a high-performance distributed database system supporting massive data). The network framework is implemented on top of libev, an event-driven model based on the kernel-mode network protocol stack, and uses coroutines to manage task scheduling. The system architecture of the network framework, shown in fig. 1, comprises the following software modules: a database server (i.e., DB in fig. 1), the Libeasy network framework (i.e., Libeasy in fig. 1), the event-driven model libev (i.e., libev in fig. 1), and a network card device (i.e., nic in fig. 1). The database server is responsible for receiving and processing Structured Query Language (SQL) requests from clients and for receive/send interaction through network card data reads/writes. The Libeasy network framework, built on the event-driven model libev, is responsible for message processing and for resource management such as the organization of connections, messages, and requests. The threads in Libeasy are divided into service logic threads and network I/O threads, responsible for service processing and network I/O processing respectively. The event-driven model libev is implemented on the Reactor pattern and performs multi-path I/O multiplexing by calling the kernel-mode TCP/IP protocol stack interfaces of the operating system, thereby completing control, receiving, and sending of network card data messages. A client request reaches the network card device of the database server over Ethernet, and the server obtains the network device data (also called nic data) through direct memory access (DMA) and interrupt wake-up techniques; libev then processes the nic data.
Specifically, the Libeasy network framework is implemented on the Reactor model of libev. Its main implementation architecture, shown in fig. 2, includes the following modules: (1) EventHandler: the interface for event data, such as timer events and I/O events. (2) Reactor: multi-path I/O multiplexing and the Timer are used inside the Reactor, and the corresponding interface is called when an EventHandler registers. In the Reactor's handleEvents, I/O multiplexing and the Timer are called first to obtain the ready events, and finally each EventHandler is invoked. (3) Timer: the timer manager, mainly responsible for registering events, obtaining the timeout event list, and so on; it is generally implemented by the network framework developer. (4) Multi-path I/O multiplexing model: reads and writes data from the operating system kernel through epoll, implementing kernel-mode TCP/IP data receiving and sending over multiple monitored handles.
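The Reactor pattern described above can be sketched in a few lines of Python using the standard `selectors` module (which wraps epoll on Linux). The class and method names below echo the EventHandler/Reactor/handleEvents vocabulary of the description but are illustrative, not Libeasy's actual API:

```python
import selectors
import socket

class EventHandler:
    """Interface in the spirit of the EventHandler module described above."""
    def handle_event(self, sock):
        raise NotImplementedError

class RequestHandler(EventHandler):
    """Collects incoming request bytes; unregisters itself on EOF."""
    def __init__(self, reactor):
        self.reactor = reactor
        self.received = []
    def handle_event(self, sock):
        data = sock.recv(4096)
        if data:
            self.received.append(data)
        else:
            self.reactor.unregister(sock)   # peer closed the connection

class Reactor:
    """handle_events(): poll ready descriptors, dispatch to their handlers."""
    def __init__(self):
        self.sel = selectors.DefaultSelector()   # epoll-backed on Linux
        self.active = 0
    def register(self, sock, handler):
        self.sel.register(sock, selectors.EVENT_READ, handler)
        self.active += 1
    def unregister(self, sock):
        self.sel.unregister(sock)
        self.active -= 1
    def handle_events(self, timeout=1.0):
        for key, _ in self.sel.select(timeout):
            key.data.handle_event(key.fileobj)

# Demonstrate with an in-process socket pair standing in for a client link.
client, server = socket.socketpair()
reactor = Reactor()
handler = RequestHandler(reactor)
reactor.register(server, handler)

client.sendall(b"SELECT 1;")
reactor.handle_events()        # request becomes readable, handler fires
client.close()
reactor.handle_events()        # EOF readiness -> handler unregisters
print(handler.received)        # [b'SELECT 1;']
```

The demo drives the loop by hand; a real framework would call handle_events() continuously inside each network I/O thread.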
The Libeasy network framework supports two common thread models: the network I/O thread and the working thread share the same thread, or the network I/O thread and the working thread are separated. Fig. 3 is a schematic diagram of the Libeasy network framework thread sharing model, in which Process I/O Read handles read I/O; Process parses the request and computes the result; and Process I/O Write handles write I/O, returning network data and computation results. Specifically, in the shared-thread architecture of the Libeasy network framework, each network I/O thread is responsible for one event_loop to perform interactive data reading and writing. Process I/O Read processes the read data, parses the request, generates a task, pushes the task onto a working thread's queue, and notifies the working thread via an asynchronous event. After the working thread receives the asynchronous event, Process takes the tasks out of the work queue, processes them in order, produces a result when processing completes, puts the result into the I/O thread's queue, and notifies the I/O thread (here, the network I/O thread) via an asynchronous event. After receiving the notification via the I/O thread, Process I/O Write processes the write data requests in order.
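The read-task-result round trip between an I/O thread and a working thread described above can be sketched with two queues, the asynchronous-event notification being played here by blocking queue operations. All names are illustrative; the "request parsing" and "computation" are stand-ins:

```python
import queue
import threading

request_q = queue.Queue()    # I/O thread -> working thread (tasks)
response_q = queue.Queue()   # working thread -> I/O thread (results)

def working_thread():
    """Process: take tasks off the work queue, compute, hand results back."""
    while True:
        task = request_q.get()
        if task is None:                       # shutdown sentinel
            break
        response_q.put(("result", task.upper()))   # toy 'computation'

def io_thread(raw_requests, results):
    """Process I/O Read and Process I/O Write collapsed into one loop."""
    for raw in raw_requests:       # parse read data, generate tasks
        request_q.put(raw)
    for _ in raw_requests:         # return results toward the network
        results.append(response_q.get())
    request_q.put(None)

results = []
w = threading.Thread(target=working_thread)
io = threading.Thread(target=io_thread, args=(["select 1", "commit"], results))
w.start(); io.start()
io.join(); w.join()
print(results)   # [('result', 'SELECT 1'), ('result', 'COMMIT')]
```

Because a single worker drains one FIFO queue, results come back in request order; with a pool of workers, the I/O thread would need to re-order or tag responses.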
The implementation of Mode 1 mainly accelerates OLTP database load performance through the high-performance Libeasy network framework, and has the following disadvantages: a. The event-driven model libev is a communication framework based on the kernel-mode TCP/IP network protocol stack, which brings frequent user mode/kernel mode switching and multiple memory copies of kernel-mode protocol stack data, resulting in system resource loss and network load latency and thereby sacrificing database performance under OLTP. b. In the thread sharing model of the Libeasy network framework, a request can be processed directly in the same thread after it is parsed, saving the overhead of thread switching, which suits requests whose Process step takes little time. However, the working mechanism of this run-to-completion (RTC) forwarding model is that a physical CPU core is responsible for the entire life cycle of a packet; it cannot be used with a user mode network protocol stack and cannot handle the highly concurrent network transceiving of OLTP. c. The thread separation model of the Libeasy network framework is not suitable for dense small-task requests: a large amount of time is consumed in thread-switching overhead, bringing extra performance loss.
Mode 2: building an RDMA-based database kernel engine to accelerate database load performance
Alibaba's PolarDB (Alibaba Cloud's self-developed cloud-native relational database) builds an RDMA-based database kernel engine on novel hardware technology. The memory of one machine is written directly into the memory address of another machine over an RDMA network; the intermediate communication protocol encoding, decoding, and retransmission mechanisms are completed by the RDMA network card without CPU participation, providing a whole set of I/O and network protocol stacks running in user mode. As shown in fig. 4, the implementation includes the following. PolarDB adopts a distributed cluster architecture in which computing nodes and storage nodes are interconnected by a high-speed network. Data transmission over the RDMA protocol means that I/O performance is no longer a bottleneck, and the operating system kernel CPU is bypassed, thereby accelerating database kernel performance. DB data files, redo logs, and the like are transmitted to remote data servers over the high-speed network and the RDMA protocol, through a user mode file system and block-device data management routing. Data reliability is ensured by keeping multiple copies of Chunk Server data, and data consistency is ensured by the Parallel-Raft protocol. In this mode, data from other servers is transferred directly into a local storage area by the RDMA network card; the data transfer takes place entirely at the user layer, without entering kernel mode, consuming system memory, or affecting the operating system in any way.
Although the implementation of Mode 2 uses RDMA interaction to bypass the operating system kernel of the database and accelerate load performance, it depends on RDMA network card hardware and belongs to novel hardware technology. In practical applications it requires end-to-end physical hardware cooperation, so its flexibility and universality are poor. Meanwhile, at the software level, implementing the RDMA protocol for the application-layer database kernel requires a large amount of complex adaptation and modification to ensure usability.
In summary, to solve the above problems, the embodiments of the present application first provide an acceleration framework for database network load performance. The acceleration framework uses a user mode network protocol stack in place of the kernel mode network protocol stack, thereby bypassing the operating system kernel. Moreover, the framework decouples the database from the user mode protocol stack so as to cope with the high concurrency of the user mode network; the service and the communication of a traditional database are also decoupled, reducing system overhead.
Embodiments of the present application are described below with reference to the accompanying drawings. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The acceleration framework for database network load performance provided by the embodiments of the present application runs on a computer device; the computer device is composed of hardware and software, where the software mainly includes an operating system and a database. The acceleration framework transfers data between a client (i.e., a device other than the computer device) and the computer device through the network card device, and uses the user mode network protocol stack and the database software to provide services such as adding, deleting, modifying, and querying database data. Referring to fig. 5, fig. 5 is a schematic diagram of an acceleration framework for database network load performance according to an embodiment of the present application. The acceleration framework 500 is deployed on a computer device (e.g., a server) having a network card (e.g., a 1822 network card device), on which an operating system and a database are deployed. The user mode and the kernel mode (including kernel applications, such as the Linux kernel) in fig. 5 are the two running states of the operating system. The acceleration framework 500 runs in user mode and specifically includes the following modules: a user mode network protocol stack 501, a database network architecture (also called a database network communication framework) 502, and a database multithreading architecture 503. The database network architecture 502 includes at least one database network thread, and the database multithreading architecture 503 includes at least one database service thread; the database multithreading architecture 503 is connected to the database network architecture 502 through a communication control transceiving interface, and both the database network architecture 502 and the database multithreading architecture 503 are contained in the database kernel.
It should be noted that, in some embodiments of the present application, the acceleration framework 500 may further include a user mode network configuration module 504. The user mode network configuration module 504 is responsible for giving the current operating system the capability of the user mode network protocol stack 501; specifically, it automatically configures the user mode network protocol stack 501 by creating a daemon process. For example, the user mode network configuration module 504 is configured to perform at least one of the following configuration operations: setting the DPDK user mode driver, setting up large-page memory (huge pages), setting timed tasks, setting up KNI, setting the control permissions of user mode components, and so on. It should be noted here that, in some embodiments of the present application, the network card device of the computer device on which the acceleration framework 500 is deployed needs to support the DPDK driver, but the type of the network card device is not limited.
Specifically, DPDK is an open-source data plane development kit that provides an efficient user-mode packet processing library. Using techniques such as bypassing the kernel-mode network protocol stack, uninterrupted polling-mode packet transceiving, optimized memory/buffer/queue management, and load balancing based on network card multi-queue and flow identification, it achieves high-performance packet forwarding on x86 (Intel's complex-instruction-set processor architecture) or ARM processor architectures, and users can develop various high-speed network frameworks in user space. A DPDK driver is loaded on the physical network card, and the hardware registers of the network card are mapped into user mode, so that DPDK takes over the network card. In order to retain the original kernel interfaces, the user mode network protocol stack 501 may also provide a KNI driver. The acceleration framework 500 of the embodiment of the present application mainly uses daemon-process techniques to ensure high availability of DPDK network card takeover and KNI driver loading. DPDK's large-page memory is configured through the configured number of huge pages and the mounting of hugetlbfs. In addition, process management under the user mode network configuration can be implemented using timed-task techniques.
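As a rough illustration of what such a configuration daemon automates, the sketch below composes, without executing, the kind of shell steps involved in reserving huge pages, mounting hugetlbfs, binding a NIC to a DPDK user-mode driver, and loading KNI. The device name, page count, mount point, and driver choice are all illustrative defaults, not values prescribed by the framework:

```python
def build_config_commands(nic="eth1", hugepages=1024, mount_point="/mnt/huge"):
    """Compose (but do not run) the steps a user mode network configuration
    daemon might perform; every path, count, and device name is illustrative."""
    return [
        # 1. reserve 2 MB huge pages and mount hugetlbfs for DPDK buffers
        f"echo {hugepages} > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages",
        f"mkdir -p {mount_point} && mount -t hugetlbfs nodev {mount_point}",
        # 2. bind the NIC to a DPDK user-mode driver (takes it from the kernel)
        f"dpdk-devbind.py --bind=vfio-pci {nic}",
        # 3. load the KNI module so legacy kernel-path tools keep working
        "insmod rte_kni.ko carrier=on",
    ]

for cmd in build_config_commands():
    print(cmd)
```

A real daemon would additionally verify each step, retry on failure, and restore the kernel driver on teardown, which is the "high availability" role attributed to the daemon above.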
In the embodiments of the present application, a peer device can write data into the database deployed on the computer device through the network card, i.e., the process in which the database reads data; the computer device can also send database data to the peer device through the network card, i.e., the process in which the database writes data. Based on these two data processing scenarios and the acceleration framework 500 described in fig. 5, the operations performed by the acceleration framework provided by the embodiments of the present application are described in detail below:
1. case of database read data on a computer device
When the peer device sends data (which may be referred to as initial data) to the computer device through the network card device, the user mode network protocol stack 501 is configured to receive the initial data (e.g., one or more initial data packets sent by the peer device) sent by the network card device, and parse the initial data through a TCP/IP protocol stack therein to obtain first data.
It should be noted that, in some embodiments of the present application, the user mode network protocol stack 501 may further include a user mode process and a network protocol stack component. As shown in fig. 6, the user mode network protocol stack 501 may further include a user mode process (also referred to as the Ltran process) 5011 and a network protocol stack component (also referred to as the dynamic library lstack.so) 5012; that is, at the software level the user mode network protocol stack 501 is embodied as the Ltran process and the dynamic library lstack.so in user space. The user mode process 5011 and the network protocol stack component 5012 share memory and interact with each other by way of memory sharing, including during TCP/IP protocol stack data parsing.
Specifically, in this implementation, the user mode network protocol stack 501 is configured to: start the user mode process 5011 in user space, where the user mode process 5011 receives the initial data transmitted from the network card device and stores it in the shared memory. Then, the network protocol stack component 5012 parses the initial data in the shared memory based on the TCP/IP protocol stack to obtain the first data (i.e., the initial data is parsed into a packet format the computer device can recognize), and the resulting first data remains in the shared memory.
It should also be noted that, in the embodiments of the present application, the relevant service processes of the database (i.e., the collection of threads of the database, embodied as the database multithreading architecture 503 in fig. 5) dynamically link the network protocol stack component 5012 to implement the communication interface calls of the entire user mode network protocol stack 501. The first data parsed by the network protocol stack component 5012 is handed, through the communication interface, to a dedicated database network thread in the database network architecture 502 for data transceiving control. Note that dynamic linking means the library is needed only at run time and requires no recompilation, so the database software does not depend on the network protocol stack component 5012 at build time. In this embodiment, the network threads in the database network architecture 502 form the communication pool 5021 of the database network architecture 502, and the network threads in the communication pool 5021 are used to obtain the first data parsed by the user mode network protocol stack 501 and to instruct the database multithreading architecture 503 to read the first data from the database network architecture 502. Pooling the database network threads (i.e., forming a communication pool) improves resource reuse and reduces system overhead.
It should be noted that, in the case where the user mode network protocol stack 501 includes the user mode process 5011 and the network protocol stack component 5012, the network threads in the communication pool 5021 of the database network architecture 502 obtain the first data from the memory shared by the user mode process 5011 and the network protocol stack component 5012.
It should be noted that, in other embodiments of the present application, the database network architecture 502 may further include a data sharing buffer, which may be referred to as the data resource pool 5022; that is, pooling of data resources is achieved by creating the data sharing buffer. The data resource pool 5022 may be configured to store the first data from the user mode network protocol stack 501; specifically, the data resource pool 5022 is responsible for packet aggregation and/or batch transceiving of the data of the user mode network protocol stack 501, thereby implementing dynamic flow control and capacity expansion.
It should be further noted that, in other embodiments of the present application, in a case that the database network architecture 502 may further include a data resource pool 5022, the database network thread in the communication pool 5021 is used to place the first data read from the user-mode network protocol stack 501 into the data resource pool 5022, and instruct the database multithreading architecture 503 to read the first data from the data resource pool 5022, thereby completing data interaction with the user-mode network protocol stack 501.
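The packet aggregation and batch handoff attributed to the data resource pool can be sketched as a small buffering class. The batch-size threshold and flush policy below are illustrative choices, not the framework's actual flow-control parameters:

```python
from collections import deque

class DataResourcePool:
    """Shared buffer that aggregates small packets and releases them in
    batches, in the spirit of the data resource pool described above."""
    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self.pending = deque()   # packets awaiting aggregation
        self.batches = []        # what the database threads consume
    def put(self, packet):
        self.pending.append(packet)
        if len(self.pending) >= self.batch_size:
            self.flush()
    def flush(self):
        """Hand the pending packets over as one batch."""
        if self.pending:
            self.batches.append(list(self.pending))
            self.pending.clear()

pool = DataResourcePool(batch_size=4)
for i in range(10):                  # ten small packets arrive
    pool.put(f"pkt{i}".encode())
pool.flush()                         # drain the remainder

print([len(b) for b in pool.batches])   # [4, 4, 2]
```

Batching amortizes the per-handoff cost (queue wakeups, interface calls) over several packets, which is why dense small-packet OLTP traffic benefits from it; a production pool would also bound total memory to exert back-pressure (flow control).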
In the embodiment of the present application, the database multithreading architecture 503 reads the first data, based on the indication message from the database network architecture 502, through the communication control transceiving interface between the database multithreading architecture 503 and the database network architecture 502, so as to execute the service in the database corresponding to the first data (which may be referred to as a first service).
In a traditional RTC communication model, the database service and the network are in the same thread, while the user-mode network capability is an independent process and data resource in user space, so RTC in user mode is unavailable. Therefore, the acceleration framework provided by the embodiments of the present application decouples the services and the network of the database into the database network architecture 502 and the database multithreading architecture 503 described above. In addition, to cope with the high concurrency of the network and to adapt to the database multithreading architecture 503, the communication pool 5021 and the shared data resource pool 5022 are used in the database network architecture 502 to resolve the high concurrency of the user mode protocol load network and to implement the flow control and transceiving of the communication.
2. The case where the database on the computer device writes data
When the database multithreading architecture 503 has executed an upper-layer service of the database on the computer device (which may be referred to as a second service), the resulting data may be referred to as second data. The second data is sent by the database multithreading architecture 503 to the database network architecture 502 through the communication control transceiving interface between them, and a database network thread in the communication pool 5021 of the database network architecture 502 further sends the second data to the user mode network protocol stack 501.
It should be noted that, in some embodiments of the present application, in the case that the user mode network protocol stack 501 further includes a user mode process 5011 and a network protocol stack component 5012, where the user mode process 5011 and the network protocol stack component 5012 share a memory, the database network thread in the communication pool 5021 is specifically configured to send the second data to the network protocol stack component 5012, and the network protocol stack component 5012 is configured to store the second data in the shared memory, so that the network card device can read the second data from the shared memory.
It should also be noted that in other embodiments of the present application, in the case that the database network architecture 502 may further include a data sharing buffer (i.e., the data resource pool 5022), the data resource pool 5022 may be further used for storing the second data from the database multithreading architecture 503.
For the computer device where the database is located, the operating system by default uses a kernel-mode network protocol stack to receive data from the network card device. In the above embodiments of the present application, a user-mode network protocol stack is used instead of the kernel-mode network protocol stack, avoiding the system performance loss caused by context switching and memory copies. The method eliminates the mode-switching overhead of the operating system and reduces data copies from the kernel to the user process, thereby freeing memory bandwidth and CPU cycles to improve the performance of the application system and the network load performance of the database.
To further understand the acceleration framework, a specific example is provided below to describe its system structure; referring to fig. 7, fig. 7 is a system structure diagram of the acceleration framework provided in the embodiment of the present application. The postmaster is the database service main thread (one of the service threads), responsible for execution and scheduling of the whole service layer; the CommProxyLayer is the communication interface layer of the communication pool (namely, the communication control transceiving interface between the database service thread and the data resource pool 5022 in fig. 6), responsible for providing calls for the service layer; the CommClient is the dedicated network transceiving thread entity layer of the communication pool (that is, the database network architecture 502 in fig. 6; the buffer in fig. 7 is the data resource pool, the communicator is a database network thread, and a plurality of communicators form the communication pool), responsible for network message transceiving control to complete the data communication processing between the protocol stack and the database; the LtranProcess is the network thread of the user mode network protocol stack (namely, the user mode process 5011 in fig. 6), responsible for data interaction between the user mode network protocol stack and the network card device; the Physical Nic is the physical network card of the computer device.
Referring to fig. 8, fig. 8 is a schematic view of the interaction between the database multithreading architecture and the database network architecture provided in the embodiment of the present application. A communication pool (i.e., the communication pool 5021 in fig. 6) composed of dedicated network transceiving threads (i.e., database network threads) comm_proxy provides network thread control processing, simplex reception and simplex transmission, and is responsible for transceiving data with the user mode protocol stack and interacting with the database service threads (i.e., the workers in fig. 8); proxy1, proxy2, proxy3, … in fig. 8 are different database network threads in the communication pool. The data resource pool (i.e., the data resource pool 5022 in fig. 6), formed by ring buffers in the data buffer area, is responsible for caching network and service communication data; it implements data transceiving control and read/write under highly concurrent database services through atomic operations, and the whole buffer can dynamically expand capacity, control data flow, and process messages in batches.
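As an illustration of the ring buffers just described, the data resource pool can be approximated by a single-producer/single-consumer ring queue driven purely by atomic operations; the sketch below is a minimal C11 version under that assumption (the names `ring_push`/`ring_pop` and the fixed capacity are illustrative, not taken from the actual implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define RING_CAP 8  /* power of two so index wrap is a cheap mask */

/* One ring per (network thread, service thread) pair: a single producer
 * and a single consumer, so lock-free atomic loads/stores suffice. */
typedef struct {
    void *slots[RING_CAP];
    _Atomic size_t head;  /* next slot the consumer will read  */
    _Atomic size_t tail;  /* next slot the producer will write */
} ring_t;

static int ring_push(ring_t *r, void *msg) {
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_CAP)
        return -1;                       /* full: caller applies flow control */
    r->slots[tail & (RING_CAP - 1)] = msg;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

static void *ring_pop(ring_t *r) {
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return NULL;                     /* empty: nothing deposited yet */
    void *msg = r->slots[head & (RING_CAP - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return msg;
}
```

A full return value from `ring_push` is the natural hook for the dynamic flow control and capacity expansion mentioned above.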
Based on the acceleration framework, the following describes the acceleration method for the network load performance of the database provided in the embodiment of the present application, and specifically please refer to fig. 9, where fig. 9 is a schematic flowchart of the acceleration method for the network load performance of the database provided in the embodiment of the present application, which specifically includes the following steps:
901. the computer equipment acquires initial data from the network card equipment through a user mode network protocol stack, and analyzes the initial data through a TCP/IP protocol stack to obtain first data.
When the opposite-end device sends data (which may be called initial data) to the computer device through the network card device, the computer device receives the initial data sent by the opposite-end device from the network card device through the user-mode network protocol stack, and further analyzes the initial data through the TCP/IP protocol stack in the user-mode network protocol stack to obtain the first data.
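To make the parsing step concrete, the following minimal sketch walks the IPv4 and TCP headers of a raw packet and returns the application payload; it is a simplified illustration only (checksum verification, IP options handling, and segment reassembly that a real user-mode TCP/IP stack performs are omitted, and the function name is hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal IPv4 + TCP header walk: given a raw IPv4 packet, return a
 * pointer to the TCP payload and its length, or NULL if malformed. */
static const uint8_t *tcp_payload(const uint8_t *pkt, size_t len,
                                  size_t *out_len) {
    if (len < 20) return NULL;
    if ((pkt[0] >> 4) != 4) return NULL;           /* IPv4 only        */
    size_t ihl = (size_t)(pkt[0] & 0x0F) * 4;      /* IP header length */
    size_t tot = ((size_t)pkt[2] << 8) | pkt[3];   /* total length     */
    if (ihl < 20 || tot > len || ihl + 20 > tot) return NULL;
    if (pkt[9] != 6) return NULL;                  /* protocol = TCP   */
    const uint8_t *tcp = pkt + ihl;
    size_t doff = (size_t)(tcp[12] >> 4) * 4;      /* TCP data offset  */
    if (doff < 20 || ihl + doff > tot) return NULL;
    *out_len = tot - ihl - doff;
    return tcp + doff;
}
```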
It should be noted that, in some embodiments of the present application, the user mode network protocol stack may be configured by a user mode network configuration module deployed on the computer device by creating a daemon process. Specifically, the database on the computer device is first responsible for enabling the user-mode network configuration in the installation and startup phase; for example, the created daemon process performs at least one of the following configuration operations: setting the data plane development kit (DPDK) user mode driver, setting the huge-page memory, setting timing tasks, setting the kernel virtual network card KNI, and setting the control authority of the user mode components. This realizes automatic deployment of the user mode network protocol stack and high availability of the user mode network. It should be noted that, in the embodiment of the present application, the specific configuration operations executed by the daemon process may refer to the operation process of the user mode network configuration module 504, which is not described herein again.
It should also be noted that, in some embodiments of the present application, the user-mode network protocol stack may further include a user-mode process (also referred to as the Ltran process) and a network protocol stack component (also referred to as the dynamic library lstack). The user mode process and the network protocol stack component share a memory, and message interaction between them is carried out through this shared memory; the interaction includes the TCP/IP protocol stack data parsing process. Specifically, the computer device receives the initial data sent by the network card device through the user mode process and stores it in the memory shared by the user mode process and the network protocol stack component; the computer device then parses the initial data in the shared memory through the network protocol stack component based on the TCP/IP protocol stack to obtain the first data (i.e., the initial data is parsed into a packet format that the computer device can recognize), and the first data is still stored in the shared memory.
It should be noted that, in the embodiment of the present application, after the computer device completes configuration of the user-mode network protocol stack, for example, after the user-mode network configuration module deployed by the computer device completes configuration of the user-mode network protocol stack, a communication pool (the communication pool is composed of at least one database network thread) in the database network architecture needs to be further created and initialized, and a communication control transceiving interface required between the database network architecture and the database multithreading architecture needs to be initialized.
902. The computer device obtains first data from a user mode network protocol stack through at least one database network thread and instructs a database multithreading architecture to read the first data from the database network architecture, wherein the database network thread belongs to the database network architecture, and the database multithreading architecture comprises at least one database service thread.
After the user-mode network protocol stack receives the initial data and parses it to obtain the first data, the computer device may obtain the first data from the user-mode network protocol stack through a database network thread in the communication pool (the communication pool is composed of at least one database network thread, each responsible for message control processing and message transceiving processing). For example, a database network thread in the communication pool may obtain the first data from the user-mode network protocol stack in a polling mode (or in another mode, such as a periodic check or a wakeup check), and then instruct a database service thread in the database multithreading architecture (which includes at least one database service thread) to read the first data from the database network architecture; the communication pool belongs to the database network architecture, that is, the database network thread belongs to the database network architecture.
As an example, the database may start a back-end listening process and a background service thread (both belonging to different types of processes in the database service thread) required by its own service, implement high-concurrency communication event listening based on multi-path I/O multiplexing, and perform data interaction with a data resource pool in the database network architecture (e.g., read first data from the database network architecture) by invoking a communication control transceiving interface provided by the communication pool.
It should be noted that, in some embodiments of the present application, the database network architecture may further include a data sharing buffer in addition to the communication pool, where the data sharing buffer may be referred to as a data resource pool, that is, the data resource pooling is implemented by creating the data sharing buffer. The data resource pool is responsible for carrying out packet aggregation and/or batch receiving and sending on data of the user mode network protocol stack so as to realize dynamic flow control and capacity expansion. Specifically, after the computer device obtains the first data from the user mode network protocol stack through the communication pool, for example, the database network thread in the communication pool may obtain the first data from the user mode network protocol stack based on a polling mode (or another mode, such as a periodic check, a wakeup check, or the like), and then deposit the first data in the data resource pool.
It should be noted here that, in the case that the database network architecture further includes a data resource pool, then after the computer device completes configuration of the user-mode network protocol stack, for example, after the user-mode network configuration module deployed by the computer device completes configuration of the user-mode network protocol stack, in addition to creating and initializing the communication pool in the database network architecture, the data resource pool in the database network architecture needs to be created and initialized, and a communication control transceiving interface needed between the database network architecture and the database multithreading architecture needs to be initialized.
903. The computer equipment reads the first data through a communication control transceiving interface between the database multithreading architecture and the database network architecture through the database multithreading architecture, and executes a first task corresponding to the first data in the database according to the first data.
And finally, the computer equipment reads the first data through the communication control transceiving interface between the database multithreading architecture and the database network architecture through the database multithreading architecture, and executes a first task corresponding to the first data in the database according to the first data.
It should be noted that, in some embodiments of the present application, the database on the computer device may have a case of writing data in addition to a case of reading data (i.e., receiving data from the network card device). Therefore, the method of the embodiment of the present application may further include: after the database multithreading architecture executes the upper-layer service (which may be referred to as a second service) of the database on the computer device, the obtained data may be referred to as second data, and then, the computer device may send the second data to the database network architecture through the database multithreading architecture via a communication control transceiving interface between the database multithreading architecture and the database network architecture, and further send the second data to the user mode network protocol stack through a database network thread in a communication pool in the database network architecture.
It should be noted that, in some embodiments of the present application, in a case that the user mode network protocol stack further includes a user mode process and a network protocol stack component, and the user mode process and the network protocol stack component share a memory, a manner that the computer device sends the second data to the user mode network protocol stack through the database network thread may specifically be: the computer device sends the second data to the network protocol stack component through the database network thread, and then the network protocol stack component stores the received second data in the shared memory, so that the network card device can read the second data from the shared memory.
It should be further noted that, in other embodiments of the present application, in a case that the database network architecture may further include a data sharing buffer (i.e., a data resource pool), after the computer device sends the second data to the database network architecture through the database multithreading architecture via the communication control transceiver interface, the acceleration method further includes: the computer device stores the second data in a data resource pool through the database network thread.
For convenience of understanding, the following takes as an example that the computer device includes a user mode network protocol stack, a database network architecture, a database multithreading architecture, and a user mode network configuration module, where the database network architecture includes a communication pool and a data resource pool, and the user mode network protocol stack includes a user mode process and a network protocol stack component, and summarizes the implementation steps of the acceleration method for the database network load performance described in the foregoing embodiment. Referring to fig. 10, fig. 10 is a core implementation flowchart of the acceleration method for the database network load performance provided in the embodiment of the present application, which may specifically include the following core steps:
Step 1. In the installation and startup phase of the database on the computer device, the user mode network configuration module is first responsible for enabling the user mode network configuration. By creating a daemon process, it performs the automatic deployment of the user mode network protocol stack, such as DPDK takeover, driver loading, huge-page memory configuration, and user mode process startup, realizing high availability of the user mode network.
Step 2. After the database completes the configuration of the user mode network protocol stack through the user mode network configuration module, the communication pool and the data resource pool in the database network architecture are created and initialized, and the communication control transceiving interface required by the upper-layer service application (namely, the database service threads in the database multithreading architecture) is initialized.
Step 3. The database then starts the back-end listening process and the background service threads required by its own services (both belong to the processes in the database service threads). High-concurrency communication event listening is implemented based on multi-path I/O multiplexing, and data interaction is started by calling the communication control transceiving interface provided by the communication pool.
Step 4. The upper-layer service calls the communication pool control interface; specifically, each database network thread in the communication pool is responsible for message control processing and message transceiving processing, and the database network threads store the data of the user mode network protocol stack in the data resource pool based on a polling mode.
Step 5. The data resource pool is responsible for packet aggregation and batch transceiving of the data of the network protocol stack, so as to realize dynamic flow control and capacity expansion.
Step 6. When an upper-layer database service thread senses a communication event, the database service thread reads data from the data resource pool using the simplex-receive blocking interface, or writes data into the data resource pool using the simplex-send asynchronous interface, thereby completing the whole data interaction process between the service layer and the communication layer.
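The simplex-receive blocking interface and simplex-send asynchronous interface mentioned in step 6 can be sketched with a mutex/condition-variable pair: the service thread blocks in receive until data has been deposited, while send merely deposits the data and returns at once. All names below are illustrative, not the patent's actual API:

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

/* One per-session buffer shared between a network proxy thread (the
 * asynchronous sender) and a database service thread (the blocking
 * receiver). */
typedef struct {
    pthread_mutex_t mu;
    pthread_cond_t  cv;
    char data[256];
    int  ready;
} session_buf_t;

void session_buf_init(session_buf_t *b) {
    pthread_mutex_init(&b->mu, NULL);
    pthread_cond_init(&b->cv, NULL);
    b->ready = 0;
}

/* Asynchronous simplex send: deposit and wake the reader, then return. */
void comm_send_async(session_buf_t *b, const char *msg) {
    pthread_mutex_lock(&b->mu);
    strncpy(b->data, msg, sizeof b->data - 1);
    b->data[sizeof b->data - 1] = '\0';
    b->ready = 1;
    pthread_cond_signal(&b->cv);
    pthread_mutex_unlock(&b->mu);
}

/* Blocking simplex receive: sleep until data has been deposited. */
void comm_recv_block(session_buf_t *b, char *out, size_t n) {
    pthread_mutex_lock(&b->mu);
    while (!b->ready)
        pthread_cond_wait(&b->cv, &b->mu);
    strncpy(out, b->data, n - 1);
    out[n - 1] = '\0';
    b->ready = 0;
    pthread_mutex_unlock(&b->mu);
}
```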
Referring to fig. 11, fig. 11 is a flowchart of an implementation of the acceleration method for the network load performance of the database based on the acceleration framework provided above, where the acceleration framework mainly implements the following steps:
Step 1. In the service calls of the database, comm_XXX interfaces replace the original call interfaces; for example, comm_recv replaces recv, comm_send replaces send, and comm_socket replaces socket. Externally, only the change at the interface-calling layer is perceived.
Step 2. Socket request creation: comm_proxy_socket creates a server fd for each network proxy thread, realizing a logical mapping of the user mode network protocol stack fd. To cope with the limitation that an fd cannot cross threads while still allowing all network threads to listen for new connections, the server fd listening established in the embodiment of the present application uses the REUSEPORT form, and each network thread entity performs its own listen/bind on the server fd address.
Step 3. fd broadcast: the server fd is broadcast to each network proxy thread.
Step 4. Event-driven model based on I/O multiplexing: the epoll multi-path I/O model is adopted to implement event processing of protocol stack data.
Step 5. The database network thread processes the control messages of the fds: for control calls such as accept, socket, poll, epoll_wait/epoll_ctl, …, the fds of all service sessions of the socket communication control transceiving interface are created and modified here, so that fds are processed without crossing threads.
Step 6. Data reception: a simplex receiving mode is implemented; all data fds are added to the epoll fd of the network proxy thread, data requests of the user mode network protocol stack are received in polling mode, and data from the user mode network protocol stack is put into the recv buffer.
Step 7. Data transmission: when a service session needs to send data, the corresponding data is added to the send buffer corresponding to the fd; the network proxy thread uniformly processes the monitored data of all fds, and finally packages and sends the data as required.
In the foregoing embodiments of the present application, compared with prior-art solutions, the acceleration framework and acceleration method provided in the embodiments of the present application use a user mode network protocol stack instead of the kernel mode network protocol stack, thereby implementing an operating system bypass and improving system performance. In addition, the communication resource pooling technique realizes a communication pool in which fds do not cross threads, batch message processing, and ring-buffer data reads/writes, effectively reducing the performance loss caused by switching between database service threads and database network threads.
To sum up, the acceleration framework for database network load performance provided in the embodiment of the present application specifically includes a user mode network protocol stack, a database network architecture, and a database multithreading architecture; in some embodiments, the acceleration framework may further include a user mode network configuration module configured to automatically configure the user mode network protocol stack by creating a daemon process. In the case that the database on the computer device reads data, the network card device receives the initial data sent by the opposite-end device and sends it to the user mode process of the acceleration framework; the user mode process puts the initial data into the memory shared with the network protocol stack component, the network protocol stack component parses the initial data into a format the server can recognize, and the first data obtained after parsing is also placed in the shared memory. A database network thread in the communication pool takes the first data out of the shared memory, puts it into the data resource pool in a polling manner (other manners are possible and are not limited herein), and then notifies the corresponding database service thread in the database multithreading architecture to read the first data; the corresponding database service thread reads the first data and executes the first task corresponding to it. The communication pool may or may not be notified when the first task is completed, which is not limited herein.
In the case that the database on the computer device writes data, a service thread in the database multithreading architecture directly puts the data obtained after executing a task (i.e., the second data) into the data resource pool in the database network architecture, and a database network thread in the communication pool puts the second data from the data resource pool into the memory shared by the user-mode process and the network protocol stack component, so that the network card device can read the second data from the shared memory.
On the basis of the embodiments corresponding to fig. 9 to 11, in order to better implement the above-mentioned scheme of the embodiments of the present application, related equipment for implementing the scheme is provided below. Referring to fig. 12, fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application, which may specifically include: an analysis module 1201, an obtaining module 1202, and a read/write module 1203, where the analysis module 1201 is configured to acquire initial data from the network card device through the user mode network protocol stack, and analyze the initial data through the TCP/IP protocol stack to obtain first data; the obtaining module 1202 is configured to obtain the first data from the user mode network protocol stack through at least one database network thread, and instruct the database multithreading architecture to read the first data from the database network architecture, where the database network thread belongs to the database network architecture, and the database multithreading architecture includes at least one database service thread; and the read/write module 1203 is configured to read the first data through the communication control transceiving interface between the database multithreading architecture and the database network architecture, and execute a first task corresponding to the first data in the database according to the first data.
In one possible design, the user-mode network protocol stack is configured by a user-mode network configuration module deployed on the computer device by creating a daemon process.
In one possible design, the daemon process performs at least one of the following configuration operations: setting a DPDK user mode driver of a data plane development suite, setting a large page memory, setting a timing task, setting a kernel virtual network card KNI and setting the control authority of a user mode assembly.
In one possible design, the read/write module 1203 is further configured to: sending second data to the database network architecture through the communication control transceiving interface through the database multithreading architecture, wherein the second data is obtained after the at least one service thread executes a second service in the database; and sending the second data to the user mode network protocol stack through the at least one database network thread.
In a possible design, the user mode network protocol stack includes a user mode process and a network protocol stack component, where the user mode process and the network protocol stack component share a memory, and the parsing module 1201 is specifically configured to: receiving initial data sent by a network card device through the user mode process, and storing the initial data in the memory; and analyzing the initial data in the memory through the network protocol stack component based on a TCP/IP protocol stack to obtain first data, wherein the first data is stored in the memory.
In a possible design, the user mode network protocol stack includes a user mode process and a network protocol stack component, where the user mode process and the network protocol stack component share a memory, and the read-write module 1203 is specifically configured to send the second data to the network protocol stack component through the at least one database network thread; the parsing module 1201 is further specifically configured to store the second data in the memory through the network protocol stack component.
In a possible design, the database network architecture further includes a data sharing buffer, and the obtaining module 1202 is specifically configured to: after the first data is acquired from the user mode network protocol stack by at least one database network thread, the first data is stored in the data sharing buffer by the at least one database network thread.
In a possible design, the database network architecture further includes a data sharing buffer, and the obtaining module 1202 is specifically configured to: store, by the at least one database network thread, the second data in the data sharing buffer.
It should be noted that, the contents of information interaction, execution process, and the like between the modules/units in the computer device 1200 described in the embodiment corresponding to fig. 12 are based on the same concept as the method embodiments corresponding to fig. 9 to fig. 11 in the present application, and specific contents may refer to the operation process and description in the foregoing method embodiments of the present application, and are not described again here.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a computer device provided in the embodiment of the present application; the modules described in the embodiment corresponding to fig. 12 may be deployed on the computer device 1300 to implement the functions of the computer device 1200 in the embodiment corresponding to fig. 12. The computer device 1300 is implemented by one or more servers, and may vary widely in configuration or performance; it may include one or more central processing units (CPUs) 1322 (e.g., one or more processors) and memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing applications 1342 or data 1344. The memory 1332 and the storage medium 1330 may be transitory or persistent storage. The program stored on the storage medium 1330 may include one or more modules (not shown), each of which may include a series of instruction operations on the computer device 1300. Further, the central processor 1322 may be configured to communicate with the storage medium 1330, so that the series of instruction operations in the storage medium 1330 are executed on the computer device 1300.
The computer device 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
In this embodiment, the computer device 1300 may be configured to perform the steps performed by the computer device in the embodiments corresponding to fig. 9 to 11. For example, the central processor 1322 may be configured to: when the opposite-end device sends data (which may be called initial data) to the computer device through the network card device, receive the initial data from the network card device through the user mode network protocol stack, and parse the initial data through the TCP/IP protocol stack in the user mode network protocol stack to obtain first data. After the user-mode network protocol stack receives and parses the initial data to obtain the first data, the first data is obtained from the user-mode network protocol stack through a database network thread in the communication pool (the communication pool is composed of at least one database network thread, each responsible for message control processing and message transceiving processing); for example, a database network thread in the communication pool may obtain the first data in a polling mode (or in another mode, such as a periodic check or a wakeup check), and then instruct a database service thread in the database multithreading architecture (which includes at least one database service thread) to read the first data from the database network architecture, where the communication pool belongs to the database network architecture, that is, the database network thread belongs to the database network architecture.
And finally, reading the first data through a communication control transceiving interface between the database multithreading architecture and the database network architecture through the database multithreading architecture, and executing a first task corresponding to the first data in the database according to the first data.
Central processor 1322 is configured to perform any one of the steps performed by the computer device in the embodiments corresponding to fig. 9-11. For details, reference may be made to the description of the method embodiments described above in the present application, and details are not described herein.
In an embodiment of the present application, a computer-readable storage medium is further provided, in which a program for signal processing is stored, and when the program runs on a computer, the computer is caused to execute the steps performed by the computer device in the description of the foregoing illustrated embodiment.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, which may be specifically implemented as one or more communication buses or signal lines.
From the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and certainly also by special-purpose hardware including application-specific integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components, and the like. In general, any function performed by a computer program can also be implemented by corresponding hardware, and the specific hardware structure implementing the same function can take many forms, such as an analog circuit, a digital circuit, or a dedicated circuit. For the present application, however, a software implementation is usually preferable. Based on this understanding, the technical solutions of the present application, or the portions that contribute over the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, and includes several instructions for enabling a computer device (which may be a personal computer, a training device, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, wholly or partly, of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, training device, or data center to another by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line) or wireless means (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a training device or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid-state drive (SSD)), among others.

Claims (22)

1. An acceleration framework for database network load performance, the framework being deployed on a computer device on which an operating system and a database are deployed, the framework comprising:
the system comprises a user mode network protocol stack, a database network architecture and a database multithreading architecture, wherein the database network architecture comprises at least one database network thread, the database multithreading architecture comprises at least one database service thread, and the database multithreading architecture is connected with the database network architecture through a communication control transceiving interface;
the user mode network protocol stack is used for receiving initial data sent by the network card equipment and analyzing the initial data through the TCP/IP protocol stack to obtain first data;
the at least one database network thread is used for acquiring the first data and instructing the database multithreading architecture to read the first data from the database network architecture;
the database multithreading architecture is used for reading the first data through the communication control transceiving interface so as to execute a first service corresponding to the first data in the database.
2. The framework of claim 1, further comprising:
and the user mode network configuration module is used for configuring the user mode network protocol stack by creating a daemon process.
3. The framework of claim 2, wherein the user-mode network configuration module is specifically configured to perform at least one of the following configuration operations:
setting a Data Plane Development Kit (DPDK) user-mode driver, setting hugepage memory, setting a timed task, setting a kernel virtual NIC (KNI, Kernel NIC Interface), and setting the control permissions of user-mode components.
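The daemon configuration operations listed above map closely onto standard DPDK host setup. The sketch below only generates the commands such a daemon might issue, it does not execute them; the command names and sysfs paths follow common DPDK usage and are assumptions for illustration, not the patent's actual procedure:

```python
def dpdk_setup_commands(pci_addr: str, hugepages_2m: int = 1024):
    """Return the shell commands a configuration daemon might issue to
    prepare the user-mode stack (illustrative; paths and flags follow
    common DPDK practice, not the patent text)."""
    return [
        # hugepage memory for the user-mode stack's packet buffers
        f"echo {hugepages_2m} > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages",
        "mount -t hugetlbfs nodev /mnt/huge",
        # load the KNI kernel module (kernel virtual NIC)
        "insmod rte_kni.ko",
        # rebind the NIC from the kernel driver to a DPDK user-mode driver
        f"dpdk-devbind.py --bind=vfio-pci {pci_addr}",
    ]

for cmd in dpdk_setup_commands("0000:02:00.0"):
    print(cmd)
```

A real daemon would also handle the timed tasks and permission settings named in the claim, which depend on the deployment and are omitted here.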
4. The framework of any of claims 1-3, wherein the database multithreading architecture is further to:
sending second data to the database network architecture through the communication control transceiving interface, wherein the second data is obtained after the at least one service thread executes a second service in the database;
the at least one database network thread is further configured to send the second data to the user mode network protocol stack.
5. The framework of any of claims 1-4, wherein the user mode network protocol stack comprises:
the system comprises a user mode process and a network protocol stack component, wherein the user mode process and the network protocol stack component share a memory;
the user mode network protocol stack is specifically configured to:
receiving initial data sent by a network card device through the user mode process, and storing the initial data in the memory;
and analyzing the initial data in the memory through the network protocol stack component based on a TCP/IP protocol stack to obtain first data, wherein the first data is stored in the memory.
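The arrangement in claim 5 — a memory region shared between the user-mode process and the protocol-stack component, with the initial data parsed in place so the first data never leaves that region — can be modeled minimally as follows. The 1-byte length header and the function names are invented for the sketch; a real implementation would use hugepage-backed shared memory and genuine TCP/IP parsing:

```python
# One region shared by the "user-mode process" (producer) and the
# "network protocol stack component" (parser): parsing happens in place,
# so the first data is available without copying it out of the region.
region = bytearray(64)            # stands in for the shared (hugepage) memory
view = memoryview(region)

def receive_into_shared(view, payload: bytes):
    """User-mode process side: store the initial data in the shared memory."""
    view[0] = len(payload)                 # 1-byte length header (invented)
    view[1:1 + len(payload)] = payload

def parse_in_shared(view):
    """Protocol-stack component side: parse in place; the returned slice
    is a zero-copy window onto the same shared region ("first data")."""
    length = view[0]
    return view[1:1 + length]

receive_into_shared(view, b"initial-data")
first = parse_in_shared(view)
print(bytes(first))   # the first data, still backed by the shared region
```

Because `first` is a memoryview slice, any write to the underlying region is visible through it, which is the property the shared-memory design relies on to avoid copies between the two components.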
6. The framework of claim 4, wherein the user mode network protocol stack comprises:
the system comprises a user mode process and a network protocol stack component, wherein the user mode process and the network protocol stack component share a memory;
the at least one database network thread is specifically configured to send the second data to the network protocol stack component;
the user mode network protocol stack is specifically configured to store the second data in the memory through the network protocol stack component.
7. The framework of any of claims 1-6, wherein the database network architecture further comprises:
and the data sharing buffer is used for storing the first data from the user mode network protocol stack.
8. The framework of claim 4 or 6, wherein the database network architecture further comprises:
a data-sharing buffer to store the second data from the database multithreading architecture.
9. The framework according to any of claims 1-8, wherein the at least one database network thread is specifically configured to:
acquiring the first data and storing the first data in the data sharing buffer;
and instructing the database multithreading architecture to read the first data from the data sharing buffer.
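A minimal sketch of the data sharing buffer of claims 7 to 9, assuming a single-producer/single-consumer ring shared between one network thread and one service thread; the class, its capacity, and the method names are hypothetical:

```python
class DataSharingBuffer:
    """Illustrative single-producer/single-consumer ring buffer standing
    in for the data sharing buffer (not the patent's actual structure)."""
    def __init__(self, capacity: int = 8):
        self.slots = [None] * capacity
        self.head = 0   # next slot the service thread reads
        self.tail = 0   # next slot the network thread writes

    def put(self, item) -> bool:
        """Network-thread side: store first data; False if the ring is full."""
        if self.tail - self.head == len(self.slots):
            return False
        self.slots[self.tail % len(self.slots)] = item
        self.tail += 1
        return True

    def get(self):
        """Service-thread side: read first data; None if the ring is empty."""
        if self.head == self.tail:
            return None
        item = self.slots[self.head % len(self.slots)]
        self.head += 1
        return item

buf = DataSharingBuffer(capacity=2)
buf.put(b"query-1")
buf.put(b"query-2")
print(buf.put(b"query-3"))   # False: buffer full
print(buf.get())             # b'query-1'
```

With one producer and one consumer, `head` and `tail` are each written by only one side, which is why ring buffers of this shape are a common choice for handing packets from network threads to worker threads without locks (a production version would still need memory-ordering guarantees).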
10. A method for accelerating the network load performance of a database is characterized by comprising the following steps:
the method comprises the steps that a computer device obtains initial data from a network card device through a user mode network protocol stack, and analyzes the initial data through a TCP/IP protocol stack to obtain first data;
the computer device acquires the first data from the user mode network protocol stack through at least one database network thread and instructs a database multithreading architecture to read the first data from the database network architecture, wherein the database network thread belongs to the database network architecture, and the database multithreading architecture comprises at least one database service thread;
and the computer device reads the first data through the database multithreading architecture via a communication control transceiving interface between the database multithreading architecture and the database network architecture, and executes a first task corresponding to the first data in a database according to the first data.
11. The method of claim 10, wherein the user-mode network protocol stack is configured by a user-mode network configuration module deployed on the computer device by creating a daemon process.
12. The method of claim 11, wherein the daemon process performs at least one of the following configuration operations:
setting a Data Plane Development Kit (DPDK) user-mode driver, setting hugepage memory, setting a timed task, setting a kernel virtual NIC (KNI, Kernel NIC Interface), and setting the control permissions of user-mode components.
13. The method according to any one of claims 10-12, further comprising:
the computer device sends second data to the database network architecture through the database multithreading architecture via the communication control transceiving interface, where the second data is obtained after the at least one service thread executes a second service in the database;
the computer device sends the second data to the user mode network protocol stack through the at least one database network thread.
14. The method according to any one of claims 10 to 13, wherein the user mode network protocol stack includes a user mode process and a network protocol stack component, the user mode process and the network protocol stack component share a memory, and the obtaining, by the computer device, initial data from a network card device through the user mode network protocol stack and parsing the initial data through a TCP/IP protocol stack to obtain first data includes:
the computer device receives, through the user mode process, initial data sent by a network card device and stores the initial data in the memory;
and the computer device parses the initial data in the memory through the network protocol stack component based on a TCP/IP protocol stack to obtain first data, where the first data is stored in the memory.
15. The method of claim 13, wherein the user-mode network protocol stack comprises a user-mode process and a network protocol stack component, wherein the user-mode process and the network protocol stack component share a memory, and wherein sending, by the computer device, the second data to the user-mode network protocol stack via the at least one database network thread comprises:
the computer device sending the second data to the network protocol stack component via the at least one database network thread;
and the computer device stores the second data in the memory through the network protocol stack component.
16. The method according to any of claims 10-15, wherein the database network architecture further comprises a data sharing buffer, and wherein after the computer device retrieves the first data from the user mode network protocol stack via at least one database network thread, the method further comprises:
the computer device stores the first data in the data sharing buffer through the at least one database network thread.
17. The method of claim 13 or 15, wherein the database network architecture further comprises a data sharing buffer, and after the computer device sends second data to the database network architecture through the database multithreading architecture via the communication control transceiving interface, the method further comprises:
the computer device stores the second data in the data sharing buffer through the at least one database network thread.
18. A computer device having a function of implementing the method of any one of claims 10-17, wherein the function is implemented by hardware or by hardware executing corresponding software, and the hardware or the software comprises one or more modules corresponding to the function.
19. A computer device comprising a processor and a memory, the processor coupled with the memory,
the memory is used for storing programs;
the processor is configured to execute the program in the memory, so that the computer device performs the method of any one of claims 10-17.
20. A computer-readable storage medium comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 10-17.
21. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 10-17.
22. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface to perform the method of any one of claims 10-17.
CN202111136877.XA 2021-09-27 2021-09-27 Acceleration framework, acceleration method and equipment for database network load performance Pending CN115878301A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111136877.XA CN115878301A (en) 2021-09-27 2021-09-27 Acceleration framework, acceleration method and equipment for database network load performance
PCT/CN2022/121232 WO2023046141A1 (en) 2021-09-27 2022-09-26 Acceleration framework and acceleration method for database network load performance, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136877.XA CN115878301A (en) 2021-09-27 2021-09-27 Acceleration framework, acceleration method and equipment for database network load performance

Publications (1)

Publication Number Publication Date
CN115878301A true CN115878301A (en) 2023-03-31

Family

ID=85720121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136877.XA Pending CN115878301A (en) 2021-09-27 2021-09-27 Acceleration framework, acceleration method and equipment for database network load performance

Country Status (2)

Country Link
CN (1) CN115878301A (en)
WO (1) WO2023046141A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116599917B (en) * 2023-05-31 2024-03-01 中科驭数(北京)科技有限公司 Network port determining method, device, equipment and storage medium
CN116781650B (en) * 2023-07-11 2024-03-19 中科驭数(北京)科技有限公司 Data processing method and system
CN117076542B (en) * 2023-08-29 2024-06-07 中国中金财富证券有限公司 Data processing method and related device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897278B (en) * 2015-12-17 2020-10-30 阿里巴巴集团控股有限公司 Data read-write processing method and device for key value database
US11100058B2 (en) * 2017-09-06 2021-08-24 Oracle International Corporation System and method for connection concentration in a database environment
CN110602154A (en) * 2018-06-13 2019-12-20 网宿科技股份有限公司 WEB server and method for processing data message thereof
CN113296974B (en) * 2020-08-31 2022-04-26 阿里巴巴集团控股有限公司 Database access method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2023046141A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
Jose et al. Memcached design on high performance RDMA capable interconnects
US20190377604A1 (en) Scalable function as a service platform
US9535863B2 (en) System and method for supporting message pre-processing in a distributed data grid cluster
US10572553B2 (en) Systems and methods for remote access to DB2 databases
CN115878301A (en) Acceleration framework, acceleration method and equipment for database network load performance
US20080263554A1 (en) Method and System for Scheduling User-Level I/O Threads
CN111459418B (en) RDMA (remote direct memory Access) -based key value storage system transmission method
US8874638B2 (en) Interactive analytics processing
US9218226B2 (en) System and methods for remote access to IMS databases
Zhang et al. Compucache: Remote computable caching using spot vms
CN113641410A (en) Netty-based high-performance gateway system processing method and system
US20240179092A1 (en) Traffic service threads for large pools of network addresses
Li et al. HatRPC: Hint-accelerated thrift RPC over RDMA
Sun et al. SKV: A SmartNIC-Offloaded Distributed Key-Value Store
CN106131162A (en) A kind of method realizing network service agent based on IOCP mechanism
CN116954944A (en) Distributed data stream processing method, device and equipment based on memory grid
CN113923212B (en) Network data packet processing method and device
US7320044B1 (en) System, method, and computer program product for interrupt scheduling in processing communication
Rosa et al. INSANE: A Unified Middleware for QoS-aware Network Acceleration in Edge Cloud Computing
US10788987B2 (en) Data storage system employing service infrastructure for functional modules
Argyroulis Recent Advancements In Distributed System Communications
Shi CoAP infrastructure for IoT
CN116662008A (en) Heterogeneous hardware unified nano-tube scheduling node controller
CN118250331A (en) Communication method of asynchronous network input/output assembly
Liu et al. FUYAO: DPU-enabled Direct Data Transfer for Serverless Computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination