CN114640716A - Cloud network cache acceleration system and method based on fast network path - Google Patents

Cloud network cache acceleration system and method based on fast network path

Info

Publication number
CN114640716A
CN114640716A
Authority
CN
China
Prior art keywords
cache
data packet
processing module
hash value
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210506406.1A
Other languages
Chinese (zh)
Inventor
杨林
冯涛
王雯
张京京
高先明
陶沛琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Network Engineering Institute of Systems Engineering Academy of Military Sciences
Original Assignee
Institute of Network Engineering Institute of Systems Engineering Academy of Military Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Network Engineering Institute of Systems Engineering Academy of Military Sciences filed Critical Institute of Network Engineering Institute of Systems Engineering Academy of Military Sciences
Priority to CN202210506406.1A priority Critical patent/CN114640716A/en
Publication of CN114640716A publication Critical patent/CN114640716A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0823Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/083Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a cloud network cache acceleration system and method based on a fast network path, belonging to the technical field of data processing. The system comprises a host network card, a cache acceleration entry path processing module, a kernel network protocol stack, a cache application, and a cache acceleration exit path processing module. The entry path processing module comprises an XDP hook at the host network card driver layer, and the exit path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are attached to the XDP hook and the TC hook respectively to store, look up, and update cache information inside the kernel. Built on the Linux fast network path XDP, the system implements a cache layer at the network card driver layer, so that a user's request can be processed by the CPU and answered at the earliest possible point.

Description

Cloud network cache acceleration system and method based on fast network path
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a cloud network cache acceleration system and method based on a fast network path.
Background
With the rapid growth of network services such as e-commerce, web search, and social networking, more and more data is created and consumed by network applications. In large-scale network service and cloud provider scenarios, modern databases must handle enormous volumes of requests and therefore place high demands on throughput and latency. Disk-based databases, however, are too slow to keep pace with ever-faster networks, and high-speed in-memory cache applications have emerged to meet the demand for high throughput and low latency. Yet as cloud data center network speeds continue to rise, even in-memory caches begin to hit performance bottlenecks. The requests of these cloud network caching applications are handled by the Linux network stack, which, because of its generality, is inefficient and slow when processing large numbers of cache requests, so the applications suffer from the latency and throughput penalties introduced by the stack.
Existing solutions rely primarily on kernel bypass, hardware acceleration, and optimization of existing applications. Kernel bypass improves cloud network cache performance by moving packet processing into a user-space network stack; unlike the general-purpose Linux kernel, it focuses solely on high-performance packet processing for network applications. By giving user space direct access to the underlying hardware, applications can perform packet I/O without entering the kernel. Hardware acceleration implements the cache on dedicated hardware such as FPGAs, programmable switches, or ASICs. Alternatively, overall performance can be improved by directly optimizing the application itself, for example through multi-core processing.
At present, hardware acceleration schemes require purchasing dedicated hardware; the cost is generally high, development is inflexible with long cycles, and such schemes are ill-suited to existing data centers. Kernel bypass can achieve higher performance at lower cost, but it has inherent disadvantages. First, replacing the Linux network stack with a user-space stack requires redesigning the cloud network cache application to fit the dedicated stack, which often depends on third-party kernel modules and network drivers. Second, abandoning Linux network processing means abandoning the security mechanisms provided by the Linux kernel, such as iptables and memory isolation; reproducing them requires additional hardware- or software-based security components, which increases system complexity and hurts maintainability. Third, kernel bypass incurs high CPU overhead even at low system load in order to sustain high packet-processing rates; in the post-Moore era such a cost is hard to justify. A more recently proposed approach accelerates the cache application at the lowest layer of Linux network processing using the kernel's eBPF (extended Berkeley Packet Filter) technology; however, that method applies only to the Memcached application, supports only its UDP-based protocol, and lacks generality.
In the prior art, kernel bypass technology such as DPDK (Data Plane Development Kit) provides a development platform and interfaces for fast packet processing. The traditional way of handling packets is CPU interrupts: the network card driver receives a packet, notifies the CPU via an interrupt, and the CPU then copies the data and hands it to the protocol stack. Under heavy traffic this generates a large number of interrupts, preventing the CPU from running other programs. DPDK instead processes packets by polling: it replaces the network card driver so that, on packet arrival, the driver does not interrupt the CPU but places the packet into memory via zero-copy, after which the application layer reads it directly through the interfaces DPDK provides. This saves interrupt and memory-copy time and gives the application layer a simple, efficient packet-processing path that eases network application development. The drawbacks of this scheme are as follows: (1) lack of security mechanisms when accessing hardware: because user space handles packets directly, kernel bypass discards kernel-enforced security policies such as memory isolation and firewalls, so extra hardware extensions such as the IOMMU and SR-IOV, or software-isolation-based policies, must be added, further complicating the scheme and raising system maintenance cost; (2) high resource consumption: huge pages must be allocated and CPU cores dedicated to polling packets, sacrificing CPU utilization for low latency and high throughput, so CPU usage remains high even at low load and resources are wasted; (3) poor maintainability: kernel bypass requires extensive redesign and modification of existing applications to fit the dedicated network stack.
In the prior art, hardware acceleration techniques, such as acceleration or offload using dedicated hardware, programmable switches, or smart network cards, can achieve the highest throughput and lowest latency of all schemes, since these devices serve purely as dedicated offload or processing engines. The drawbacks of this scheme are as follows: (1) high cost: hardware acceleration requires additional dedicated hardware, which is typically expensive and unsuitable for large-scale data centers, and some schemes consume costly on-chip memory, easily wasting resources; (2) long development cycles: dedicated hardware takes a long time to research and develop and cannot keep pace with rapidly changing data centers.
Therefore, how to make kernel network processing for cloud network cache applications high-performance while preserving applicability to a variety of applications and the security of the kernel is a problem urgently awaiting a solution.
Disclosure of Invention
To address these technical problems, the invention provides a cloud network cache acceleration scheme based on a fast network path.
The invention discloses a cloud network cache acceleration system based on a fast network path in a first aspect. Wherein:
the system comprises a host network card, a cache acceleration entry path processing module, a kernel network protocol stack, a cache application, and a cache acceleration exit path processing module;
the cache acceleration entry path processing module comprises an XDP hook at the host network card driver layer, and the cache acceleration exit path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are attached to the XDP hook and the TC hook respectively to store, look up, and update cache information inside the kernel;
the cache acceleration entry path processing module comprises at least a request filtering module, a hash value calculation module, an invalidation cache module, a data packet writing module, a first protocol processing module, a second protocol processing module, and a reply module; the cache acceleration exit path processing module comprises at least a return filtering module, a cache updating module, and a third protocol processing module.
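The cooperation of the two hooks around a shared in-kernel store can be illustrated with a small user-space model. All names here are hypothetical illustrations; in a real implementation the store would be a BPF hash map (e.g. `BPF_MAP_TYPE_HASH`) accessed by the eBPF programs attached to the XDP and TC hooks:

```python
# User-space model of the in-kernel cache information shared by the
# eBPF programs on the XDP (entry path) and TC (exit path) hooks.
# Names are illustrative; a real system would use a BPF hash map.

FNV_OFFSET, FNV_PRIME = 0xcbf29ce484222325, 0x100000001b3

def hash_key(key: bytes) -> int:
    """FNV-1a: a simple hash an eBPF program could compute over a key."""
    h = FNV_OFFSET
    for b in key:
        h = ((h ^ b) * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

class KernelCacheMap:
    """Models a BPF map holding cache records (hash value -> data entry),
    supporting the storage, lookup, and update described in the claims."""
    def __init__(self):
        self._map = {}

    def lookup(self, h):             # performed by the XDP program on GET
        return self._map.get(h)

    def update(self, h, entry):      # performed from the TC exit path
        self._map[h] = entry

    def invalidate(self, h):         # performed by the XDP program on SET
        self._map.pop(h, None)

cache = KernelCacheMap()
cache.update(hash_key(b"user:42"), b"alice")
print(cache.lookup(hash_key(b"user:42")))   # hit: b'alice'
cache.invalidate(hash_key(b"user:42"))
print(cache.lookup(hash_key(b"user:42")))   # miss after invalidation: None
```

The key point of the design is that both hooks operate on the same map, so the exit path can populate what the entry path later serves.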
According to the system of the first aspect of the invention, after receiving a data packet sent by the host network card, the cache acceleration entry path processing module invokes the request filtering module to parse the data packet and obtain the data request type; specifically:
(1) when the data request type is a GET (retrieve cache entry) request, the hash value calculation module is invoked to calculate the hash value of the packet's key, and the hash value is compared against the in-kernel cache information stored in the eBPF program attached to the XDP hook; wherein:
(1-i) when the hash value is present in the in-kernel cache information:
the data packet writing module writes the data entry corresponding to the hash value into the data packet, and the packet carrying the data entry is returned through the first protocol processing module and the reply module; the in-kernel cache information consists of multiple cache records, each formed by a hash value and its corresponding data entry.
According to the system of the first aspect of the invention, (1) when the data request type is a GET (retrieve cache entry) request, the hash value calculation module is invoked to calculate the hash value of the packet's key, and the hash value is compared against the in-kernel cache information stored in the eBPF program attached to the XDP hook; wherein:
(1-ii) when the hash value is not present in the in-kernel cache information:
the data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, delivered to the cache application; the cache application computes the data entry corresponding to the packet's key, writes that data entry into the packet, and sends the packet to the cache acceleration exit path processing module;
the return filtering module of the cache acceleration exit path processing module filters out packets carrying a written data entry; the eBPF program attached to the TC hook extracts the hash value of the packet's key from such a packet, and the extracted hash value together with the data entry determined by the cache application forms one cache record;
the cache updating module sends this cache record together with a cache update instruction to the cache acceleration entry path processing module, where the record is written into the eBPF program attached to the XDP hook for accelerated handling of subsequent GET requests.
According to the system of the first aspect of the invention, after receiving a data packet sent by the host network card, the cache acceleration entry path processing module invokes the request filtering module to parse the data packet and obtain the data request type; specifically:
(2) when the data request type is a SET (set cache entry) request, the hash value calculation module is invoked to calculate the hash value of the packet's key, and the hash value is compared against the in-kernel cache information stored in the eBPF program attached to the XDP hook; wherein:
(2-i) when the hash value is present in the in-kernel cache information:
the invalidation cache module invalidates the data entry corresponding to the hash value; the XDP hook then sends the packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the packet is delivered to the cache application to complete the setting of the cache entry there;
(2-ii) when the hash value is not present in the in-kernel cache information:
the XDP hook directly sends the packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the packet is delivered to the cache application to complete the setting of the cache entry there.
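The SET handling in (2-i)/(2-ii) reduces to invalidate-then-pass: the driver-layer cache never applies the write itself, so the cache application remains the single source of truth. A user-space sketch under the same hypothetical naming (in the kernel, the PASS action is the XDP verdict `XDP_PASS`):

```python
# Sketch of the SET path: the XDP program only invalidates any stale
# in-kernel copy and PASSes the packet on; the cache application in
# user space performs the actual SET.
XDP_PASS = "XDP_PASS"

kernel_cache = {hash("user:1"): "alice"}   # stale fast-path copy
application_store = {"user:1": "alice"}

def entry_path_set(key: str, value: str):
    h = hash(key)
    if h in kernel_cache:          # (2-i): invalidate the cached entry
        del kernel_cache[h]
    # (2-i) and (2-ii) both end the same way: the packet is passed to
    # the kernel network stack and on to the cache application.
    application_store[key] = value      # models the application's SET
    return XDP_PASS

entry_path_set("user:1", "bob")
print(kernel_cache)          # {}  (stale entry invalidated)
print(application_store)     # {'user:1': 'bob'}
```

Invalidation rather than in-place update keeps the driver-layer cache consistent: the next GET for the key will miss, reach the application, and repopulate the cache via the exit path.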
According to the system of the first aspect of the invention, after the cache entry has been set in the cache application, when the cache acceleration entry path processing module subsequently receives a data packet from the host network card whose request type is GET and whose key hash is absent from the in-kernel cache information, that packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, delivered to the cache application; the cache application determines the data entry corresponding to the packet's key from the previously set cache entry, writes that data entry into the packet, and sends the packet to the cache acceleration exit path processing module.
The invention discloses, in a second aspect, a cloud network cache acceleration method based on a fast network path. The method is implemented on the system of the first aspect of the invention, wherein:
the system comprises a host network card, a cache acceleration entry path processing module, a kernel network protocol stack, a cache application, and a cache acceleration exit path processing module;
the cache acceleration entry path processing module comprises an XDP hook at the host network card driver layer, and the cache acceleration exit path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are attached to the XDP hook and the TC hook respectively to store, look up, and update cache information inside the kernel;
the cache acceleration entry path processing module comprises at least a request filtering module, a hash value calculation module, an invalidation cache module, a data packet writing module, a first protocol processing module, a second protocol processing module, and a reply module; the cache acceleration exit path processing module comprises at least a return filtering module, a cache updating module, and a third protocol processing module.
The method according to the second aspect of the present invention specifically includes:
after receiving a data packet sent by the host network card, the cache acceleration entry path processing module invokes the request filtering module to parse the data packet and obtain the data request type; specifically:
(1) when the data request type is a GET (retrieve cache entry) request, invoking the hash value calculation module to calculate the hash value of the packet's key, and comparing the hash value against the in-kernel cache information stored in the eBPF program attached to the XDP hook; wherein:
(1-i) when the hash value is present in the in-kernel cache information:
invoking the data packet writing module to write the data entry corresponding to the hash value into the data packet, and returning the packet carrying the data entry through the first protocol processing module and the reply module; the in-kernel cache information consists of multiple cache records, each formed by a hash value and its corresponding data entry.
According to the method of the second aspect of the invention, (1) when the data request type is a GET (retrieve cache entry) request, invoking the hash value calculation module to calculate the hash value of the packet's key, and comparing the hash value against the in-kernel cache information stored in the eBPF program attached to the XDP hook; wherein:
(1-ii) when the hash value is not present in the in-kernel cache information:
the data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, delivered to the cache application; the cache application computes the data entry corresponding to the packet's key, writes that data entry into the packet, and sends the packet to the cache acceleration exit path processing module;
invoking the return filtering module of the cache acceleration exit path processing module to filter out packets carrying a written data entry; the eBPF program attached to the TC hook extracts the hash value of the packet's key from such a packet, and the extracted hash value together with the data entry determined by the cache application forms one cache record;
the cache updating module sends this cache record together with a cache update instruction to the cache acceleration entry path processing module, where the record is written into the eBPF program attached to the XDP hook for accelerated handling of subsequent GET requests.
According to the method of the second aspect of the invention, after receiving a data packet sent by the host network card, the cache acceleration entry path processing module invokes the request filtering module to parse the data packet and obtain the data request type; specifically:
(2) when the data request type is a SET (set cache entry) request, invoking the hash value calculation module to calculate the hash value of the packet's key, and comparing the hash value against the in-kernel cache information stored in the eBPF program attached to the XDP hook; wherein:
(2-i) when the hash value is present in the in-kernel cache information:
invoking the invalidation cache module to invalidate the data entry corresponding to the hash value; the XDP hook then sends the packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the packet is delivered to the cache application to complete the setting of the cache entry there;
(2-ii) when the hash value is not present in the in-kernel cache information:
the XDP hook directly sends the packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the packet is delivered to the cache application to complete the setting of the cache entry there.
According to the method of the second aspect of the invention, after the cache entry has been set in the cache application, when the cache acceleration entry path processing module subsequently receives a data packet from the host network card whose request type is GET and whose key hash is absent from the in-kernel cache information, that packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, delivered to the cache application; the cache application determines the data entry corresponding to the packet's key from the previously set cache entry, writes that data entry into the packet, and sends the packet to the cache acceleration exit path processing module.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the fast network path-based cloud network cache acceleration method according to any one of the second aspects of the present disclosure when executing the computer program.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program, which when executed by a processor implements the steps in a fast network path based cloud network cache acceleration method according to any one of the second aspects of the present disclosure.
In summary, the technical solution provided by the invention is based on the Linux fast network path XDP and implements a cache layer in the network card driver layer, so that a user's request can be processed by the CPU and answered at the earliest possible point. The scheme outperforms the DPDK kernel bypass approach, yet needs neither exclusive CPU cores nor huge-page memory; it can reuse the kernel's security mechanisms; it requires no modification of existing cloud network cache applications and is therefore general, suiting cloud network cache applications of the same type. Moreover, XDP is a project maintained by the Linux kernel community and offers a more stable interface. The following technical problems are specifically solved:
(1) A cloud network cache acceleration architecture based on kernel-cooperative network acceleration: high performance for the cloud network cache application is guaranteed without monopolizing CPU resources; the co-design of the cache application and the kernel allows the kernel's security policies to be reused, and the in-kernel verifier prevents the program from endangering the whole operating system. In addition, the acceleration program can be loaded into the running kernel and updated in real time, without recompiling the kernel or using kernel modules.
(2) A fast path for XDP-based cache requests: in practice most requests to a cloud network cache are GET requests, and this scheme, built on XDP, implements at the network driver layer a fast path that returns GET replies immediately. At the same time it cooperates with the traditional kernel network protocol stack: other, more complex requests are handed to the kernel for processing, so the scheme concentrates on accelerating the cache application's GET handling.
(3) Modular cache request processing: the whole system is modularized using the eBPF tail call mechanism, with the program modules linked by tail calls. If support for another application needs to be added, only the protocol processing modules must be modified; the cache application itself is untouched.
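The modular structure in (3) can be mimicked in user space: eBPF tail calls jump from one program to the next through a program-array map, so swapping protocol support means replacing one table entry. A hypothetical sketch (in a real eBPF program this would be `bpf_tail_call()` on a `BPF_MAP_TYPE_PROG_ARRAY`; all module names are illustrative):

```python
# User-space model of eBPF tail-call chaining: each module finishes by
# "tail-calling" the next program through a program-array-like table.
PROG_ARRAY = {}

def tail_call(ctx, index):
    """Models bpf_tail_call(): jump to the program at PROG_ARRAY[index]."""
    return PROG_ARRAY[index](ctx)

def request_filter(ctx):
    """First module: classify the request type from the payload."""
    ctx["type"] = "GET" if ctx["payload"].startswith(b"get ") else "OTHER"
    return tail_call(ctx, "hash_calc")

def hash_calc(ctx):
    """Second module: compute the key hash, then chain onward."""
    ctx["hash"] = hash(ctx["payload"])
    return tail_call(ctx, "protocol")

def protocol_memcached(ctx):
    """Protocol module: the only entry that changes to support another app."""
    return ("handled", ctx["type"])

PROG_ARRAY.update({"hash_calc": hash_calc, "protocol": protocol_memcached})

print(request_filter({"payload": b"get user:1"}))   # ('handled', 'GET')
```

Because the chain is data-driven, updating application support is a map update rather than a rewrite of the whole pipeline, which mirrors the claim that only the protocol processing module needs modification.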
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description in the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of a cloud network cache acceleration system based on a fast network path according to an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating a process for processing a cache request according to an embodiment of the invention;
FIG. 3 is a block diagram of a module for processing cache requests according to an embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The invention discloses, in a first aspect, a cloud network cache acceleration system based on a fast network path. Fig. 1 is a schematic structural diagram of a cloud network cache acceleration system based on a fast network path according to an embodiment of the invention; as shown in Fig. 1, the system comprises a host network card, a cache acceleration entry path processing module, a kernel network protocol stack, a cache application, and a cache acceleration exit path processing module;
the cache acceleration entry path processing module comprises an XDP hook at the host network card driver layer, and the cache acceleration exit path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are attached to the XDP hook and the TC hook respectively to store, look up, and update cache information inside the kernel;
the cache acceleration entry path processing module comprises at least a request filtering module, a hash value calculation module, an invalidation cache module, a data packet writing module, a first protocol processing module, a second protocol processing module, and a reply module; the cache acceleration exit path processing module comprises at least a return filtering module, a cache updating module, and a third protocol processing module.
Wherein:
after receiving the data packet sent by the host network card, the cache acceleration path-entering processing module calls the request filtering module to analyze the data packet so as to acquire a data request type; the method specifically comprises the following steps:
(1) when the data request type is GET obtaining cache request, calling the hash value calculation module to calculate a corresponding hash value according to the key value key of the data packet, and comparing the hash value with the inner-core cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-i) when the hash value is present in the kernel cache information:
the data packet writing module writes the data entry corresponding to the hash value into the data packet, and returns the data packet written into the data entry through the first protocol processing module and the reply module; the cache information in the kernel comprises a plurality of cache information formed by hash values and corresponding data entries.
(1-ii) when the hash value is not present in the in-core cache information:
the data packet is forwarded to the kernel network protocol stack through the second protocol processing module, is further sent to the cache application after being processed, determines a data entry corresponding to the data packet key value through calculation by the cache application, writes the data entry determined by the cache application into the data packet, and sends the data packet to the cache accelerated egress path processing module;
the return filtering module of the cache accelerated egress path processing module filters the data packet written with the data entry, an eBPF program mounted on the TC hook extracts a hash value of a key value key of the data packet from the data packet written with the data entry, and the extracted hash value and the data entry determined by the cache application are used as a piece of cache information;
and the cache updating module sends the piece of cache information and a cache updating instruction to the cache acceleration path processing module together, and updates the piece of cache information in the eBPF program mounted on the XDP hook for cache acceleration processing of a subsequent GET acquisition cache request.
(2) When the data request type is a SET cache-entry-setting request, the hash value calculation module is called to calculate a corresponding hash value from the key of the data packet, and the hash value is compared against the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(2-i) when the hash value is present in the in-kernel cache information:
the invalidation caching module invalidates the data entry corresponding to the hash value; the XDP hook then sends the data packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the data packet is delivered to the cache application to complete the setting of the cache entry there;
(2-ii) when the hash value is not present in the in-kernel cache information:
the XDP hook sends the data packet directly, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the data packet is delivered to the cache application to complete the setting of the cache entry there.
In some embodiments, after the cache entry has been set in the cache application, the following applies to data packets that the cache acceleration entry-path processing module subsequently receives from the host network card: when the data request type is a GET cache-retrieval request and the hash value of the received data packet is not present in the in-kernel cache information, the data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, is delivered to the cache application; the cache application determines the data entry corresponding to the packet's key based on the cache entry previously set, writes that data entry into the data packet, and sends the data packet to the cache acceleration exit-path processing module.
Specifically, in a data center network, the invention uses the kernel-cooperative fast network path XDP to accelerate the request load of cloud network cache applications, implementing a cache layer inside the kernel to relieve the network-communication bottleneck of in-memory caches. At the earliest point where the CPU can process a network packet, namely the network driver layer, the packet is parsed and the in-kernel cache is searched, so that cache-application GET requests are answered immediately and cloud network cache requests are accelerated. For cache consistency, a write-through scheme is used, avoiding the inconsistency that could otherwise arise during cache writes and cause stale cache values to be returned. Thanks to a modular design, the system can support new cloud network cache applications and protocols with minimal modification, and can be used with existing cache applications without defining a new packet format or modifying the applications themselves.
Compared with existing kernel-bypass and hardware acceleration schemes, the invention neither occupies huge-page memory nor monopolizes CPU cores; it preserves the security mechanisms enforced by the kernel while accelerating cache-application GET request processing, and achieves higher cache-application throughput with better CPU utilization. Its hardware requirements are low: the Linux systems on current data center servers support XDP, and most high-speed network cards support XDP driver mode, so the cost is relatively low and the invention can be deployed at scale on existing systems.
Embodiment 1 (see FIG. 1)
Several eBPF programs are mounted on the XDP hook and the Linux traffic control (TC) hook of the host network card driver layer, and eBPF maps are used to store the in-kernel cache information and other system state. The parse-info map stores the current read/write offset of the packet relative to the start of the message; the TCP-info map stores context such as TCP sequence numbers; the key-info map stores keys and their hash values; the cache-info map stores the mapping from a key's hash value to its value; the TC-program map stores the eBPF programs the system mounts on the TC hook; the XDP-program map stores the eBPF programs the system mounts on the XDP hook; and the cache-state map stores system statistics such as the number of GET and SET commands received, GET hits, and GET misses.
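Kernel-side map state of this kind is commonly declared in libbpf BTF style. The fragment below is a hedged sketch of what the cache-info and cache-state maps could look like, not the patent's actual definitions: the map types, sizes, and struct layouts are assumptions, and the code is compiled with `clang -target bpf`, not as a userspace program.

```c
/* Hypothetical libbpf-style map definitions for the in-kernel cache. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct cache_val {
    __u32 len;
    char  data[256];          /* cached value bytes */
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 65536);
    __type(key, __u32);       /* hash of the request key */
    __type(value, struct cache_val);
} cache_map SEC(".maps");     /* "cache-info" map: key hash -> value */

struct cache_stats {
    __u64 gets, sets, hits, misses;
};

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, struct cache_stats);
} stats_map SEC(".maps");     /* "cache-state" map: GET/SET counters */
```

An LRU hash map is a natural fit here because stale or cold entries are evicted automatically when the map fills.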
When the network card receives a data packet, the system first filters out requests from clients according to protocol type, destination port, and cache protocol header. Taking a Redis application as an example, it must pick out packets whose protocol type is TCP, whose destination port is the listening port configured by Redis (6379 by default), and whose protocol header parses as a GET or SET request; this client-request filtering can be implemented in a conventional manner and is not specifically limited. For a GET request, a hash value is calculated from the key in the packet and the in-kernel cache is checked for a matching entry; if one exists, the corresponding value is written directly into the packet, the source and destination ports, MAC addresses, and IP addresses are swapped, protocol processing is performed, and the packet is replied directly to the client. If the request is a SET that sets a cache entry, the in-kernel cache entry is invalidated and the request is forwarded to the cache application. If a GET misses the in-kernel cache, the request is sent to the cache application; when the corresponding value comes back, the matching in-kernel cache entry is updated on the egress path. The specific process is as follows:
(1) After a data packet arrives at the network card, the cache acceleration system parses it at the earliest point where the CPU can process it and determines from the packet's protocol type, destination port, and similar information whether it is a GET cache-retrieval request or a SET cache-entry request. If it is a GET request destined for the cache application, step 2 is executed; if a SET request destined for the cache application, step 3; otherwise step 4.
(2) For a GET request, the corresponding hash value is calculated from the key in the packet and compared with the in-kernel cache information stored in the eBPF maps. On a hit, the cache entry corresponding to the hash value is written into the packet, the source and destination ports, IP addresses, and MAC addresses are swapped, protocol processing is performed, and the request is returned directly to the client without further kernel protocol stack processing. On a miss, the packet continues to the kernel and the application; if the application hits, the returned reply is intercepted and the corresponding in-kernel cache entry is updated.
(3) For a SET request, the in-kernel cache entry is invalidated directly and the packet continues to the cache application.
(4) GET or SET requests not belonging to the cache application are sent directly to the kernel protocol stack, leaving normal protocol stack processing unaffected.
Embodiment 2 (see FIG. 2)
A fast path for XDP-based cache requests is designed to accelerate cache applications. Since more than 80% of cache-application requests in production are GET requests, the largest class of requests is answered at the network card driver layer via XDP, obtaining the greatest throughput gain with the smallest change; at the same time, the update mechanism of the in-kernel cache is redesigned to guarantee cache consistency for the whole system. As shown in fig. 2, if a GET cache request hits the system's in-kernel cache, it is returned directly over the fast path without further processing by the kernel network protocol stack, greatly reducing network-related processing delay. On a SET update request the in-kernel cache is invalidated rather than updated; otherwise the cache application could not apply the update immediately, which easily leads to cache inconsistency. When a GET request misses, the system intercepts the cache application's reply at the TC hook to update the in-kernel cache, ensuring that its entries always stay current with the cache application. The specific process of this mechanism is as follows:
(1) For a GET hit, an eBPF program mounted on the Linux XDP fast network data path parses the request, obtains the value corresponding to the key, and replies to the client directly via the TX action at the XDP layer.
(2) For a GET miss, the system passes the packet to the kernel protocol stack and the cache application via the PASS action of XDP, and the cache application replies. If the cache application hits, the reply is parsed at the TC egress point to update the corresponding key-value entry in the in-kernel cache.
(3) For a SET cache-entry request, the system invalidates the corresponding key-value pair directly and passes the packet to the kernel protocol stack and the cache application via the PASS action of XDP, so that the cache application's entry is updated immediately and system-wide cache consistency is maintained.
Embodiment 3 (see FIG. 3)
Modular cache request processing is designed: the processing is split across several eBPF programs mounted on XDP and TC. The programs on the XDP hook filter out cache request packets and process and return them quickly; the program on the TC hook filters out replies sent by the cloud network cache application and updates the cache. With this modular design only the protocol processing module needs to be modified, giving the scheme generality across cloud network cache software. The overall flow is as follows:
(1) The xdp_rx_filter module filters out cache request packets; for a GET request step 2 is executed, for a SET request step 3, otherwise step 4.
(2) For a GET request, the xdp_hash module is called to calculate the corresponding hash value from the key in the packet and compare it with the in-kernel cache information stored in the eBPF maps. On a hit, the xdp_write_pkt module writes the cache entry corresponding to the key into the packet; after the xdp_protocol_tx_process module swaps the source and destination ports, IP addresses, and MAC addresses and performs protocol processing, the xdp_tx_reply module returns the packet directly to the client without further kernel protocol stack processing. On a miss, the packet continues to the kernel and the application through the xdp_protocol_rx_process module; if the application hits, the returned reply is intercepted by the tc_tx_filter module on the TC hook and the tc_update_cache module updates the corresponding in-kernel cache entry.
(3) For a SET request, the in-kernel cache entry is invalidated directly by the xdp_invalidate module and the packet continues to the cache application.
(4) GET or SET requests not belonging to the cache application are sent directly to the kernel.
It can be seen that the beneficial effects achievable by the solution of the first aspect of the invention include, but are not limited to:
(1) Effects of accelerating the cloud network cache over a fast network path:
Enhanced security: the scheme works cooperatively with the kernel, reusing the kernel's checks on safe hardware access; because the programs must pass the kernel verifier, kernel crashes are effectively avoided.
No application changes required: the cloud network cache application is accelerated merely by mounting the programs onto the Linux fast network path, without any additional modification to the application itself.
Improved CPU utilization: the scheme does not monopolize host CPU cores and uses fewer CPU resources than kernel-bypass schemes at the same throughput.
(2) Effects of the general cloud network cache request processing module:
Compatibility with different cache applications of the same kind: a new cloud network cache application can be supported with only minimal modification of the cache request protocol processing module.
A second aspect of the invention discloses a cloud network cache acceleration method based on a fast network path. The method is implemented based on the system of the first aspect of the invention, wherein:
the system comprises a host network card, a cache acceleration entry-path processing module, a kernel network protocol stack, a cache application, and a cache acceleration exit-path processing module;
the cache acceleration entry-path processing module comprises an XDP hook of the host network card driver layer, and the cache acceleration exit-path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are mounted on the XDP hook and the TC hook respectively to implement the storage, invocation, and updating of in-kernel cache information;
the cache acceleration entry-path processing module comprises at least a request filtering module, a hash value calculation module, an invalidation caching module, a packet writing module, a first protocol processing module, a second protocol processing module, and a reply module; the cache acceleration exit-path processing module comprises at least a return filtering module, a cache updating module, and a third protocol processing module.
The method according to the second aspect of the invention specifically comprises:
after receiving a data packet sent by the host network card, the cache acceleration entry-path processing module calls the request filtering module to parse the data packet and obtain its data request type; specifically:
(1) when the data request type is a GET cache-retrieval request, calling the hash value calculation module to calculate a corresponding hash value from the key of the data packet, and comparing the hash value with the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-i) when the hash value is present in the in-kernel cache information:
calling the packet writing module to write the data entry corresponding to the hash value into the data packet, and returning the data packet carrying the data entry through the first protocol processing module and the reply module; the in-kernel cache information comprises a plurality of cache entries, each formed by a hash value and its corresponding data entry.
According to the method of the second aspect of the invention, (1) when the data request type is the GET cache-retrieval request, calling the hash value calculation module to calculate the corresponding hash value from the key of the data packet, and comparing the hash value with the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-ii) when the hash value is not present in the in-kernel cache information:
the data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, is delivered to the cache application; the cache application computes the data entry corresponding to the key of the data packet, writes that data entry into the data packet, and sends the data packet to the cache acceleration exit-path processing module;
calling the return filtering module of the cache acceleration exit-path processing module to filter out the data packet carrying the data entry; an eBPF program mounted on the TC hook extracts the hash value of the packet's key from that data packet, and the extracted hash value together with the data entry determined by the cache application forms a piece of cache information;
the cache updating module then sends this piece of cache information, together with a cache update instruction, to the cache acceleration entry-path processing module, which updates it into the eBPF program mounted on the XDP hook for accelerated handling of subsequent GET cache-retrieval requests.
According to the method of the second aspect of the invention, after receiving the data packet sent by the host network card, the cache acceleration entry-path processing module calls the request filtering module to parse the data packet and obtain its data request type; specifically:
(2) when the data request type is a SET cache-entry-setting request, calling the hash value calculation module to calculate a corresponding hash value from the key of the data packet, and comparing the hash value with the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(2-i) when the hash value is present in the in-kernel cache information:
calling the invalidation caching module to invalidate the data entry corresponding to the hash value; the XDP hook then sends the data packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the data packet is delivered to the cache application to complete the setting of the cache entry there;
(2-ii) when the hash value is not present in the in-kernel cache information:
the XDP hook sends the data packet directly, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the data packet is delivered to the cache application to complete the setting of the cache entry there.
According to the method of the second aspect of the invention, after the cache entry has been set in the cache application, for data packets subsequently received by the cache acceleration entry-path processing module from the host network card: when the data request type is a GET cache-retrieval request and the hash value of the received data packet is not present in the in-kernel cache information, the data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, is delivered to the cache application; the cache application determines the data entry corresponding to the packet's key based on the cache entry previously set, writes that data entry into the data packet, and sends the data packet to the cache acceleration exit-path processing module.
Fig. 4 is a block diagram of an electronic device according to an embodiment of the invention. As shown in fig. 4, the electronic device comprises a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides the environment in which they run. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; wireless communication can be realized through WIFI, an operator network, Near Field Communication (NFC), or other technologies. The display screen of the electronic device may be a liquid-crystal or electronic-ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the electronic device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will understand that the structure shown in fig. 4 is only a partial block diagram related to the present technical solution and does not limit the electronic devices to which the solution may be applied; a specific electronic device may include more or fewer components than shown, combine some components, or arrange components differently.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory and a processor; the memory stores a computer program, and when executing the computer program the processor implements the steps of the cloud network cache acceleration method based on a fast network path according to the second aspect of the invention.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the cloud network cache acceleration method based on a fast network path according to the second aspect of the invention.
In summary, the technical solution provided by the invention builds, on the Linux fast network path XDP, a cache layer in the network card driver layer so that user requests can be processed and returned at the earliest point the CPU can reach them. It performs better than the DPDK kernel-bypass scheme, does not need to monopolize CPU resources or allocate huge-page memory, can reuse the kernel's security mechanisms, and requires no modification of existing cloud network cache applications, making it general across cache applications of the same kind. Moreover, XDP is a project maintained by the Linux kernel community and has a comparatively stable interface. The following technical problems are specifically addressed:
(1) A cloud network cache acceleration architecture based on kernel-cooperative network acceleration. High performance of the cloud network cache application is guaranteed without monopolizing CPU resources; the cooperative design with the kernel can reuse the kernel's security policies, and the in-kernel verifier prevents the programs from affecting the whole operating system. In addition, the acceleration programs can be loaded into the kernel and take effect in real time without recompiling the kernel or using kernel modules.
(2) A fast path for XDP-based cache requests. In actual use, most requests to cloud network cache applications are GET requests [8]; based on XDP, the scheme implements a fast path that returns cache-application GET requests directly from the network driver layer. Meanwhile it cooperates with the traditional kernel network protocol stack, handing other, more complex requests to the kernel and focusing on accelerating GET request processing.
(3) Modular cache request processing. The whole system is modularized with the eBPF tail-call mechanism, the program modules being linked by tail calls. To support another application, only the protocol processing module needs to be modified; the cache application itself is untouched.
Note that the technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but any combination containing no contradiction should be considered within the scope of this specification. The above embodiments express only several implementations of the present application; their description is specific and detailed, but is not to be construed as limiting the scope of the invention. Several variations and improvements can be made by those skilled in the art without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A cloud network cache acceleration system based on a fast network path, characterized in that:
the system comprises a host network card, a cache acceleration entry-path processing module, a kernel network protocol stack, a cache application, and a cache acceleration exit-path processing module;
the cache acceleration entry-path processing module comprises an XDP hook of the host network card driver layer, and the cache acceleration exit-path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are mounted on the XDP hook and the TC hook respectively to implement the storage, invocation, and updating of in-kernel cache information;
the cache acceleration entry-path processing module comprises at least a request filtering module, a hash value calculation module, an invalidation caching module, a packet writing module, a first protocol processing module, a second protocol processing module, and a reply module; the cache acceleration exit-path processing module comprises at least a return filtering module, a cache updating module, and a third protocol processing module;
wherein:
after receiving a data packet sent by the host network card, the cache acceleration entry-path processing module calls the request filtering module to parse the data packet and obtain its data request type; specifically:
(1) when the data request type is a GET cache-retrieval request, calling the hash value calculation module to calculate a corresponding hash value from the key of the data packet, and comparing the hash value with the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-i) when the hash value is present in the in-kernel cache information:
the packet writing module writes the data entry corresponding to the hash value into the data packet, and the data packet carrying the data entry is returned through the first protocol processing module and the reply module; the in-kernel cache information comprises a plurality of cache entries, each formed by a hash value and its corresponding data entry.
2. The cloud network cache acceleration system based on a fast network path according to claim 1, characterized in that:
(1) when the data request type is the GET cache-retrieval request, calling the hash value calculation module to calculate the corresponding hash value from the key of the data packet, and comparing the hash value with the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-ii) when the hash value is not present in the in-kernel cache information:
the data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, is delivered to the cache application; the cache application computes the data entry corresponding to the key of the data packet, writes that data entry into the data packet, and sends the data packet to the cache acceleration exit-path processing module;
the return filtering module of the cache acceleration exit-path processing module filters out the data packet carrying the data entry; an eBPF program mounted on the TC hook extracts the hash value of the packet's key from that data packet, and the extracted hash value together with the data entry determined by the cache application forms a piece of cache information;
the cache updating module then sends this piece of cache information, together with a cache update instruction, to the cache acceleration entry-path processing module, which updates it into the eBPF program mounted on the XDP hook for accelerated handling of subsequent GET cache-retrieval requests.
3. The cloud network cache acceleration system based on a fast network path according to claim 2, characterized in that after receiving a data packet sent by the host network card, the cache acceleration entry-path processing module calls the request filtering module to parse the data packet and obtain its data request type; specifically:
(2) when the data request type is a SET cache-entry-setting request, calling the hash value calculation module to calculate a corresponding hash value from the key of the data packet, and comparing the hash value with the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(2-i) when the hash value is present in the in-kernel cache information:
the invalidation caching module invalidates the data entry corresponding to the hash value; the XDP hook then sends the data packet, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the data packet is delivered to the cache application to complete the setting of the cache entry there;
(2-ii) when the hash value is not present in the in-kernel cache information:
the XDP hook sends the data packet directly, via its PASS action, to the kernel network protocol stack through the second protocol processing module, and after processing the data packet is delivered to the cache application to complete the setting of the cache entry there.
4. The cloud network cache acceleration system based on a fast network path according to claim 3, characterized in that after the cache entry has been set in the cache application, for data packets subsequently received by the cache acceleration entry-path processing module from the host network card: when the data request type is a GET cache-retrieval request and the hash value of the received data packet is not present in the in-kernel cache information, the received data packet is forwarded to the kernel network protocol stack through the second protocol processing module and, after processing, is delivered to the cache application; the cache application determines the data entry corresponding to the packet's key based on the cache entry set therein, writes that data entry into the data packet, and sends the data packet to the cache acceleration exit-path processing module.
5. A cloud network cache acceleration method based on a fast network path, wherein the method is implemented based on the system of any one of claims 1-4, wherein:
the system comprises a host network card, a cache acceleration ingress path processing module, a kernel network protocol stack, a cache application, and a cache acceleration egress path processing module;
the cache acceleration ingress path processing module comprises an XDP hook at the driver layer of the host network card, and the cache acceleration egress path processing module comprises a Linux traffic control (TC) hook; several eBPF programs are mounted on the XDP hook and the TC hook respectively to implement the storage, lookup, and update of in-kernel cache information;
the cache acceleration ingress path processing module comprises at least a request filtering module, a hash value calculation module, a cache invalidation module, a data packet writing module, a first protocol processing module, a second protocol processing module, and a reply module; the cache acceleration egress path processing module comprises at least a return filtering module, a cache update module, and a third protocol processing module;
the method specifically comprises the following steps:
after receiving the data packet sent by the host network card, the cache acceleration ingress path processing module calls the request filtering module to parse the data packet and obtain the data request type; specifically:
(1) when the data request type is a GET (acquire cache) request, calling the hash value calculation module to calculate the corresponding hash value from the key value key of the data packet, and comparing the hash value against the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-i) when the hash value is present in the in-kernel cache information:
calling the data packet writing module to write the data entry corresponding to the hash value into the data packet, and returning the data packet with the written data entry through the first protocol processing module and the reply module; the in-kernel cache information comprises a plurality of cache records, each formed by a hash value and its corresponding data entry.
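The (1-i) hit path above can be sketched in user-space C: the key is hashed, looked up in a table standing in for the XDP-mounted eBPF map, and on a hit the data entry is written into the reply buffer, modeling a reply sent straight from the NIC driver layer without ever entering the kernel protocol stack. The FNV-1a hash and all names are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAP_SLOTS 256

/* Stand-in for the in-kernel cache information: hash-indexed records
 * of (key, data entry). */
struct cache_record { char key[32]; char entry[64]; int valid; };
static struct cache_record kcache[MAP_SLOTS];

/* Illustrative hash value calculation module (FNV-1a). */
static uint32_t hash_key(const char *key) {
    uint32_t h = 2166136261u;
    for (; *key; key++) { h ^= (uint8_t)*key; h *= 16777619u; }
    return h;
}

enum verdict { PASS_TO_STACK, TX_REPLY };

/* GET hit path: on a hit, the data packet writing module copies the
 * data entry into the packet buffer and the reply module returns it
 * (XDP_TX-style); on a miss the packet is PASSed to the kernel
 * network protocol stack so the cache application can answer. */
static enum verdict handle_get(const char *key, char *pkt, size_t len) {
    struct cache_record *r = &kcache[hash_key(key) % MAP_SLOTS];
    if (r->valid && strcmp(r->key, key) == 0) {
        snprintf(pkt, len, "%s", r->entry);
        return TX_REPLY;
    }
    return PASS_TO_STACK;
}
```

In the real system the reply also requires swapping Ethernet/IP/UDP addresses and recomputing checksums (the role of the first protocol processing module); the sketch leaves protocol rewriting out.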
6. The cloud network cache acceleration method based on the fast network path according to claim 5, characterized in that:
(1) when the data request type is the GET (acquire cache) request, calling the hash value calculation module to calculate the corresponding hash value from the key value key of the data packet, and comparing the hash value against the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(1-ii) when the hash value is not present in the in-kernel cache information:
the data packet is forwarded through the second protocol processing module to the kernel network protocol stack, which processes it and forwards it to the cache application; the cache application determines by computation the data entry corresponding to the key value of the data packet, writes that data entry into the data packet, and sends the data packet to the cache acceleration egress path processing module;
calling the return filtering module of the cache acceleration egress path processing module to filter the data packet with the written data entry; the eBPF program mounted on the TC hook extracts the hash value of the key value key from that data packet, and takes the extracted hash value together with the data entry determined by the cache application as one piece of cache information;
the cache update module then sends this piece of cache information together with a cache update instruction to the cache acceleration ingress path processing module, which updates it into the eBPF program mounted on the XDP hook for cache-accelerated processing of subsequent GET (acquire cache) requests.
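The miss-then-populate cycle of claim 6 can be sketched as a user-space C model in which the TC egress side installs a (hash, data entry) record into the shared table, so the next GET for the same key hits in the XDP fast path. The shared array stands in for an eBPF map visible to both hooks; all names are illustrative assumptions:

```c
#include <stdint.h>
#include <string.h>

#define MAP_SLOTS 256

struct cache_record { char key[32]; char entry[64]; int valid; };
/* Shared table standing in for the eBPF map visible to both hooks. */
static struct cache_record kcache[MAP_SLOTS];

static uint32_t hash_key(const char *key) {              /* FNV-1a */
    uint32_t h = 2166136261u;
    for (; *key; key++) { h ^= (uint8_t)*key; h *= 16777619u; }
    return h;
}

/* TC egress side: the return filtering module has matched a reply
 * packet carrying (key, data entry); the cache update module installs
 * the record for cache-accelerated handling of subsequent GETs. */
static void tc_update_cache(const char *key, const char *entry) {
    struct cache_record *r = &kcache[hash_key(key) % MAP_SLOTS];
    strncpy(r->key, key, sizeof r->key - 1);
    r->key[sizeof r->key - 1] = '\0';
    strncpy(r->entry, entry, sizeof r->entry - 1);
    r->entry[sizeof r->entry - 1] = '\0';
    r->valid = 1;
}

/* XDP ingress side: returns nonzero once the key is servable from the
 * fast path without entering the kernel protocol stack. */
static int xdp_get_hit(const char *key) {
    struct cache_record *r = &kcache[hash_key(key) % MAP_SLOTS];
    return r->valid && strcmp(r->key, key) == 0;
}
```

The design point mirrored here is that the cache is populated passively from reply traffic at egress, so the cache application needs no modification and no extra control channel beyond the shared map.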
7. The cloud network cache acceleration method based on the fast network path according to claim 6, characterized in that after receiving the data packet sent by the host network card, the cache acceleration ingress path processing module calls the request filtering module to parse the data packet and obtain the data request type; specifically:
(2) when the data request type is a SET (set cache entry) request, calling the hash value calculation module to calculate the corresponding hash value from the key value key of the data packet, and comparing the hash value against the in-kernel cache information stored in the eBPF program mounted on the XDP hook; wherein:
(2-i) when the hash value is present in the in-kernel cache information:
calling the cache invalidation module to invalidate the data entry corresponding to the hash value; the XDP hook then, via its PASS action, sends the data packet through the second protocol processing module to the kernel network protocol stack, which processes it and forwards it to the cache application to complete the setting of the cache entry in the cache application;
(2-ii) when the hash value is not present in the in-kernel cache information:
the XDP hook, via its PASS action, sends the data packet directly through the second protocol processing module to the kernel network protocol stack, which processes it and forwards it to the cache application to complete the setting of the cache entry in the cache application.
8. The method according to claim 7, characterized in that after the setting of the cache entries in the cache application is completed, for data packets subsequently received by the cache acceleration ingress path processing module from the host network card, when the data request type is a GET (acquire cache) request and the hash value of the subsequently received data packet is not present in the in-kernel cache information, the subsequently received data packet is forwarded through the second protocol processing module to the kernel network protocol stack, which processes it and forwards it to the cache application; the cache application determines the data entry corresponding to the key value of the data packet based on the cache entries set therein, writes that data entry into the data packet, and sends the data packet to the cache acceleration egress path processing module.
9. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the fast-network-path-based cloud network cache acceleration method according to any one of claims 5 to 8.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the fast-network-path-based cloud network cache acceleration method according to any one of claims 5 to 8.
CN202210506406.1A 2022-05-11 2022-05-11 Cloud network cache acceleration system and method based on fast network path Pending CN114640716A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210506406.1A CN114640716A (en) 2022-05-11 2022-05-11 Cloud network cache acceleration system and method based on fast network path


Publications (1)

Publication Number Publication Date
CN114640716A true CN114640716A (en) 2022-06-17

Family

ID=81953314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210506406.1A Pending CN114640716A (en) 2022-05-11 2022-05-11 Cloud network cache acceleration system and method based on fast network path

Country Status (1)

Country Link
CN (1) CN114640716A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Peng Wu et al., "NCA: Accelerating Network Caching with eXpress Data Path," 2021 4th International Conference on Hot Information-Centric Networking (HotICN) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116016295A (en) * 2022-12-14 2023-04-25 鹏城实验室 Ethernet performance monitoring method, system, industrial control equipment and storage medium
CN116016295B (en) * 2022-12-14 2024-04-09 鹏城实验室 Ethernet performance monitoring method, system, industrial control equipment and storage medium
CN115883255A (en) * 2023-02-02 2023-03-31 中信证券股份有限公司 Data filtering method, device and computer readable medium
CN115883255B (en) * 2023-02-02 2023-06-23 中信证券股份有限公司 Data filtering method, device and computer readable medium
CN115801482A (en) * 2023-02-08 2023-03-14 银河麒麟软件(长沙)有限公司 Method, system and medium for realizing eBPF-based multicast in cloud native environment
CN116431356A (en) * 2023-06-13 2023-07-14 中国人民解放军军事科学院系统工程研究院 Cloud network cache acceleration method and system based on intelligent network card
CN116431356B (en) * 2023-06-13 2023-08-22 中国人民解放军军事科学院系统工程研究院 Cloud network cache acceleration method and system based on intelligent network card

Similar Documents

Publication Publication Date Title
CN114640716A (en) Cloud network cache acceleration system and method based on fast network path
US11044314B2 (en) System and method for a database proxy
Huggahalli et al. Direct cache access for high bandwidth network I/O
Ghigoff et al. {BMC}: Accelerating Memcached using Safe In-kernel Caching and Pre-stack Processing
Tezuka et al. Pin-down cache: A virtual memory management technique for zero-copy communication
US7996569B2 (en) Method and system for zero copy in a virtualized network environment
CN112929299B (en) SDN cloud network implementation method, device and equipment based on FPGA accelerator card
CN110402568A (en) A kind of method and device of communication
CN111431757B (en) Virtual network flow acquisition method and device
US20200364080A1 (en) Interrupt processing method and apparatus and server
US11349922B2 (en) System and method for a database proxy
CN111371920A (en) DNS front-end analysis method and system
CN111107081A (en) DPDK-based multi-process DNS service method and system
CN111459418A (en) RDMA (remote direct memory Access) -based key value storage system transmission method
WO2014206129A1 (en) Computing device and method for executing database operation command
Alian et al. Data direct I/O characterization for future I/O system exploration
EP3742307A1 (en) Managing network traffic flows
Watanabe et al. Accelerating NFV application using CPU-FPGA tightly coupled architecture
KR20120121668A (en) High Performance System and Method for Blocking Harmful Sites Access on the basis of Network
CN111371804A (en) DNS (Domain name Server) back-end forwarding method and system
Su et al. Pipedevice: a hardware-software co-design approach to intra-host container communication
CN117240935A (en) Data plane forwarding method, device, equipment and medium based on DPU
US11093405B1 (en) Shared mid-level data cache
Tang et al. Towards high-performance packet processing on commodity multi-cores: current issues and future directions
CN116016313A (en) Flow table aging control method, system, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220617