WO2022100310A1 - Network card data packet cache management method, device, terminal and storage medium - Google Patents

Network card data packet cache management method, device, terminal and storage medium

Info

Publication number
WO2022100310A1
Authority
WO
WIPO (PCT)
Prior art keywords: queue, busy, data packets, data packet, network card
Application number
PCT/CN2021/121432
Other languages
English (en)
French (fr)
Inventor
Ma Xu (马旭)
Original Assignee
Inspur Suzhou Intelligent Technology Co., Ltd. (苏州浪潮智能科技有限公司)
Application filed by Inspur Suzhou Intelligent Technology Co., Ltd. (苏州浪潮智能科技有限公司)
Priority to US18/245,791 (granted as US11777873B1)
Publication of WO2022100310A1

Classifications

    • H — Electricity; H04 — Electric communication technique; H04L — Transmission of digital information, e.g. telegraphic communication
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L49/9031 Wraparound memory, e.g. overrun or underrun detection
    • H04L49/9047 Buffering arrangements including multiple buffers, e.g. buffer pools
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6255 Queue scheduling characterised by scheduling criteria for service slots or service orders queue load conditions, e.g. longest queue first
    • H04L47/70 Admission control; Resource allocation
    • H04L47/80 Actions related to the user profile or the type of traffic
    • H04L47/803 Application aware

Definitions

  • the present application belongs to the technical field of network card data processing, and in particular relates to a network card data packet cache management method, device, terminal and storage medium.
  • in order to improve packet-receiving efficiency, the network card driver caches the received data packets, and the upper-layer network application consumes the data packets from the cache structure; handling data packets through a cache can greatly improve the efficiency with which the network card sends and receives packets.
  • because the network card driver and the upper-layer application need to process the data packets in the cache structure at the same time, adding or deleting elements in the cache structure usually requires locking/unlocking the cache structure's elements, which affects the efficiency of the network card's sending and receiving of packets and may even deadlock under abnormal conditions, causing the driver to crash.
  • the present application provides a network card data packet cache management method, device, terminal and storage medium to solve the above technical problems.
  • the present application provides a network card data packet cache management method, comprising the following steps:
  • the network card driver receives the data packets from the data link, classifies them, buffers the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then maps the addresses of the buffered data packets in the busy queue to the idle queue in turn;
  • the upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue and processes them, and after the data packet processing is completed, releases in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the idle queue.
  • S5. Acquire the real-time read pointers and write pointers of the busy queue and the idle queue, and judge the status of the buffered data packets in the storage pool according to the relationship between the real-time read and write pointers of the two queues, and whether a buffer pool needs to be added.
  • step S1 is as follows:
  • step S2 is as follows:
  • the network card driver receives data packets from the data link to the receiving link ring;
  • the network card driver performs hash classification on the data packet
  • the network card driver sequentially buffers the classified data packets into the busy queue through the write pointer of the busy queue;
  • the network card driver maps the addresses of the buffered data packets in the busy queue to the idle queue in sequence.
  • step S4 is as follows:
  • the upper-layer application sequentially obtains the buffered data packets through the read pointer of the busy queue;
  • the upper-layer application processes the cached data packets in sequence
  • the upper-layer application sequentially acquires and releases the addresses in the busy queue of the processed buffered data packets from the idle queue through the write pointer of the idle queue.
  • step S5 is as follows:
  • if not, go to step S53 or step S54;
  • if yes, it is determined that the busy queue is full, and the process goes to step S55;
  • if yes, it is determined that the free memory addresses of the free queue have been exhausted, and the process goes to step S55;
  • S55: repeat step S1 to add a buffer pool.
  • the present application provides a network card data packet cache management device, including:
  • the buffer pool setting module is used to set up the ring buffer queues, set the length of the ring buffer queues according to the total buffer space size and the number of upper-layer application threads, then set the two ring buffer queues as one buffer pool, and set the two ring buffer queues in the buffer pool as the busy queue and the idle queue, respectively;
  • the data cache module is used to set the network card driver to receive data packets from the data link, classify the data packets, buffer the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then map the addresses of the data packets buffered in the busy queue to the idle queue in turn;
  • the cache address acquisition module obtains the latest address of the cached data packet in the busy queue through the idle queue read pointer;
  • the data processing module is used for the upper-layer application to obtain the buffered data packets in turn through the read pointer of the busy queue and process them, and, after the data packet processing is completed, to release in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the idle queue.
  • the module for determining the number of caches is used to obtain the real-time read and write pointers of the busy queue and the idle queue, and according to the relationship between the real-time read and write pointers of the two queues, determine the status of the cached data packets in the storage pool and whether it is necessary to add a cache pool.
  • a terminal including:
  • a processor and a memory, wherein,
  • the memory is used to store computer programs
  • the processor is used to call and run the computer program from the memory, so that the terminal executes the method of the first aspect described above.
  • a computer storage medium is provided, and instructions are stored in the computer-readable storage medium, which, when executed on a computer, cause the computer to execute the method described in the first aspect.
  • the network card data packet buffer management method, device, terminal and storage medium manage the receiving, sending and releasing of data packet buffers through two ring queues; the network card driver and the upper-layer application each operate the read and write pointers on the ring queues, and the ring property is used to implement the receiving, sending and releasing of data packets, so that the cache structure elements in the ring queues need not be locked and unlocked during data packet processing, which improves the network card's packet send/receive efficiency and the stability of the network card driver's operation.
  • Fig. 1 is a first schematic flowchart of the method of the present application.
  • Fig. 2 is a second schematic flowchart of the method of the present application.
  • Fig. 3 is a schematic diagram of the system of the present application.
  • 1 - buffer pool setting module; 1.1 - queue length calculation unit; 1.2 - ring buffer queue creation unit; 1.3 - ring buffer queue role-assignment unit; 1.4 - buffer pool setting unit; 2 - data buffering module; 2.1 - data packet receiving unit; 2.2 - data packet classification unit; 2.3 - data packet buffering unit; 2.4 - address storage unit; 3 - cache address acquisition module; 4 - data processing module; 4.1 - data packet acquisition unit; 4.2 - data packet processing unit; 4.3 - address release unit; 5 - cache quantity determination module; 5.1 - pointer acquisition unit; 5.2 - pointer equality judgment unit; 5.3 - queue-empty determination unit; 5.4 - busy-queue write pointer judgment unit; 5.5 - busy-queue-full determination unit; 5.6 - idle-queue read pointer judgment unit; 5.7 - idle-address-exhaustion determination unit; 5.8 - buffer pool extension unit.
  • the present application provides a network card data packet cache management method, which includes the following steps:
  • the network card driver receives the data packets from the data link, classifies them, buffers the classified data packets into the busy queue through the write pointer of the busy queue, and then maps the addresses of the buffered data packets in the busy queue to the idle queue in turn;
  • the upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue and processes them, and after the data packet processing is completed, releases in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the idle queue.
  • the present application provides a network card data packet cache management method, comprising the following steps:
  • the network card driver receives the data packets from the data link, classifies them, buffers the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then maps the addresses of the buffered data packets in the busy queue to the idle queue in turn; the specific steps are as follows:
  • the network card driver receives data packets from the data link to the receiving link ring;
  • the network card driver performs hash classification on the data packet
  • the network card driver sequentially buffers the classified data packets into the busy queue through the write pointer of the busy queue;
  • the network card driver maps the addresses of the busy queue buffered packets to the free queue in turn;
  • the upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue and processes them, and after the data packet processing is completed, releases in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the idle queue; the specific steps are as follows:
  • the upper-layer application sequentially obtains the buffered data packets through the read pointer of the busy queue;
  • the upper-layer application processes the cached data packets in sequence
  • the upper-layer application sequentially acquires and releases the addresses in the busy queue of the processed buffered data packets from the idle queue through the write pointer of the idle queue;
  • if not, go to step S53 or step S54;
  • if yes, it is determined that the busy queue is full, and the process goes to step S55;
  • if yes, it is determined that the free memory addresses of the free queue have been exhausted, and the process goes to step S55;
  • S55: repeat step S1 to add a buffer pool.
  • the buffer pool may also be set to multiple groups in advance according to the number of threads for sending and receiving packets of the upper-layer application.
  • the network card driver only operates the write pointer and the read pointer of the busy queue
  • the read pointer of the busy queue can never pass the write pointer;
  • in the initial state, the read pointer of the busy queue is at the same position as the write pointer;
  • the upper-layer application only operates the write pointer and the read pointer of the free queue
  • the read pointer of the free queue can never pass the write pointer; in the initial state, the read pointer and the write pointer are located at the first free memory address;
  • the present application provides a network card data packet cache management device, including:
  • the buffer pool setting module 1 is used to set up the ring buffer queues, set the length of the ring buffer queues according to the total buffer space size and the number of upper-layer application threads, then set the two ring buffer queues as one buffer pool, and set the two ring buffer queues in the buffer pool as the busy queue and the free queue, respectively;
  • the buffer pool setting module 1 includes:
  • the ring buffer queue creation unit 1.2 is used to create two ring buffer queues with a set ring buffer queue length
  • the ring buffer queue division of labor setting unit 1.3 is used to set one of the two ring buffer queues as a busy queue and the other as an idle queue;
  • the buffer pool setting unit 1.4 is used to set the busy queue and the free queue as a buffer pool
  • the data cache module 2 is used to set the network card driver to receive data packets from the data link, classify the data packets, buffer the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then map the addresses of the data packets buffered in the busy queue to the idle queue in turn; the data cache module 2 includes:
  • the data packet receiving unit 2.1 is used to set the network card driver to receive data packets from the data link to the receiving link ring;
  • the data packet classification unit 2.2 is used to set the network card driver to perform hash classification on the data packets
  • the data packet buffer unit 2.3 is used to set the network card driver to sequentially buffer the classified data packets into the busy queue through the write pointer of the busy queue;
  • the address storage unit 2.4 is used to set the network card driver to sequentially map the addresses of the busy queue buffer data packets to the idle queue;
  • the cache address obtaining module 3 obtains the latest address of the cached data packet in the busy queue through the idle queue read pointer;
  • the data processing module 4 is used for the upper-layer application to obtain the buffered data packets in turn through the read pointer of the busy queue and process them, and, after the data packet processing is completed, to release in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the idle queue; the data processing module 4 includes:
  • the data packet acquisition unit 4.1 is used to set the upper-layer application to sequentially acquire the buffered data packets through the read pointer of the busy queue;
  • the data packet processing unit 4.2 is used for sequentially processing the buffered data packets
  • the address release unit 4.3 is used to set the upper-layer application to obtain and release the address of the processed buffer data packet in the busy queue successively from the idle queue through the write pointer of the idle queue;
  • the buffer quantity determination module 5 is used to obtain the real-time read and write pointers of the busy queue and the idle queue, and to determine, according to the relationship between the two queues' real-time read/write pointers, the status of the buffered data packets in the storage pool and whether a buffer pool needs to be added; the buffer quantity determination module 5 includes:
  • the pointer acquisition unit 5.1 is used to acquire the real-time read pointers and write pointers of the busy queue and the idle queue;
  • the pointer equality judgment unit 5.2 is used to judge whether the read pointer and the write pointer of the busy queue are equal;
  • the queue-empty determination unit 5.3 is used to determine, when the read pointer and the write pointer of the busy queue are equal, that the busy queue is empty and no data packets are buffered;
  • the busy-queue write pointer judgment unit 5.4 is used to judge whether the next write pointer of the busy queue is the read pointer, that is, busy queue write pointer + 1 = busy queue read pointer;
  • the busy-queue-full determination unit 5.5 is used to determine, when the next write pointer of the busy queue is the read pointer, that the busy queue is full;
  • the idle-queue read pointer judgment unit 5.6 is used to judge whether the next read pointer of the free queue is greater than the write pointer, that is, free queue read pointer + 1 > free queue write pointer;
  • the idle address exhaustion determination unit 5.7 is used to determine that the idle memory address of the idle queue has been exhausted when the next read pointer of the idle queue is greater than the write pointer;
  • the buffer pool adding unit 5.8 is used to add a buffer pool when the busy queue is full and the free memory addresses of the free queue have been exhausted.
  • the present application provides a terminal
  • a processor and a memory, wherein,
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the terminal executes the method described in Embodiment 1 or Embodiment 2 above.
  • the present application provides a storage medium, where instructions are stored in the computer-readable storage medium, when the computer-readable storage medium runs on a computer, the computer executes the method described in Embodiment 1 or Embodiment 2 above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application provides a network card data packet cache management method, device, terminal and storage medium. The method comprises: setting up ring buffer queues, setting the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then setting two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively; the network card driver receiving data packets from the data link, classifying them, buffering the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then mapping the addresses of the data packets buffered in the busy queue to the free queue in turn; obtaining, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue; and the upper-layer application obtaining and processing the buffered data packets in turn through the read pointer of the busy queue, and releasing in turn, through the write pointer of the free queue, the addresses of the processed buffered data packets in the busy queue.

Description

Network card data packet cache management method, device, terminal and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on November 12, 2020 with application number 202011264058.9 and entitled "Network card data packet cache management method, device, terminal and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the technical field of network card data processing, and in particular relates to a network card data packet cache management method, device, terminal and storage medium.
Background
To improve packet-receiving efficiency, a network card driver caches the data packets it receives, and the upper-layer network application consumes the data packets from the cache structure; handling data packets through a cache can greatly improve the efficiency with which the network card sends and receives packets. However, because the network card driver and the upper-layer application must process the data packets in the cache structure at the same time, adding or deleting elements in the cache structure usually requires locking/unlocking the cache structure's elements, which reduces the efficiency of sending and receiving packets to some extent and, under abnormal conditions, may even deadlock and crash the driver.
This is a deficiency of the prior art, and it is therefore highly necessary to provide a network card data packet cache management method, device, terminal and storage medium that address the above defects of the prior art.
Summary
In view of the prior-art defect that the cache structure's elements must be locked and unlocked when the network card driver and the upper-layer application process data simultaneously, which reduces the network card's packet send/receive efficiency, the present application provides a network card data packet cache management method, device, terminal and storage medium to solve the above technical problem.
In a first aspect, the present application provides a network card data packet cache management method, comprising the following steps:
S1. Set up ring buffer queues, set the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then set two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively;
S2. The network card driver receives data packets from the data link, classifies them, buffers the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then maps the addresses of the data packets buffered in the busy queue to the free queue in turn;
S3. Obtain, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
S4. The upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue and processes them; after the data packet processing is completed, the addresses of the processed buffered data packets in the busy queue are released in turn through the write pointer of the free queue.
Further, the method also comprises the following step:
S5. Obtain the real-time read and write pointers of the busy queue and the free queue, and from the relationship between the two queues' real-time read/write pointers determine the state of the buffered data packets in the storage pool and whether another buffer pool needs to be added.
Further, step S1 specifically comprises the following steps:
S11. Calculate: ring buffer queue length = total buffer space size / (number of upper-layer application threads × size of a single classified data packet);
S12. Create two ring buffer queues with the set ring buffer queue length;
S13. Set one of the two ring buffer queues as the busy queue and the other as the free queue;
S14. Set the busy queue and the free queue as one buffer pool.
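To make the S11 sizing rule concrete, here is a minimal sketch in Python (the 16 MiB pool, 4 threads, and 2 KiB packet slot are invented example values, not figures from this application):

```python
def ring_queue_length(total_buffer_bytes: int, app_threads: int,
                      packet_bytes: int) -> int:
    """S11: ring buffer queue length = total buffer space size /
    (number of upper-layer application threads * size of a single
    classified data packet)."""
    return total_buffer_bytes // (app_threads * packet_bytes)

# Assumed example: a 16 MiB buffer pool shared by 4 upper-layer
# application threads, with classified packets padded to 2 KiB slots.
slots = ring_queue_length(16 * 1024 * 1024, 4, 2 * 1024)  # 2048 slots
```

Integer division is used because a ring queue length must be a whole number of slots; the values chosen only illustrate the proportionality in the formula.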
Further, step S2 specifically comprises the following steps:
S21. The network card driver receives data packets from the data link into the receive link ring;
S22. The network card driver performs hash classification on the data packets;
S23. The network card driver buffers the classified data packets into the busy queue in turn through the write pointer of the busy queue;
S24. The network card driver maps the addresses of the data packets buffered in the busy queue to the free queue in turn.
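The hash classification of step S22 can be sketched as follows. A real driver would use the NIC's hardware hash over the packet's flow tuple; here `zlib.crc32` stands in for it, so the concrete class numbers are illustrative, but the key property holds: all packets of one flow land in the same class.

```python
import zlib

def hash_classify(flow, n_classes):
    """S22 sketch: map a packet's flow tuple (src IP, dst IP, src
    port, dst port, protocol) to one of n_classes receive classes.
    zlib.crc32 is only a stand-in for the NIC's hardware hash."""
    key = ":".join(str(field) for field in flow).encode()
    return zlib.crc32(key) % n_classes

flow = ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
cls = hash_classify(flow, 4)  # same flow always maps to the same class
```

Stable per-flow classification is what lets each upper-layer thread consume one class without coordinating with the others.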
Further, step S4 specifically comprises the following steps:
S41. The upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue;
S42. The upper-layer application processes the buffered data packets in turn;
S43. The upper-layer application, through the write pointer of the free queue, obtains from the free queue and releases in turn the addresses of the processed buffered data packets in the busy queue.
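Steps S41-S43 can be sketched as a consumer loop over explicit read/write pointers (a single-threaded Python illustration; `str.upper` stands in for real packet processing, and all names are assumed for the sketch):

```python
def app_consume(busy_slots, busy_rp, busy_wp, free_slots, free_wp):
    """S41-S43 sketch: walk the busy queue's read pointer, process
    each buffered packet, and release its busy-queue address through
    the free queue's write pointer. Returns the processed packets
    and the advanced pointers."""
    processed = []
    while busy_rp != busy_wp:                 # rp == wp: busy queue empty
        addr = busy_rp
        packet = busy_slots[addr]             # S41: fetch via read pointer
        processed.append(packet.upper())      # S42: stand-in processing
        free_slots[free_wp] = addr            # S43: release the address
        free_wp = (free_wp + 1) % len(free_slots)
        busy_rp = (busy_rp + 1) % len(busy_slots)
    return processed, busy_rp, free_wp

busy = ["pkt-a", "pkt-b", "pkt-c", None]      # three buffered packets
done, rp, fwp = app_consume(busy, 0, 3, [None] * 4, 0)
```

Note that the consumer only ever advances the busy queue's read pointer and the free queue's write pointer; this single-writer-per-pointer property is what the lock-free claim rests on.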
Further, step S5 specifically comprises the following steps:
S51. Obtain the real-time read and write pointers of the busy queue and the free queue;
S52. Determine whether the read pointer and the write pointer of the busy queue are equal;
if yes, determine that the busy queue is empty with no buffered data packets, and return to step S2;
if no, go to step S53 or step S54;
S53. Determine whether the next write pointer of the busy queue is the read pointer, i.e., busy queue write pointer + 1 = busy queue read pointer;
if yes, determine that the busy queue is full, and go to step S55;
if no, return to step S2;
S54. Determine whether the next read pointer of the free queue is greater than the write pointer, i.e., free queue read pointer + 1 > free queue write pointer;
if yes, determine that the free memory addresses of the free queue have been exhausted, and go to step S55;
if no, return to step S2;
S55. Repeat step S1 to add a buffer pool.
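The S52-S54 decision chain can be written as one status function. This is a hedged sketch: the busy-queue full check is shown with an explicit modulo for wraparound, whereas the steps above write the comparisons as plain "+1" arithmetic; the return strings are invented labels.

```python
def pool_status(busy_rp, busy_wp, free_rp, free_wp, length):
    """S52-S54 sketch: derive the buffer pool's state from the
    real-time read/write pointers of the busy and free queues."""
    if busy_rp == busy_wp:                 # S52: pointers equal
        return "busy queue empty"          # -> return to S2
    if (busy_wp + 1) % length == busy_rp:  # S53: write pointer + 1 == read pointer
        return "busy queue full"           # -> S55: add a buffer pool
    if free_rp + 1 > free_wp:              # S54: read pointer + 1 > write pointer
        return "free addresses exhausted"  # -> S55: add a buffer pool
    return "normal"                        # -> return to S2

state = pool_status(0, 0, 0, 0, 8)         # nothing buffered yet
```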
In a second aspect, the present application provides a network card data packet cache management device, comprising:
a buffer pool setting module, used to set up ring buffer queues, set the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then set two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively;
a data buffering module, used to set the network card driver to receive data packets from the data link, classify them, buffer the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then map the addresses of the data packets buffered in the busy queue to the free queue in turn;
a buffer address acquisition module, which obtains, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
a data processing module, used for the upper-layer application to obtain the buffered data packets in turn through the read pointer of the busy queue and process them, and, after the data packet processing is completed, to release in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the free queue.
Further, the device also comprises:
a buffer quantity determination module, used to obtain the real-time read and write pointers of the busy queue and the free queue, and to determine, from the relationship between the two queues' real-time read/write pointers, the state of the buffered data packets in the storage pool and whether another buffer pool needs to be added.
In a third aspect, a terminal is provided, comprising:
a processor and a memory, wherein
the memory is used to store a computer program, and
the processor is used to call and run the computer program from the memory, so that the terminal performs the method of the first aspect above.
In a fourth aspect, a computer storage medium is provided, the computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method of the first aspect above.
The beneficial effects of the present application are as follows:
The network card data packet cache management method, device, terminal and storage medium provided by the present application manage the receiving, sending and releasing of data packet buffers through two ring queues. The network card driver and the upper-layer application each operate the read and write pointers on the ring queues, and the ring property is used to implement the receiving, sending and releasing of data packets, so that the cache structure elements in the ring queues need not be locked and unlocked during data packet processing, improving the network card's packet send/receive efficiency and the stability of the network card driver's operation.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art could obtain other drawings from these drawings without creative effort.
Fig. 1 is a first schematic flowchart of the method of the present application;
Fig. 2 is a second schematic flowchart of the method of the present application;
Fig. 3 is a schematic diagram of the system of the present application.
In the figures: 1 - buffer pool setting module; 1.1 - queue length calculation unit; 1.2 - ring buffer queue creation unit; 1.3 - ring buffer queue role-assignment unit; 1.4 - buffer pool setting unit; 2 - data buffering module; 2.1 - data packet receiving unit; 2.2 - data packet classification unit; 2.3 - data packet buffering unit; 2.4 - address storage unit; 3 - buffer address acquisition module; 4 - data processing module; 4.1 - data packet acquisition unit; 4.2 - data packet processing unit; 4.3 - address release unit; 5 - buffer quantity determination module; 5.1 - pointer acquisition unit; 5.2 - pointer equality judgment unit; 5.3 - queue-empty determination unit; 5.4 - busy-queue write pointer judgment unit; 5.5 - busy-queue-full determination unit; 5.6 - free-queue read pointer judgment unit; 5.7 - free-address-exhaustion determination unit; 5.8 - buffer pool addition unit.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort shall fall within the scope of protection of the present application.
Embodiment 1:
As shown in Fig. 1, the present application provides a network card data packet cache management method, comprising the following steps:
S1. Set up ring buffer queues, set the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then set two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively;
S2. The network card driver receives data packets from the data link, classifies them, buffers the classified data packets into the busy queue through the write pointer of the busy queue, and then maps the addresses of the data packets buffered in the busy queue to the free queue in turn;
S3. Obtain, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
S4. The upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue and processes them; after the data packet processing is completed, the addresses of the processed buffered data packets in the busy queue are released in turn through the write pointer of the free queue.
Embodiment 2:
As shown in Fig. 2, the present application provides a network card data packet cache management method, comprising the following steps:
S1. Set up ring buffer queues, set the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then set two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively; the specific steps are as follows:
S11. Calculate: ring buffer queue length = total buffer space size / (number of upper-layer application threads × size of a single classified data packet);
S12. Create two ring buffer queues with the set ring buffer queue length;
S13. Set one of the two ring buffer queues as the busy queue and the other as the free queue;
S14. Set the busy queue and the free queue as one buffer pool;
S2. The network card driver receives data packets from the data link, classifies them, buffers the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then maps the addresses of the data packets buffered in the busy queue to the free queue in turn; the specific steps are as follows:
S21. The network card driver receives data packets from the data link into the receive link ring;
S22. The network card driver performs hash classification on the data packets;
S23. The network card driver buffers the classified data packets into the busy queue in turn through the write pointer of the busy queue;
S24. The network card driver maps the addresses of the data packets buffered in the busy queue to the free queue in turn;
S3. Obtain, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
S4. The upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue and processes them; after the data packet processing is completed, the addresses of the processed buffered data packets in the busy queue are released in turn through the write pointer of the free queue; the specific steps are as follows:
S41. The upper-layer application obtains the buffered data packets in turn through the read pointer of the busy queue;
S42. The upper-layer application processes the buffered data packets in turn;
S43. The upper-layer application, through the write pointer of the free queue, obtains from the free queue and releases in turn the addresses of the processed buffered data packets in the busy queue;
S5. Obtain the real-time read and write pointers of the busy queue and the free queue, and from the relationship between the two queues' real-time read/write pointers determine the state of the buffered data packets in the storage pool and whether another buffer pool needs to be added; the specific steps are as follows:
S51. Obtain the real-time read and write pointers of the busy queue and the free queue;
S52. Determine whether the read pointer and the write pointer of the busy queue are equal;
if yes, determine that the busy queue is empty with no buffered data packets, and return to step S2;
if no, go to step S53 or step S54;
S53. Determine whether the next write pointer of the busy queue is the read pointer, i.e., busy queue write pointer + 1 = busy queue read pointer;
if yes, determine that the busy queue is full, and go to step S55;
if no, return to step S2;
S54. Determine whether the next read pointer of the free queue is greater than the write pointer, i.e., free queue read pointer + 1 > free queue write pointer;
if yes, determine that the free memory addresses of the free queue have been exhausted, and go to step S55;
if no, return to step S2;
S55. Repeat step S1 to add a buffer pool.
In some embodiments, the buffer pool may also be set up in advance as multiple groups according to the number of packet send/receive threads of the upper-layer application.
In some embodiments, the network card driver operates only the write pointer and the read pointer of the busy queue;
the read pointer of the busy queue can never pass the write pointer; in the initial state, the read pointer of the busy queue is at the same position as the write pointer;
when the read pointer of the busy queue equals the write pointer, the busy queue is empty: the buffered data packets have all been processed and no data packets are buffered at that moment;
when the position after the write pointer is the read pointer, i.e., bwp + 1 == brp, the busy queue is full and packet loss will occur.
In some embodiments, the upper-layer application operates only the write pointer and the read pointer of the free queue;
the read pointer of the free queue can never pass the write pointer; in the initial state, the read pointer and the write pointer are located at the first free memory address;
when the position after the free queue's read pointer passes the write pointer, the memory addresses of the free queue have been exhausted; data packets can no longer be buffered and packet loss will occur.
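Because each pointer has a single writer under the discipline above, the two rings can cooperate without locks. The following is a compact single-threaded end-to-end sketch of one buffer pool under that assumption; the class name, the 8-slot length, and the packet strings are all illustrative, and the full/empty conditions mirror bwp + 1 == rp and rp == wp:

```python
class Ring:
    """One ring buffer queue; the read pointer never passes the write
    pointer, and one slot is sacrificed so that full (wp + 1 == rp)
    can be told apart from empty (rp == wp)."""
    def __init__(self, length):
        self.buf = [None] * length
        self.rp = self.wp = 0              # initial state: rp == wp

    def empty(self):
        return self.rp == self.wp

    def full(self):
        return (self.wp + 1) % len(self.buf) == self.rp

    def put(self, item):
        if self.full():                    # a full busy queue drops packets
            return False                   # (or triggers adding a pool, S55)
        self.buf[self.wp] = item
        self.wp = (self.wp + 1) % len(self.buf)
        return True

    def get(self):
        if self.empty():
            return None
        item = self.buf[self.rp]
        self.rp = (self.rp + 1) % len(self.buf)
        return item

# One buffer pool = a busy queue plus a free queue.
busy, free = Ring(8), Ring(8)

# Driver side (S23/S24): buffer classified packets via busy.wp and
# map each packet's busy-queue address into the free queue.
for pkt in ["p0", "p1", "p2"]:
    addr = busy.wp
    if busy.put(pkt):
        free.put(addr)

# Application side (S41-S43): fetch and process packets via busy.rp;
# the addresses parked in the free queue are then free for reuse.
processed = [busy.get() for _ in range(3)]
reusable = [free.get() for _ in range(3)]
```

In a real driver the two sides run concurrently; the sketch relies on the same invariant the text states — each side advances only the pointers it owns, so no lock is needed on the ring elements.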
Embodiment 3:
As shown in Fig. 3, the present application provides a network card data packet cache management device, comprising:
a buffer pool setting module 1, used to set up ring buffer queues, set the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then set two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively; the buffer pool setting module 1 comprises:
a queue length calculation unit 1.1, used to calculate: ring buffer queue length = total buffer space size / (number of upper-layer application threads × size of a single classified data packet);
a ring buffer queue creation unit 1.2, used to create two ring buffer queues with the set ring buffer queue length;
a ring buffer queue role-assignment unit 1.3, used to set one of the two ring buffer queues as the busy queue and the other as the free queue;
a buffer pool setting unit 1.4, used to set the busy queue and the free queue as one buffer pool;
a data buffering module 2, used to set the network card driver to receive data packets from the data link, classify them, buffer the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then map the addresses of the data packets buffered in the busy queue to the free queue in turn; the data buffering module 2 comprises:
a data packet receiving unit 2.1, used to set the network card driver to receive data packets from the data link into the receive link ring;
a data packet classification unit 2.2, used to set the network card driver to perform hash classification on the data packets;
a data packet buffering unit 2.3, used to set the network card driver to buffer the classified data packets into the busy queue in turn through the write pointer of the busy queue;
an address storage unit 2.4, used to set the network card driver to map the addresses of the data packets buffered in the busy queue to the free queue in turn;
a buffer address acquisition module 3, which obtains, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
a data processing module 4, used for the upper-layer application to obtain the buffered data packets in turn through the read pointer of the busy queue and process them, and, after the data packet processing is completed, to release in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the free queue; the data processing module 4 comprises:
a data packet acquisition unit 4.1, used to set the upper-layer application to obtain the buffered data packets in turn through the read pointer of the busy queue;
a data packet processing unit 4.2, used to process the buffered data packets in turn;
an address release unit 4.3, used to set the upper-layer application to obtain from the free queue and release in turn, through the write pointer of the free queue, the addresses of the processed buffered data packets in the busy queue;
a buffer quantity determination module 5, used to obtain the real-time read and write pointers of the busy queue and the free queue, and to determine, from the relationship between the two queues' real-time read/write pointers, the state of the buffered data packets in the storage pool and whether another buffer pool needs to be added; the buffer quantity determination module 5 comprises:
a pointer acquisition unit 5.1, used to obtain the real-time read and write pointers of the busy queue and the free queue;
a pointer equality judgment unit 5.2, used to judge whether the read pointer and the write pointer of the busy queue are equal;
a queue-empty determination unit 5.3, used to determine, when the read pointer and the write pointer of the busy queue are equal, that the busy queue is empty and no data packets are buffered;
a busy-queue write pointer judgment unit 5.4, used to judge whether the next write pointer of the busy queue is the read pointer, i.e., busy queue write pointer + 1 = busy queue read pointer;
a busy-queue-full determination unit 5.5, used to determine, when the next write pointer of the busy queue is the read pointer, that the busy queue is full;
a free-queue read pointer judgment unit 5.6, used to judge whether the next read pointer of the free queue is greater than the write pointer, i.e., free queue read pointer + 1 > free queue write pointer;
a free-address-exhaustion determination unit 5.7, used to determine, when the next read pointer of the free queue is greater than the write pointer, that the free memory addresses of the free queue have been exhausted;
a buffer pool addition unit 5.8, used to add a buffer pool when the busy queue is full and the free memory addresses of the free queue have been exhausted.
Embodiment 4:
The present application provides a terminal, comprising:
a processor and a memory, wherein
the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that the terminal performs the method described in Embodiment 1 or Embodiment 2 above.
Embodiment 5:
The present application provides a storage medium, the computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method described in Embodiment 1 or Embodiment 2 above.
Although the present application has been described in detail with reference to the drawings and in connection with the preferred embodiments, the present application is not limited thereto. Without departing from the spirit and essence of the present application, a person of ordinary skill in the art may make various equivalent modifications or substitutions to the embodiments of the present application, and any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (10)

  1. A network card data packet cache management method, characterized by comprising the following steps:
    S1. setting up ring buffer queues, setting the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then setting two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively;
    S2. the network card driver receiving data packets from the data link, classifying them, buffering the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then mapping the addresses of the data packets buffered in the busy queue to the free queue in turn;
    S3. obtaining, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
    S4. the upper-layer application obtaining the buffered data packets in turn through the read pointer of the busy queue and processing them, and, after the data packet processing is completed, releasing in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the free queue.
  2. The network card data packet cache management method according to claim 1, characterized by further comprising the following step:
    S5. obtaining the real-time read and write pointers of the busy queue and the free queue, and determining, from the relationship between the two queues' real-time read/write pointers, the state of the buffered data packets in the storage pool and whether another buffer pool needs to be added.
  3. The network card data packet cache management method according to claim 2, characterized in that step S1 specifically comprises the following steps:
    S11. calculating: ring buffer queue length = total buffer space size / (number of upper-layer application threads × size of a single classified data packet);
    S12. creating two ring buffer queues with the set ring buffer queue length;
    S13. setting one of the two ring buffer queues as the busy queue and the other as the free queue;
    S14. setting the busy queue and the free queue as one buffer pool.
  4. The network card data packet cache management method according to claim 3, characterized in that step S2 specifically comprises the following steps:
    S21. the network card driver receiving data packets from the data link into the receive link ring;
    S22. the network card driver performing hash classification on the data packets;
    S23. the network card driver buffering the classified data packets into the busy queue in turn through the write pointer of the busy queue;
    S24. the network card driver mapping the addresses of the data packets buffered in the busy queue to the free queue in turn.
  5. The network card data packet cache management method according to claim 4, characterized in that step S4 specifically comprises the following steps:
    S41. the upper-layer application obtaining the buffered data packets in turn through the read pointer of the busy queue;
    S42. the upper-layer application processing the buffered data packets in turn;
    S43. the upper-layer application obtaining from the free queue and releasing in turn, through the write pointer of the free queue, the addresses of the processed buffered data packets in the busy queue.
  6. The network card data packet cache management method according to claim 5, characterized in that step S5 specifically comprises the following steps:
    S51. obtaining the real-time read and write pointers of the busy queue and the free queue;
    S52. determining whether the read pointer and the write pointer of the busy queue are equal;
    if yes, determining that the busy queue is empty with no buffered data packets, and returning to step S2;
    if no, going to step S53 or step S54;
    S53. determining whether the next write pointer of the busy queue is the read pointer, i.e., busy queue write pointer + 1 = busy queue read pointer;
    if yes, determining that the busy queue is full, and going to step S55;
    if no, returning to step S2;
    S54. determining whether the next read pointer of the free queue is greater than the write pointer, i.e., free queue read pointer + 1 > free queue write pointer;
    if yes, determining that the free memory addresses of the free queue have been exhausted, and going to step S55;
    if no, returning to step S2;
    S55. repeating step S1 to add a buffer pool.
  7. A network card data packet cache management device, characterized by comprising:
    a buffer pool setting module (1), used to set up ring buffer queues, set the ring buffer queue length according to the total buffer space size and the number of upper-layer application threads, then set two ring buffer queues as one buffer pool, the two ring buffer queues in the buffer pool being a busy queue and a free queue, respectively;
    a data buffering module (2), used to set the network card driver to receive data packets from the data link, classify them, buffer the classified data packets into the busy queue in turn through the write pointer of the busy queue, and then map the addresses of the data packets buffered in the busy queue to the free queue in turn;
    a buffer address acquisition module (3), which obtains, through the read pointer of the free queue, the latest address of a buffered data packet in the busy queue;
    a data processing module (4), used for the upper-layer application to obtain the buffered data packets in turn through the read pointer of the busy queue and process them, and, after the data packet processing is completed, to release in turn the addresses of the processed buffered data packets in the busy queue through the write pointer of the free queue.
  8. The network card data packet cache management device according to claim 7, characterized by further comprising:
    a buffer quantity determination module (5), used to obtain the real-time read and write pointers of the busy queue and the free queue, and to determine, from the relationship between the two queues' real-time read/write pointers, the state of the buffered data packets in the storage pool and whether another buffer pool needs to be added.
  9. A terminal, characterized by comprising:
    a processor and a memory, wherein
    the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that the terminal performs the method according to any one of claims 1-6.
  10. A storage medium, characterized in that instructions are stored in the computer-readable storage medium which, when run on a computer, cause the computer to perform the method according to any one of claims 1-6.
PCT/CN2021/121432 2020-11-12 2021-09-28 Network card data packet cache management method, device, terminal and storage medium WO2022100310A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/245,791 US11777873B1 (en) 2020-11-12 2021-09-28 Method and apparatus for managing buffering of data packet of network card, terminal and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011264058.9A CN112491979B (zh) 2020-11-12 2020-11-12 Network card data packet cache management method, device, terminal and storage medium
CN202011264058.9 2020-11-12

Publications (1)

Publication Number Publication Date
WO2022100310A1 true WO2022100310A1 (zh) 2022-05-19

Family

ID=74930385

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121432 WO2022100310A1 (zh) 2021-09-28 Network card data packet cache management method, device, terminal and storage medium

Country Status (3)

Country Link
US (1) US11777873B1 (zh)
CN (1) CN112491979B (zh)
WO (1) WO2022100310A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116501657A * 2023-06-19 2023-07-28 Alibaba (China) Co., Ltd. Cache data processing method, device and system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112491979B (zh) 2020-11-12 2022-12-02 Inspur Suzhou Intelligent Technology Co., Ltd. Network card data packet cache management method, device, terminal and storage medium
CN114189462B (zh) 2021-12-08 2024-01-23 Beijing Topsec Network Security Technology Co., Ltd. Traffic collection method and device, electronic device and storage medium
CN117407148B (zh) 2022-07-08 2024-06-18 Huawei Technologies Co., Ltd. Data writing method, data reading method, device, electronic device and storage medium
CN115865831A (zh) 2023-02-25 2023-03-28 Guangzhou Yihui Information Technology Co., Ltd. Method for accelerating network performance based on multiple queues

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764760A * 2010-03-24 2010-06-30 Shenzhen Zhongke Xinye Information Technology Development Co., Ltd. Multi-link packet capture method, multi-link packet processing method and system
CN103391256A * 2013-07-25 2013-11-13 Wuhan Research Institute of Posts and Telecommunications Optimization method for base station user-plane data processing based on a Linux system
WO2016101473A1 * 2014-12-26 2016-06-30 ZTE Corporation Counting processing method and device
CN109062826A * 2018-08-16 2018-12-21 Sophon Technology (Beijing) Co., Ltd. Data transmission method and system
CN109783250A * 2018-12-18 2019-05-21 ZTE Corporation Packet forwarding method and network device
US20200218662A1 * 2017-09-29 2020-07-09 SZ DJI Technology Co., Ltd. Data caching device and control method therefor, data processing chip, and data processing system
CN112491979A * 2020-11-12 2021-03-12 Inspur Suzhou Intelligent Technology Co., Ltd. Network card data packet cache management method, device, terminal and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101267293B * 2008-04-18 2011-03-30 Tsinghua University Streaming media covert communication method based on a hierarchical model
CN101707564B * 2009-12-04 2012-10-10 Dawning Information Industry (Beijing) Co., Ltd. Processing method and device for sending and receiving network data in zero-copy buffer queues
JP5817193B2 * 2011-04-15 2015-11-18 Seiko Epson Corporation Recording device, recording device control method, and program
CN102546386A * 2011-10-21 2012-07-04 Beijing Antiy Electronic Equipment Co., Ltd. Adaptive multi-network-card packet capture method and device
CN102541779B * 2011-11-28 2015-07-08 Dawning Information Industry (Beijing) Co., Ltd. System and method for improving DMA efficiency of multiple data buffers
CN103885527A * 2014-04-15 2014-06-25 Southeast University Clock-skew compensation device based on RRC coding
CN104809075B * 2015-04-20 2017-09-12 University of Electronic Science and Technology of China Solid-state recording device and method with real-time parallel access processing
CN105071973B * 2015-08-28 2018-07-17 Maipu Communication Technology Co., Ltd. Packet receiving method and network device
CN106502934A * 2016-11-09 2017-03-15 Shanghai Engineering Center for Microsatellites High-speed integrated spaceborne data management system
CN108683536B * 2018-05-18 2021-01-12 Northeastern University Configurable dual-mode converged communication method for an asynchronous network-on-chip and interface therefor
CN108733344B * 2018-05-28 2023-07-04 Shenzhen Autel Robotics Co., Ltd. Data reading and writing method, device, and ring queue

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116501657A * 2023-06-19 2023-07-28 Alibaba (China) Co., Ltd. Cache data processing method, device and system
CN116501657B * 2023-06-19 2023-11-10 Alibaba (China) Co., Ltd. Cache data processing method, device and system

Also Published As

Publication number Publication date
US20230291696A1 (en) 2023-09-14
CN112491979B (zh) 2022-12-02
CN112491979A (zh) 2021-03-12
US11777873B1 (en) 2023-10-03

Similar Documents

Publication Publication Date Title
WO2022100310A1 (zh) Network card data packet cache management method, device, terminal and storage medium
US6779084B2 (en) Enqueue operations for multi-buffer packets
US11698929B2 (en) Offload of data lookup operations
US8321385B2 (en) Hash processing in a network communications processor architecture
US8505013B2 (en) Reducing data read latency in a network communications processor architecture
US7269179B2 (en) Control mechanisms for enqueue and dequeue operations in a pipelined network processor
US7219121B2 (en) Symmetrical multiprocessing in multiprocessor systems
US8924687B1 (en) Scalable hash tables
US7366865B2 (en) Enqueueing entries in a packet queue referencing packets
US8539199B2 (en) Hash processing in a network communications processor architecture
US8935483B2 (en) Concurrent, coherent cache access for multiple threads in a multi-core, multi-thread network processor
US7149226B2 (en) Processing data packets
US7647436B1 (en) Method and apparatus to interface an offload engine network interface with a host machine
WO2018107681A1 (zh) Processing method and device in queue operations, and computer storage medium
US11210216B2 (en) Techniques to facilitate a hardware based table lookup
US20110222553A1 (en) Thread synchronization in a multi-thread network communications processor architecture
CN111949568B (zh) Packet processing method and device, and network chip
US20100083259A1 (en) Directing data units to a core supporting tasks
CN113518130B (zh) Packet burst load balancing method and system based on a multi-core processor
US20240045869A1 (en) A method and device of data transmission
WO2020181820A1 (zh) Data caching method and device, computer equipment and storage medium
CN112883041B (zh) Data updating method and device, electronic device and storage medium
WO2024109068A1 (zh) Program monitoring method and device, electronic device and storage medium
US20110302377A1 (en) Automatic Reallocation of Structured External Storage Structures
CN113259274B (zh) Method for handling network packet reordering and load balancing in multi-core mode, and storage medium

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21890838

Country of ref document: EP

Kind code of ref document: A1