WO2016078341A1 - Cache allocation method and apparatus, and network processor - Google Patents

Cache allocation method and apparatus, and network processor

Info

Publication number
WO2016078341A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
port
dynamic
fixed
network processor
Prior art date
Application number
PCT/CN2015/077698
Other languages
English (en)
French (fr)
Inventor
姜海明
孔玲丽
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2016078341A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9005: Buffering arrangements using dynamic buffer space allocation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9063: Intermediate storage in different physical parts of a node or terminal
    • H04L 49/9068: Intermediate storage in different physical parts of a node or terminal in the network interface card

Definitions

  • The present invention relates to the field of communications, and in particular to a cache allocation method and apparatus applied to the ports of a network processor, and to a network processor.
  • Network chips fall into two broad categories: ASICs (Application Specific Integrated Circuits) and NPs (Network Processors). With their high-speed processing and flexible programmability, network processors have become an effective solution for data processing in today's networks.
  • ASIC Application Specific Integrated Circuit
  • NP Network Processor
  • A network processor usually contains two internal units: a packet buffer unit and a packet processing engine.
  • A packet entering the network processor first enters the queue corresponding to its ingress port in the packet buffer unit. The packet header then enters the packet processing engine, where microcode processes and modifies it; the modified header re-enters the packet buffer unit, is linked with the original packet fetched from the corresponding queue, and the packet is sent out from the port.
  • The packet buffer unit contains multiple queues, each corresponding to one ingress port.
  • The packet buffer unit contains a single buffer, which is shared by the queues of the different ports.
  • The queue buffer is usually allocated in a fixed manner, that is, according to port rate. For example, if a chip contains two 10G ports and ten 1GE ports (Gigabit Ethernet interfaces) and the total buffer size is 60K, each 10G port is allocated 20K of memory and each 1GE port 2K.
  • This allocation scheme does not make full use of the queue buffer, a scarce resource.
  • A cache allocation method, including:
  • dividing the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area; in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor;
  • if the actual traffic of a port is greater than the fixed cache resource allocated to the port, allocating dynamic cache resources to the port from the dynamic shared cache area.
  • The step of allocating a fixed cache resource to each port of the network processor includes:
  • allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
  • The dynamic shared cache area includes one or more dynamic cache resource pools.
  • The step of allocating dynamic cache resources to the port from the dynamic shared cache area includes: allocating dynamic cache resources to the port from the one or more dynamic cache resource pools in the dynamic shared cache area.
  • The dynamic shared cache area includes dynamic cache resource pools in one-to-one correspondence with the port types of the network processor.
  • The step of allocating dynamic cache resources to the port from the dynamic shared cache area includes: allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
  • The port types of the network processor are divided according to the data transmission rate of the ports.
  • A cache allocation device, comprising:
  • a dividing module configured to divide the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area;
  • a first allocation module configured to allocate a fixed cache resource to each port of the network processor in the fixed reserved cache area;
  • a second allocation module configured to allocate dynamic cache resources to a port from the dynamic shared cache area when the actual traffic of the port is greater than the fixed cache resource allocated to the port.
  • The first allocation module allocating a fixed cache resource to each port of the network processor in the fixed reserved cache area includes: in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
  • The first allocation module is further configured to: for traffic entering a port, first allocate memory from the fixed reserved cache area corresponding to the port and, if that area does not have sufficient space, notify the second allocation module to allocate cache resources to the port.
  • The second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area includes:
  • the second allocation module dividing the dynamic shared cache area into one or more dynamic cache resource pools and, after receiving the notification from the first allocation module, allocating dynamic cache resources to the port from the one or more dynamic cache resource pools.
  • The second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area also includes:
  • the second allocation module dividing the dynamic shared cache area into one or more dynamic cache resource pools in one-to-one correspondence with the port types of the network processor and, after receiving the notification from the first allocation module, allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
  • The port types of the network processor are divided according to the data transmission rate of the ports.
  • A network processor comprising a plurality of ports and a cache unit, and further comprising a cache allocation device as described above.
  • The network processor also includes a processing engine coupled to the cache unit; the processing engine receives data packets from the cache unit, processes them, and returns the processed data packets to the port.
  • The above solution divides the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area; in the fixed reserved cache area, each port of the network processor is allocated a fixed cache resource; if the actual traffic of a port is greater than the fixed cache resource allocated to the port, dynamic cache resources are allocated to the port from the dynamic shared cache area. The network processor's cache resources can thus be used more effectively when port traffic suddenly increases.
  • FIG. 1 is a simplified schematic diagram of the internal structure of a network processor;
  • FIG. 2 is a diagram of an example queue cache allocation in a network processor cache unit;
  • FIG. 3 is a flowchart of a cache allocation method according to an embodiment of the present invention;
  • FIG. 4 is a diagram of an example port cache allocation in a network processor cache unit according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of cache allocation according to an embodiment of the present invention;
  • FIG. 6 is a flowchart of an exemplary cache allocation according to an embodiment of the present invention;
  • FIG. 7 is a block diagram of a cache allocation apparatus according to an embodiment of the present invention.
  • The inventors of the present invention found a drawback in fixed allocation of the queue buffer: owing to the burstiness and randomness of port traffic, some port queue buffers become exhausted while other queue buffers sit unused, so the queue buffer is not fully utilized.
  • An embodiment of the present invention provides a cache allocation method, including:
  • Step 31: Divide the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area.
  • Step 32: In the fixed reserved cache area, allocate a fixed cache resource to each port of the network processor.
  • Step 33: If the actual traffic of a port is greater than the fixed cache resource allocated to the port, allocate dynamic cache resources to the port from the dynamic shared cache area.
  • The foregoing solution divides the cache unit into a fixed reserved cache area and a dynamic shared cache area. In the fixed reserved cache area, each port of the network processor is allocated a fixed cache resource; that is, the fixed reserved cache area is a queue-dedicated allocation area in which each queue is allocated according to its port rate. If the actual traffic of a port exceeds the fixed cache resource allocated to it, dynamic cache resources are allocated from the dynamic shared cache area. This ensures that network processor cache resources are used more efficiently when port traffic suddenly increases.
  • Step 32 may be: in the fixed reserved cache area, allocate a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
  • The dynamic shared cache area may also include dynamic cache resource pools corresponding to the port types of the network processor, where the port types can be divided according to the data transmission rate of the ports; for example, ports whose data transmission rate is below a first value correspond to one dynamic cache resource pool, and ports whose rate is greater than or equal to the first value correspond to another.
  • In that case, dynamic cache resources can be allocated to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
  • The network processor cache unit is divided into two parts: a fixed reserved cache area and a dynamic shared cache area.
  • A port first requests cache from the fixed reserved cache area and, if its reserved area is exhausted, then requests from the dynamic shared cache area. Taking the earlier system with two 10G ports and ten 1GE ports as an example, the cache can be allocated as shown in Figure 4.
  • The 60K cache is divided in two: a 30K fixed reserved cache area and a 30K dynamic shared cache area.
  • The fixed reserved cache area is divided according to port transmission rate: the two 10G ports each receive 10K, and each 1GE port receives 2K.
  • The 30K dynamic shared cache area is divided into two dynamic cache resource pools: a 20K dynamic cache resource pool used by the 10G ports and a 10K dynamic cache resource pool used by the GE ports.
  • Finally, the mapping between ports and cache pools is bound: the two 10G ports are mapped to the 20K dynamic cache resource pool, and the ten GE ports are mapped to the 10K dynamic cache resource pool.
  • Port traffic is first allocated from the port's dedicated fixed reserved cache area; in Figure 5, port 1 corresponds to queue queue1. If the port's fixed reserved cache area is full, memory is requested from the dynamic cache resource pool associated with the port.
  • The cache unit is divided into two parts, a fixed reserved cache area and a dynamic shared cache area; the ratio between the two areas can be tuned by testing for an optimal value.
  • The fixed reserved cache area of each port can be sized according to the port's traffic (that is, the port's maximum transmission rate).
  • The dynamic shared cache area is divided into resource pools; the number and size of the pools can be specified flexibly. The division can also follow traffic attributes, for example allocating one resource pool to several 10G ports and another to several GE ports.
  • Traffic entering from a port first allocates memory from the fixed reserved cache area of the port's queue. If the fixed reserved cache area has enough space, everything is allocated from it; otherwise, memory is requested from the resource pool in the dynamic shared cache area associated with the queue.
  • The method of this embodiment of the present invention makes more effective use of network processor cache resources.
  • The embodiment of the present invention further provides a cache allocation device, as shown in FIG. 7, comprising:
  • a dividing module 10 configured to divide the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area;
  • a first allocation module 20 configured to allocate a fixed cache resource to each port of the network processor in the fixed reserved cache area;
  • a second allocation module 30 configured to allocate dynamic cache resources to a port from the dynamic shared cache area when the actual traffic of the port is greater than the fixed cache resource allocated to the port.
  • The first allocation module allocating a fixed cache resource to each port in the fixed reserved cache area may include: in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
  • The first allocation module may be further configured to: for traffic entering a port, first allocate memory from the fixed reserved cache area corresponding to the port and, if that area does not have sufficient space, notify the second allocation module to allocate cache resources to the port.
  • The second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area may include: dividing the dynamic shared cache area into one or more dynamic cache resource pools and, after receiving the notification from the first allocation module, allocating dynamic cache resources to the port from the one or more dynamic cache resource pools.
  • The second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area may also include: dividing the dynamic shared cache area into one or more dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types may be divided according to the data transmission rate of the ports, and, after receiving the notification from the first allocation module, allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
  • Embodiments of the present invention also provide a network processor including a plurality of ports, a cache unit, and a cache allocation device as described above.
  • The network processor may further include a processing engine connected to the cache unit; the processing engine receives data packets from the cache unit, processes them, and returns the processed data packets to the port.
  • The network processor cache unit of the present application is divided into two parts: a fixed reserved cache area and a dynamic shared cache area.
  • The fixed reserved cache area is a queue-dedicated allocation area in which each queue is allocated according to its port rate; the dynamic shared cache area is shared by the packet queues of multiple ports.
  • Each port queue needs to specify its dynamic cache resource pool.
  • Traffic entering the network processor through a port first requests cache resources from the fixed reserved cache area of the port's queue; if the fixed reserved cache area is exhausted, it requests allocation from the specified dynamic cache resource pool. The present application therefore has strong industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A cache allocation method and apparatus, and a network processor. The cache allocation method includes: dividing the cache unit of a network processor into a fixed reserved cache area and a dynamic shared cache area; in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor; and, if the actual traffic of a port is greater than the fixed cache resource allocated to the port, allocating dynamic cache resources to the port from the dynamic shared cache area. The solution of the present application makes more effective use of the cache resources of the network processor.

Description

Cache allocation method and apparatus, and network processor
Technical Field
The present invention relates to the field of communications, and in particular to a cache allocation method and apparatus applied to the ports of a network processor, and to a network processor.
Background
Networks are developing at a remarkable pace; growing traffic and emerging services require network devices with wire-speed, flexible processing capability. Current network chips fall into two broad categories: ASICs (Application Specific Integrated Circuits) and NPs (Network Processors). With their high-speed processing and flexible programmability, network processors have become an effective solution for data processing in today's networks.
As shown in Figure 1, a network processor usually contains two internal units: a packet buffer unit and a packet processing engine. A packet entering the network processor first enters the queue corresponding to its ingress port in the packet buffer unit; the packet header then enters the packet processing engine, where microcode processes and modifies it, after which the header re-enters the packet buffer unit, is linked with the original packet fetched from the corresponding queue, and the packet is sent out through a port.
As shown in Figure 2, the packet buffer unit contains multiple queues, each corresponding to one ingress port. The packet buffer unit contains a single buffer, shared by the queues of the different ports. The queue buffer is usually allocated in a fixed manner, that is, according to port rate. For example, if a chip contains two 10G ports and ten 1GE ports (Gigabit Ethernet interfaces) and the total buffer size is 60K, each 10G port is allocated 20K of memory and each 1GE port 2K.
This allocation scheme does not make full use of the queue buffer, a precious resource.
Summary of the Invention
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
The technical solutions provided by the embodiments of the present invention are as follows.
A cache allocation method, including:
dividing the cache unit of a network processor into a fixed reserved cache area and a dynamic shared cache area;
in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor;
if the actual traffic of a port is greater than the fixed cache resource allocated to the port, allocating dynamic cache resources to the port from the dynamic shared cache area.
Optionally,
the step of allocating a fixed cache resource to each port of the network processor in the fixed reserved cache area includes:
in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
Optionally,
the dynamic shared cache area includes one or more dynamic cache resource pools;
the step of allocating dynamic cache resources to the port from the dynamic shared cache area includes: allocating dynamic cache resources to the port from the one or more dynamic cache resource pools in the dynamic shared cache area.
Optionally,
the dynamic shared cache area includes dynamic cache resource pools in one-to-one correspondence with the port types of the network processor;
the step of allocating dynamic cache resources to the port from the dynamic shared cache area includes: allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
Optionally,
the port types of the network processor are divided according to the data transmission rate of the ports.
A cache allocation apparatus, including:
a dividing module, configured to divide the cache unit of a network processor into a fixed reserved cache area and a dynamic shared cache area;
a first allocation module, configured to allocate a fixed cache resource to each port of the network processor in the fixed reserved cache area;
a second allocation module, configured to allocate dynamic cache resources to a port from the dynamic shared cache area when the actual traffic of the port is greater than the fixed cache resource allocated to the port.
Optionally,
the first allocation module allocating a fixed cache resource to each port of the network processor in the fixed reserved cache area includes: in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
Optionally,
the first allocation module is further configured to: for traffic entering a port, first allocate memory from the fixed reserved cache area corresponding to the port and, if the fixed reserved cache area corresponding to the port does not have sufficient space, notify the second allocation module to allocate cache resources to the port.
Optionally,
the second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area includes:
the second allocation module dividing the dynamic shared cache area into one or more dynamic cache resource pools and, after receiving said notification from the first allocation module, allocating dynamic cache resources to the port from the one or more dynamic cache resource pools.
Optionally,
the second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area includes:
the second allocation module dividing the dynamic shared cache area into one or more dynamic cache resource pools in one-to-one correspondence with the port types of the network processor and, after receiving said notification from the first allocation module, allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
Optionally,
the port types of the network processor are divided according to the data transmission rate of the ports.
A network processor, including a plurality of ports and a cache unit, and further including the cache allocation apparatus according to any one of claims 6-11.
Optionally,
the network processor further includes a processing engine connected to the cache unit; the processing engine receives data packets from the cache unit, processes them, and returns the processed data packets to the port.
The above solution divides the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area; in the fixed reserved cache area, each port of the network processor is allocated a fixed cache resource; if the actual traffic of a port is greater than the fixed cache resource allocated to the port, dynamic cache resources are allocated to the port from the dynamic shared cache area. This ensures that network processor cache resources are used more effectively when port traffic suddenly increases.
Other features and advantages of the present invention will be set forth in the description that follows.
Brief Description of the Drawings
Figure 1 is a simplified schematic diagram of the internal structure of a network processor;
Figure 2 is a diagram of an example queue cache allocation in a network processor cache unit;
Figure 3 is a flowchart of a cache allocation method according to an embodiment of the present invention;
Figure 4 is a diagram of an example port cache allocation in a network processor cache unit according to an embodiment of the present invention;
Figure 5 is a schematic diagram of cache allocation according to an embodiment of the present invention;
Figure 6 is a flowchart of an exemplary cache allocation according to an embodiment of the present invention;
Figure 7 is a block diagram of a cache allocation apparatus according to an embodiment of the present invention.
Preferred Embodiments of the Invention
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
The inventors of the present invention found that fixed allocation of the queue buffer has a drawback: owing to the burstiness and randomness of port traffic, some port queue buffers become exhausted while other queue buffers sit unused, so the queue buffer is not fully utilized.
As shown in Figure 3, an embodiment of the present invention provides a cache allocation method, including:
Step 31: Divide the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area.
Step 32: In the fixed reserved cache area, allocate a fixed cache resource to each port of the network processor.
Step 33: If the actual traffic of a port is greater than the fixed cache resource allocated to the port, allocate dynamic cache resources to the port from the dynamic shared cache area.
The above solution divides the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area. In the fixed reserved cache area, each port of the network processor is allocated a fixed cache resource; that is, the fixed reserved cache area is a queue-dedicated allocation area in which each queue is allocated according to its port rate. If the actual traffic of a port is greater than the fixed cache resource allocated to it, dynamic cache resources are allocated to the port from the dynamic shared cache area. This ensures that network processor cache resources are used more effectively when port traffic suddenly increases.
In an embodiment of the present invention, Step 32 may be: in the fixed reserved cache area, allocate a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
The dynamic shared cache area may include one or more dynamic cache resource pools; the step of allocating dynamic cache resources to the port from the dynamic shared cache area may then be: allocating dynamic cache resources to the port from the one or more dynamic cache resource pools in the dynamic shared cache area.
The dynamic shared cache area may also include dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types may be divided according to the data transmission rate of the ports; for example, ports whose data transmission rate is below a first value correspond to one dynamic cache resource pool, and ports whose rate is greater than or equal to the first value correspond to another. When allocating dynamic cache resources to a port from the dynamic shared cache area, the resources can be taken from the dynamic cache pool corresponding to the port type to which the port belongs.
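For illustration only, a minimal C sketch of this rate-threshold classification; the patent gives no code, and the threshold constant and all identifiers here are invented:

```c
/* Hypothetical sketch: classify a port into a dynamic cache resource pool
 * by comparing its rate against a "first value". All names are invented. */
#define FIRST_VALUE_MBPS 10000U          /* assumed threshold, e.g. 10G */

enum pool_id { POOL_LOW = 0, POOL_HIGH = 1 };

static enum pool_id pool_for_port(unsigned int rate_mbps)
{
    /* Ports below the first value share one pool; the rest share another. */
    return (rate_mbps >= FIRST_VALUE_MBPS) ? POOL_HIGH : POOL_LOW;
}
```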
As shown in Figure 4, the network processor cache unit is divided into two parts: a fixed reserved cache area and a dynamic shared cache area. A port first requests cache from the fixed reserved cache area and, if its reserved area is exhausted, then requests from the dynamic shared cache area. Taking the earlier system with two 10G ports and ten 1GE ports as an example, the cache can be allocated as in Figure 4. First, the 60K cache is split in two: a 30K fixed reserved cache area and a 30K dynamic shared cache area. The fixed reserved cache area is divided according to port transmission rate: the two 10G ports each receive 10K, and each 1GE port receives 2K. The 30K dynamic shared cache area is divided into two dynamic cache resource pools: a 20K pool used by the 10G ports and a 10K pool used by the GE ports. Finally, the mapping between ports and cache pools is bound: the two 10G ports are mapped to the 20K dynamic cache resource pool, and the ten GE ports are mapped to the 10K dynamic cache resource pool.
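The Figure 4 configuration might be set up as follows. This is a sketch under the assumption of simple KB-granularity counters; only the sizes come from the example above, while every structure and function name is invented for illustration:

```c
#include <stddef.h>

#define NUM_PORTS 12                      /* two 10G ports + ten 1GE ports */

/* A dynamic cache resource pool: capacity and current usage (in KB). */
struct pool { size_t size, used; };

/* Per-port state: the fixed reservation and the pool the port maps to. */
struct port {
    size_t fixed_size, fixed_used;        /* fixed reserved cache area */
    struct pool *pool;                    /* associated dynamic pool   */
};

static struct pool pool_10g = { .size = 20, .used = 0 };  /* 20K for 10G */
static struct pool pool_1ge = { .size = 10, .used = 0 };  /* 10K for 1GE */

static struct port ports[NUM_PORTS];

static void setup_figure4_example(void)
{
    for (int i = 0; i < NUM_PORTS; i++) {
        if (i < 2) {                      /* ports 0-1 are the 10G ports  */
            ports[i].fixed_size = 10;     /* 10K fixed reservation each   */
            ports[i].pool = &pool_10g;
        } else {                          /* ports 2-11 are the 1GE ports */
            ports[i].fixed_size = 2;      /* 2K fixed reservation each    */
            ports[i].pool = &pool_1ge;
        }
        ports[i].fixed_used = 0;
    }
}
```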
Port traffic first allocates memory from the port's dedicated fixed reserved cache area; in Figure 5, port 1 corresponds to queue queue1. If the port's fixed reserved cache area is full, memory is requested from the dynamic cache resource pool associated with the port.
An exemplary implementation flow is shown in Figure 6:
61. Start.
62. Divide the cache unit into two parts, a fixed reserved cache area and a dynamic shared cache area; the ratio between the two areas can be tuned by testing for an optimal value.
63. Divide the fixed reserved cache area among the ports, which can be done according to port traffic (that is, the port's maximum transmission rate).
64. Divide the dynamic shared cache area into resource pools; the number and size of the pools can be specified flexibly. The division can also follow traffic attributes, for example allocating one resource pool to several 10G ports and another to several GE ports.
65. Specify the resource pool corresponding to each port, associating ports with pools.
66. End.
On the forwarding plane, traffic entering from a port first allocates memory from the fixed reserved cache area of the port's queue. If the fixed reserved cache area has enough space, everything is allocated from it; otherwise, memory is requested from the resource pool in the dynamic shared cache area associated with the queue.
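Continuing the invented structures from the previous sketch, the forwarding-plane rule could look like the following; the release path is an assumption added for completeness, since the patent does not describe freeing:

```c
#include <stdbool.h>

/* Forwarding-plane allocation: take the whole request from the port's fixed
 * reserved area if it fits, otherwise fall back to the port's dynamic pool.
 * Returns false when both are full (drop or apply back-pressure). */
static bool cache_alloc(struct port *p, size_t len, bool *from_pool)
{
    if (p->fixed_used + len <= p->fixed_size) {
        p->fixed_used += len;
        *from_pool = false;
        return true;
    }
    if (p->pool->used + len <= p->pool->size) {
        p->pool->used += len;
        *from_pool = true;
        return true;
    }
    return false;
}

/* Matching release path (assumed, not described in the patent): return the
 * space to wherever it was taken from when the packet leaves the queue. */
static void cache_free(struct port *p, size_t len, bool from_pool)
{
    if (from_pool)
        p->pool->used -= len;
    else
        p->fixed_used -= len;
}
```

With this split, a burst on one 10G port can borrow up to the whole 20K pool without ever touching the fixed reservations of the other ports, which is the behavior the embodiment aims for.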
Compared with the prior art, the method of this embodiment of the present invention makes more effective use of network processor cache resources.
In addition, corresponding to the above method embodiment, an embodiment of the present invention further provides a cache allocation apparatus, as shown in Figure 7, including:
a dividing module 10, configured to divide the cache unit of the network processor into a fixed reserved cache area and a dynamic shared cache area;
a first allocation module 20, configured to allocate a fixed cache resource to each port of the network processor in the fixed reserved cache area;
a second allocation module 30, configured to allocate dynamic cache resources to a port from the dynamic shared cache area when the actual traffic of the port is greater than the fixed cache resource allocated to the port.
The first allocation module allocating a fixed cache resource to each port of the network processor in the fixed reserved cache area may include: in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
The first allocation module may be further configured to: for traffic entering a port, first allocate memory from the fixed reserved cache area corresponding to the port and, if that area does not have sufficient space, notify the second allocation module to allocate cache resources to the port.
The second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area may include: dividing the dynamic shared cache area into one or more dynamic cache resource pools and, after receiving said notification from the first allocation module, allocating dynamic cache resources to the port from the one or more dynamic cache resource pools.
The second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area may also include: dividing the dynamic shared cache area into one or more dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types may be divided according to the data transmission rate of the ports, and, after receiving said notification from the first allocation module, allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
An embodiment of the present invention further provides a network processor, including a plurality of ports and a cache unit, as well as the cache allocation apparatus described above.
The above network processor may further include a processing engine connected to the cache unit; the processing engine receives data packets from the cache unit, processes them, and returns the processed data packets to the port.
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the scope of protection of the present invention.
Industrial Applicability
The present application divides the network processor cache unit into two parts: a fixed reserved cache area and a dynamic shared cache area. The fixed reserved cache area is a queue-dedicated allocation area in which each queue is allocated according to its port rate; the dynamic shared cache area is shared by the packet queues of multiple ports. Each port queue needs to specify its dynamic cache resource pool. Traffic entering the network processor through a port first requests cache resources from the fixed reserved cache area of the port's queue and, if that area is exhausted, requests allocation from the port's specified dynamic cache resource pool. The present application therefore has strong industrial applicability.

Claims (13)

  1. A cache allocation method, comprising:
    dividing the cache unit of a network processor into a fixed reserved cache area and a dynamic shared cache area;
    in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor;
    if the actual traffic of a port is greater than the fixed cache resource allocated to the port, allocating dynamic cache resources to the port from the dynamic shared cache area.
  2. The cache allocation method according to claim 1, wherein:
    the step of allocating a fixed cache resource to each port of the network processor in the fixed reserved cache area comprises:
    in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
  3. The cache allocation method according to claim 1 or 2, wherein:
    the dynamic shared cache area comprises one or more dynamic cache resource pools;
    the step of allocating dynamic cache resources to the port from the dynamic shared cache area comprises: allocating dynamic cache resources to the port from the one or more dynamic cache resource pools in the dynamic shared cache area.
  4. The cache allocation method according to claim 1 or 2, wherein:
    the dynamic shared cache area comprises dynamic cache resource pools in one-to-one correspondence with the port types of the network processor;
    the step of allocating dynamic cache resources to the port from the dynamic shared cache area comprises: allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
  5. The cache allocation method according to claim 4, wherein:
    the port types of the network processor are divided according to the data transmission rate of the ports.
  6. A cache allocation apparatus, comprising:
    a dividing module, configured to divide the cache unit of a network processor into a fixed reserved cache area and a dynamic shared cache area;
    a first allocation module, configured to allocate a fixed cache resource to each port of the network processor in the fixed reserved cache area;
    a second allocation module, configured to allocate dynamic cache resources to a port from the dynamic shared cache area when the actual traffic of the port is greater than the fixed cache resource allocated to the port.
  7. The cache allocation apparatus according to claim 6, wherein:
    the first allocation module allocating a fixed cache resource to each port of the network processor in the fixed reserved cache area comprises: in the fixed reserved cache area, allocating a fixed cache resource to each port of the network processor according to the data transmission rate of the port.
  8. The cache allocation apparatus according to claim 6 or 7, wherein:
    the first allocation module is further configured to: for traffic entering a port, first allocate memory from the fixed reserved cache area corresponding to the port and, if the fixed reserved cache area corresponding to the port does not have sufficient space, notify the second allocation module to allocate cache resources to the port.
  9. The cache allocation apparatus according to claim 8, wherein:
    the second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area comprises:
    the second allocation module dividing the dynamic shared cache area into one or more dynamic cache resource pools and, after receiving said notification from the first allocation module, allocating dynamic cache resources to the port from the one or more dynamic cache resource pools.
  10. The cache allocation apparatus according to claim 8, wherein:
    the second allocation module allocating dynamic cache resources to the port from the dynamic shared cache area comprises:
    the second allocation module dividing the dynamic shared cache area into one or more dynamic cache resource pools in one-to-one correspondence with the port types of the network processor and, after receiving said notification from the first allocation module, allocating dynamic cache resources to the port from the dynamic cache pool corresponding to the port type to which the port belongs.
  11. The cache allocation apparatus according to claim 10, wherein:
    the port types of the network processor are divided according to the data transmission rate of the ports.
  12. A network processor, comprising a plurality of ports and a cache unit, characterized by further comprising the cache allocation apparatus according to any one of claims 6-11.
  13. The network processor according to claim 12, wherein:
    the network processor further comprises a processing engine connected to the cache unit, the processing engine receiving data packets from the cache unit, processing them, and returning the processed data packets to the port.
PCT/CN2015/077698 2014-11-19 2015-04-28 Cache allocation method and apparatus, and network processor WO2016078341A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410663761.5 2014-11-19
CN201410663761.5A CN105610729A (zh) 2014-11-19 Cache allocation method and apparatus, and network processor

Publications (1)

Publication Number Publication Date
WO2016078341A1 (zh) 2016-05-26

Family

ID=55990271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/077698 WO2016078341A1 (zh) 2014-11-19 2015-04-28 Cache allocation method and apparatus, and network processor

Country Status (2)

Country Link
CN (1) CN105610729A (zh)
WO (1) WO2016078341A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112187665A (zh) * 2020-09-28 2021-01-05 杭州迪普科技股份有限公司 Packet processing method and apparatus
CN113836048A (zh) * 2021-09-17 2021-12-24 许昌许继软件技术有限公司 Data exchange method and apparatus based on dynamic allocation of FPGA memory
WO2022172091A1 (en) * 2021-02-15 2022-08-18 Mellanox Technologies, Ltd. Zero-copy buffering of traffic of long-haul links

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107870871B (zh) * 2016-09-23 2021-08-20 华为技术有限公司 Method and apparatus for allocating cache
CN107222429A (zh) * 2017-05-27 2017-09-29 努比亚技术有限公司 Data transmission system and method
CN107277796A (zh) * 2017-05-27 2017-10-20 努比亚技术有限公司 Mobile terminal and data transmission method thereof
CN108768898A (zh) * 2018-04-03 2018-11-06 郑州云海信息技术有限公司 Method and apparatus for transmitting packets over a network-on-chip
CN110661724B (zh) 2018-06-30 2023-03-31 华为技术有限公司 Method and device for allocating cache
CN109495401B (zh) * 2018-12-13 2022-06-24 迈普通信技术股份有限公司 Cache management method and apparatus
WO2023097575A1 (en) * 2021-12-01 2023-06-08 Huawei Technologies Co., Ltd. Devices and methods for wireless communication in a wireless network
CN115051958A (zh) * 2022-04-14 2022-09-13 重庆奥普泰通信技术有限公司 Cache allocation method, apparatus and device
CN116340202B (zh) * 2023-03-28 2024-03-01 中科驭数(北京)科技有限公司 Data transmission method, apparatus, device and computer-readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101364948A (zh) * 2008-09-08 2009-02-11 中兴通讯股份有限公司 Method for dynamically allocating cache
CN102185725A (zh) * 2011-05-31 2011-09-14 北京星网锐捷网络技术有限公司 Cache management method and apparatus, and network switching device
WO2012079382A1 (zh) * 2010-12-15 2012-06-21 中兴通讯股份有限公司 Method for adjusting egress port cache and switch

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100512206C (zh) * 2004-11-18 2009-07-08 华为技术有限公司 Cache resource management method for a packet forwarding device
CN1798094A (zh) * 2004-12-23 2006-07-05 华为技术有限公司 Method for using a buffer area
US7802028B2 (en) * 2005-05-02 2010-09-21 Broadcom Corporation Total dynamic sharing of a transaction queue

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101364948A (zh) * 2008-09-08 2009-02-11 中兴通讯股份有限公司 Method for dynamically allocating cache
WO2012079382A1 (zh) * 2010-12-15 2012-06-21 中兴通讯股份有限公司 Method for adjusting egress port cache and switch
CN102185725A (zh) * 2011-05-31 2011-09-14 北京星网锐捷网络技术有限公司 Cache management method and apparatus, and network switching device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112187665A (zh) * 2020-09-28 2021-01-05 杭州迪普科技股份有限公司 Packet processing method and apparatus
CN112187665B (zh) * 2020-09-28 2023-04-07 杭州迪普科技股份有限公司 Packet processing method and apparatus
WO2022172091A1 (en) * 2021-02-15 2022-08-18 Mellanox Technologies, Ltd. Zero-copy buffering of traffic of long-haul links
CN113836048A (zh) * 2021-09-17 2021-12-24 许昌许继软件技术有限公司 Data exchange method and apparatus based on dynamic allocation of FPGA memory

Also Published As

Publication number Publication date
CN105610729A (zh) 2016-05-25

Similar Documents

Publication Publication Date Title
WO2016078341A1 (zh) Cache allocation method and apparatus, and network processor
CN110620731B (zh) Routing apparatus and routing method for a network-on-chip
US11799764B2 (en) System and method for facilitating efficient packet injection into an output buffer in a network interface controller (NIC)
CN107171980B (zh) Flexible buffer allocation in a network switch
US10708197B2 (en) Network data processor having per-input port virtual output queues
US9007902B1 (en) Method and apparatus for preventing head of line blocking in an Ethernet system
US7701849B1 (en) Flow-based queuing of network traffic
US8392565B2 (en) Network memory pools for packet destinations and virtual machines
US20150215226A1 (en) Device and Method for Packet Processing with Memories Having Different Latencies
US8644194B2 (en) Virtual switching ports on high-bandwidth links
US20070053294A1 (en) Network load balancing apparatus, systems, and methods
US11212590B2 (en) Multiple core software forwarding
US8553708B2 (en) Bandwith allocation method and routing device
CN113676416B (zh) Method for improving network quality of service in a high-speed network card/DPU
JP2008546298A (ja) Electronic device and method of communication resource allocation
WO2018024173A1 (zh) Packet processing method and router
US20060251071A1 (en) Apparatus and method for IP packet processing using network processor
US10263905B2 (en) Distributed flexible scheduler for converged traffic
US9985886B2 (en) Technologies for network packet pacing during segmentation operations
US9846658B2 (en) Dynamic temporary use of packet memory as resource memory
JP2011091711A (ja) Node, transmission frame distribution method, and program
WO2022042396A1 (zh) Data transmission method and system, and chip
US20150149639A1 (en) Bandwidth allocation in a networked environment
US9621487B2 (en) Method and apparatus for protection switching based on memory control in packet transport system
CN117118762B (zh) CPU packet-receiving processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15861442; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15861442; Country of ref document: EP; Kind code of ref document: A1)