WO2011012023A1 - Method and system for managing a network processor output port queue - Google Patents


Info

Publication number
WO2011012023A1
WO2011012023A1 · PCT/CN2010/073685 · CN2010073685W
Authority
WO
WIPO (PCT)
Prior art keywords
data packet
priority
output port
queue
discriminating
Prior art date
Application number
PCT/CN2010/073685
Other languages
English (en)
French (fr)
Inventor
王凤彬
邹旭军
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2011012023A1 publication Critical patent/WO2011012023A1/zh

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 — Traffic control in data switching networks
    • H04L47/10 — Flow control; Congestion control
    • H04L47/32 — Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L47/326 — Flow control; Congestion control by discarding or delaying data units with random discard, e.g. random early discard [RED]

Definitions

  • The present invention relates to the field of data communications, and in particular to a method and system for managing a network processor output port queue.
  • Existing network processor technology usually implements service processing with micro-engines, which cooperate to complete the flow classification, rate limiting, queue management, packet modification, and scheduled output of the passing data traffic.
  • The function of each micro-engine is relatively fixed: the Policing micro-engine implements rate limiting; the Traffic Shaping (TS) micro-engine implements traffic shaping; the Traffic Management (TM) micro-engine implements queue management; the Stream Editor (SED) micro-engine implements packet modification; and the Fast Pattern Processor (FPP) micro-engine implements flow classification (this micro-engine is therefore also called the flow classification micro-engine).
  • Queue management of received data packets is another important basic function of the network processor; its purpose is congestion control.
  • Congestion control is usually implemented in the TM micro-engine. When congestion occurs, it seriously degrades the service quality of the network and increases both the packet loss rate and the delay of network transmission, so measures must be taken to control and avoid it.
  • The main approach to congestion today is Active Queue Management (AQM). Before AQM appeared, the tail-drop (DropTail) method was used.
  • Tail drop discards packets only when the queue buffer of the network device overflows, whereas AQM proactively marks or discards packets before the buffer overflows.
  • Compared with tail drop, AQM reduces the packet loss rate, lowers packet transmission delay, and avoids system oscillation.
  • Representative AQM schemes are the Random Early Detection (RED) and Weighted Random Early Detection (WRED) algorithms. Practice has shown that AQM performs better than tail drop.
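As a rough illustration of how RED/WRED-style early discard works (this is a generic textbook sketch, not the patent's own algorithm; the parameters `min_th`, `max_th`, and `max_p` are the standard RED thresholds, and a WRED implementation would simply keep one such profile per priority class):

```python
import random

def red_should_drop(avg_qlen, min_th, max_th, max_p, rng=random.random):
    """Generic RED-style early-discard decision for one arriving packet.

    Below min_th the packet is always queued; at or above max_th it is
    always dropped; in between it is dropped with a probability that
    grows linearly from 0 to max_p.
    """
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    drop_p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return rng() < drop_p
```

Because the drop decision fires before the buffer is actually full, senders back off earlier, which is the property the passage above credits to AQM.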
  • At present, however, the management part of the network processor output port queue cannot block specific data packets.
  • The technical problem to be solved by the present invention is to provide a method and system for managing a network processor output port queue which block the specific data packets matched by an ACL rule in the output port queue management part while allowing other, non-specific data packets to continue to be forwarded.
  • To solve this problem, the present invention provides a method for managing a network processor output port queue, including: a Fast Pattern Processor (FPP) micro-engine performs pattern matching on a received data packet by querying a tree table and, according to the matching result, directs the packet into the corresponding service processing flow; for a packet entering a service flow, the priority of the packet is identified, and the packet is sent together with its priority parameter to the Traffic Management (TM) micro-engine.
  • The TM micro-engine uses the priority parameter of the packet and an output logical port cache threshold as an Access Control List (ACL) rule and, according to the ACL rule, generates a decision on whether the packet is added to the output port queue.
  • The forwarding or discarding of the packet in the TM micro-engine output port queue is completed according to that decision.
  • Preferably, when identifying the priority of a data packet, the FPP micro-engine marks packets sent by the central processing unit (CPU) port and loopback packets as high priority, and ordinary service packets as low priority.
  • Preferably, the step of the TM micro-engine generating a decision according to the ACL rule includes: the TM micro-engine determines whether the currently consumed capacity of the output port queue has reached the output logical port cache threshold; if not, the packet is added to the output port queue for forwarding; if it has, the priority parameter of the packet is checked, and a high-priority packet is still added to the output port queue for forwarding while a low-priority packet is discarded.
  • Preferably, the method further includes: when adding a packet to the output port queue, the TM micro-engine updates the currently consumed capacity of the output port queue.
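The two-stage decision just described can be sketched in a few lines (a minimal illustration; the name `port_thresh` follows the embodiment later in the document, and the `HIGH`/`LOW` constants encode the priority parameter):

```python
HIGH, LOW = 1, 0  # priority parameter: 1 = CPU-port/loopback, 0 = ordinary

def tm_decision(consumed, port_thresh, priority):
    """Return "enqueue" or "discard" per the ACL rule described above.

    Below the output logical port cache threshold every packet is
    enqueued (the embodiment additionally runs WRED on that path);
    once the threshold is reached, only high-priority packets are
    enqueued and low-priority packets are discarded.
    """
    if consumed < port_thresh:
        return "enqueue"
    return "enqueue" if priority == HIGH else "discard"
```

The point of the rule is that congestion pressure is absorbed entirely by low-priority traffic, so CPU-port and loopback packets keep flowing even when the queue is saturated.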
  • The present invention further provides a management system for a network processor output port queue, including a Fast Pattern Processor (FPP) micro-engine and a Traffic Management (TM) micro-engine, the FPP micro-engine including a packet type identification module and a priority identification module.
  • The TM micro-engine includes a TM discrimination module and a forwarding/discarding module. The packet type identification module is configured to perform pattern matching on received data packets by querying the tree table and to direct each packet into the corresponding service processing flow according to the matching result.
  • The priority identification module is configured to identify the priority of a packet entering a service flow and to send the packet together with its priority parameter to the TM discrimination module.
  • The TM discrimination module is configured to use the priority parameter of the packet and the output logical port cache threshold as an ACL rule, to generate from the ACL rule a decision on whether the packet is added to the output port queue, and to send the decision to the forwarding/discarding module. The forwarding/discarding module is configured to complete, according to the received decision, the forwarding or discarding of the packet in the TM micro-engine output port queue.
  • Preferably, when identifying the priority of the data packet, the priority identification module marks packets sent by the CPU port and loopback packets as high priority, and ordinary service packets as low priority.
  • Preferably, when generating a decision according to the ACL rule, the TM discrimination module first determines whether the currently consumed capacity of the output port queue has reached the output logical port cache threshold; if not, it generates a decision to add the packet to the output port queue; if it has, it checks the priority parameter of the packet, generating a decision to add the packet to the output port queue if the priority is high, and a decision to discard the packet if the priority is low.
  • Preferably, when the TM discrimination module determines that the currently consumed capacity of the output port queue has not yet reached the output logical port cache threshold, the generated decision to add the packet to the output port queue further includes applying a weighted random early detection algorithm to the output port queue for congestion control.
  • Preferably, the TM discrimination module is further configured to update the currently consumed capacity of the output port queue when generating a decision to add a packet to the output port queue.
  • The invention uses ACL rules in the TM micro-engine to block data packets locally, and blocking low-priority packets effectively avoids congestion. Moreover, when the TM micro-engine cooperates with the WRED algorithm, congestion control achieves better results: under resource shortage, the policy of forwarding high-priority packets and discarding low-priority packets prevents high-priority packets from being dropped when WRED discards.
  • FIG. 1 is a block diagram of a management system for a network processor output port queue according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of a method for managing a network processor output port queue according to an embodiment of the present invention.
  • Turning to the preferred embodiments: flow classification of received data packets is also an important basic function of the network processor.
  • Flow classification is usually implemented with an Access Control List (ACL); that is, ACLs are normally used in the FPP micro-engine.
  • The driver software configures a series of ACL access control rules.
  • The FPP micro-engine queries the configured entries of the tree table and pattern-matches the fields of multiple domains in the packet header against the ACL rules.
  • According to the matching result, the tree table returns a corresponding handle or jumps directly to the corresponding flow; from the result returned by the tree table it is determined whether the packet satisfies an ACL access control rule, and the packet is discarded or forwarded accordingly.
  • ACL access control rules facilitate network management.
  • At present, ACL rules are mainly used in applications such as packet filtering on a port, Network Address Translation (NAT), policy routing, and Unicast Reverse Path Forwarding (URPF).
  • In the prior art, ACL access control rules are implemented entirely in the flow classification part, and the application of ACL access control does not involve congestion-avoidance control of data packets in the output port queue management part; moreover, the management part of the network processor output port queue currently cannot block specific data packets.
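The tree-table lookup itself is hardware-specific, but the effect of matching several header fields against an ordered rule list can be sketched as follows (the field names and rule shape are illustrative assumptions, not the patent's data structures):

```python
def acl_lookup(header, rules, default="forward"):
    """Return the action of the first rule whose fields all match.

    `header` maps field names (e.g. proto, dst_port, src_ip) to values;
    a rule wildcards a field by omitting it or setting it to None.
    """
    for rule in rules:
        if all(header.get(field) == value
               for field, value in rule["match"].items()
               if value is not None):
            return rule["action"]
    return default  # no rule matched: fall through to the default action
```

First-match-wins over ordered rules is the usual ACL semantic, which is why rule order matters when ACLs are used for port filtering, NAT, policy routing, and URPF as listed above.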
  • The core idea of the present invention is: in the TM micro-engine, whether a packet is added to the output port queue is determined according to the configured ACL rule and the priority identifier of the received data packet, thereby blocking specific data packets at the output port while other packets continue to be forwarded.
  • Based on this idea, the present invention provides an ACL-based queue management method for a network processor output port, mainly including the following steps:
  • Step 1. Set an output logical port cache threshold as part of the output port queue ACL rule.
  • Step 2. The flow classification micro-engine (i.e., the FPP micro-engine) receives a data packet, queries the tree table entries, pattern-matches the key fields of the packet header, and, according to the matching conclusion returned by the tree table, makes the packet jump to the corresponding service processing flow.
  • Step 3. After flow classification is completed and before the packet is sent to the next-level micro-engine, the priority of the packet is identified, and the packet is sent together with its priority parameter to the downstream logic.
  • Step 4. On receiving the packet, the TM micro-engine uses the priority parameter carried by the packet and the output logical port cache threshold as the ACL rule, generates a decision that the packet is added or not added to the output port queue, sets the packet state, and completes the forwarding or discarding of the packet in the TM micro-engine output port queue.
  • As shown in FIG. 1, the management system of the network processor output port queue of this embodiment mainly consists of the following modules: a packet type identification module 111, a priority identification module 112, a TM discrimination module 121, and a forwarding/discarding module 122.
  • The packet type identification module 111 and the priority identification module 112 are implemented in the FPP micro-engine 11 of the network processor, and the TM discrimination module 121 and the forwarding/discarding module 122 are implemented in the TM micro-engine 12.
  • A data packet passes in sequence through the packet type identification module 111, the priority identification module 112, the TM discrimination module 121, and the forwarding/discarding module 122, and the modules process data in parallel.
  • The function of each module in this embodiment is as follows:
  • The packet type identification module 111 is configured to pattern-match incoming packets by querying the tree table and to direct each packet into the corresponding service processing flow according to the matching result.
  • The priority identification module 112 is configured to identify, after flow classification is completed and before the packet is sent to the next-level micro-engine (i.e., the TM micro-engine 12), the priority of the packet, and to send the priority parameter together with the packet to the TM micro-engine 12.
  • The TM discrimination module 121 is configured to use the priority parameter of the packet and the output logical port cache threshold as an ACL rule, to generate from the ACL rule a decision that the packet is added or not added to the output port queue, and to send the decision to the forwarding/discarding module 122.
  • The forwarding/discarding module 122 is configured to complete the forwarding or discarding of the packet in the TM micro-engine output port queue according to the decision of the TM discrimination module 121.
  • FIG. 2 shows a specific scheme of the method for managing a network processor output port queue provided by the present invention.
  • In this scheme, packets sent by the CPU (Central Processing Unit) port and loopback packets are set to forward at high priority (priority parameter pri = 1), and other service packets are set to forward at low priority (priority parameter pri = 0).
  • The output port queue management system implements local packet blocking control according to the packet priority parameter and the pre-configured output logical port cache threshold.
  • The specific steps of the scheme are as follows:
  • Step 101. The driver software completes the hardware initialization of the network processor.
  • Step 102. Configure a default value of the output logical port cache threshold port_thresh in the configuration file, or let the driver modify the default value of port_thresh, and use this threshold as part of the output port queue ACL rule.
  • Step 103. The network processor receives a data packet.
  • Step 104. The flow classification micro-engine (FPP micro-engine) queries the tree table entries, pattern-matches the key fields of the packet header, and returns a matching result.
  • Step 105. According to the returned matching result, the FPP micro-engine makes the packet jump to the corresponding service processing flow; for example, it judges from the matching result whether the packet type is an ordinary packet; if so, proceed to step 106, otherwise proceed to step 107.
  • In this embodiment, a loopback packet jumps to the loopback packet processing flow, and a normal service (ordinary) packet jumps to the normal service flow.
  • Step 106. After the packet has jumped to its processing flow and before it is sent to the next-level micro-engine, the priority of the packet is identified; in this embodiment the ordinary packet flow is marked as low priority 0, and the flow proceeds to step 108.
  • Step 107. The loopback packet flow is marked as high priority 1, and the flow proceeds to step 108; if there are other types of packets, such as packets sent by the CPU port, they jump to the CPU-port packet flow, whose priority is likewise set to high.
  • Step 108. The fTransmit() function is used to send the priority parameter and the data packet together to the downstream logic, i.e., the TM micro-engine.
  • Step 109. Judge whether the highest bit of port_thresh is 1; if so, execute step 110, otherwise execute step 111.
  • Step 110. Judge whether the consumed capacity of the output port queue has reached the output logical port cache threshold port_thresh; if so, execute step 112, otherwise execute step 111.
  • In this step, the TM micro-engine caches the data packet sent by the FPP micro-engine together with its classification conclusion and carried priority parameter, and uses the priority parameter of the packet and the port_thresh configured in step 102 as the ACL rule from which the enqueue-or-not decision is generated in the subsequent steps.
  • Step 111. Apply the WRED algorithm to implement the congestion control of output port queue management, and end this flow.
  • Step 112. Judge whether pri is 0; if so, execute step 113, otherwise execute step 114.
  • Step 113. Generate, according to the ACL rule, the decision that the ordinary packet is not added to the output port queue, and discard the ordinary packet according to that decision.
  • Step 114. Generate, according to the ACL rule, the decision to join the output port queue; according to that decision, add the high-priority packets (loopback packets and packets sent by the CPU port) to the output port queue for forwarding, and at the same time update the currently consumed capacity of the output port queue. At this point, the forwarding or discarding of the data packet in the TM micro-engine output port queue is complete, and the flow ends.
  • As can be seen from the above flow, when normal data packets need to be blocked, the driver can modify the output logical port cache threshold port_thresh. For example, port_thresh occupies 2 bytes (16 bits); setting its highest bit to 1 while setting the currently consumed capacity of the output port queue to the maximum value indicates that the output port queue is full, so normal data packets are discarded, which achieves the blocking of specific packets. In this embodiment, whether the WRED algorithm is executed directly is decided by judging whether the highest bit of port_thresh is 1.
  • In other embodiments, step 109 may be omitted, i.e., step 110 is executed directly after step 108; when the consumed capacity of the output port queue has not reached port_thresh, step 111 is executed: the packet is added to the output port queue for forwarding, and the weighted random early detection algorithm is applied to the output port queue for congestion control.
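The 16-bit encoding and the step 109–114 control flow can be sketched as follows (the top-bit convention is this embodiment's; the helper names are ours, and a real implementation would run in the TM micro-engine, not in software like this):

```python
PORT_THRESH_BITS = 16

def blocking_flag_set(port_thresh):
    """True when the highest bit of the 16-bit port_thresh is 1 (step 109)."""
    return (port_thresh >> (PORT_THRESH_BITS - 1)) & 1 == 1

def handle_packet(consumed, port_thresh, pri):
    """Steps 109-114 in miniature: returns "wred", "discard", or "enqueue"."""
    if not blocking_flag_set(port_thresh):
        return "wred"                    # step 111: normal WRED-managed path
    if consumed < port_thresh:           # step 110: threshold not reached
        return "wred"
    # steps 112-114: threshold reached, decide by priority parameter
    return "discard" if pri == 0 else "enqueue"
```

With the flag bit set and the consumed capacity forced to its maximum, every ordinary packet (pri = 0) takes the "discard" branch while high-priority packets still enqueue, which reproduces the blocking behaviour the paragraph describes.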
  • In summary, the present invention provides a method and system for managing a network processor output port queue.
  • ACL rules are used in the TM micro-engine to block data packets locally, and blocking low-priority data packets effectively avoids congestion.
  • When the TM micro-engine cooperates with the WRED algorithm, congestion control achieves better results.
  • Under resource shortage, the policy of guaranteeing high-priority packet forwarding and discarding low-priority packets prevents high-priority packets from being dropped when WRED discards.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

Method and System for Managing a Network Processor Output Port Queue

TECHNICAL FIELD

The present invention relates to the field of data communications, and in particular to a method and system for managing a network processor output port queue.
BACKGROUND
Existing network processor technology usually implements service processing with micro-engines, which cooperate to complete the flow classification, rate limiting, queue management, packet modification, scheduled output, and other processing of the passing data packet service flows. The function of each micro-engine is relatively fixed: the Policing micro-engine implements rate limiting; the Traffic Shaping (TS) micro-engine implements traffic shaping; the Traffic Management (TM) micro-engine implements queue management; the Stream Editor (SED) micro-engine implements packet modification; and the Fast Pattern Processor (FPP) micro-engine implements flow classification (this micro-engine is also called the flow classification micro-engine). Queue management of received data packets is another important basic function of the network processor; its purpose is congestion control. Congestion control is usually implemented in the TM micro-engine. When congestion occurs, it seriously degrades the service quality of the network and increases both the packet loss rate and the delay of network transmission, so measures must be taken to control and avoid congestion. At present, the main approach to congestion is the Active Queue Management (AQM) algorithm. Before AQM appeared, the tail-drop (DropTail) method was used; tail drop discards packets only when the queue buffer of the network device overflows, whereas AQM proactively marks or discards packets before the queue buffer overflows. Compared with tail drop, AQM reduces the packet loss rate, lowers packet transmission delay, and avoids system oscillation. Representative AQM algorithms are Random Early Detection (RED) and Weighted Random Early Detection (WRED). Practice has proved that AQM performs better than tail drop.
SUMMARY OF THE INVENTION

At present, the management part of the network processor output port queue cannot block specific data packets. The technical problem to be solved by the present invention is to provide a method and system for managing a network processor output port queue which block the specific data packets in an ACL rule in the output port queue management part while allowing other, non-specific data packets to continue to be forwarded. To solve the above technical problem, the present invention provides a method for managing a network processor output port queue, including: a Fast Pattern Processor (FPP) micro-engine performs pattern matching on received data packets by querying a tree table and, according to the matching result, directs each packet into the corresponding service processing flow; for a packet entering a service flow, the priority of the packet is identified, and the packet is sent together with its priority parameter to the Traffic Management (TM) micro-engine; and
the TM micro-engine uses the priority parameter of the packet and an output logical port cache threshold as an Access Control List (ACL) rule, generates from the ACL rule a decision on whether the packet is added to the output port queue, and completes the forwarding or discarding of the packet in the TM micro-engine output port queue. Preferably, when identifying the priority of the data packet, the FPP micro-engine marks packets sent by the central processing unit (CPU) port and loopback packets as high priority, and ordinary service packets as low priority. Preferably, the step of the TM micro-engine generating a decision according to the ACL rule includes: the TM micro-engine judges whether the currently consumed capacity of the output port queue has reached the output logical port cache threshold; if not, the packet is added to the output port queue for forwarding; if it has, the priority parameter of the packet is judged: if the priority is high, the packet is added to the output port queue for forwarding, and if the priority is low, the packet is discarded. Preferably, in the step of the TM micro-engine generating a decision according to the ACL rule, if the TM micro-engine judges that the currently consumed capacity of the output port queue has not yet reached the output logical port cache threshold, then while the packet is added to the output port queue for forwarding, a weighted random early detection algorithm is applied to the output port queue for congestion control. Preferably, the method further includes: when adding a packet to the output port queue, the TM micro-engine updates the currently consumed capacity of the output port queue. To solve the above technical problem, the present invention further provides a management system for a network processor output port queue, including a Fast Pattern Processor (FPP) micro-engine and a Traffic Management (TM) micro-engine, the FPP micro-engine including a packet type identification module and a priority identification module, and the TM micro-engine including a TM discrimination module and a forwarding/discarding module, wherein: the packet type identification module is configured to perform pattern matching on received data packets by querying the tree table and to direct each packet into the corresponding service processing flow according to the matching result; the priority identification module is configured to identify the priority of a packet entering a service flow and to send the packet together with its priority parameter to the TM discrimination module; the TM discrimination module is configured to use the priority parameter of the packet and the output logical port cache threshold as an ACL rule, to generate from the ACL rule a decision on whether the packet is added to the output port queue, and to send the decision to the forwarding/discarding module; and the forwarding/discarding module is configured to complete, according to the received decision, the forwarding or discarding of the packet in the TM micro-engine output port queue. Preferably, when identifying the priority of the data packet, the priority identification module marks packets sent by the CPU port and loopback packets as high priority, and ordinary service packets as low priority. Preferably, when generating a decision according to the ACL rule, the TM discrimination module first judges whether the currently consumed capacity of the output port queue has reached the output logical port cache threshold; if not, it generates a decision to add the packet to the output port queue; if it has, it judges the priority parameter of the packet, generating a decision to add the packet to the output port queue if the priority is high, and a decision to discard the packet if the priority is low. Preferably, when the TM discrimination module judges that the currently consumed capacity of the output port queue has not yet reached the output logical port cache threshold, the generated decision to add the packet to the output port queue further includes applying a weighted random early detection algorithm to the output port queue for congestion control. Preferably, the TM discrimination module is further configured to update the currently consumed capacity of the output port queue when generating a decision to add a packet to the output port queue.
The invention uses ACL rules in the TM micro-engine to block data packets locally; blocking low-priority data packets effectively avoids congestion. At the same time, if the TM micro-engine cooperates with the WRED algorithm, congestion control achieves better results: under resource shortage, the policy of guaranteeing high-priority packet forwarding and discarding low-priority packets prevents high-priority packets from being dropped when WRED discards.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of the management system of a network processor output port queue according to an embodiment of the present invention; FIG. 2 is a schematic flowchart of the management method of a network processor output port queue according to an embodiment of the present invention.
PREFERRED EMBODIMENTS OF THE INVENTION

Flow classification of received data packets is also an important basic function of the network processor. Flow classification is usually implemented with an Access Control List (ACL); that is, ACLs are normally used in the FPP micro-engine. The driver software configures a series of ACL access control rules; the FPP micro-engine queries the configured entries of the tree table and pattern-matches the fields of multiple domains in the header of the data packet against the ACL rules. According to the matching result, the tree table returns a corresponding handle or jumps directly to the corresponding flow; from the result returned by the tree table it is determined whether the packet satisfies an ACL access control rule, and depending on which class of rule is or is not satisfied, the packet is discarded or forwarded. ACL access control rules facilitate network management; at present, ACL rules are mainly used in applications such as packet filtering on a port, Network Address Translation (NAT), policy routing, and Unicast Reverse Path Forwarding (URPF). In the prior art, ACL access control rules are all implemented in the flow classification part, and the application of ACL access control does not involve congestion-avoidance control of data packets in the output port queue management part; moreover, the management part of the network processor output port queue currently cannot block specific data packets. The core idea of the present invention is: in the TM micro-engine, whether a packet is added to the output port queue is determined according to the configured ACL rule and the priority identifier of the received data packet, thereby blocking specific data packets at the output port while allowing other data packets to continue to be forwarded. Based on the above idea, the present invention provides an ACL-based queue management method for a network processor output port, mainly including the following steps: Step 1. Set an output logical port cache threshold as part of the output port queue ACL rule;
Step 2. The flow classification micro-engine (i.e., the FPP micro-engine) receives a data packet, queries the tree table entries, pattern-matches the key fields of the packet header, and, according to the matching conclusion returned by the tree table, makes the packet jump to the corresponding service processing flow; Step 3. After flow classification is completed and before the packet is sent to the next-level micro-engine, identify the priority of the packet and send the packet together with its priority parameter to the downstream logic; Step 4. On receiving the packet, the TM micro-engine uses the priority parameter carried by the packet and the output logical port cache threshold as the ACL rule, generates a decision that the packet is added or not added to the output port queue, sets the packet state, and completes the forwarding or discarding of the packet in the TM micro-engine output port queue.
The implementation of the technical solution of the present invention is described in further detail below with reference to the drawings and specific embodiments. As shown in FIG. 1, the management system of the network processor output port queue of this embodiment mainly consists of the following modules: a packet type identification module 111, a priority identification module 112, a TM discrimination module 121, and a forwarding/discarding module 122. The packet type identification module 111 and the priority identification module 112 are implemented in the FPP micro-engine 11 of the network processor, and the TM discrimination module 121 and the forwarding/discarding module 122 are implemented in the TM micro-engine 12. A data packet passes in sequence through the packet type identification module 111, the priority identification module 112, the TM discrimination module 121, and the forwarding/discarding module 122, and the modules process data in parallel. The modules in this embodiment work as follows: the packet type identification module 111 is configured to pattern-match incoming packets by querying the tree table and to direct each packet into the corresponding service processing flow according to the matching result; the priority identification module 112 is configured to identify, for a packet entering a specified service flow, the priority of the packet after flow classification is completed and before it is sent to the next-level micro-engine (i.e., the TM micro-engine 12), and to send the priority parameter together with the packet to the TM micro-engine 12; the TM discrimination module 121 is configured to use the priority parameter of the packet and the output logical port cache threshold as an ACL rule, to generate from the ACL rule a decision that the packet is added or not added to the output port queue, and to send the decision to the forwarding/discarding module 122; and the forwarding/discarding module 122 is configured to complete the forwarding or discarding of the packet in the TM micro-engine output port queue according to the decision of the TM discrimination module 121.
FIG. 2 shows a specific scheme of the management method of the network processor output port queue provided by the present invention. In this scheme, packets sent by the CPU (Central Processing Unit) port and loopback packets are set to forward at high priority (priority parameter pri = 1), and other service packets are set to forward at low priority (priority parameter pri = 0). The output port queue management system implements local packet blocking control according to the packet priority parameter and the pre-configured output logical port cache threshold. The specific steps of this scheme are as follows: Step 101. The driver software completes the hardware initialization of the network processor; Step 102. Configure a default value of the output logical port cache threshold port_thresh in the configuration file, or let the driver modify the default value of port_thresh, and use this threshold as part of the output port queue ACL rule; configuring port_thresh exists in the prior art, but there its value is fixed, is generally not modified, and is not used as an ACL rule. Step 103. The network processor receives a data packet; Step 104. The flow classification micro-engine (FPP micro-engine) queries the tree table entries, pattern-matches the key fields of the packet header, and returns a matching result; Step 105. According to the returned matching result, the FPP micro-engine makes the packet jump to the corresponding service processing flow; for example, it judges from the matching result whether the packet type is an ordinary packet; if so, go to step 106, otherwise go to step 107; in this embodiment, a loopback packet jumps to the loopback packet processing flow, and a normal service (ordinary) packet jumps to the normal service flow. Step 106. After the packet has jumped to its processing flow and before it is sent to the next-level micro-engine, identify the priority of the packet; in this embodiment the ordinary packet flow is marked as low priority 0, and the flow proceeds to step 108; Step 107. The loopback packet flow is marked as high priority 1, and the flow proceeds to step 108; if there are other types of packets, such as packets sent by the CPU port, they jump to the CPU-port packet flow, whose priority is likewise set to high. Step 108. Use the fTransmit() function to send the priority parameter together with the packet to the downstream logic, i.e., the TM micro-engine; Step 109. Judge whether the highest bit of port_thresh is 1; if so, execute step 110, otherwise execute step 111; Step 110. Judge whether the consumed capacity of the output port queue has reached the output logical port cache threshold port_thresh; if so, execute step 112, otherwise execute step 111; in this step, the TM micro-engine caches the data packet sent by the FPP micro-engine together with classification conclusions such as the carried priority parameter, and uses the priority parameter of the packet and the port_thresh configured in step 102 as the ACL rule from which the enqueue-or-not decision is generated in the subsequent steps. Step 111. Apply the WRED algorithm to implement the congestion control of output port queue management, and end this flow; Step 112. Judge whether pri is 0; if so, execute step 113, otherwise execute step 114; Step 113. Generate, according to the ACL rule, the decision that the ordinary packet is not added to the output port queue, and discard the ordinary packet according to that decision; Step 114. Generate, according to the ACL rule, the decision to join the output port queue; according to that decision, add the high-priority packets (loopback packets and packets sent by the CPU port) to the output port queue for forwarding, and at the same time update the currently consumed capacity of the output port queue. At this point, the forwarding or discarding of the data packet in the TM micro-engine output port queue is complete, and this flow ends. As can be seen from the above flow, when normal data packets need to be blocked, the driver can modify the output logical port cache threshold port_thresh; for example, port_thresh occupies 2 bytes (16 bits), and setting its highest bit to 1 while setting the currently consumed capacity of the output port queue to the maximum value indicates that the output port queue is full; normal data packets are then discarded, which achieves the blocking of specific packets. In this embodiment, whether the WRED algorithm is executed directly is decided by judging whether the highest bit of the output logical port cache threshold port_thresh is 1. In other embodiments, step 109 may be omitted, i.e., step 110 is executed directly after step 108; when it is judged that the consumed capacity of the output port queue has not reached port_thresh, step 111 is executed: the packet is added to the output port queue for forwarding, and at the same time a weighted random early detection algorithm is applied to the output port queue for congestion control.
Although the present invention has been described in conjunction with specific embodiments, those skilled in the art can make modifications and variations without departing from the spirit or scope of the invention. Such modifications and variations are regarded as falling within the scope of the invention and of the appended claims.

INDUSTRIAL APPLICABILITY

The present invention provides a method and system for managing a network processor output port queue. ACL rules are used in the TM micro-engine to block data packets locally, and blocking low-priority data packets effectively avoids congestion. At the same time, if the TM micro-engine cooperates with the WRED algorithm, congestion control achieves better results: under resource shortage, the policy of guaranteeing high-priority packet forwarding and discarding low-priority packets prevents high-priority packets from being dropped when WRED discards.

Claims

CLAIMS
1. A method for managing a network processor output port queue, comprising: performing, by a Fast Pattern Processor (FPP) micro-engine, pattern matching on a received data packet by querying a tree table, and directing the packet into the corresponding service processing flow according to the matching result; for a packet entering a service flow, identifying the priority of the packet, and sending the packet together with its priority parameter to a Traffic Management (TM) micro-engine; and using, by the TM micro-engine, the priority parameter of the packet and an output logical port cache threshold as an Access Control List (ACL) rule, generating from the ACL rule a decision on whether the packet is added to an output port queue, and completing the forwarding or discarding of the packet in the TM micro-engine output port queue.
2. The method of claim 1, wherein: when identifying the priority of the data packet, the FPP micro-engine marks packets sent by a central processing unit (CPU) port and loopback packets as high priority, and ordinary service packets as low priority.
3. The method of claim 1 or 2, wherein the step of the TM micro-engine generating a decision according to the ACL rule comprises: judging, by the TM micro-engine, whether the currently consumed capacity of the output port queue has reached the output logical port cache threshold; if not, adding the packet to the output port queue for forwarding; if it has, judging the priority parameter of the packet, adding the packet to the output port queue for forwarding if the priority is high, and discarding the packet if the priority is low.
4. The method of claim 3, wherein: in the step of the TM micro-engine generating a decision according to the ACL rule, if the TM micro-engine judges that the currently consumed capacity of the output port queue has not yet reached the output logical port cache threshold, then while the packet is added to the output port queue for forwarding, a weighted random early detection algorithm is applied to the output port queue for congestion control.
5. The method of claim 3, further comprising: updating, by the TM micro-engine, the currently consumed capacity of the output port queue when adding a packet to the output port queue.
6. A management system for a network processor output port queue, comprising a Fast Pattern Processor (FPP) micro-engine and a Traffic Management (TM) micro-engine, the FPP micro-engine comprising a packet type identification module and a priority identification module, and the TM micro-engine comprising a TM discrimination module and a forwarding/discarding module, wherein: the packet type identification module is configured to perform pattern matching on received data packets by querying a tree table and to direct each packet into the corresponding service processing flow according to the matching result; the priority identification module is configured to identify the priority of a packet entering a service flow and to send the packet together with its priority parameter to the TM discrimination module; the TM discrimination module is configured to use the priority parameter of the packet and an output logical port cache threshold as an ACL rule, to generate from the ACL rule a decision on whether the packet is added to the output port queue, and to send the decision to the forwarding/discarding module; and the forwarding/discarding module is configured to complete, according to the received decision, the forwarding or discarding of the packet in the TM micro-engine output port queue.
7. The system of claim 6, wherein: the priority identification module is configured to mark packets sent by the CPU port and loopback packets as high priority, and ordinary service packets as low priority, when identifying the priority of the data packet.
8. The system of claim 6 or 7, wherein: the TM discrimination module is configured to first judge, when generating a decision according to the ACL rule, whether the currently consumed capacity of the output port queue has reached the output logical port cache threshold; if not, to generate a decision to add the packet to the output port queue; if it has, to judge the priority parameter of the packet, generating a decision to add the packet to the output port queue if the priority is high, and a decision to discard the packet if the priority is low.
9. The system of claim 8, wherein: when the TM discrimination module judges that the currently consumed capacity of the output port queue has not yet reached the output logical port cache threshold, the generated decision to add the packet to the output port queue further includes applying a weighted random early detection algorithm to the output port queue for congestion control.
10. The system of claim 8, wherein: the TM discrimination module is further configured to update the currently consumed capacity of the output port queue when generating a decision to add a packet to the output port queue.
PCT/CN2010/073685 2009-07-31 2010-06-08 一种网络处理器输出端口队列的管理方法及系统 WO2011012023A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2009101615375A CN101616097B (zh) 2009-07-31 2009-07-31 一种网络处理器输出端口队列的管理方法及系统
CN200910161537.5 2009-07-31

Publications (1)

Publication Number Publication Date
WO2011012023A1 true WO2011012023A1 (zh) 2011-02-03

Family

ID=41495513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/073685 WO2011012023A1 (zh) 2009-07-31 2010-06-08 一种网络处理器输出端口队列的管理方法及系统

Country Status (2)

Country Link
CN (1) CN101616097B (zh)
WO (1) WO2011012023A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259378A (zh) * 2017-03-30 2018-07-06 新华三技术有限公司 一种报文处理方法及装置
CN108881065A (zh) * 2017-05-08 2018-11-23 英特尔公司 基于流的速率限制
CN109933907A (zh) * 2019-03-14 2019-06-25 北京五维星宇科技有限公司 一种装备管理业务模型的建立方法及装置
CN111030943A (zh) * 2019-12-13 2020-04-17 迈普通信技术股份有限公司 一种报文的处理方法、装置、转发设备及存储介质
CN113064738A (zh) * 2021-03-29 2021-07-02 南京邮电大学 基于概要数据的主动队列管理方法

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616097B (zh) * 2009-07-31 2012-05-23 中兴通讯股份有限公司 一种网络处理器输出端口队列的管理方法及系统
GB2481971B (en) * 2010-07-07 2016-12-21 Cray Uk Ltd Apparatus & method
CN101984608A (zh) * 2010-11-18 2011-03-09 中兴通讯股份有限公司 报文拥塞避免方法及系统
CN102025638A (zh) * 2010-12-21 2011-04-20 福建星网锐捷网络有限公司 基于优先级的数据传输方法、装置及网络设备
CN102594669A (zh) * 2012-02-06 2012-07-18 福建星网锐捷网络有限公司 数据报文的处理方法、装置及设备
CN102594691B (zh) * 2012-02-23 2019-02-15 中兴通讯股份有限公司 一种处理报文的方法及装置
CN103152251A (zh) * 2013-02-27 2013-06-12 杭州华三通信技术有限公司 一种报文处理方法及装置
CN105490961A (zh) * 2014-09-19 2016-04-13 杭州迪普科技有限公司 报文处理方法、装置以及网络设备
CN107438035B (zh) * 2016-05-25 2021-11-12 中兴通讯股份有限公司 一种网络处理器、网络处理方法和系统、单板
CN107454014A (zh) * 2016-05-30 2017-12-08 中兴通讯股份有限公司 一种优先级队列调度的方法及装置
US10554556B2 (en) * 2017-08-08 2020-02-04 Mellanox Technologies Tlv Ltd. Network element with congestion-aware match tables
CN108833299B (zh) * 2017-12-27 2021-12-28 北京时代民芯科技有限公司 一种基于可重构交换芯片架构的大规模网络数据处理方法
CN109586780A (zh) * 2018-11-30 2019-04-05 四川安迪科技实业有限公司 卫星网络中防止报文阻塞的方法
CN112398728B (zh) * 2019-08-14 2024-03-08 中兴通讯股份有限公司 虚拟网关平滑演进方法、网关设备及存储介质
CN112217738B (zh) * 2020-11-04 2023-08-25 成都中科大旗软件股份有限公司 一种文旅数据服务的流控方法、系统、存储介质及终端

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402472A (zh) * 2002-09-29 2003-03-12 Tsinghua University Dynamic partial buffer sharing method implemented on a network processor platform
CN1658575A (zh) * 2005-03-21 2005-08-24 Beijing Beifang Fenghuo Technology Co., Ltd. Method for improving quality of service in an SGSN network processor
CN1688140A (zh) * 2005-06-03 2005-10-26 Tsinghua University Design and implementation of a high-speed multi-dimensional packet classification algorithm based on a network processor
US20060265424A1 (en) * 2004-01-06 2006-11-23 Yong Yean K Random early detect and differential packet aging flow in switch queues
CN101193061A (zh) * 2006-12-14 2008-06-04 ZTE Corporation Multi-QoS-based traffic control method
US20080253284A1 (en) * 2007-04-16 2008-10-16 Cisco Technology, Inc. Controlling a Transmission Rate of Packet Traffic
CN101616097A (zh) * 2009-07-31 2009-12-30 ZTE Corporation Method and system for managing network processor output port queues

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6404752B1 (en) * 1999-08-27 2002-06-11 International Business Machines Corporation Network switch using network processor and methods
KR100716184B1 (ko) * 2006-01-24 2007-05-10 Samsung Electronics Co., Ltd. Queue management method and apparatus in a network processor

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259378A (zh) * 2017-03-30 2018-07-06 New H3C Technologies Co., Ltd. Packet processing method and apparatus
CN108259378B (zh) * 2017-03-30 2021-09-21 New H3C Technologies Co., Ltd. Packet processing method and apparatus
CN108881065A (zh) * 2017-05-08 2018-11-23 Intel Corporation Flow-based rate limiting
CN109933907A (zh) * 2019-03-14 2019-06-25 Beijing Wuwei Xingyu Technology Co., Ltd. Method and apparatus for establishing an equipment management business model
CN109933907B (zh) * 2019-03-14 2023-10-20 Beijing Wuwei Xingyu Technology Co., Ltd. Method and apparatus for establishing an equipment management business model
CN111030943A (zh) * 2019-12-13 2020-04-17 Maipu Communication Technology Co., Ltd. Packet processing method and apparatus, forwarding device, and storage medium
CN113064738A (zh) * 2021-03-29 2021-07-02 Nanjing University of Posts and Telecommunications Active queue management method based on summary data
CN113064738B (zh) * 2021-03-29 2022-10-25 Nanjing University of Posts and Telecommunications Active queue management method based on summary data

Also Published As

Publication number Publication date
CN101616097A (zh) 2009-12-30
CN101616097B (zh) 2012-05-23

Similar Documents

Publication Publication Date Title
WO2011012023A1 (zh) Method and system for managing network processor output port queues
US11374858B2 (en) Methods and systems for directing traffic flows based on traffic flow classifications
US7916718B2 (en) Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
US8503307B2 (en) Distributing decision making in a centralized flow routing system
US8593970B2 (en) Methods and apparatus for defining a flow control signal related to a transmit queue
US9071529B2 (en) Method and apparatus for accelerating forwarding in software-defined networks
US9674102B2 (en) Methods and network device for oversubscription handling
US8149705B2 (en) Packet communications unit
US9276852B2 (en) Communication system, forwarding node, received packet process method, and program
US10778588B1 (en) Load balancing for multipath groups routed flows by re-associating routes to multipath groups
EP2560333A2 (en) Methods and apparatus for defining a flow control signal
WO2012065477A1 (zh) Packet congestion avoidance method and system
US11818022B2 (en) Methods and systems for classifying traffic flows based on packet processing metadata
US11863459B2 (en) Packet processing method and apparatus
JP2002044139A (ja) Router device and priority control method used therefor
US20210263744A1 (en) Methods and systems for processing data in a programmable data processing pipeline that includes out-of-pipeline processing
WO2020043200A1 (zh) Establishing a fast forwarding table
JP2002111742A (ja) Method for marking packets of a data transmission flow and marker device for performing the method
CN112787951A (zh) Congestion control method, apparatus, device, and computer-readable storage medium
WO2016150020A1 (zh) Packet scheduling method and apparatus based on scheduling flow identifiers
US8310927B1 (en) Priority scheme for control traffic in network switches
US20200145340A1 (en) Packet-content based WRED protection
US9152494B2 (en) Method and apparatus for data packet integrity checking in a processor
WO2023207461A1 (zh) Congested flow identification method, apparatus, device, and computer-readable storage medium
EP2164210B1 (en) Methods and apparatus for defining a flow control signal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10803856

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10803856

Country of ref document: EP

Kind code of ref document: A1