WO2008128464A1 - Device and method for realizing transferring single stream by multiple network processing units - Google Patents

Device and method for realizing transferring single stream by multiple network processing units

Info

Publication number
WO2008128464A1
WO2008128464A1 (application PCT/CN2008/070733, CN2008070733W)
Authority
WO
WIPO (PCT)
Prior art keywords
network processing
data
processing unit
processing units
unit
Prior art date
Application number
PCT/CN2008/070733
Other languages
French (fr)
Chinese (zh)
Inventor
Ke Chen
Zheng Li
Zhenhua Xu
Wenyang Lei
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2008128464A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/61: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio


Abstract

A device for realizing transferring of a single stream by multiple network processing units comprises multiple network processing units, a data receiving management unit, and a data transmitting management unit. The data receiving management unit allocates the received data to the multiple network processing units for processing; the data transmitting management unit combines the data processed by the multiple network processing units into single-stream data for transmission. A method for realizing transferring of a single stream by multiple network processing units is also provided. The invention uses multiple NPs to improve single-stream forwarding capability and to save the development cost and investment of a high-performance NP.

Description

Device and method for implementing single-stream forwarding across multiple network processing units

Technical field

The present invention relates to the field of communications, and in particular to a device and method for implementing single-stream forwarding across multiple network processing units.

Background

With the rapid growth of broadband networks and the continuing increase in new applications, even the data communication equipment with the highest processing capacity available today faces pressure from ever-higher traffic. Existing forwarding functions are implemented mainly by devices such as a CP (Control Processor), an NP (Network Processor), and a TM (Traffic Management) unit/Fabric (switching network). The NP implements the core forwarding function of the data communication equipment and can be realized by a single NP unit with bidirectional processing capability, as shown in FIG. 1. User-side traffic passes through the inbound interface unit, which performs physical-layer processing of the line signal, parallel-to-serial conversion, deframing, part of the Layer 2 overhead processing, and so on, and is then sent to the NP unit for processing; if protocol packets need to be handled by the CP, the NP and the CP interact, and the traffic finally enters the network side through the traffic management/switching network unit. Network-side traffic reaches the NP unit through the traffic management/switching network unit, after which the outbound interface unit performs physical-layer processing of the line signal, serial-to-parallel conversion, framing, part of the Layer 2 overhead processing, and so on, before the traffic reaches the user side.

The NP function can also be implemented by two unidirectional NP units, as shown in FIG. 2. In the upstream direction, the inbound interface unit performs physical-layer processing of the line signal, parallel-to-serial conversion, deframing, part of the Layer 2 overhead processing, and so on, after which the traffic enters the upstream NP unit and then reaches the network side through the traffic management/switching network unit. Network-side traffic enters the downstream NP unit through the traffic management/switching network unit, after which the outbound interface unit performs physical-layer processing, serial-to-parallel conversion, framing, part of the Layer 2 overhead processing, and so on, before the traffic is sent to the user side.

In the course of implementing the present invention, the inventors found that the existing approach has at least the following drawback:

If traffic grows, an NP of existing processing capacity cannot meet the requirement, and under the existing forwarding architecture a higher-performance NP is the only way to handle the larger traffic. Although a high-performance NP could be developed to replace the existing lower-capacity NPs, doing so entails huge R&D investment and wastes existing resources. Moreover, the next increase in traffic would again force a costly hardware upgrade, so the new high-performance NP would soon fail to keep up with rapidly growing demand, ultimately driving up the network operator's CapEx (capital expenditure).

Summary of the invention
Embodiments of the present invention provide a method and device that use multiple NPs to implement high-performance single-stream forwarding, so as to reduce the cost of system upgrades caused by growth in network traffic.

An embodiment of the present invention provides a device for implementing single-stream forwarding across multiple network processing units, comprising multiple network processing units, a data receiving management unit, and a data sending management unit. The data receiving management unit distributes received data to the multiple network processing units for processing; the data sending management unit combines the data processed by the multiple network processing units into a single stream for transmission.

An embodiment of the present invention further provides a method for implementing single-stream forwarding across multiple network processing units, comprising: distributing received data to multiple network processing units for processing; and combining the data processed by the multiple network processing units into a single stream for transmission.

In the embodiments of the present invention, the received data is distributed to multiple network processing units for processing, and the data processed by those units is reordered and combined into a single stream for transmission, which implements single-stream forwarding across multiple NPs. The forwarding capability of the device is raised by using several existing NPs, without developing a new high-performance NP, thereby saving R&D investment and development cost.

Brief description of the drawings
FIG. 1 shows a prior-art system in which the NP function is implemented by one bidirectional NP unit;

FIG. 2 shows a prior-art system in which the NP function is implemented by two unidirectional NP units;

FIG. 3 is a flowchart of a method for implementing single-stream forwarding across multiple network processing units according to an embodiment of the present invention;

FIG. 4 is a structural diagram of a device for implementing single-stream forwarding across multiple network processing units according to an embodiment of the present invention;

FIG. 5 is a structural diagram of the data receiving management unit in an embodiment of the present invention;

FIG. 6 is a structural diagram of the data sending management unit in an embodiment of the present invention;

FIG. 7 is a structural diagram of the network processing unit backup control subunit in the data receiving management unit according to an embodiment of the present invention;

FIG. 8 is a structural diagram of the network processing unit backup control subunit in the data sending management unit according to an embodiment of the present invention.

Detailed description
An embodiment of the present invention provides a method for implementing single-stream forwarding across multiple network processing units. As shown in FIG. 3, the method includes the following steps:

Step s301: Distribute the single-stream data received from the user side or the network side to multiple network processing units for processing.

Specifically, the received data is ordered packet by packet and assigned ordering identifiers, and the packets carrying the ordering identifiers are then sent to the multiple network processing units. Differences in the processing flow of each NP forwarding engine, in memory accesses during table lookups, and in PCB (Printed Circuit Board) trace lengths mean that splitting the single stream introduces differential delay. Therefore, sequence-number bytes are prepended to each data packet before it enters an NP, and the packet is then handed to the NP for processing.
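The tagging step can be pictured with a short sketch. The following C fragment is a minimal illustration only and assumes details the patent does not specify: the 16-bit sequence field, the struct layout, and the names seq_tag and tag_packet are hypothetical, and in the patent the equivalent operation is performed by the logic placed in front of the NPs.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical tag prepended to every packet of the single stream
 * before it is handed to an NP.  The patent only says that sequence
 * number bytes are added; the width and layout are assumptions. */
struct seq_tag {
    uint16_t seq;   /* monotonically increasing per single stream */
};

/* Prepend a sequence tag to one packet.
 * 'out' must have room for sizeof(struct seq_tag) + len bytes;
 * returns the tagged length. */
static size_t tag_packet(uint16_t *next_seq,
                         const uint8_t *pkt, size_t len,
                         uint8_t *out)
{
    struct seq_tag tag = { .seq = (*next_seq)++ };  /* wraps modulo 2^16 */
    memcpy(out, &tag, sizeof tag);
    memcpy(out + sizeof tag, pkt, len);
    return sizeof tag + len;
}
```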
To save network resources and improve the utilization of the network processing units, load balancing, congestion control, and similar functions may also be applied to the received data. In addition, a backup network processing unit can be added to the system; when a trigger condition is met, for example when a network processing unit fails, the data is switched from the working network processing unit to the backup network processing unit.

Step s302: Combine the data processed by the multiple network processing units into a single stream for transmission.

Specifically, the data processed by the multiple network processing units is reordered according to the ordering identifiers and combined into a single stream for transmission; the reordering and the termination of the sequence numbers are performed by logic symmetric to the logic that added them.
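A reorder stage of the kind described in step s302 can be sketched as a small software model. The window size REORDER_WIN, the 16-bit sequence number, and the function names are assumptions of this sketch, not the patent's implementation, which performs the same operation in the logic that terminates the sequence numbers.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define REORDER_WIN 256                 /* assumed reorder window size */

struct slot { bool valid; void *pkt; };

struct reorder {
    struct slot win[REORDER_WIN];
    uint16_t next_seq;                  /* next sequence number to emit */
};

/* Called for every packet returned by any of the NPs. */
static void reorder_push(struct reorder *r, uint16_t seq, void *pkt)
{
    r->win[seq % REORDER_WIN] = (struct slot){ .valid = true, .pkt = pkt };
}

/* Emit packets strictly in sequence order; returns NULL while the
 * next expected packet has not arrived yet.  The caller strips the
 * sequence tag and forwards the packet as part of the single stream. */
static void *reorder_pop(struct reorder *r)
{
    struct slot *s = &r->win[r->next_seq % REORDER_WIN];
    if (!s->valid)
        return NULL;
    s->valid = false;
    r->next_seq++;
    return s->pkt;
}
```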
In the above embodiment, when the total data traffic of the system exceeds the combined processing capacity of the NPs, or when a faulty NP prevents line-rate processing and the data becomes congested, a congestion control policy and algorithm must be applied. For example, with an FQ (Fair Queuing) algorithm, the n NPs correspond to n queues: the state of the buffer in front of each NP is polled periodically, and if the buffer of one NP, or of all NPs, overflows, that NP (or all NPs) is considered to have a problem causing congestion, and the traffic originally assigned to it is discarded until the corresponding buffer no longer overflows.
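The buffer-overflow check described above might look like the following sketch. The polling routine, the fifo_overflow() status read, and the per-packet drop test are assumptions used for illustration; the patent states only that traffic assigned to an NP whose buffer overflows is discarded until the overflow clears.

```c
#include <stdbool.h>

#define NP_COUNT 4                      /* assumed number of working NPs */

/* Hypothetical hardware status read: true if the buffer in front of
 * NP 'i' has overflowed since the last poll. */
extern bool fifo_overflow(int i);

static bool np_congested[NP_COUNT];

/* Run on a timer: the "timed query" of the per-NP buffer state. */
static void poll_congestion(void)
{
    for (int i = 0; i < NP_COUNT; i++)
        np_congested[i] = fifo_overflow(i);
}

/* Applied per packet at the aggregate ingress: drop traffic that has
 * been assigned to a congested NP until its buffer stops overflowing. */
static bool should_drop(int assigned_np)
{
    return np_congested[assigned_np];
}
```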
In the above embodiment, data traffic is distributed to the multiple NPs in a reasonable, balanced way according to the processing needs of each service and the capacity of each NP, and a load-balancing algorithm can be used to make maximum use of the NP processing resources. Load-balancing algorithms include round-robin, weighted round-robin, random, weighted random, and capacity-based balancing. In this embodiment each NP has the same performance and the same hardware and software configuration, so a combination of round-robin and capacity-based balancing is suitable. Specifically, packets arriving at the interface are assigned to the NPs in round-robin order: packet P1 enters NP1, packet P2 enters NP2, ..., packet Pn enters NPn, so that the per-packet assignment balances the NPs according to their packet processing rate. At the same time, because packet lengths vary, the remaining buffer space on the sending side toward each NP is also taken into account when adjusting the assignment; for example, after packet Pn+1 is assigned to NP1, packet Pn+2 is assigned to whichever of NP2 to NPn has the most remaining buffer space, achieving optimal load balancing.
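The combined round-robin and remaining-buffer-space policy can be sketched as below. The query tx_fifo_free_bytes() and the exact way the free-space check adjusts the round-robin choice are assumptions of this sketch; the patent says only that the NP with the most remaining send-side buffer space is preferred when the assignment is adjusted.

```c
#include <stdint.h>

#define NP_COUNT 4                          /* assumed number of NPs */

/* Hypothetical query of the free space (bytes) in the buffer feeding
 * NP 'i' on the sending side. */
extern uint32_t tx_fifo_free_bytes(int i);

/* Select the NP for the next packet: start from the round-robin
 * candidate, then prefer the NP whose feeding buffer currently has
 * the most free space. */
static int select_np(int *rr_next)
{
    int best = *rr_next;
    uint32_t best_free = tx_fifo_free_bytes(best);

    for (int k = 1; k < NP_COUNT; k++) {
        int i = (*rr_next + k) % NP_COUNT;
        uint32_t free_bytes = tx_fifo_free_bytes(i);
        if (free_bytes > best_free) {
            best = i;
            best_free = free_bytes;
        }
    }
    *rr_next = (*rr_next + 1) % NP_COUNT;   /* advance the round-robin pointer */
    return best;
}
```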
In the above embodiment, the data bus bandwidth at the aggregate-traffic points on the user side and the TM/Fabric side is the largest, while the data bus bandwidth on each NP side is much smaller, so different bus standards are often chosen for the two sides. Where such a difference exists, an interface conversion function can connect the data seamlessly, for example by converting a high-rate SERDES (Serializer/Deserializer) bus on the user side or the network side into lower-rate interfaces suited to an NP, such as SPI-4.2 (System Packet Interface Level 4, Phase 2) or XAUI (10 Gigabit Attachment Unit Interface).

In addition, the congestion control function described above is normally placed at the aggregate-traffic ingress rather than at each NP's ingress. The reason is that the router's IP congestion control algorithm operates per connection and discards according to a given policy, while the total traffic is balanced across multiple NPs, so a single connection is dispersed into n sub-streams; if congestion control were performed inside each NP, weighted congestion control algorithms such as WFQ (Weighted Fair Queuing) and WRED (Weighted Random Early Detection) could not be applied accurately. The embodiment also relies on packet sequence numbers to guarantee strict packet order, and the sequence numbers are added at the aggregate-traffic point; discarding packets at the individual NPs would leave the sequence numbers incomplete and discontinuous.

In addition, a core router sits at a very important network node, and reliability is a key metric. Techniques such as physical link backup and board-level backup of important boards are widely used today to guarantee high reliability, and the NP forwarding module, as the core module of the most important interface board in a router, should likewise support module-level n+1 backup. In the embodiment of the present invention, the network processing unit backup control function makes n+1 backup of the NP modules easy to implement. For example, if normal single-stream traffic requires n NPs, an additional NP backup unit is added. The NP fault-detection mechanism monitors the overflow state of the independent FIFO (First In, First Out) buffer in front of each NP; if an overflow state is detected at some point, the system activates the backup function and redirects the traffic destined for that NP to the backup NP.
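The n+1 backup decision reduces to a few lines of control logic, sketched below. The detection interval and the switch-control call set_switch() are assumptions made for illustration; in the patent the switches shown in FIG. 7 and FIG. 8 are driven directly by the FIFO-overflow fault detection.

```c
#include <stdbool.h>

#define NP_COUNT 4                       /* assumed number of working NPs */

/* Hypothetical status read and switch control; in the patent these are
 * signals between the FPGA/ASIC logic and the per-NP switches. */
extern bool fifo_overflow(int np);
extern void set_switch(int np, bool to_backup);

static bool on_backup[NP_COUNT];

/* Run once per fault-detection interval (microsecond scale in the
 * patent): a working NP whose feeding FIFO overflows is treated as
 * faulty and its traffic is redirected to the single backup NP. */
static void backup_control_poll(void)
{
    for (int np = 0; np < NP_COUNT; np++) {
        if (!on_backup[np] && fifo_overflow(np)) {
            set_switch(np, true);        /* route this NP's traffic to the backup */
            on_backup[np] = true;
        }
    }
}
```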
In the above embodiments, the packet ordering and reordering functions are required, whereas congestion control, load balancing, interface conversion, and backup control are optional and may be provided together or individually.

The method provided by the embodiments of the present invention conveniently provides n+1 backup of the NP modules, improves system reliability, saves the R&D investment and development cost of a high-performance NP, and extends the service life of existing NPs.
An embodiment of the present invention provides a device for implementing single-stream forwarding across multiple network processing units, comprising n NPs (n being at least 2) that are fully peer to one another, together with special-purpose logic added before and after the NPs; this logic can be implemented in an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). As shown in FIG. 4, the device includes multiple network processing units (which may be divided into upstream and downstream network processing units), a data receiving management unit 100, and a data sending management unit 200, where the data receiving management unit 100 and the data sending management unit 200 are connected to a switching network 300 with traffic management capability. The data receiving management unit 100 distributes the single-stream data received from the user side to the multiple network processing units, and the data sending management unit 200 combines the data processed by the multiple network processing units into a single stream and sends it to the switching network 300; alternatively, the data receiving management unit 100 distributes the data received from the switching network 300 to the multiple network processing units, and the data sending management unit 200 combines the processed data into a single stream and sends it to the user side. The multiple network processing units, the data receiving management unit 100, and the data sending management unit 200 are connected to a control processing unit, which controls and manages them.

Referring to FIG. 5, the data receiving management unit 100 further includes: a packet ordering subunit 130, which orders the received packets and sends the packets carrying the assigned ordering identifiers to the multiple network processing units; a network processing unit backup control subunit 140, connected to the packet ordering subunit 130, which, when a trigger condition is met, that is, when a working network processing unit fails or the data it carries exceeds its load, switches the data handled by that network processing unit over to the backup network processing unit; a load balancing subunit 120, which load-balances the received data and, through the packet ordering subunit 130, distributes it evenly to the multiple network processing units; a congestion control subunit 110, which performs congestion control on the data before it enters the load balancing subunit 120; and an interface conversion subunit 150, which performs interface conversion on the received data. The connection relationships among the subunits in this embodiment are only one example and are not limited to those shown in FIG. 5; for instance, when there is no load balancing subunit 120, the packet ordering subunit 130 connects directly to the congestion control subunit 110.

Referring to FIG. 6, the data sending management unit 200 further includes: a packet reordering subunit 230, which reorders the packets processed by the multiple network processing units according to the ordering identifiers and combines them into a single stream for transmission; a network processing unit backup control subunit 220, connected to the packet reordering subunit 230, which, when a trigger condition is met, that is, when a working network processing unit fails or the data it carries exceeds its load, switches the data handled by that network processing unit over to the backup network processing unit; and an interface conversion subunit 210, which performs interface conversion on the data processed by the multiple network processing units.

When the backup NP unit is added, the traffic of every working NP is also wired to the backup NP, but under switch control, and each switch is opened or closed according to the NP fault-detection result. The operating principles of the network processing unit backup control subunit 140 of FIG. 5 and the network processing unit backup control subunit 220 of FIG. 6 are shown in FIG. 7 and FIG. 8, respectively: working NP1 through working NPn are each connected to the backup NP through a switch. The switchover time between a working NP and the backup NP is determined mainly by the fault-detection time and is on the order of microseconds. Of course, the network processing unit backup control subunit may also be present only in the data receiving management unit 100 or only in the data sending management unit 200.

The device provided by the embodiments of the present invention conveniently provides n+1 backup of the NP modules, improves system reliability, saves the R&D investment and development cost of a high-performance NP, and extends the service life of existing NPs.
From the description of the above embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software together with the necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product stored on a storage medium and containing a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention. The above discloses only a few specific embodiments of the present invention, but the present invention is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present invention.

The above describes only process and method embodiments of the present invention and is not intended to limit them; any modification, equivalent replacement, or improvement made within the spirit and principles of the embodiments of the present invention shall be included within their protection scope.

Claims

1. A device for implementing single-stream forwarding across multiple network processing units, characterized by comprising multiple network processing units, a data receiving management unit, and a data sending management unit, wherein:
the data receiving management unit distributes received data to the multiple network processing units for processing; and
the data sending management unit combines the data processed by the multiple network processing units into a single stream for transmission.

2. The device for implementing single-stream forwarding across multiple network processing units according to claim 1, wherein:
the data receiving management unit further comprises a packet ordering subunit, which orders the received packets and sends the packets carrying the assigned ordering identifiers to the multiple network processing units; and
the data sending management unit further comprises a packet reordering subunit, which reorders the data processed by the multiple network processing units according to the ordering identifiers and combines it into a single stream for transmission.

3. The device for implementing single-stream forwarding across multiple network processing units according to claim 2, wherein the device further comprises a backup network processing unit, configured to process, when a trigger condition is met, the data that was handled by the network processing unit before the switchover; the data receiving management unit further comprises a network processing unit backup control subunit connected to the packet ordering subunit; the data sending management unit further comprises a network processing unit backup control subunit connected to the packet reordering subunit; and the network processing unit backup control subunits, when the trigger condition is met, switch the data handled by the network processing unit from that network processing unit to the backup network processing unit.

4. The device for implementing single-stream forwarding across multiple network processing units according to claim 2, wherein the data receiving management unit further comprises a load balancing subunit, which load-balances the received data and distributes it evenly to the multiple network processing units through the packet ordering subunit.

5. The device for implementing single-stream forwarding across multiple network processing units according to claim 4, wherein the data receiving management unit further comprises a congestion control subunit, which performs congestion control on the data before it enters the load balancing subunit.

6. The device for implementing single-stream forwarding across multiple network processing units according to claim 2, wherein the data receiving management unit further comprises an interface conversion subunit, which performs interface conversion on the received data; and the data sending management unit further comprises an interface conversion subunit, which performs interface conversion on the data processed by the multiple network processing units.

7. The device for implementing single-stream forwarding across multiple network processing units according to any one of claims 1 to 6, wherein the data receiving unit receives user-side single-stream data and the data sending unit sends single-stream data to the network side; or the data receiving unit receives network-side single-stream data and the data sending unit sends single-stream data to the user side.

8. A method for implementing single-stream forwarding across multiple network processing units, characterized by comprising:
distributing received data to multiple network processing units for processing; and
combining the data processed by the multiple network processing units into a single stream for transmission.

9. The method for implementing single-stream forwarding across multiple network processing units according to claim 8, wherein:
distributing the received data to the multiple network processing units for processing specifically comprises: ordering the received packets and sending the packets carrying the assigned ordering identifiers to the multiple network processing units for processing; and
combining the data processed by the multiple network processing units into a single stream for transmission specifically comprises: reordering the data processed by the multiple network processing units according to the ordering identifiers and combining it into a single stream for transmission.

10. The method for implementing single-stream forwarding across multiple network processing units according to claim 9, wherein, after the received data is distributed to the multiple network processing units for processing, the method further comprises: when a trigger condition is met, switching the data handled by a network processing unit from that network processing unit to a backup network processing unit.

11. The method for implementing single-stream forwarding across multiple network processing units according to claim 9, wherein, before the received data is distributed to the multiple network processing units, the method further comprises: load-balancing the received data.

12. The method for implementing single-stream forwarding across multiple network processing units according to claim 9, wherein, before the received data is distributed to the multiple network processing units, the method further comprises: performing congestion control on the received data.

13. The method for implementing single-stream forwarding across multiple network processing units according to claim 9, wherein, before the received data is distributed to the multiple network processing units, the method further comprises: performing interface conversion on the received data; and before the data processed by the multiple network processing units is combined into a single stream, the method further comprises: performing interface conversion on the data processed by the multiple network processing units.
PCT/CN2008/070733 2007-04-24 2008-04-17 Device and method for realizing transferring single stream by multiple network processing units WO2008128464A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200710098010.3 2007-04-24
CN200710098010.3A CN101043460B (en) 2007-04-24 2007-04-24 Apparatus and method for realizing single stream forwarding of multi-network processing unit

Publications (1)

Publication Number Publication Date
WO2008128464A1 true WO2008128464A1 (en) 2008-10-30

Family

ID=38808664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/070733 WO2008128464A1 (en) 2007-04-24 2008-04-17 Device and method for realizing transferring single stream by multiple network processing units

Country Status (2)

Country Link
CN (1) CN101043460B (en)
WO (1) WO2008128464A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043460B (en) * 2007-04-24 2010-07-07 华为技术有限公司 Apparatus and method for realizing single stream forwarding of multi-network processing unit
JP2009232224A (en) 2008-03-24 2009-10-08 Nec Corp Optical signal divided transmission system, optical transmitter, optical receiver and optical signal divided transmission method
CN111245627B (en) * 2020-01-15 2022-05-13 湖南高速铁路职业技术学院 Communication terminal device and communication method
CN113067778B (en) * 2021-06-04 2021-09-17 新华三半导体技术有限公司 Flow management method and flow management chip

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1679278A (en) * 2002-08-28 2005-10-05 先进微装置公司 Wireless interface
CN1946054A (en) * 2006-09-30 2007-04-11 华为技术有限公司 Transmission method and device for high speed data flow and data exchange device
CN101043460A (en) * 2007-04-24 2007-09-26 华为技术有限公司 Apparatus and method for realizing single stream forwarding of multi-network processing unit

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100486189C (en) * 2003-01-03 2009-05-06 华为技术有限公司 Router

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1679278A (en) * 2002-08-28 2005-10-05 先进微装置公司 Wireless interface
CN1946054A (en) * 2006-09-30 2007-04-11 华为技术有限公司 Transmission method and device for high speed data flow and data exchange device
CN101043460A (en) * 2007-04-24 2007-09-26 华为技术有限公司 Apparatus and method for realizing single stream forwarding of multi-network processing unit

Also Published As

Publication number Publication date
CN101043460B (en) 2010-07-07
CN101043460A (en) 2007-09-26

Similar Documents

Publication Publication Date Title
JP4023281B2 (en) Packet communication apparatus and packet switch
US8625427B1 (en) Multi-path switching with edge-to-edge flow control
US8472312B1 (en) Stacked network switch using resilient packet ring communication protocol
US10313768B2 (en) Data scheduling and switching method, apparatus, system
WO2008067720A1 (en) Method, device and system for performing separate-way transmission in multimode wireless network
EP1891778B1 (en) Electronic device and method of communication resource allocation.
US20080196033A1 (en) Method and device for processing network data
IL230406A (en) Method and cloud computing system for implementing a 3g packet core in a cloud computer with openflow data and control planes
WO2012160465A1 (en) Implementing epc in a cloud computer with openflow data plane
EP2831733A1 (en) Implementing epc in a cloud computer with openflow data plane
JP2005537764A (en) Mechanism for providing QoS in a network using priority and reserve bandwidth protocols
WO2009003374A1 (en) Data communication system, switching network board and method
JP3322195B2 (en) LAN switch
WO2022052882A1 (en) Data transmission method and apparatus
WO2013016971A1 (en) Method and device for sending and receiving data packet in packet switched network
CN105337895B (en) A kind of network equipment main computer unit, network equipment subcard and the network equipment
CN101848168A (en) Target MAC (Media Access Control) address based flow control method, system and equipment
WO2008128464A1 (en) Device and method for realizing transferring single stream by multiple network processing units
CN102333026A (en) Message forwarding method and device
US7990873B2 (en) Traffic shaping via internal loopback
WO2023274165A1 (en) Parameter configuration method and apparatus, controller, communication device, and communication system
EP4325800A1 (en) Packet forwarding method and apparatus
WO2014000467A1 (en) Method for adjusting bandwidth in network virtualization system
WO2011140873A1 (en) Data transport method and apparatus for optical transport layer
WO2018127235A1 (en) Ue idle state processing method, mobility management (mm) functional entity, and session management (sm) functional entity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08734091

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08734091

Country of ref document: EP

Kind code of ref document: A1