WO2008128464A1 - Device and method for implementing single-stream forwarding by multiple network processing units - Google Patents

Device and method for implementing single-stream forwarding by multiple network processing units

Info

Publication number
WO2008128464A1
WO2008128464A1 (PCT/CN2008/070733; CN 2008070733 W)
Authority
WO
WIPO (PCT)
Prior art keywords
network processing
data
processing unit
processing units
unit
Prior art date
Application number
PCT/CN2008/070733
Other languages
English (en)
Chinese (zh)
Inventor
Ke Chen
Zheng Li
Zhenhua Xu
Wenyang Lei
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2008128464A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/61: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio

Definitions

  • the present invention relates to the field of communications, and in particular, to an apparatus and method for implementing single stream forwarding of a multi-network processing unit.
  • the existing forwarding processing functions are mainly implemented by devices such as a CP (Control Processor), an NP (Network Processor), and a TM (Traffic Management)/Fabric (Switching Network).
  • the NP implements the core forwarding function of the data communication device, which can be realized by a single NP unit with bidirectional processing capability, as shown in FIG. 1: the user-side service passes through the inbound interface unit, which completes physical-layer processing of the physical line signal, serial-to-parallel conversion, deframing, and partial Layer 2 overhead processing, and the service is then sent to the NP unit for processing.
  • the NP interacts with the CP, and the service finally enters the network side through the traffic management/switching network unit;
  • the network-side service reaches the NP unit through the traffic management/switching network unit; the outbound interface unit then completes physical-layer processing of the physical line signal, parallel-to-serial conversion, framing, and partial Layer 2 overhead processing before the service reaches the user side.
  • the NP function can also be implemented by two NP units with one-way processing function, as shown in FIG. 2:
  • in the service uplink direction, the inbound interface unit completes physical-layer processing of the physical line signal, serial-to-parallel conversion, deframing, and partial Layer 2 overhead processing; the service then enters the uplink NP processing unit and reaches the network side through the traffic management/switching network unit.
  • the network-side service enters the downlink NP processing unit through the traffic management/switching network unit; the outbound interface unit then completes physical-layer processing of the physical line signal, parallel-to-serial conversion, framing, partial Layer 2 overhead processing, etc., and the service is sent to the user side.
  • if service traffic increases, an NP of the existing processing capability can no longer meet the requirements, and under the existing forwarding architecture a higher-performance NP is the only option for handling the larger traffic. Although a high-performance NP could be developed to replace the existing lower-capability NP in order to improve the forwarding capability of data communication equipment, this would require a huge R&D investment and waste existing resources. Moreover, the next increase in traffic volume would force yet another costly hardware upgrade, and even a new high-performance NP may be unable to keep pace with rapidly growing service demand, ultimately increasing the network operator's CapEx (Capital Expenditure).

Summary of the invention
  • the embodiments of the present invention provide a method and a device for implementing high-performance single-flow forwarding processing by using multiple NPs, so as to reduce the cost of system upgrade caused by an increase in network traffic.
  • An embodiment of the present invention provides a device for implementing single-flow forwarding of a multi-network processing unit, including multiple network processing units, a data receiving management unit, and a data transmission management unit.
  • the data receiving management unit allocates received data to said plurality of network processing units for processing;
  • the data transmission management unit combines the data processed by the plurality of network processing units into single-stream data for transmission.
  • the embodiment of the invention further provides a method for implementing single stream forwarding of a multi-network processing unit, including:
  • allocating received data to a plurality of network processing units for processing, and combining the data processed by the plurality of network processing units into single-stream data for transmission.
  • the received data is separately sent to a plurality of network processing units for processing, and the data processed by the plurality of network processing units is reordered and combined into single-stream data, thereby realizing single-stream forwarding across multiple NPs.
  • improving the forwarding capability of the device by using multiple existing NPs avoids developing a new high-performance NP, thus saving R&D investment and development costs.
  • FIG. 1 is a schematic diagram of a prior-art system implementing the NP function with one bidirectional NP unit;
  • FIG. 2 is a schematic diagram of a prior-art system implementing the NP function with two unidirectional NP units;
  • FIG. 3 is a flowchart of a method for implementing single-stream forwarding of a multi-network processing unit according to an embodiment of the present invention;
  • FIG. 4 is a structural diagram of a device for implementing single-stream forwarding of a multi-network processing unit according to an embodiment of the present invention
  • FIG. 5 is a structural diagram of a data receiving management unit in an embodiment of the present invention.
  • FIG. 6 is a structural diagram of a data transmission management unit in an embodiment of the present invention.
  • FIG. 7 is a structural diagram of a backup unit of a network processing unit backup control unit in a data receiving management unit according to an embodiment of the present invention.
  • FIG. 8 is a structural diagram of a network processing unit backup control subunit in a data transmission management unit according to an embodiment of the present invention.

Detailed description
  • the embodiment of the invention provides a method for implementing single-flow forwarding of a multi-network processing unit. As shown in FIG. 3, the method includes the following steps:
  • Step s301: the single-stream data received from the user side or the network side is allocated to a plurality of network processing units for processing.
  • the processing specifically includes: sorting the received data and assigning a sorting identifier, then sending the identifier-tagged data to the plurality of network processing units. Because of differences in each NP forwarding engine's internal processing flow, in memory access during table lookups, and in PCB (Printed Circuit Board) traces, the portions of the single stream experience different delays. Therefore, a sequence-number byte is added in front of each data message before it enters an NP, and the message is then sent to the NP for processing.
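  • as an illustration of the sequence-numbering step above, the following Python sketch prepends a sequence field to each message before dispatch. The 32-bit field width, byte layout, and function names are assumptions for illustration; the patent only states that a sequence-number byte is added in front of each message.

```python
import struct

SEQ_WIDTH = 4  # assumed width of the prepended sequence field, in bytes

def tag_with_sequence(packets):
    """Prepend a monotonically increasing sequence number to each packet
    so that the egress logic can later restore the original order."""
    return [struct.pack("!I", seq) + payload  # network byte order
            for seq, payload in enumerate(packets)]

def strip_sequence(packet):
    """Recover (sequence number, payload) from a tagged packet."""
    (seq,) = struct.unpack("!I", packet[:SEQ_WIDTH])
    return seq, packet[SEQ_WIDTH:]
```

  • tagging happens once at the aggregate ingress, before the packets are spread over the NPs, which is what keeps the numbering gap-free.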
  • load balancing, congestion control, and the like can also be performed on the received data.
  • a backup network processing unit may be added to the system. If a trigger condition is met, for example a working network processing unit is faulty, the data is switched from the working network processing unit to the backup network processing unit.
  • Step s302: the data processed by the plurality of network processing units is combined into single-stream data for transmission.
  • the processing specifically includes: reordering the data processed by the plurality of network processing units according to the sorting identifier and combining it into the single-stream data to be sent; the addition and removal of the sequence number are completed by the logic placed in the two opposite directions (tagging at ingress, stripping at egress).
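  • the egress-side reordering can be sketched as follows. This is a hypothetical Python model; in the device the logic would sit in the FPGA/ASIC, and the class and method names are illustrative. A packet is released only once every earlier sequence number has been seen.

```python
import heapq

class Reorderer:
    """Merge packets arriving out of order from multiple NPs back into a
    single in-sequence stream."""
    def __init__(self):
        self.next_seq = 0
        self.pending = []  # min-heap of (sequence number, payload)

    def push(self, seq, payload):
        heapq.heappush(self.pending, (seq, payload))
        released = []
        # Release from the heap while its head is the next expected number
        while self.pending and self.pending[0][0] == self.next_seq:
            released.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return released
```

  • for example, if the packet with sequence number 1 arrives before packet 0, it is buffered; the arrival of packet 0 then releases both in order.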
  • n NPs correspond to n queues, and the state of the buffer in front of each NP is queried periodically. If the buffer corresponding to one NP, or to all NPs, overflows, that NP (or all NPs) is considered faulty, and the traffic originally allocated to it is discarded until the corresponding buffer no longer overflows.
  • to allocate the data traffic to the multiple NPs in a reasonable and balanced manner according to the characteristics of each service and the processing capability of each NP, and to utilize the NP processing resources with maximum efficiency, a load balancing algorithm can be used.
  • available load balancing algorithms include round-robin, weighted round-robin, random, weighted random, and processing-capability-based balancing.
  • each NP has the same performance and the same hardware and software configuration, so a combination of round-robin and processing-capability-based balancing is most appropriate. Specifically, data packets arriving at the interface are assigned to the NPs in round-robin order: packet P1 enters NP1, packet P2 enters NP2, ..., and packet Pn enters NPn, so that the allocation matches the NPs' processing capability.
  • the packet processing rate is used for equalization, and the remaining buffer space on each NP's sending side should also be considered when adjusting the allocation. For example, after packet Pn+1 enters NP1, packet Pn+2 is given to whichever of NP2 through NPn has the largest remaining buffer space, achieving optimal load balancing.
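  • the combined round-robin plus buffer-space rule described above can be sketched in Python as follows. This is one interpretation of the patent's "round-robin plus processing capability" combination; the exact tie-breaking policy between the round-robin candidate and the NP with the emptiest buffer is an assumption.

```python
class RoundRobinBalancer:
    """Round-robin dispatch over n equally capable NPs, adjusted by the
    remaining buffer space in front of each NP."""
    def __init__(self, n):
        self.n = n
        self.rr = 0  # next NP in the round-robin rotation

    def pick(self, buffers_free):
        """buffers_free[i] is the remaining buffer space before NP i."""
        candidate = self.rr
        best = max(range(self.n), key=lambda i: buffers_free[i])
        # Keep the round-robin choice unless another NP has strictly
        # more free buffer space than it does.
        chosen = candidate if buffers_free[candidate] >= buffers_free[best] else best
        self.rr = (self.rr + 1) % self.n
        return chosen
```

  • with equal buffers the dispatcher degenerates to plain round-robin; when one NP's buffer fills up, its turn is skipped in favor of the least-loaded NP.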
  • the interface conversion function can be used to seamlessly connect high-traffic data on the user side or the network side over interfaces such as SERDES, SPI (System Packet Interface), and XAUI (10 Gigabit Attachment Unit Interface).
  • the congestion control function described above is usually placed at the aggregate traffic entrance rather than at each NP's entry. The reason is that the router's IP congestion control algorithm operates on individual connections, discarding packets according to certain policies, while the balanced allocation spreads all traffic across multiple NPs, so a single connection is dispersed into n sub-streams. If congestion control were placed at each NP, weighted congestion control algorithms such as WFQ (Weighted Fair Queuing) could not be implemented accurately.
  • the message sequence number is used to guarantee strict message ordering and is therefore added at the aggregate traffic point; if packets were discarded at each NP after numbering, the sequence numbers would become missing and discontinuous.
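  • for illustration, a minimal virtual-finish-time model of WFQ applied at the aggregate entrance might look like this. It is a textbook-style sketch rather than the patent's implementation, and the flow names and weights used in the example are hypothetical.

```python
import heapq

class WeightedFairQueue:
    """Minimal virtual-finish-time WFQ model: each flow has a weight,
    and packets are served in order of virtual finish time, so heavier
    flows receive proportionally more bandwidth."""
    def __init__(self):
        self.queue = []    # min-heap of (finish_time, arrival_counter, packet)
        self.finish = {}   # last virtual finish time per flow
        self.counter = 0   # tie-breaker for equal finish times

    def enqueue(self, flow, weight, size, packet):
        # A packet advances its flow's virtual clock by size / weight
        start = self.finish.get(flow, 0.0)
        self.finish[flow] = start + size / weight
        heapq.heappush(self.queue, (self.finish[flow], self.counter, packet))
        self.counter += 1

    def dequeue(self):
        # Serve the packet with the smallest virtual finish time
        return heapq.heappop(self.queue)[2] if self.queue else None
```

  • this kind of per-connection weighting only works when the scheduler sees the whole stream, which is exactly why the patent places congestion control before the traffic is split across NPs.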
  • the core router sits on very important network nodes, and reliability is a key indicator. At present, physical link backup and board-level backup of important boards are widely used to ensure high reliability. As the most important forwarding module in the router, the NP forwarding processing module should likewise implement module-level n+1 backup.
  • n+1 backup of the NP modules can easily be implemented with the network processing unit backup control function. For example, if normal single-stream traffic requires n NPs for processing, one backup NP unit is added. NP fault detection is based on the overflow state of the independent FIFO (First In First Out) buffer in front of each NP: if an overflow is detected at some moment, the system activates the backup function and redirects the traffic originally sent to that NP to the backup NP.
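  • the n+1 backup rule, with FIFO overflow as the fault signal, can be modeled as below. This is a hypothetical sketch: the class, method names, and the convention that index n denotes the shared backup NP are assumptions for illustration.

```python
class BackupSwitch:
    """Model of n+1 NP backup: working NPs are indexed 0..n-1 and one
    shared backup NP has index n. A FIFO overflow in front of a working
    NP is taken as the fault signal that triggers the switchover."""
    def __init__(self, n):
        self.n = n
        self.failed = set()  # working NPs currently considered faulty

    def report_overflow(self, np_index):
        self.failed.add(np_index)       # fault detected: start backup

    def report_recovered(self, np_index):
        self.failed.discard(np_index)   # FIFO drained: switch back

    def route(self, np_index):
        # Divert a faulty NP's traffic to the shared backup NP
        return self.n if np_index in self.failed else np_index
```

  • because only one backup NP is shared by all n working NPs, the model protects against a single NP failure at a time, which matches the n+1 scheme described above.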
  • the message sorting and reordering functions are necessary, while congestion control, load balancing, interface conversion, and backup control are optional: all of them may be provided, or only one or several of them.
  • the above method provided by the embodiment of the present invention can conveniently provide n+1 backup of the NP modules, improve system reliability, save the R&D investment and development cost of a high-performance NP, and prolong the service life of existing NPs.
  • An embodiment of the present invention provides a device for implementing single-stream forwarding with multiple network processing units, including n (n ≥ 2) NPs that are completely peer-to-peer with each other, with additional logic placed before and after the multiple NPs.
  • this added logic can be implemented by an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).
  • the device includes: a plurality of network processing units (which can be divided into uplink/downlink network processing units), a data receiving management unit 100, and a data sending management unit 200, where the data receiving management unit 100 and the data sending management unit 200 connect to a switching network 300 with traffic management capability.
  • the data receiving management unit 100 distributes the single-stream data received from the user side to the plurality of network processing units, and the data sending management unit 200 combines the data processed by the plurality of network processing units into single-stream data and sends it to the switching network 300; or the data receiving management unit 100 distributes the data received from the switching network 300 to the plurality of network processing units, and the data sending management unit 200 combines the data processed by the plurality of network processing units into single-stream data and sends it to the user side.
  • the plurality of network processing units, the data receiving management unit 100, and the data sending management unit 200 are connected to the control processing unit and are controlled and managed by it.
  • the data receiving management unit 100 further includes: a message sorting subunit 130, which sorts the received data and sends the identifier-tagged data to the plurality of network processing units; and a network processing unit backup control subunit 140, connected to the message sorting subunit 130 and configured to switch the data handled by a working network processing unit to the backup network processing unit when a trigger condition is met, that is, when the working network processing unit fails or its carried data exceeds its load.
  • the division into subunits in this embodiment is only an application example, and the connections are not limited to those shown in FIG. 5; for example, the message sorting subunit 130 may also be connected directly to the congestion control subunit 110.
  • the data transmission management unit 200 further includes: a message reordering subunit 230, which reorders the messages processed by the plurality of network processing units according to the sorting identifier and combines them into single-stream data for sending; a network processing unit backup control subunit 220, connected to the message reordering subunit 230 and configured to switch the data handled by a working network processing unit to the backup network processing unit when a trigger condition is met, that is, when the working network processing unit fails or its carried data exceeds its load; and an interface conversion subunit 210, which performs interface conversion on the data processed by the plurality of network processing units.
  • the traffic of each NP is also wired to the backup NP, but under switch control: whether each switch is open or closed is determined by the NP fault detection result. The network processing unit backup control subunit 140 in FIG. 5 and the network processing unit backup control subunit 220 in FIG. 6 implement this principle as shown in FIG. 7 and FIG. 8, respectively: working NP1 through working NPn are each connected to the backup NP through a switch.
  • the switchover time between a working NP and the backup NP is determined mainly by the fault detection time, which is on the order of microseconds.
  • the network processing unit backup control subunit may also exist only in the data reception management unit 100 or the data transmission management unit 200.
  • the device provided by the embodiment of the present invention can conveniently provide the n+1 backup capability of the NP module, improve the reliability of the system, save the R&D investment and development cost of the high-performance NP, and prolong the service life of the existing NP.
  • the present invention can be implemented by software plus a necessary general-purpose hardware platform, and of course also entirely in hardware, but in many cases the former is the better implementation.
  • the essence of the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a device for implementing single-stream forwarding by multiple network processing units, the device comprising: multiple network processing units, a data receiving management unit, and a data sending management unit. The data receiving management unit allocates received data to the multiple network processing units for processing; the data sending management unit combines all the data processed by the multiple network processing units into single-stream data for sending. The invention also discloses a method for implementing single-stream forwarding by multiple network processing units. The invention uses multiple NPs to improve single-stream forwarding capability and to save the development investment of a high-performance NP.
PCT/CN2008/070733 2007-04-24 2008-04-17 Device and method for implementing single-stream forwarding by multiple network processing units WO2008128464A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200710098010.3 2007-04-24
CN200710098010.3A CN101043460B (zh) 2007-04-24 2007-04-24 Device and method for implementing single-stream forwarding of multiple network processing units

Publications (1)

Publication Number Publication Date
WO2008128464A1 (fr) 2008-10-30

Family

ID=38808664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/070733 WO2008128464A1 (fr) 2007-04-24 2008-04-17 Device and method for implementing single-stream forwarding by multiple network processing units

Country Status (2)

Country Link
CN (1) CN101043460B (fr)
WO (1) WO2008128464A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043460B (zh) * 2007-04-24 2010-07-07 Huawei Technologies Co., Ltd. Device and method for implementing single-stream forwarding of multiple network processing units
JP2009232224A (ja) 2008-03-24 2009-10-08 NEC Corp Optical signal division transmission system, optical transmitter, optical receiver, and optical signal division transmission method
CN111245627B (zh) * 2020-01-15 2022-05-13 Hunan High-Speed Railway Vocational and Technical College Communication terminal device and communication method
CN113067778B (zh) * 2021-06-04 2021-09-17 New H3C Semiconductor Technology Co., Ltd. Traffic management method and traffic management chip

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1679278A * 2002-08-28 2005-10-05 Advanced Micro Devices, Inc. Wireless interface
CN1946054A * 2006-09-30 2007-04-11 Huawei Technologies Co., Ltd. High-speed data stream transmission method, apparatus and data switching device
CN101043460A * 2007-04-24 2007-09-26 Huawei Technologies Co., Ltd. Device and method for implementing single-stream forwarding of multiple network processing units

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100486189C (zh) * 2003-01-03 2009-05-06 Huawei Technologies Co., Ltd. A router


Also Published As

Publication number Publication date
CN101043460B (zh) 2010-07-07
CN101043460A (zh) 2007-09-26

Similar Documents

Publication Publication Date Title
JP4023281B2 (ja) Packet communication apparatus and packet switch
US8625427B1 (en) Multi-path switching with edge-to-edge flow control
US10313768B2 (en) Data scheduling and switching method, apparatus, system
US8472312B1 (en) Stacked network switch using resilient packet ring communication protocol
WO2022052882A1 (fr) Data transmission method and apparatus
WO2008067720A1 (fr) Method, device and system for performing separate transmission in a multimode wireless network
US20080196033A1 (en) Method and device for processing network data
IL230406A Cloud computing method and system for executing 3G packets on a cloud computer with OpenFlow data and control planes
WO2012160465A1 (fr) Implementing EPC in a cloud computer with an OpenFlow data plane
EP2831733A1 (fr) Implementing EPC in a cloud computer with an OpenFlow data plane
JP2005537764A (ja) Mechanism for providing QoS in a network utilizing priority and reserved bandwidth protocols
WO2009003374A1 (fr) Data communication system, switching network card and method
JP3322195B2 (ja) LAN switch
CN102594802B Low-latency networking method and system
WO2013016971A1 (fr) Method and device for transmitting and receiving a data packet in a packet-switched network
CN105337895B Network device host unit, network device subcard, and network device
CN101848168A Flow control method, system and device based on destination MAC address
WO2008128464A1 (fr) Device and method for implementing single-stream forwarding by multiple network processing units
US7990873B2 (en) Traffic shaping via internal loopback
WO2011140873A1 (fr) Data transport method and apparatus for an optical transport layer
WO2023274165A1 (fr) Parameter configuration method and apparatus, controller, communication device and communication system
CN106937331A Baseband processing method and apparatus
WO2012065419A1 (fr) Method, system and cascade processing logic subsystem for implementing base station cascading
WO2018127235A1 (fr) UE idle state processing method, mobility management (MM) functional entity, and session management (SM) functional entity
KR100703369B1 Apparatus and method for data processing in a communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08734091

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08734091

Country of ref document: EP

Kind code of ref document: A1