CN101442490A - Method for processing flux load equilibrium - Google Patents

Method for processing flux load equilibrium

Info

Publication number
CN101442490A
CN101442490A CNA2008102411385A CN200810241138A
Authority
CN
China
Prior art keywords
processing unit
network processing
exchange chip
processing
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102411385A
Other languages
Chinese (zh)
Other versions
CN101442490B (en)
Inventor
穆志新
易善平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Haote Information Technology Co., Ltd.
Original Assignee
BEIJING QQ TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING QQ TECHNOLOGY Co Ltd filed Critical BEIJING QQ TECHNOLOGY Co Ltd
Priority to CN2008102411385A priority Critical patent/CN101442490B/en
Publication of CN101442490A publication Critical patent/CN101442490A/en
Application granted granted Critical
Publication of CN101442490B publication Critical patent/CN101442490B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a traffic load balancing processing method. The system bandwidth is first estimated, and the number of network processors is determined from the bandwidth actually to be handled. For each of the uplink and downlink directions, one network processor performs distribution processing and one performs convergence processing, and the uplink and downlink traffic are processed symmetrically so that the uplink and downlink loads are balanced at the same time. In the distribution processing, packets are classified and parsed per TCP connection, different connections being marked with different classes; one class is extracted for deep packet inspection on the distributing processor itself, while the remaining classes are forwarded over different interfaces/ports to a switching chip, which forwards them to the other network processors. The convergence processing aggregates the packets of an output port and applies port rate limiting and statistics after aggregation. With this method the data processed by the individual network processors neither overlaps nor repeats, and the processing capacity of the system is high.

Description

Method for processing flux load equilibrium
Technical field
The present invention relates to a method for processing network traffic, and in particular to a traffic load balancing processing method in which multiple network processors in a communication device jointly process the network traffic.
Background technology
When a network device provides a deep packet inspection (DPI) function, its forwarding performance varies with how deeply each packet is processed. When a single processor cannot reach the required system performance while performing DPI, the system, in order to avoid dropping packets, applies DPI only to a randomly sampled portion of the traffic and forwards the rest directly. The detection and statistics of the packets therefore inevitably deviate from the actual traffic.
Summary of the invention
In view of the above problem, the main purpose of the present invention is to provide a load balancing method that processes the traffic load accurately.
To achieve this goal, the present invention adopts the following technical solution:
The traffic load balancing processing method of the present invention comprises: estimating the system bandwidth and determining the number of network processors to use according to the bandwidth actually handled; for each of the uplink and downlink directions, using one network processor for distribution processing and one network processor for convergence processing; and, at the same time, processing the uplink and downlink traffic symmetrically so that the uplink and downlink loads are balanced and the processing paths of uplink packets and downlink packets through the system are fully symmetric.
The distribution processing comprises: using one network processor as the master, which classifies and parses the packets per TCP connection and marks different connections with different classes; one of the classes is extracted and deep packet inspection is performed on it locally, while the remaining classes are forwarded over different interfaces/ports to the switching chip and from there to the other network processors.
The convergence processing comprises: using one network processor to aggregate the packets of the output port and to apply port rate limiting and statistics after aggregation; during aggregation the traffic is forwarded over different interfaces/ports to the switching chip and from there to the system input/output interface, the network processor distinguishing the different packets by the switching-chip port number over which they are forwarded.
The present invention uses multiple network processors to perform deep packet inspection in parallel, each network processor selecting the packets it handles according to a fixed per-connection sampling rule. This guarantees that the data processed by the individual network processors neither overlaps nor repeats, and the processing capacity of the system is increased.
Description of drawings
Fig. 1 is a schematic flow chart of the traffic load balancing processing method of the present invention;
Fig. 2 is a flow block diagram of a specific embodiment of the traffic load balancing processing method of the invention.
Embodiment
The traffic load balancing processing method of the present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
The traffic load balancing processing method of the present invention realizes a traffic load balancing scheme in which multiple processors in a network device process the network traffic in parallel.
In the overall system design, first, the network bandwidth that the system actually has to process is analysed, taking into account that uplink and downlink traffic in a real network are asymmetric; secondly, the design must guarantee that the uplink and downlink traffic follow symmetric processing flows through the system, which first balances the uplink and downlink loads; then, the design uses the TCP connection to classify the traffic, and this classification is used to forward packets to different network processors for deep packet parsing, further guaranteeing that the load on each network processor is the same when deep packet parsing is performed.
As shown in Fig. 1, the method of the invention is as follows:
Step 100: the bandwidth actually processed by the system is analysed and the network processors to be used are determined;
Step 101: the uplink and downlink traffic are processed along symmetric flows and the uplink and downlink loads are balanced, guaranteeing that the processing paths of uplink packets and downlink packets through the system are fully symmetric.
Step 102: for each of the uplink and downlink directions, one network processor is used for distribution processing and one network processor for convergence processing.
For the purpose of the traffic load balancing processing, steps 100, 101 and 102 may be performed in any order.
The distribution processing comprises: using one network processor as the master, which classifies and parses the packets per TCP connection and marks different connections with different classes; one of the classes is extracted and deep packet inspection is performed on it locally, while the remaining classes are forwarded over different interfaces/ports to the switching chip and from there to the other network processors.
The convergence processing comprises: using one network processor to aggregate the packets of the output port and to apply port rate limiting and statistics after aggregation; during aggregation the traffic is forwarded over different interfaces/ports to the switching chip and from there to the system input/output interface, the network processor distinguishing the different packets by the switching-chip port number over which they are forwarded.
In addition, in step 100 the network processors are determined as follows: first, the network processor type is selected by running an evaluation test on the manufacturer's evaluation board, from which the DPI processing capacity of a single processor is obtained; secondly, the number of network processors needed is calculated from the estimated actual bandwidth. For example, if the measured DPI capacity of a single processor is 5Gbps and a bidirectional processing capacity of 10Gbps is required in practice, the system needs 4 network processors.
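As a rough illustration of this sizing step (a sketch only; the function name, parameter names and the ceiling-based rounding are assumptions of this example, not part of the patent), the number of network processors can be estimated from the per-processor DPI capacity measured on the vendor evaluation board and the target bidirectional bandwidth:

```python
import math

def required_network_processors(per_np_dpi_gbps: float,
                                target_bandwidth_gbps: float,
                                directions: int = 2) -> int:
    """Estimate how many network processors are needed so that the total DPI
    load, split evenly across processors, stays within the per-processor DPI
    capacity measured on the evaluation board."""
    total_gbps = target_bandwidth_gbps * directions
    return math.ceil(total_gbps / per_np_dpi_gbps)

# Figures from the description: 5 Gbps of DPI per processor and a
# bidirectional 10 Gbps target give 4 network processors.
print(required_network_processors(per_np_dpi_gbps=5.0,
                                  target_bandwidth_gbps=10.0))  # -> 4
```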
Based on these system requirements, a typical application can use four network processors and two six-port switching chips to balance a bidirectional 10Gbps traffic load.
With 10Gbps of traffic in each direction, each network processor handles (10/4)*2=5Gbps of deep packet inspection traffic. Since each direction additionally uses one network processor for distribution processing and one for convergence processing, each network processor directly forwards 2.5Gbps*3=7.5Gbps, and the total traffic through each network processor is 12.5Gbps.
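The per-processor figures quoted above can be reproduced with the short calculation below (a worked sketch; the function and variable names are illustrative and not taken from the patent):

```python
def per_np_load(per_direction_gbps: float = 10.0, num_nps: int = 4) -> dict:
    """Reproduce the per-network-processor load figures from the description
    for a system carrying `per_direction_gbps` in each direction."""
    per_class = per_direction_gbps / num_nps     # 2.5 Gbps per traffic class
    dpi = per_class * 2                          # one class per direction -> 5 Gbps of DPI
    direct_forward = per_class * (num_nps - 1)   # remaining three classes forwarded without DPI -> 7.5 Gbps
    total = dpi + direct_forward                 # 12.5 Gbps through each processor
    return {"dpi_gbps": dpi, "direct_forward_gbps": direct_forward, "total_gbps": total}

print(per_np_load())  # {'dpi_gbps': 5.0, 'direct_forward_gbps': 7.5, 'total_gbps': 12.5}
```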
To keep the deep packet inspection load equal on every network processor in the system, the system must guarantee that the same TCP connection is always handled on the same network processor, and one network processor is used as the key network processor that classifies the packets. This key network processor first receives the packets and parses them, marks different connections with different classes, and splits them evenly into four parts. It extracts one class and performs deep packet inspection on it locally, while the other three classes are forwarded over different interfaces/ports to the switching chip. The switching chip then forwards the different packets to the different network processors based on the interface/port, which reduces packet delay and the performance cost on the other network processors.
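One simple way to realise this classification is to hash the TCP connection tuple so that every packet of a connection maps to the same class; the sketch below is only an illustration under that assumption (the patent does not prescribe a particular hash or port mapping), with class 0 inspected locally on the key network processor and the other classes sent out on distinct switching-chip ports:

```python
import zlib

NUM_CLASSES = 4  # one class per network processor

def classify_connection(src_ip: str, src_port: int,
                        dst_ip: str, dst_port: int) -> int:
    """Map a TCP connection to one of NUM_CLASSES classes. Sorting the two
    endpoints makes the result direction-independent, so uplink and downlink
    packets of the same connection land in the same class."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a[0]}:{a[1]}-{b[0]}:{b[1]}".encode()
    return zlib.crc32(key) % NUM_CLASSES

def dispatch(packet_class: int, local_class: int = 0) -> str:
    """Class `local_class` is inspected on the key processor itself; the other
    classes leave on distinct interfaces/ports towards the switching chip,
    which forwards them to the other network processors."""
    if packet_class == local_class:
        return "inspect locally (DPI)"
    return f"forward via switching-chip port {packet_class}"

cls = classify_connection("10.0.0.1", 50321, "10.0.0.2", 80)
print(cls, "->", dispatch(cls))
```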
In addition, because the whole system applies an overall rate limit and statistics to the output-port traffic, one network processor is used to aggregate the packets; this network processor distinguishes the different packets by the switching-chip port number over which they are forwarded. In this application the aggregated traffic comprises the 7.5Gbps of traffic that has undergone deep packet inspection and the 2.5Gbps that this network processor itself must process; the traffic that has undergone deep packet inspection is forwarded over the designated interface/port to the switching chip, which forwards it to the system input/output interface.
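The convergence behaviour can be sketched as follows (illustrative only; the patent specifies the behaviour — per-port distinction, aggregation, rate limiting and statistics — but not an implementation, so the token-bucket limiter and the class below are assumptions):

```python
import time

class ConvergenceProcessor:
    """Aggregate packets arriving from different switching-chip ports, keep
    per-ingress-port byte statistics, and rate-limit the aggregated output."""

    def __init__(self, rate_limit_bps: float):
        self.rate_limit_bps = rate_limit_bps
        self.tokens = rate_limit_bps            # token bucket, measured in bits
        self.last_refill = time.monotonic()
        self.bytes_per_port = {}                # statistics per ingress port

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.rate_limit_bps,
                          self.tokens + (now - self.last_refill) * self.rate_limit_bps)
        self.last_refill = now

    def converge(self, ingress_port: int, packet: bytes) -> bool:
        """Return True if the packet may be forwarded to the system I/O
        interface, False if it exceeds the output-port rate limit."""
        # The switching-chip port number identifies which processor
        # (and hence which kind of traffic) the packet came from.
        self.bytes_per_port[ingress_port] = self.bytes_per_port.get(ingress_port, 0) + len(packet)
        self._refill()
        needed_bits = len(packet) * 8
        if self.tokens >= needed_bits:
            self.tokens -= needed_bits
            return True
        return False
```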
Through the above process it is guaranteed that the data processed by the individual network processors neither overlaps nor repeats, and the system reaches a processing capacity of 10Gbps.
As shown in Fig. 2, the dashed lines in the figure represent data traffic without DPI and the solid lines represent DPI traffic. To keep the hardware from becoming too large, the system design divides the four network processors into two groups mounted on two boards. As shown in Fig. 2, network processor a is the master network processor of the first board, network processor b is the slave network processor of the first board, and network processors c and d are the master and slave network processors of the second board, respectively.
After entering network processor a, data flow A is split by load balancing into four data flows A1, A2, A3 and A4. After processing on the first board they are sent to the second board through the switching chip and the backplane deserializer, and finally delivered by network processor d to the system input/output interface of the second board.
As shown in the figure, the routing path of data flow A is as follows:
Data flow A: first-board system input/output interface → switching chip → network processor a;
Data flow A1: network processor a → switching chip → first-board deserializer → second-board deserializer → switching chip → network processor d → second-board system input/output interface;
Data flow A2: network processor a → switching chip → network processor b → switching chip → first-board deserializer → second-board deserializer → switching chip → network processor d → switching chip → second-board system input/output interface;
Data flow A3: network processor a → switching chip → first-board deserializer → second-board deserializer → switching chip → network processor c → switching chip → network processor d → switching chip → second-board system input/output interface;
Data flow A4: network processor a → switching chip → first-board deserializer → second-board deserializer → switching chip → network processor d → switching chip → second-board system input/output interface.
After entering network processor c, data flow B is split by load balancing into four data flows B1, B2, B3 and B4. After processing on the second board they are sent to the first board through the switching chip and the backplane deserializer, and finally delivered by network processor b to the system input/output interface of the first board.
Referring to Fig. 2, the routing path of data flow B is as follows:
Data flow B: second-board system input/output interface → switching chip → network processor c;
Data flow B1: network processor c → switching chip → second-board deserializer → first-board deserializer → switching chip → network processor b → first-board system input/output interface;
Data flow B2: network processor c → switching chip → network processor d → switching chip → second-board deserializer → first-board deserializer → switching chip → network processor b → switching chip → first-board system input/output interface;
Data flow B3: network processor c → switching chip → second-board deserializer → first-board deserializer → switching chip → network processor a → switching chip → network processor b → switching chip → first-board system input/output interface;
Data flow B4: network processor c → switching chip → second-board deserializer → first-board deserializer → switching chip → network processor b → switching chip → first-board system input/output interface.
Each of the above network processors uses 20 threads for the 5Gbps of DPI service processing and 8 threads for the 7.5Gbps of direct forwarding.

Claims (4)

1. A traffic load balancing processing method, characterized in that it comprises the following steps:
1) estimating the system bandwidth and determining the number of network processors to use according to the bandwidth actually handled;
2) for each of the uplink and downlink directions, using one network processor for distribution processing and one network processor for convergence processing;
3) processing the uplink and downlink traffic symmetrically so as to balance the uplink and downlink loads;
the above steps being performed in no particular order;
wherein the distribution processing is: using one network processor as the master, which classifies the packets and parses them, marking different connections with different classes; one of the classes is extracted and deep packet inspection is performed on it, while the remaining classes are forwarded over different interfaces/ports to a switching chip and from there to the other network processors;
the convergence processing is: using one network processor to aggregate the packets of the output port and to apply port rate limiting and statistics after aggregation; during aggregation the traffic is forwarded over different interfaces/ports to the switching chip and from there to the system input/output port.
2. The traffic load balancing processing method according to claim 1, characterized in that the classification processing classifies the packets by TCP connection.
3. The traffic load balancing processing method according to claim 1, characterized in that the network processor adopts connection-based random sampling of the packets.
4. The traffic load balancing processing method according to claim 1, characterized in that the network processor distinguishes the different packets according to the switching-chip port number over which they are forwarded.
CN2008102411385A 2008-12-30 2008-12-30 Method for processing flux load equilibrium Expired - Fee Related CN101442490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102411385A CN101442490B (en) 2008-12-30 2008-12-30 Method for processing flux load equilibrium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008102411385A CN101442490B (en) 2008-12-30 2008-12-30 Method for processing flux load equilibrium

Publications (2)

Publication Number Publication Date
CN101442490A true CN101442490A (en) 2009-05-27
CN101442490B CN101442490B (en) 2011-04-20

Family

ID=40726737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102411385A Expired - Fee Related CN101442490B (en) 2008-12-30 2008-12-30 Method for processing flux load equilibrium

Country Status (1)

Country Link
CN (1) CN101442490B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102882707A (en) * 2012-09-04 2013-01-16 大唐移动通信设备有限公司 Method and device for detecting and inhibiting Ethernet link storm
CN104618266A (en) * 2015-02-09 2015-05-13 浪潮集团有限公司 Method and device for transferring messages among a plurality of ports
CN105939218A (en) * 2016-04-15 2016-09-14 杭州迪普科技有限公司 Statistical method and device for network traffic
CN106921672A (en) * 2017-03-28 2017-07-04 南京国电南自维美德自动化有限公司 A kind of protocol conversion device of the Multi-netmouth multi -CPU based on exchange chip
CN110447209A (en) * 2017-03-16 2019-11-12 英特尔公司 System, method and apparatus for user plane traffic forwarding
CN112583730A (en) * 2019-09-30 2021-03-30 深圳市中兴微电子技术有限公司 Routing information processing method and device for switching system and packet switching equipment
WO2021243649A1 (en) * 2020-06-04 2021-12-09 深圳市欢太科技有限公司 Rate limit bandwidth adjustment method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101212319A (en) * 2006-12-29 2008-07-02 西门子公司 Method and system for flow statistics in mobile communication
CN101136852B (en) * 2007-06-01 2010-05-19 武汉虹旭信息技术有限责任公司 Deep pack processing method of microengine

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102882707A (en) * 2012-09-04 2013-01-16 大唐移动通信设备有限公司 Method and device for detecting and inhibiting Ethernet link storm
CN102882707B (en) * 2012-09-04 2015-12-02 大唐移动通信设备有限公司 The method and apparatus that a kind of Ethernet link storm detects and suppresses
CN104618266A (en) * 2015-02-09 2015-05-13 浪潮集团有限公司 Method and device for transferring messages among a plurality of ports
CN105939218A (en) * 2016-04-15 2016-09-14 杭州迪普科技有限公司 Statistical method and device for network traffic
CN105939218B (en) * 2016-04-15 2019-02-19 杭州迪普科技股份有限公司 The statistical method and device of network flow
CN110447209A (en) * 2017-03-16 2019-11-12 英特尔公司 System, method and apparatus for user plane traffic forwarding
US11089511B2 (en) 2017-03-16 2021-08-10 Apple Inc. Systems, methods and devices for user plane traffic forwarding
CN106921672A (en) * 2017-03-28 2017-07-04 南京国电南自维美德自动化有限公司 A kind of protocol conversion device of the Multi-netmouth multi -CPU based on exchange chip
CN106921672B (en) * 2017-03-28 2023-12-22 南京国电南自维美德自动化有限公司 Protocol conversion device of many net gaps many CPUs based on exchange chip
CN112583730A (en) * 2019-09-30 2021-03-30 深圳市中兴微电子技术有限公司 Routing information processing method and device for switching system and packet switching equipment
WO2021063279A1 (en) * 2019-09-30 2021-04-08 深圳市中兴微电子技术有限公司 Method and apparatus for processing routing information used for switching system, and packet switching device
WO2021243649A1 (en) * 2020-06-04 2021-12-09 深圳市欢太科技有限公司 Rate limit bandwidth adjustment method and apparatus

Also Published As

Publication number Publication date
CN101442490B (en) 2011-04-20

Similar Documents

Publication Publication Date Title
CN101442490B (en) Method for processing flux load equilibrium
Agrawal et al. Simulation of network on chip for 3D router architecture
US9674084B2 (en) Packet processing apparatus using packet processing units located at parallel packet flow paths and with different programmability
US9900090B1 (en) Inter-packet interval prediction learning algorithm
CN103067218B (en) A kind of express network packet content analytical equipment
CN101656677A (en) Message diversion processing method and device
CN103312565A (en) Independent learning based peer-to-peer (P2P) network flow identification method
CN106357726A (en) Load balancing method and device
CN102835081A (en) Scheduling method, device and system based on three-level interaction and interchange network
CN102497297A (en) System and method for realizing deep packet inspection technology based on multi-core and multi-thread
CN103368777A (en) Data packet processing board and processing method
CN102413054B (en) Method, device and system for controlling data traffic as well as gateway equipment and switchboard equipment
CN105847179B (en) The method and device that Data Concurrent reports in a kind of DPI system
CN111193971B (en) Machine learning-oriented distributed computing interconnection network system and communication method
WO2013139678A1 (en) A method and a system for network traffic monitoring
US9344384B2 (en) Inter-packet interval prediction operating algorithm
CN105471770B (en) A kind of message processing method and device based on multi-core processor
CN101355585A (en) System and method for protecting information of distributed architecture data communication equipment
US8707100B2 (en) Testing a network using randomly distributed commands
JP5742549B2 (en) Packet capture processing method and apparatus
CN105357129B (en) A kind of business sensing system and method based on software defined network
CN102857436A (en) Flow transmission method and flow transmission equipment based on IRF (intelligent resilient framework) network
CN101247397A (en) Optimization method for effective order of mirror and access control list function
CN104348675A (en) Bidirectional service data flow identification method and device
CN105141543B (en) A kind of optimization method and flow controller based on flow controller

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: BEIJING CHANGXUN HENGXING NETWORKING TECHNOLOGY CO

Free format text: FORMER OWNER: BEIJING QQ TECHNOLOGY CO.,LTD.

Effective date: 20100329

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20100329

Address after: 100037 Beijing City, Xicheng District Fuwai Street No. 2 Wantong New World Plaza B block 8 layer

Applicant after: Beijing Chang Xing Star Network Technology Co., Ltd.

Address before: 100037 Beijing City, Xicheng District Fuwai Street No. 2 Wantong New World Plaza B block 8 layer

Applicant before: Beijing QQ Technology Co., Ltd.

ASS Succession or assignment of patent right

Owner name: HEFEI HOT INFORMATION SCIENCE AND TECHNOLOGY CO.,

Free format text: FORMER OWNER: BEIJING CHANGXUN HENGXING NETWORK TECHNOLOGY CO., LTD.

Effective date: 20100622

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100037 8/F, BLOCK B, WANTONG XINSHIJIE PLAZA, NO.2, FUWAI STREET, XICHENG DISTRICT, BEIJING TO: 230088 ROOM 320, MINCHUANG CENTER, NO.605, HUANGSHAN ROAD, HIGH-TECH. ZONE, HEFEI CITY, ANHUI PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20100622

Address after: 320 room 230088, center of 605 people's road, Mount Huangshan Road, hi tech Zone, Anhui, Hefei

Applicant after: Hefei Haote Information Technology Co., Ltd.

Address before: 100037 Beijing City, Xicheng District Fuwai Street No. 2 Wantong New World Plaza B block 8 layer

Applicant before: Beijing Chang Xing Star Network Technology Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110420

Termination date: 20141230

EXPY Termination of patent right or utility model