CN1599356A - Flow equalization processing method and device based on connection pair - Google Patents

Flow equalization processing method and device based on connection pair

Info

Publication number
CN1599356A
CN1599356A CNA2004100095934A CN200410009593A
Authority
CN
China
Prior art keywords
data
flow
osi
buffer area
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2004100095934A
Other languages
Chinese (zh)
Other versions
CN100413283C (en)
Inventor
何喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd filed Critical Beijing Ruian Technology Co Ltd
Priority to CNB2004100095934A priority Critical patent/CN100413283C/en
Publication of CN1599356A publication Critical patent/CN1599356A/en
Application granted granted Critical
Publication of CN100413283C publication Critical patent/CN100413283C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This invention discloses a flow equalization processing method and device based on connection pairs. The uplink and downlink data transmitted on a backbone network are mirror-copied to complete access of the OSI layer-1 data, and the data are converted to OSI layer 2, where protocol processing separates them into packet units. The SIP and DIP information of each packet unit is extracted and used to query a preset link distribution table, and the data are stored in the buffer of the outlet assigned by the table. The buffered data are then packed and encapsulated, converted from OSI layer 2 back to layer 1, and transmitted at OSI layer 1.

Description

Flow equalization processing method and device based on connection pairs
Technical field
The present invention relates to a flow equalization processing method and device, and more particularly to a flow equalization processing method and device based on connection pairs.
Background art
On the Internet, point-to-point communication is realized through IP pairs. Setting up a communication, for example sending an e-mail from server A, requires server A first to issue a transmission request to server B; if server B has the resources to receive it, it sends a response telling server A to transmit; server A then sends the e-mail to server B in the format required for network transmission; finally server A indicates that transmission is complete, and server B releases its resources. As can be seen, regardless of which side is receiving or sending the payload, the communication process itself is interactive, a duplex process. Fig. 1 is a schematic diagram of the communication between two users. A server is uniquely identified on the network by its IP address, say 1234 for server A and 4321 for server B. During communication, when data flow from server A to server B, the source address (SIP) of the flow is 1234 and the destination address (DIP) is 4321; when data flow from server B to server A, the SIP is 4321 and the DIP is 1234. Whatever the direction of the data flow, the pair of IPs formed by 1234 and 4321 is invariant. The bidirectional traffic based on such an IP pair is referred to as a connection pair.
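The direction-independence of the IP pair can be illustrated with a small sketch (a hypothetical illustration, not part of the patent text): normalizing the (SIP, DIP) pair of each packet, for example by ordering the two addresses, yields the same connection-pair key for both directions of a communication. The addresses used below are placeholders.

```python
import ipaddress

def connection_pair_key(sip: str, dip: str) -> tuple:
    """Return a direction-independent key for a (SIP, DIP) pair.

    Ordering the two addresses means packets from A to B and from
    B to A map to the same connection pair, as described above.
    """
    a = int(ipaddress.ip_address(sip))
    b = int(ipaddress.ip_address(dip))
    return (a, b) if a <= b else (b, a)

# Both directions of the same communication yield the same key.
assert connection_pair_key("10.0.0.1", "192.168.1.2") == \
       connection_pair_key("192.168.1.2", "10.0.0.1")
```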
To build network systems capable of communication between all types of computer systems, the OSI model defines seven layers of protocols; this is the foundation on which most current networks are built. Fig. 2 shows this basic model of modern communication. Layer 1, the physical layer, performs the actual data transfer as optical or electrical signals. Layer 2 is the data link layer; it is responsible for delivering data units (groups of bits) without error from one physical address to the next adjacent physical address, and it encapsulates packets into frames by adding significant bits to form a header and a trailer. Layer 3, the network layer, contains the protocol (PROTOCOL), the source IP address (SIP), the destination IP address (DIP), the source port address (SP), the destination port address (DP), the message content, and so on.
In building the Internet infrastructure, differences in economic development, in construction level, and in the applications being served have led to network bandwidths of different grades being deployed on different lines in different regions, or even within the same region, and the corresponding infrastructures, such as 10G, 2.5G, 622M and 155M, are not directly compatible with one another. When data carried on a 10G backbone need to be moved onto a 2.5G backbone, the system must be built with a ratio of at least 1:4. But how can a 10G data stream be spread evenly over four 2.5G lines, so that no 2.5G line is overloaded while another 2.5G line sits idle? Some method is needed to distribute the traffic while guaranteeing that it stays balanced. Likewise, as shown in Fig. 3, even between lines of the same grade, some applications may require the traffic to be split evenly across several paths so as not to exceed the limits of the back-end processing capacity. Traffic splitting is therefore a fairly common phenomenon in network communication.
At present, flow equalization methods based on TCP/IP generally identify connections by the source and destination addresses of the packets and assign the connections to different nodes according to an equalization algorithm. Clearly, current algorithms are aimed mainly at the node-forwarding problem, and traffic switching between different bandwidths is still immature. The corresponding flow equalization algorithms are likewise not very mature, their implementations vary, and the resulting traffic distribution is not very even.
Summary of the invention
In view of the problems and deficiencies of existing network traffic equalization described above, the object of the present invention is to provide a flow equalization processing method and device based on connection pairs.
The present invention is realized as follows. A flow equalization processing method based on connection pairs comprises the following steps:
1) mirror-copying the uplink and downlink data transmitted on a backbone network to complete access of the OSI layer-1 data, and converting the data to OSI layer 2;
2) performing protocol processing on the data at OSI layer 2 and separating the data into packet units;
3) extracting the SIP and DIP information of the packet units, querying a preset link distribution table according to this SIP and DIP information, caching the data in the buffer of the outlet assigned by the link distribution table, packing and encapsulating the buffered data according to the OSI layer-2 protocol, converting them from layer 2 back to layer 1, and transmitting them at OSI layer 1.
Further, the preset link distribution table in step (3) is built as follows: if the data source needs to be divided into n equalized paths, the communicating IP address space is mapped onto the n paths in turn, with adjacent addresses going to different paths, so that the data traffic of the backbone network is divided into n flow blocks.
Further, the method also comprises monitoring the n flow blocks; when the traffic of a flow block exceeds a threshold, the range of IP addresses contained in that flow block is reduced while the number of IP addresses contained in the other flow blocks is increased.
Further, there are a plurality of buffers, and the buffered data are encapsulated in a round-robin manner.
Further, when a buffer overflows, the overflowing buffer is processed with priority.
A flow equalization processing device based on connection pairs comprises:
an OSI layer-1 data processing and forwarding module, used to access a mirror copy of the OSI layer-1 data and forward the data to OSI layer 2;
an OSI layer-2 data processing and forwarding module, used to process the OSI layer-1 data according to the relevant protocol and separate the data into packet units;
a packet buffer, used to cache the data;
a SIP and DIP information extraction module, used to take packets from the packet buffer and extract the SIP and DIP information of each packet;
a calculation and comparison module, which matches the extracted SIP and DIP information against the link distribution table, finds the path, and sends the data to the buffer of the outlet assigned by the table; the data in that buffer are then sent to the OSI layer-2 data processing and forwarding module and the OSI layer-1 data processing and forwarding module respectively for transmission.
Further, the link distribution table is built as follows: if the data source needs to be divided into n equalized paths, the communicating IP address space is mapped onto the n paths in turn, with adjacent addresses going to different paths, so that the data traffic of the backbone network is divided into n flow blocks.
Further, the link distribution table is stored in the memory of the device.
Further, the device also comprises a flow detection module, used to monitor the bulk traffic of each flow block; if the traffic of a flow block exceeds a threshold, the range of IP addresses contained in that flow block is reduced while the number of IP addresses contained in the other flow blocks is increased.
Further, the data are forwarded into the plurality of buffers in a round-robin manner, and when a buffer overflows, the overflowing buffer is processed with priority.
In the present invention, the communicating IP address space is mapped in turn, with adjacent addresses going to different paths, onto the pre-allocated links, so that each link forms a flow block and flow equalization is essentially achieved. Moreover, because the communication is processed on the basis of connection pairs, the bidirectional send and receive data of a communication are guaranteed to travel on the same line, so the data of a connection are never broken apart. In other words, the present invention evenly distributes the source traffic over a number of subordinate lines according to the source and destination IP addresses of each packet, thereby achieving balanced traffic distribution.
In addition, to avoid unequal actual traffic among the flow blocks, that is, to prevent one flow block from being overloaded while the others carry too little traffic, the present invention also provides a flow detection mechanism: each flow block is monitored in good time, i.e. each flow block feeds back its traffic, and when the traffic of an IP flow block is detected to be excessive, the range of concrete IP values contained in that block is narrowed, thereby reducing its traffic and achieving immediate flow equalization. The present invention thus solves the problem of balanced splitting of backbone traffic onto subordinate lines, guarantees the immediate balance of each subordinate line, and ensures balanced traffic distribution between different bandwidths.
Description of drawings
The present invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a general communication structure;
Fig. 2 is a schematic diagram of the network transmission model;
Fig. 3 is a schematic diagram of an existing flow equalization structure;
Fig. 4 is a schematic diagram of the data processing structure of the present invention.
Embodiment
As shown in Fig. 4, mirror-copying the uplink and downlink data transmitted on the backbone network completes the access of the OSI layer-1 data. At OSI layer 1 everything transmitted is an analogue or digital electrical or optical signal; although the signal at this layer already contains the SIP and DIP information, the signals are delivered serially at high speed from one end to the other. The present invention makes no improvement to OSI layer 1: the signals are received and processed directly by the layer-1 data receiving and processing module, which keeps the high-speed signal path unobstructed. Parsing the SIP and DIP directly out of OSI layer 1 would be very difficult: one would first have to separate out the packet units, cache the data packet by packet, analyse the data, extract the correct SIP and DIP according to the data type, and only then compute the post-split direction from the SIP and DIP. That would be a laborious and uneconomical approach.
At OSI layer 2, the input is the protocol-compliant digital electrical signal output by layer 1. Physically this signal is undifferentiated, but the layer-2 processing separates the protocol-compliant digital signal produced by layer 1 into individual packet units, marks the head and tail of each with independent control signals, and handles errors, checksums and so on. Its output is a sequence of packet units, with the header and trailer flag bits needed for physical-layer transmission removed; when packets are passed to layer 3, independent control signals synchronously indicate the start and end of each packet. The layer-2 data receiving and processing module may be an off-the-shelf ASIC or an embedded chip designed by the user, but it must support the relevant protocol types, which differ according to network transmission bandwidth, switching, routing and so on.
Packet buffer one stores, packet by packet, the packets parsed by the layer-2 protocol. The buffer of the present invention may be any storage medium such as RAM, FIFO or FLASH, as long as its read/write speed meets the requirements of the overall system speed; its capacity can be set arbitrarily according to the needs of the system. The SIP/DIP information extraction module analyses, from packet buffer one, the protocol type and the SIP/DIP information of each packet, and outputs the SIP/DIP information together with the start address of the packet in buffer one, its length, and similar information. Because the position at which the SIP/DIP is stored in the message differs from protocol to protocol, analysing the SIP/DIP of a packet first requires analysing its protocol type, then extracting the SIP/DIP information from the corresponding position in the packet according to that protocol type, while recording the packet length, the start address of the packet in the buffer and so on; all of this is sent to the downstream module together for use when the data are moved.
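As a minimal sketch of this protocol-dependent extraction (an illustration only, assuming plain Ethernet II framing with an IPv4 payload; the patent covers whichever layer-2 protocols the hardware actually supports):

```python
import struct

def extract_sip_dip(frame: bytes):
    """Extract (SIP, DIP) from an Ethernet II frame carrying IPv4.

    Returns None for other protocol types; a real implementation
    would branch on the protocol type first, as described above.
    """
    if len(frame) < 34:                      # 14-byte Ethernet + 20-byte IPv4 header
        return None
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                  # not IPv4
        return None
    ip_header = frame[14:34]
    sip = ".".join(str(b) for b in ip_header[12:16])
    dip = ".".join(str(b) for b in ip_header[16:20])
    return sip, dip
```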
On receiving the SIP/DIP information from the information extraction module, the calculation and comparison module queries the IP-pair-to-output-direction correspondence table to obtain the outlet direction of the packet, and then sends it, together with the address at which the packet is stored in buffer one, its length and similar information, to the data-moving module. Here the IP-pair-to-output-direction correspondence table is the link distribution table, and it is built as follows. Because IP addresses on the network follow roughly a normal distribution, i.e. are macroscopically even and continuous, the present invention maps the IP addresses onto the links in turn, with adjacent addresses going to different links. That is, all IPs from 0.0.0.0 to 255.255.255.255, and the data flows defined by those IP pairs, are dispersed into several large blocks that are kept roughly equal, each IP from 0.0.0.0 to 255.255.255.255 being assigned a traffic direction according to the principle that adjacent IPs go to different paths. For example, if the data source of the backbone network needs to be divided into 5 equalized paths, each IP is mapped as follows: data whose IP is 0.0.0.0 are output to path 1, 0.0.0.1 to path 2, 0.0.0.2 to path 3, 0.0.0.3 to path 4, 0.0.0.4 to path 5, 0.0.0.5 to path 1, 0.0.0.6 to path 2, ..., 255.255.255.254 to path 5, 255.255.255.255 to path 1. Because the source and destination addresses of the two communicating parties are simply exchanged, the present invention also guarantees at the same time that the uniform traffic is based on connection pairs. Since the IPs of the data flows carried on the network are normally distributed and macroscopically uniform, this processing method guarantees that the traffic of each large IP block is basically balanced.
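The 5-way example above behaves like taking the IP address value modulo the number of paths. Below is a minimal sketch of such a lookup (an illustration, not the patent's implementation; keying on the smaller address of the pair is an assumption made here so that both directions of a connection resolve to the same outlet):

```python
import ipaddress

def outlet_for_packet(sip: str, dip: str, n_paths: int) -> int:
    """Return the outlet (1..n_paths) for a packet.

    Adjacent IP values map to different paths, matching the 5-way
    example above (0.0.0.0 -> path 1, 0.0.0.1 -> path 2, ...).
    """
    a = int(ipaddress.ip_address(sip))
    b = int(ipaddress.ip_address(dip))
    key = min(a, b)                  # assumption: same key for both directions
    return key % n_paths + 1

# Matches the example mapping, and both directions agree.
assert outlet_for_packet("0.0.0.4", "255.255.255.255", 5) == 5
assert outlet_for_packet("255.255.255.255", "0.0.0.4", 5) == 5
```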
The IP-pair-to-output-direction correspondence table of the present invention covers the entire IP address range, and each IP points to a corresponding outlet.
Data-moving module one moves each packet from buffer one to buffer two according to the start address and length at which the packet is stored in buffer one, attaching the output direction information to the packet as it does so. The SIP/DIP information extraction module, the calculation and comparison module, and the data-moving modules may be realized either by embedded design or by an ASIC.
Packet buffer two stores the packet units carrying output direction information.
Data-moving module two moves whole packets out of the two packet buffers two in turn according to a certain strategy and writes them, according to their output direction, into the corresponding subordinate buffer, packet buffer three. Specifically, the moving strategy is to move packets from the two packet buffers two in a round-robin (ping-pong) fashion, and when one of the packet buffers two overflows, to process that overflowing buffer with priority. In this module the uplink and downlink data coming from the data source are broken up and moved separately into the send data buffers according to the IP pair rather than the uplink/downlink direction. Each send data buffer therefore contains both uplink and downlink data; the only difference between the packets in different buffers is which connection-pair IP pair they belong to. This module can be realized by an embedded system or an ASIC.
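A sketch of this ping-pong moving strategy is given below (an illustration under assumed data structures: each packet buffer two is modelled as a simple queue, and the overflow threshold is a placeholder):

```python
from collections import deque

def move_packets(buffers, send_buffers, high_water=1024):
    """Drain two layer-2 packet buffers in round-robin (ping-pong) order.

    buffers      : list of two deques holding (packet, outlet) entries
    send_buffers : dict mapping outlet -> send buffer (packet buffer three)
    high_water   : illustrative overflow threshold; an overflowing buffer
                   is served with priority, as described above.
    """
    turn = 0
    while any(buffers):
        overflowing = [i for i, b in enumerate(buffers) if len(b) > high_water]
        src = overflowing[0] if overflowing else turn
        if buffers[src]:
            packet, outlet = buffers[src].popleft()
            send_buffers[outlet].append(packet)   # direction info no longer needed
        turn = (turn + 1) % len(buffers)
```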
Packet buffer three stores, for each subordinate outlet line, all of the packets belonging to that line, with the packet direction information erased, since by this point each packet is already stored in the buffer corresponding to its direction. When a packet is output, independent control signals synchronously mark its header and trailer for the downstream module to use when reading. The number of buffers three, and of the processing modules chained after them, is determined by the ratio between the data-receiving source and the transmitting ends; for example, converting one 2.5G uplink onto 622M lines requires at least four send buffers and their downstream module chains.
The layer-2 packing and sending module packs the packets in the buffer according to the layer-2 protocol requirements and outputs them; the output data format meets the layer-2 data output protocol. Here the packing and sending module is the data processing and forwarding module. The present invention also comprises a traffic statistics module that monitors the traffic of each split link: by measuring the data traffic handled by each layer-2 packing and sending module, the traffic of each split link can be obtained, and if a threshold is exceeded, the range of concrete IP values contained in the corresponding IP flow block is narrowed, thereby reducing its traffic and achieving immediate flow equalization.
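A sketch of this feedback adjustment is shown below (an illustration under an assumed representation: the dynamic table is reduced to a list of key-space boundaries, one contiguous block per outlet, and the shrink factor is a placeholder; the patent's interleaved mapping would be adjusted in the same spirit):

```python
def rebalance(boundaries, traffic, shrink=0.05):
    """Adjust per-outlet IP ranges after a traffic measurement.

    boundaries : list of n+1 increasing key-space boundaries; outlet i
                 owns keys in [boundaries[i], boundaries[i+1]).
    traffic    : list of n measured byte counts, one per outlet.
    An outlet whose share exceeds the even share by more than `shrink`
    hands part of its key range to its neighbour, so total coverage
    stays at 100% as the text requires.
    """
    n = len(traffic)
    total = sum(traffic) or 1
    for i in range(n - 1):
        excess = traffic[i] / total - 1.0 / n
        if excess > shrink:
            width = boundaries[i + 1] - boundaries[i]
            give = int(width * min(excess, 0.5))
            boundaries[i + 1] -= give     # outlet i shrinks, outlet i+1 grows
    return boundaries
```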
In this way the present invention achieves balanced traffic distribution, i.e. the output traffic corresponding to each IP block is approximately equal. The link distribution table of the present invention is a dynamic table: traffic is never perfectly stable, and although the distribution of traffic over the IP addresses is macroscopically normal, locally it may become unstable and unbalanced, so the IP-pair-to-outlet correspondence table must be able to update, in good time and according to the changes in each outlet's traffic, the range of IP pairs covered by each outlet, both keeping the total coverage at 100% and allowing fine adjustment between blocks for each sub-range of IPs.
The layer-1 conversion and sending module performs the layer-1 data conversion and then transmits the bit data onto the line according to the mechanical and electrical characteristics of the interface.
The present invention can realize balanced traffic processing when converting one 10G path to four 2.5G paths, one 2.5G path to four 622M paths, one 622M path to four 155M paths, and so on.
The present invention does not distinguish between uplink and downlink: the uplink and downlink data coming from the data source are treated equally, and the output data streams likewise make no uplink/downlink distinction. If, when using the present invention to achieve flow equalization, the uplink/downlink distinction must also be preserved at the output end, then before data-moving module two moves the data, the data must be marked according to which of the two source buffers (i.e. uplink or downlink) they came from, and during moving they are placed into the designated buffer according to both the uplink/downlink mark and the output direction. It suffices to implement twice as many send buffers three as at present.

Claims (10)

1. A flow equalization processing method based on connection pairs, comprising the following steps:
1) mirror-copying the uplink and downlink data transmitted on a backbone network to complete access of the OSI layer-1 data, and converting the data to OSI layer 2;
2) performing protocol processing on the data at OSI layer 2 and separating the data into packet units;
3) extracting the SIP and DIP information of the packet units, querying a preset link distribution table according to this SIP and DIP information, caching the data in the buffer of the outlet assigned by the link distribution table, packing and encapsulating the buffered data according to the OSI layer-2 protocol, converting them from layer 2 back to layer 1, and transmitting them at OSI layer 1.
2. The flow equalization processing method based on connection pairs according to claim 1, characterized in that the preset link distribution table in step (3) is built as follows: if the data source needs to be divided into n equalized paths, the communicating IP address space is mapped onto the n paths in turn, with adjacent addresses going to different paths, so that the data traffic of the backbone network is divided into n flow blocks.
3. The flow equalization processing method based on connection pairs according to claim 2, characterized in that the method also comprises monitoring the n flow blocks; when the traffic of a flow block exceeds a threshold, the range of IP addresses contained in that flow block is reduced while the number of IP addresses contained in the other flow blocks is increased.
4. The flow equalization processing method based on connection pairs according to claim 1, characterized in that there are a plurality of buffers, and the buffered data are encapsulated in a round-robin manner.
5. The flow equalization processing method based on connection pairs according to claim 4, characterized in that when a buffer overflows, the overflowing buffer is processed with priority.
6. A flow equalization processing device based on connection pairs, comprising:
an OSI layer-1 data processing and forwarding module, used to access a mirror copy of the OSI layer-1 data and forward the data to OSI layer 2;
an OSI layer-2 data processing and forwarding module, used to process the OSI layer-1 data according to the relevant protocol and separate the data into packet units; and
a packet buffer, used to cache the data; characterized in that the device also comprises:
a SIP and DIP information extraction module, used to take packets from the packet buffer and extract the SIP and DIP information of each packet; and
a calculation and comparison module, which matches the extracted SIP and DIP information against the link distribution table, finds the path, and sends the data to the buffer of the outlet assigned by the table, the data in that buffer being sent to the OSI layer-2 data processing and forwarding module and the OSI layer-1 data processing and forwarding module respectively for transmission.
7. The flow equalization processing device based on connection pairs according to claim 6, characterized in that the link distribution table is built as follows: if the data source needs to be divided into n equalized paths, the communicating IP address space is mapped onto the n paths in turn, with adjacent addresses going to different paths, so that the data traffic of the backbone network is divided into n flow blocks.
8. The flow equalization processing device based on connection pairs according to claim 6, characterized in that the link distribution table is stored in the memory of the device.
9. The flow equalization processing device based on connection pairs according to claim 7, characterized in that the device also comprises a flow detection module, used to monitor the bulk traffic of each flow block; if the traffic of a flow block exceeds a threshold, the range of IP addresses contained in that flow block is reduced while the number of IP addresses contained in the other flow blocks is increased.
10. The flow equalization processing device based on connection pairs according to claim 6, characterized in that the data are forwarded into the plurality of buffers in a round-robin manner, and when a buffer overflows, the overflowing buffer is processed with priority.
CNB2004100095934A 2004-09-21 2004-09-21 Flow equalization processing method and device based on connection pair Expired - Lifetime CN100413283C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100095934A CN100413283C (en) 2004-09-21 2004-09-21 Flow equalization processing method and device based on connection pair

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100095934A CN100413283C (en) 2004-09-21 2004-09-21 Flow equalization processing method and device based on connection pair

Publications (2)

Publication Number Publication Date
CN1599356A true CN1599356A (en) 2005-03-23
CN100413283C CN100413283C (en) 2008-08-20

Family

ID=34662540

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100095934A Expired - Lifetime CN100413283C (en) 2004-09-21 2004-09-21 Flow equilization processing method and device based on connection pair

Country Status (1)

Country Link
CN (1) CN100413283C (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426773C (en) * 2005-11-30 2008-10-15 中兴通讯股份有限公司 Method for equalizing port flow while multiple-MAC-port inter-connecting
CN100505684C (en) * 2005-03-29 2009-06-24 国际商业机器公司 Network system, flow equalization method, network monitoring device and host machine
CN102209035A (en) * 2011-05-25 2011-10-05 杭州华三通信技术有限公司 Traffic forwarding method and devices
CN102420752A (en) * 2011-11-28 2012-04-18 曙光信息产业(北京)有限公司 Dynamic distribution device under 10Gbps flow
CN101635720B (en) * 2009-08-31 2012-09-05 杭州华三通信技术有限公司 Filtering method of unknown flow rate and bandwidth management equipment
CN102664789A (en) * 2012-04-09 2012-09-12 北京百度网讯科技有限公司 Method and system for processing large-scale data
CN101789898B (en) * 2009-01-23 2013-01-02 雷凌科技股份有限公司 Method and equipment for forwarding packet
CN105591965A (en) * 2015-12-28 2016-05-18 北京锐安科技有限公司 Flow balance output method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751221B1 (en) * 1996-10-04 2004-06-15 Kabushiki Kaisha Toshiba Data transmitting node and network inter-connection node suitable for home network environment
DE69943057D1 (en) * 1998-10-30 2011-02-03 Virnetx Inc NETWORK PROTOCOL FOR PROTECTED COMMUNICATION
JP2000295274A (en) * 1999-04-05 2000-10-20 Nec Corp Packet exchange
JP4647825B2 (en) * 2001-04-27 2011-03-09 富士通セミコンダクター株式会社 Packet transmission / reception system, host, and program
EP1301008B1 (en) * 2001-10-04 2005-11-16 Alcatel Process for transmission of data via a communication network to a terminal and network node
CN1254050C (en) * 2002-01-22 2006-04-26 瑞昱半导体股份有限公司 Link layer communication protocol controlled swapping controller and control method thereof
CN2749188Y (en) * 2004-09-08 2005-12-28 北京锐安科技有限公司 Traffic balancing device based on connection couple

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100505684C (en) * 2005-03-29 2009-06-24 国际商业机器公司 Network system, flow equalization method, network monitoring device and host machine
CN100426773C (en) * 2005-11-30 2008-10-15 中兴通讯股份有限公司 Method for equalizing port flow while multiple-MAC-port inter-connecting
CN101789898B (en) * 2009-01-23 2013-01-02 雷凌科技股份有限公司 Method and equipment for forwarding packet
CN101635720B (en) * 2009-08-31 2012-09-05 杭州华三通信技术有限公司 Filtering method of unknown flow rate and bandwidth management equipment
CN102209035A (en) * 2011-05-25 2011-10-05 杭州华三通信技术有限公司 Traffic forwarding method and devices
CN102209035B (en) * 2011-05-25 2014-10-15 杭州华三通信技术有限公司 Traffic forwarding method and devices
CN102420752A (en) * 2011-11-28 2012-04-18 曙光信息产业(北京)有限公司 Dynamic distribution device under 10Gbps flow
CN102420752B (en) * 2011-11-28 2015-02-04 曙光信息产业(北京)有限公司 Dynamic distribution device under 10Gbps flow
CN102664789A (en) * 2012-04-09 2012-09-12 北京百度网讯科技有限公司 Method and system for processing large-scale data
CN102664789B (en) * 2012-04-09 2016-08-17 北京百度网讯科技有限公司 The processing method of a kind of large-scale data and system
CN105591965A (en) * 2015-12-28 2016-05-18 北京锐安科技有限公司 Flow balance output method and device
CN105591965B (en) * 2015-12-28 2018-12-14 北京锐安科技有限公司 flow equalization output method and device

Also Published As

Publication number Publication date
CN100413283C (en) 2008-08-20

Similar Documents

Publication Publication Date Title
CN1166246C (en) System supporting variable bandwidth asynchronous transfer mode network access for wireline and wireless communications
CN101258719B (en) Method to extend the physical reach of an Infiniband network
CN102106125B (en) A kind of multi-path network
CN1073318C (en) Movable communication
CN1097933C (en) Method and device for transforming a series of data packets by means of data compression
CN1221548A (en) Minicell segmentation and reassembly
JPH10506242A (en) Method and switch node for switching STM cells in a circuit emulated ATM switch
CN101616064B (en) Method for managing data, mesh network system and associated device
CN101499997A (en) Apparatus for multi-path low speed service multiplexing and demultiplexing, and method therefor
CN1599356A (en) Flow equalization processing method and device based on connection pair
CN114339488B (en) Method and device for protecting Ethernet service in optical transmission network
CN101064697B (en) Apparatus and method for realizing asynchronous transmission mode network service quality control
CN101202634B (en) Single board improving data utilization ratio and system and method of data transmission
CN1501640A (en) Method and system for transmitting Ethernet data using multiple E1 lines
CN1479459A (en) Ethernet optical fiber transceiver and data transceiving method used on said tranceiver
CN1270727A (en) VC marging for ATM switch
JP2020519100A (en) Method, apparatus and system for transmitting traffic in flex ethernet protocol
US7379467B1 (en) Scheduling store-forwarding of back-to-back multi-channel packet fragments
CN2749188Y (en) Traffic balancing device based on connection couple
CN1859433A (en) Frame transmitting method for micro frame multiplex
CN105680928B (en) Large capacity check-in signal captures and processing method and system
US6347098B1 (en) Packet multiplexing apparatus with less multiplexing delay
US20080212590A1 (en) Flexible protocol engine for multiple protocol processing
CN1567830A (en) Multi-channel network management apparatus and method for transmission equipment
CN1427590A (en) Method for forming and transmitting tandem pocket in ATM exchange system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20080820