CN102497430A - System and method for implementing splitting equipment - Google Patents


Info

Publication number
CN102497430A
CN102497430A (application CN201110415126.1A)
Authority
CN
China
Prior art keywords
network interface
buffer
network interface card
packet
packet sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104151261A
Other languages
Chinese (zh)
Other versions
CN102497430B (en)
Inventor
窦晓光
刘朝辉
贺志强
刘兴彬
邵宗有
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dawning Information Industry Beijing Co Ltd
Dawning Information Industry Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd filed Critical Dawning Information Industry Beijing Co Ltd
Priority to CN201110415126.1A priority Critical patent/CN102497430B/en
Publication of CN102497430A publication Critical patent/CN102497430A/en
Application granted granted Critical
Publication of CN102497430B publication Critical patent/CN102497430B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a system and a method for implementing splitting equipment. The splitting equipment comprises a general-purpose server fitted with a plurality of network cards; the server adopts a symmetric multi-processing (SMP) architecture, packets are received through multiple queues on each network card, and each network card corresponds to one packet-sending thread. A multi-network-card-based aggregation and splitting method comprises the following steps: each network card i divides the hash space into m*n parts according to a configured proportion and uploads packets into the corresponding buffers, each part corresponding to one receive buffer allocated by a central processing unit (CPU); each send/receive thread j of the host receives packets in turn from receive buffers j*n to (j+1)*n-1 of each network card i and sends packets from send buffer j; the send/receive threads submit packets that require processing to a processing thread; and packets that need not be forwarded are discarded directly. The system and method reduce network-bandwidth usage, eliminate intermediate network equipment, and lower the user's investment cost.

Description

System and method for implementing a splitting device
Technical field
The invention belongs to the field of computer network communication, and relates in particular to a system and method for implementing a splitting device.
Background art
The present invention solves the problems that, in the prior art, dedicated network aggregation-and-splitting equipment is expensive, consumes excessive power, and provides no computing capability.
In the prior art, when traffic from multiple networks is aggregated and then split with same-source-same-destination semantics, the adopted scheme is a dedicated device. Such a device consists of a backplane and line cards: the line cards provide traffic access, and the backplane aggregates and splits the data. The drawbacks of this scheme are excessive cost and high power consumption, which waste resources. Moreover, the dedicated device provides no computing capability; if some of the traffic needs further processing, it must be forwarded to a processor, which wastes network bandwidth as well as intermediate network equipment.
The patent with application number 20091007660.0 discloses "A multi-network-card server access method and system", and the patent with application number 20041000011.7 discloses "A data transmission method and device for a network device based on multiple network cards". Both patents concern only functions such as load balancing or active/standby switching among multiple network cards on a server; neither concerns the aggregation and splitting of traffic.
Summary of the invention
The present invention overcomes the above deficiencies of the prior art and relates to computer network-card packet-sending technology.
The invention provides a multi-network-card-based aggregation and splitting device, comprising a general-purpose server fitted with a plurality of network cards; the server adopts an SMP architecture, packet reception on each network card uses multiple queues, and each network card corresponds to one packet-sending thread.
In the multi-network-card-based aggregation and splitting device provided by the invention, the number of network interfaces may differ from one network card to another.
In the device provided by the invention, for a system composed of m network cards each having n network interfaces, the CPU allocates m*n receive buffers and 1 send buffer for each network card, and starts m send/receive threads, each thread corresponding to m*n receive buffers and 1 send buffer.
In the device provided by the invention, each network card provides traffic access and computes a hash value over a configured packet tuple, wherein said packet tuple is a 1-tuple, 2-tuple, 3-tuple, 4-tuple, 5-tuple, or 7-tuple.
In the device provided by the invention, the packet 7-tuple is the combination of the source and destination IP addresses, source and destination ports, transport-layer protocol, and source and destination MAC addresses.
In the device provided by the invention, each send/receive thread of the host receives packets in turn from the receive buffers of each network card, and sends packets from the send buffer corresponding to that thread.
The present invention also provides a multi-network-card-based aggregation and splitting method, comprising the following steps:
1) The CPU allocates m*n receive buffers and 1 send buffer for each network card, and starts m send/receive threads, each thread corresponding to m*n receive buffers and 1 send buffer, where m is the number of network cards and n is the number of network interfaces per card;
2) Each network card i corresponds to receive buffers 0 to m*n-1 and 1 send buffer. Network card i provides traffic access and computes the hash value of the packet tuple according to the configuration, wherein said packet tuple is a 1-tuple, 2-tuple, 3-tuple, 4-tuple, 5-tuple, or 7-tuple, and i = 0, 1, 2, ..., m-1;
3) Network card i divides the hash space into m*n parts according to the configured proportion, each part corresponding to one receive buffer allocated by the CPU, and the card uploads packets into the corresponding buffers;
4) Each send/receive thread j of the host receives packets in turn from receive buffers j*n to (j+1)*n-1 of each network card i, and sends packets from send buffer j;
5) Packets that need processing are submitted by the send/receive thread to a processing thread; packets that need not be forwarded are discarded directly.
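The buffer-to-thread mapping in steps 1) and 4) can be sketched as follows. This is an illustrative in-memory model, not the patent's implementation; the function and variable names are assumptions.

```python
# Model of the layout: m network cards, n interfaces per card; each card
# gets m*n receive buffers and one send buffer; send/receive thread j
# drains receive buffers j*n .. (j+1)*n - 1 of every card and sends
# from send buffer j.

def buffers_for_thread(j: int, m: int, n: int) -> list[tuple[int, int]]:
    """(card, rx_buffer) pairs served by send/receive thread j."""
    return [(i, b) for i in range(m) for b in range(j * n, (j + 1) * n)]

m, n = 2, 3  # example sizes: 2 cards with 3 interfaces each
# The m threads together cover every (card, buffer) pair exactly once,
# so no buffer is shared between threads and no locking is needed:
served = [p for j in range(m) for p in buffers_for_thread(j, m, n)]
assert len(served) == len(set(served)) == m * (m * n)
```

Because each thread owns a disjoint slice of every card's buffers, the scheme avoids the inter-thread data interaction that the description warns would hurt performance.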
In the method provided by the invention, the packet 7-tuple is the combination of the source and destination IP addresses, source and destination ports, transport-layer protocol, and source and destination MAC addresses.
In one embodiment of the method provided by the invention, m is 3 and n is 4.
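For the concrete sizes in the preceding paragraph (m = 3, n = 4), the numbers work out as below; this is a quick arithmetic check, not part of the patent text.

```python
m, n = 3, 4                  # 3 network cards, 4 network interfaces each
rx_buffers_per_card = m * n  # 12 receive buffers allocated per card
threads = m                  # 3 send/receive threads started by the CPU
# Thread j drains receive buffers j*n .. (j+1)*n - 1 of every card:
coverage = {j: list(range(j * n, (j + 1) * n)) for j in range(threads)}
assert coverage == {0: [0, 1, 2, 3], 1: [4, 5, 6, 7], 2: [8, 9, 10, 11]}
```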
Compared with the prior art, the beneficial effects of the present invention are:
1) A general-purpose server is far cheaper than a dedicated splitting device;
2) This scheme can further process the traffic itself, with no need to forward the traffic to other processors; this reduces network-bandwidth usage, eliminates intermediate network equipment, and lowers the user's investment cost.
Description of drawings
Fig. 1 is a structural schematic diagram of the present invention.
Embodiment
The present invention proposes a method for implementing a multi-network-card-based aggregation and splitting device. The method is realized by fitting a general-purpose server with a plurality of network cards, while taking the server's SMP architecture into account and optimizing the cards' send/receive interfaces: packet reception on each card uses multiple queues, each card has its own packet-sending thread, and data interaction between threads is avoided as far as possible so as not to degrade performance. For a system composed of m network cards each having n network interfaces (the number of interfaces may differ per card), the concrete flow of the method is as follows:
1) The CPU allocates m*n receive buffers and 1 send buffer for each network card, and starts m send/receive threads, each thread corresponding to m*n receive buffers and 1 send buffer.
The following takes network card i (i = 0, 1, 2, ..., m-1) as an example:
2) Each network card i corresponds to receive buffers 0 to m*n-1 and 1 send buffer. Network card i provides traffic access and computes the packet 7-tuple hash value (a combination of the source and destination IP addresses, source and destination ports, protocol, and source and destination MAC addresses) according to the configuration.
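The patent does not fix a particular hash function for the 7-tuple. The sketch below only illustrates reducing such a tuple to one stable integer; the field names and the use of Python's hashlib are assumptions for illustration, not the patent's design.

```python
import hashlib

def seven_tuple_hash(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                     proto: int, src_mac: str, dst_mac: str) -> int:
    """Reduce a packet 7-tuple to a 32-bit hash value (illustrative)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}|{src_mac}|{dst_mac}"
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

# The same 7-tuple always yields the same hash, so all packets of one
# flow land in the same receive buffer (same-source-same-destination):
h1 = seven_tuple_hash("10.0.0.1", "10.0.0.2", 1234, 80, 6,
                      "aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02")
h2 = seven_tuple_hash("10.0.0.1", "10.0.0.2", 1234, 80, 6,
                      "aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02")
assert h1 == h2
```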
3) Network card i divides the hash space into m*n parts according to the configured proportion, each part corresponding to one receive buffer allocated by the CPU, and the card uploads packets into the corresponding buffers.
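Step 3) can be modeled as cutting the hash space into m*n weighted slices, one per receive buffer. The weight-list configuration format below is an assumption; the patent only says the split follows configured proportions.

```python
from bisect import bisect_right

def build_cut_points(weights: list[float], space: int) -> list[int]:
    """Cumulative upper bounds splitting [0, space) per configured weights."""
    total, acc, cuts = sum(weights), 0.0, []
    for w in weights:
        acc += w
        cuts.append(round(space * acc / total))
    return cuts

def buffer_for_hash(h: int, cuts: list[int]) -> int:
    """Index of the receive buffer whose slice contains hash value h."""
    return bisect_right(cuts, h % cuts[-1])

# Equal weights for m*n = 4 buffers: each gets a quarter of the space.
cuts = build_cut_points([1, 1, 1, 1], 100)
assert [buffer_for_hash(h, cuts) for h in (0, 25, 50, 99)] == [0, 1, 2, 3]
```

Unequal weights let an operator steer more of the hash space, and hence more traffic, toward the buffers of less-loaded threads.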
4) Each send/receive thread j of the host receives packets in turn from receive buffers j*n to (j+1)*n-1 of each network card i, and sends packets from send buffer j.
5) Packets that need processing are submitted by the send/receive thread to a processing thread; packets that need not be forwarded are discarded directly.
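Steps 4) and 5) can be sketched as one polling pass of a send/receive thread. Queues standing in for the receive buffers and the hand-off queue are an assumed in-memory model, not the patent's DMA mechanism.

```python
from queue import Queue

def recv_loop_once(j: int, n: int, cards_rx, processing_q: Queue, needs_processing):
    """One polling pass of send/receive thread j over its receive buffers.

    cards_rx[i][b] is a Queue standing in for receive buffer b of card i."""
    for rx in cards_rx:                       # each card i, in turn
        for b in range(j * n, (j + 1) * n):   # thread j's buffers on that card
            while not rx[b].empty():
                pkt = rx[b].get()
                if needs_processing(pkt):
                    processing_q.put(pkt)     # hand off to a processing thread
                # else: packet needs no forwarding and is dropped

# Tiny demonstration with m = 1 card, n = 2 interfaces (m*n = 2 buffers):
card0 = [Queue() for _ in range(2)]
card0[0].put("keep")
card0[1].put("drop")
out = Queue()
recv_loop_once(0, 2, [card0], out, needs_processing=lambda p: p == "keep")
assert out.get() == "keep" and out.empty()
```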
Fig. 1 is a structural schematic diagram of the present invention, taking 3 network cards with 4 ports per card as an example.
The above embodiment is intended only to illustrate, not to limit, the technical scheme of the present invention. Although the present invention has been explained in detail with reference to the foregoing embodiment, those of ordinary skill in the art should understand that specific embodiments of the invention may still be modified or equivalently substituted, and any modification or equivalent substitution that does not depart from the spirit and scope of the invention shall be covered by the claims of the present invention.

Claims (9)

1. A multi-network-card-based aggregation and splitting device, comprising a general-purpose server fitted with a plurality of network cards, wherein the server adopts an SMP architecture, packet reception on each network card uses multiple queues, and each network card corresponds to one packet-sending thread.
2. The device of claim 1, wherein the number of network interfaces may differ from one network card to another.
3. The device of any of claims 1-2, wherein, for a system composed of m network cards each having n network interfaces, the CPU allocates m*n receive buffers and 1 send buffer for each network card, and starts m send/receive threads, each thread corresponding to m*n receive buffers and 1 send buffer.
4. The device of any of claims 1-3, wherein each network card provides traffic access and computes a hash value over a configured packet tuple, said packet tuple being a 1-tuple, 2-tuple, 3-tuple, 4-tuple, 5-tuple, or 7-tuple.
5. The device of any of claims 1-4, wherein the packet 7-tuple is the combination of the source and destination IP addresses, source and destination ports, transport-layer protocol, and source and destination MAC addresses.
6. The device of any of claims 1-5, wherein each send/receive thread of the host receives packets in turn from the receive buffers of each network card, and sends packets from the send buffer corresponding to that thread.
7. A multi-network-card-based aggregation and splitting method, comprising the following steps:
1) the CPU allocates m*n receive buffers and 1 send buffer for each network card, and starts m send/receive threads, each thread corresponding to m*n receive buffers and 1 send buffer, where m is the number of network cards and n is the number of network interfaces per card;
2) each network card i corresponds to receive buffers 0 to m*n-1 and 1 send buffer; network card i provides traffic access and computes the hash value of the packet tuple according to the configuration, said packet tuple being a 1-tuple, 2-tuple, 3-tuple, 4-tuple, 5-tuple, or 7-tuple, and i = 0, 1, 2, ..., m-1;
3) network card i divides the hash space into m*n parts according to the configured proportion, each part corresponding to one receive buffer allocated by the CPU, and the card uploads packets into the corresponding buffers;
4) each send/receive thread j of the host receives packets in turn from receive buffers j*n to (j+1)*n-1 of each network card i, and sends packets from send buffer j;
5) packets that need processing are submitted by the send/receive thread to a processing thread; packets that need not be forwarded are discarded directly.
8. The method of claim 7, wherein the packet 7-tuple is the combination of the source and destination IP addresses, source and destination ports, transport-layer protocol, and source and destination MAC addresses.
9. The method of any of claims 7-8, wherein m is 3 and n is 4.
CN201110415126.1A 2011-12-13 2011-12-13 System and method for implementing splitting equipment Active CN102497430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110415126.1A CN102497430B (en) 2011-12-13 2011-12-13 System and method for implementing splitting equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110415126.1A CN102497430B (en) 2011-12-13 2011-12-13 System and method for implementing splitting equipment

Publications (2)

Publication Number Publication Date
CN102497430A true CN102497430A (en) 2012-06-13
CN102497430B CN102497430B (en) 2014-12-03

Family

ID=46189215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110415126.1A Active CN102497430B (en) 2011-12-13 2011-12-13 System and method for implementing splitting equipment

Country Status (1)

Country Link
CN (1) CN102497430B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050002389A1 (en) * 2003-07-02 2005-01-06 Intel Corporation Method, system, and program for processing a packet to transmit on a network in a host system including a plurality of network adaptors
CN101064659A (en) * 2006-04-28 2007-10-31 腾讯科技(深圳)有限公司 Data transmission system and method
CN101442446A (en) * 2007-11-20 2009-05-27 英业达股份有限公司 Server and server group set
CN101488918A (en) * 2009-01-09 2009-07-22 杭州华三通信技术有限公司 Multi-network card server access method and system
CN101540727A (en) * 2009-05-05 2009-09-23 曙光信息产业(北京)有限公司 Hardware shunt method of IP report
CN201813388U (en) * 2010-07-07 2011-04-27 南京烽火星空通信发展有限公司 Line-speed shunt device
CN102098215A (en) * 2010-12-17 2011-06-15 天津曙光计算机产业有限公司 Priority management method for multi-application packet reception


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103516749A (en) * 2012-06-21 2014-01-15 浙江大华技术股份有限公司 Method and apparatus for multi-card sending-based data sending
CN104734993A (en) * 2013-12-24 2015-06-24 杭州华为数字技术有限公司 Data distribution method and distributor
WO2015096655A1 (en) * 2013-12-24 2015-07-02 华为技术有限公司 Data splitting method and splitter
CN104734993B (en) * 2013-12-24 2018-05-18 杭州华为数字技术有限公司 Data distribution method and current divider
US10097466B2 (en) 2013-12-24 2018-10-09 Huawei Technologies Co., Ltd. Data distribution method and splitter
CN105337888A (en) * 2015-11-18 2016-02-17 华为技术有限公司 Multinuclear forwarding-based load balancing method and device, and virtual switch
CN105337888B (en) * 2015-11-18 2018-12-07 华为技术有限公司 Load-balancing method, device and virtual switch based on multicore forwarding
CN112788158A (en) * 2020-12-25 2021-05-11 互联网域名系统北京市工程研究中心有限公司 DPDK-based DNS (Domain name Server) packet sending system
CN114884882A (en) * 2022-06-16 2022-08-09 深圳星云智联科技有限公司 Traffic visualization method, device and equipment and storage medium
CN114884882B (en) * 2022-06-16 2023-11-21 深圳星云智联科技有限公司 Flow visualization method, device, equipment and storage medium
CN116094840A (en) * 2023-04-07 2023-05-09 珠海星云智联科技有限公司 Intelligent network card and convergence and distribution system
CN116094840B (en) * 2023-04-07 2023-06-16 珠海星云智联科技有限公司 Intelligent network card and convergence and distribution system

Also Published As

Publication number Publication date
CN102497430B (en) 2014-12-03

Similar Documents

Publication Publication Date Title
CN102497430B (en) System and method for implementing splitting equipment
CN108476177B (en) Apparatus, and associated method, for supporting a data plane for handling functional scalability
CN101217464B (en) UDP data package transmission method
CN101572667B (en) Method for realizing equal cost multipath of IP route and device
CN101217493B (en) TCP data package transmission method
Sidler et al. Low-latency TCP/IP stack for data center applications
CN102970242B (en) Method for achieving load balancing
CN103139093B (en) Based on the express network data stream load equalization scheduling method of FPGA
CN102497322A (en) High-speed packet filtering device and method realized based on shunting network card and multi-core CPU (Central Processing Unit)
WO2018219100A1 (en) Data transmission method and device
CN103049336A (en) Hash-based network card soft interrupt and load balancing method
US11070386B2 (en) Controlling an aggregate number of unique PIM joins in one or more PIM join/prune messages received from a PIM neighbor
US20130003748A1 (en) Relay apparatus and relay control method
CN105049368A (en) Priority-based load balancing algorithm in hybrid network
CN104009928A (en) Method and device for limiting speed of data flow
CN107317759A (en) A kind of thread-level dynamic equalization dispatching method of network interface card
CN114006863A (en) Multi-core load balancing cooperative processing method and device and storage medium
Addanki et al. Controlling software router resource sharing by fair packet dropping
Salah et al. Performance analysis and comparison of interrupt-handling schemes in gigabit networks
CN102035743A (en) Dynamic load balanced distribution method
CN113518037A (en) Congestion information synchronization method and related device
CN101102269A (en) A data load balance method for GPRS network
CN109450823B (en) Network large-capacity switching device based on aggregation type cross node
CN103346976B (en) A kind of bandwidth fairness control method based on layering token bucket
Zhao et al. High-performance implementation of dynamically configurable load balancing engine on FPGA

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20151112

Address after: 100084 Beijing Haidian District City Mill Street No. 64

Patentee after: Dawning Information Industry (Beijing) Co.,Ltd.

Patentee after: WUXI CITY CLOUD COMPUTING CENTER CO.,LTD.

Address before: 100084 Beijing Haidian District City Mill Street No. 64

Patentee before: Dawning Information Industry (Beijing) Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20220118

Address after: 100193 Shuguang building, Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing

Patentee after: Dawning Information Industry (Beijing) Co.,Ltd.

Address before: 100084 Beijing Haidian District City Mill Street No. 64

Patentee before: Dawning Information Industry (Beijing) Co.,Ltd.

Patentee before: WUXI CITY CLOUD COMPUTING CENTER CO.,LTD.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220729

Address after: 100193 No. 36 Building, No. 8 Hospital, Wangxi Road, Haidian District, Beijing

Patentee after: Dawning Information Industry (Beijing) Co.,Ltd.

Patentee after: DAWNING INFORMATION INDUSTRY Co.,Ltd.

Address before: 100193 Shuguang building, Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing

Patentee before: Dawning Information Industry (Beijing) Co.,Ltd.

TR01 Transfer of patent right