CN110324265B - Traffic distribution method, routing method, equipment and network system - Google Patents

Traffic distribution method, routing method, equipment and network system

Info

Publication number: CN110324265B (application CN201810271926.2A)
Authority: CN (China)
Prior art keywords: network, switching, chips, port, ports
Legal status: Active (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110324265A
Inventors: 曹政, 高山渊
Current and original assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority application: CN201810271926.2A
Publication of application: CN110324265A
Publication of grant: CN110324265B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/10: Packet switching elements characterised by the switching fabric construction
    • H04L49/102: using shared medium, e.g. bus or ring
    • H04L49/25: Routing or path finding in a switch fabric
    • H04L49/253: using establishment or release of connections between ports
    • H04L49/30: Peripheral units, e.g. input or output ports

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the application provide a traffic distribution method, a routing method, a device, and a network system. In these embodiments, switching chips belonging to different networks are combined into one network switching device, and node devices access different networks through the network switching device, thereby constructing a multi-rail network system.

Description

Traffic distribution method, routing method, equipment and network system
Technical Field
The present application relates to the field of network communication technologies, and in particular, to a traffic distribution method, a routing method, a device, and a network system.
Background
With the development of network communication technology, data center network bandwidth has increased rapidly: 25 Gb/s is now widespread, 100 Gb/s has matured, and 400 Gb/s is on the horizon. However, bandwidth growth is constrained by the manufacturing process of the transceivers on physical links, such as SerDes, and has not kept pace with the growth in node computing power, so the network bandwidth bottleneck has become increasingly prominent in data centers.
To address the network bandwidth problem faced by data centers, the prior art deploys multiple physically independent networks and allows a node device to access these networks simultaneously, with all of them providing network service to the node device at the same time. This alleviates the limited access bandwidth to a certain extent.
However, the existing multi-network architecture also has some disadvantages, such as high complexity and large scale, and therefore, a new technology is required to implement a multi-network architecture with relatively low complexity and scale.
Disclosure of Invention
In multiple aspects, the application provides a traffic distribution method, a routing method, a device, and a network system, aiming to solve the problems of high complexity and large scale in existing multi-network architectures.
An embodiment of the present application provides a network switching device, including: N switching chips and at least one sharing module that can be shared by the N switching chips. The N switching chips belong to N different networks and form M2 network ports for connecting M2 node devices, so that the M2 node devices access their corresponding networks; N and M2 are natural numbers, and N is not less than 2.
An embodiment of the present application further provides a network system, including: at least one node device and at least one network switching device;
each network switching device comprises N switching chips and at least one sharing module that can be shared by the N switching chips; the N switching chips belong to N different networks and form M2 network ports for connecting M2 of the at least one node device, so that those M2 node devices access their corresponding networks; N and M2 are natural numbers, and N is greater than or equal to 2;
each node device comprises K network ports for corresponding connection to K switching chips in one of the at least one network switching device, where K is a natural number and K is less than or equal to N.
The embodiment of the present application further provides a traffic distribution method, which is applicable to a node device including at least two network ports, and the method includes:
acquiring a current data packet;
determining, with the aim of traffic balance, a target port selection parameter from the port selection parameters corresponding to a traffic distribution strategy;
selecting a target network port from the at least two network ports based on the target port selection parameter;
and sending the current data packet, through the target network port, to the network that the target network port accesses, wherein the at least two network ports access different networks respectively.
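The steps above can be sketched in code. The following is a hypothetical illustration (names and strategies are assumptions, not the patent's implementation): a node device with K network ports, each on a different network, picks a target port either packet-by-packet (round robin) or flow-by-flow (hashing a flow identifier), aiming at traffic balance across the ports.

```python
class TrafficDistributor:
    """Hypothetical sketch of the traffic distribution method for a node
    device with K >= 2 network ports, each accessing a different network."""

    def __init__(self, num_ports):
        self.num_ports = num_ports  # K network ports
        self.next_port = 0          # round-robin counter: the "port selection parameter"

    def select_port_per_packet(self):
        """Packet-by-packet strategy: successive packets rotate through the ports."""
        port = self.next_port
        self.next_port = (self.next_port + 1) % self.num_ports
        return port

    def select_port_per_flow(self, flow_id):
        """Flow-by-flow strategy: all packets of one flow use the same port."""
        return hash(flow_id) % self.num_ports


dist = TrafficDistributor(num_ports=4)
# Successive packets spread over ports 0, 1, 2, 3, then wrap around.
first_four = [dist.select_port_per_packet() for _ in range(4)]
# Packets of the same flow always map to the same port within a run.
flow = ("10.0.0.1", "10.0.0.2", 5000, 80, "tcp")
same = dist.select_port_per_flow(flow) == dist.select_port_per_flow(flow)
```

Either strategy is one way to realize "determining a target port selection parameter with the aim of traffic balance"; per-packet spreads load most evenly, while per-flow avoids packet reordering within a flow.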
An embodiment of the present application further provides a routing method, which is applicable to the network switching device provided in the foregoing embodiment, and the method includes:
determining, based on a routing strategy, a first switching chip corresponding to a data packet to be sent from among the N switching chips of the network switching device;
when the first switching chip meets a set network switching condition, selecting a second switching chip from the other reachable switching chips, where the other reachable switching chips are those switching chips, among the N switching chips other than the first switching chip, through which the destination of the data packet to be sent is route-reachable;
and controlling the second switching chip to send out the data packet based on an internal network between the first switching chip and the second switching chip.
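The routing method above can be sketched as follows. This is a minimal illustration under assumed names and callback signatures, not the patent's code: a routing policy picks the first chip; if that chip meets the network switching condition, the packet is handed over the internal network to another reachable chip.

```python
def route_packet(packet, chips, routing_policy, needs_switch, reachable):
    """Hypothetical sketch of the routing method.

    chips: the N switching chips of the network switching device.
    routing_policy(packet, chips) -> the first switching chip.
    needs_switch(chip) -> True when the network switching condition is met.
    reachable(packet, chip) -> True when the packet's destination is
    route-reachable via that chip.
    """
    first = routing_policy(packet, chips)
    if not needs_switch(first):
        return first                  # normal case: send via the first chip
    # Network switching condition met: fall back to another reachable chip.
    candidates = [c for c in chips if c is not first and reachable(packet, c)]
    if not candidates:
        return first                  # no alternative exists; keep the first chip
    return candidates[0]              # e.g. a least-loaded candidate could be chosen


chips = ["chip0", "chip1", "chip2"]
chosen = route_packet(
    packet={"dst": "node7"},
    chips=chips,
    routing_policy=lambda pkt, cs: cs[0],   # policy selects chip0
    needs_switch=lambda c: c == "chip0",    # but chip0 is congested or faulty
    reachable=lambda pkt, c: True,          # every other chip can reach the destination
)
```

In this sketch the packet is diverted from chip0 to another reachable chip; later sections of the description refine how the second chip is chosen among the candidates.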
An embodiment of the present application further provides a node device, including: k network ports, a memory and a processor;
the K network ports are used for accessing K different networks, K is a natural number and is more than or equal to 2;
the memory for storing a computer program;
the processor, coupled with the memory, to execute the computer program to:
acquiring a current data packet;
determining, with the aim of traffic balance, a target port selection parameter from the port selection parameters corresponding to a traffic distribution strategy;
selecting a target network port from the K network ports based on the target port selection parameter;
and sending the current data packet, through the target network port, to the network that the target network port accesses.
In the embodiments of the application, switching chips belonging to different networks are combined into one network switching device, and node devices access different networks through the network switching device, thereby constructing a multi-rail network system comprising different networks. In this multi-rail network system, the switching chips belonging to different networks within the network switching equipment can share some modules, which helps save space resources and reduce the volume of the network switching equipment.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic structural diagram of a network switching device according to an exemplary embodiment of the present application;
fig. 1b is a schematic structural diagram of another network switching device according to another exemplary embodiment of the present application;
fig. 1c is a schematic structural diagram of a switching chip applicable to the network switching device, according to another exemplary embodiment of the present application;
fig. 1d is a schematic structural diagram of another network switching device according to another exemplary embodiment of the present application;
fig. 1e is a schematic structural diagram of another network switching device according to another exemplary embodiment of the present application;
fig. 2 is a schematic structural diagram of a network system according to another exemplary embodiment of the present application;
fig. 3a is a schematic flow chart of a traffic distribution method according to another exemplary embodiment of the present application;
fig. 3b is a schematic flowchart of a packet-by-packet traffic distribution method according to another exemplary embodiment of the present application;
fig. 3c is a schematic flow chart of a flow-by-flow traffic distribution method according to another exemplary embodiment of the present application;
fig. 3d is a schematic flow chart of another packet-by-packet traffic distribution method according to another exemplary embodiment of the present application;
fig. 3e is a schematic flow chart of another flow-by-flow traffic distribution method according to another exemplary embodiment of the present application;
fig. 4a is a schematic flow chart of a routing method according to another exemplary embodiment of the present application;
fig. 4b is a schematic flow chart of another routing method according to another exemplary embodiment of the present application;
fig. 5 is a schematic structural diagram of a node device according to another exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In some exemplary embodiments of the present application, switching chips belonging to different networks are combined into one network switching device, and node devices access different networks through the network switching device, so as to construct a multi-rail network system including different networks. In such a multi-rail network system, the switching chips belonging to different networks in the network switching equipment can share some modules, which helps save space resources and reduce the volume of the network switching equipment.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic structural diagram of a network switching device according to an exemplary embodiment of the present application. As shown in fig. 1a, the network switching device 100 includes: N switching chips 101 and at least one sharing module 102 that can be shared by the N switching chips. N is a natural number and N is greater than or equal to 2; that is, the network switching device 100 includes at least two switching chips 101. In the network switching device 100, the N switching chips 101 belong to N different networks S1 to SN.
In this embodiment, each of the networks S1 to SN can be understood as a system in which multiple autonomous computer systems distributed at different places are interconnected by communication lines and communication equipment to share resources such as hardware and software according to a common network protocol. Depending on its interconnection scope, any of the networks S1 to SN may be a local area network, a metropolitan area network, or a wide area network.
In some alternative embodiments, any of the networks S1 to SN may be a homogeneous network, i.e., the devices in the network are supplied by the same vendor or are compatible devices running the same operating system or network operating system. In other alternative embodiments, any of the networks S1 to SN may also be a heterogeneous network, i.e., the devices in the network are supplied by different vendors and mostly run different protocols supporting different functions or applications.
In the present embodiment, a network switching device 100 with a new structure is designed, and the network switching device 100 includes N switching chips 101 belonging to N different networks, which is equivalent to combining the N switching chips 101 into one network switching device 100. In addition, the network switching device 100 further includes at least one module that can be shared by the N switching chips 101, which is referred to as a sharing module for short. These shared modules refer to modules, such as a clock module, a power module, etc., which can provide the same service for the N switch chips 101 or can meet the same requirements of the N switch chips 101.
For the network switching device 100, the N switching chips 101 included therein may form M2 network ports for connecting M2 node devices 200, so that the M2 node devices 200 access to the corresponding network. Where M2 is a natural number. Preferably, M2 is a natural number ≧ 2. For one node device 200, it may be connected to some switch chips of the N switch chips 101 to access the network to which the connected switch chips belong, or may be connected to the N switch chips 101 respectively to access N different networks.
In this embodiment, the implementation form of the node device 200 is not limited, and the node device 200 may be any device capable of being connected to the switch chip 101, for example, a workstation, a terminal device, a network user, or a personal computer, and may also be a server, a printer, or other network-connected devices.
The network switching device of this embodiment includes switching chips belonging to different networks, so node devices can access different networks, thereby constructing a network system including different networks. For convenience of description, a network system constructed based on the network switching device of this embodiment is referred to as a multi-rail network system. In addition, the switching chips belonging to different networks in the network switching equipment can share some modules, which helps save space resources and reduce the volume of the network switching equipment; the complexity of the multi-rail network system is therefore relatively low, and its scale relatively small.
In some exemplary embodiments, any of the networks S1 to SN may include a plurality of switching chips, which may be cascaded using a cascade technique to expand the network scale, forming a cascade hierarchy among the cascaded switching chips. This embodiment does not limit the cascade mode between the switching chips or the definition of the cascade hierarchy. For example, taking three switching chips cascaded in sequence as an example, their cascade hierarchy may be defined as a first level, a second level, and a third level, or as level A, level B, and level C, and so on.
Based on the cascade hierarchy among switching chips described above, the N switching chips 101 in the network switching device 100 belong to N different networks S1 to SN, and the cascade level of each of the N switching chips 101 in its own network is the same.
In other exemplary embodiments of the present application, as shown in fig. 1b, at least one sharing module in the network switching device 100 may include: m2 external port modules 103. The M2 external port modules 103 in the network switch device 100 are mainly used to implement connections between the M2 node devices 200 and N switch chips 101 inside the network switch device 100.
In some application scenarios, each node device 200 may be connected to N switching chips 101, respectively, to facilitate access to N different networks. Then, as shown in fig. 1b, one end of each external port module 103 is connected to one node device 200, and the other end is connected to N switching chips 101, respectively. For any external port module 103, on one hand, signals from the node device 200 connected thereto may be distributed to the N switch chips 101 in the network switch device 100 to which it belongs, and on the other hand, signals from the N switch chips 101 may be aggregated and then sent to the node device 200 connected thereto. In these application scenarios, the number of external port modules 103 included in the network switch device 100 determines the number of node devices 200 that the network switch device 100 can connect to some extent.
In the embodiment of the present application, the connection manner between the external port module 103 and the N switch chips 101 is not limited. In some exemplary embodiments, as shown in fig. 1c, each switch chip 101 in the network switch device 100 includes M2 external ports 11. The M2 external ports 11 in each switch chip 101 are respectively connected to the M2 external port modules 103 in the network switch device 100 in a one-to-one correspondence. With the switching chip structure, the connection between the external port module 103 and the N switching chips 101 can be simply and conveniently realized.
In some exemplary embodiments of the present application, as shown in fig. 1d, the at least one sharing module in the network switching device 100 may further include: intranet control module 104. The intranet control module 104 is connected to the N switching chips 101 in the network switching device 100, and is configured to control data exchange between the N switching chips 101. The N switching chips 101 are connected to each other to form an internal network of the network switching device 100. Optionally, as shown in fig. 1c, each switch chip 101 in the network switch device 100 further includes: m1 internal ports 12. The N switching chips 101 in the network switching device 100 are interconnected through M1 internal ports 12 respectively included to form an internal network of the network switching device 100; wherein M1 is a natural number, and M1 is not more than N. The topology of the internal network formed by the M1 internal ports included in each of the N switch chips 101 is not limited in the embodiments of the present application, and may be, for example, a star structure, a ring structure, a bus structure, a tree structure, a mesh structure, and the like.
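As an illustration of the internal-network topologies listed above, the following sketch assumes one of the permitted options, a full mesh, in which each of the N chips links directly to every other chip through its internal ports; this is only one possible choice, not the patent's prescribed design.

```python
def full_mesh_links(n_chips):
    """Return the set of chip-index pairs (i, j), i < j, that are directly
    linked when the N switching chips form a fully meshed internal network."""
    return {(i, j) for i in range(n_chips) for j in range(i + 1, n_chips)}


# With N = 4 chips, a full mesh needs 6 internal links in total,
# and each chip uses M1 = N - 1 = 3 of its internal ports.
links = full_mesh_links(4)
ports_per_chip = 4 - 1
```

A ring or star topology would need fewer internal ports per chip (M1 = 2 or 1) at the cost of longer internal paths; the choice trades port count against hop count inside the device.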
In these exemplary embodiments, the N switching chips 101 are interconnected to form the internal network of the network switching device 100. Based on this internal network, the intranet control module 104 can control network data to be switched between different networks according to application requirements. In particular, when a certain network is congested or faulty, the intranet control module 104 can flexibly switch its data to other lightly loaded networks, achieving interconnection among co-located switching chips. Traffic can thus be switched among different networks, providing dynamic load balancing and high availability for the multi-rail network system, fully exploiting the multiple paths in the multi-rail network system, and fully exerting its advantages.
It should be noted that the intranet control module 104 may serve as a control plane inside the network switching device 100, and its function, besides controlling the network data to be switched between different networks, may also be responsible for configuration management of the network switching device 100, such as configuration of a routing table. The intranet control module 104 controls the process of switching network data between different networks, and is also a process of the intranet control module 104 performing adaptive routing between N switching chips. Optionally, intranet control module 104 may control data exchange between N switching chips 101 by, but not limited to, the following routing manner:
in this manner, the intranet control module 104 may determine, based on the routing policy, a first switch chip corresponding to the data packet to be sent from the N switch chips; when the first exchange chip meets the set network switching condition, selecting a second exchange chip from other reachable exchange chips, wherein the other reachable exchange chips are exchange chips which can be reached by a destination route of a data packet to be sent except the first exchange chip in the N exchange chips; and controlling the second switching chip to send out the data packet to be sent based on the internal network between the first switching chip and the second switching chip.
The routing policy may be a conventional routing algorithm, and may be, for example, a distributed routing method, a centralized routing method, a hybrid dynamic routing method, a link state routing algorithm, or the like. The detailed process of determining the first switch chip by intranet control module 104 based on the conventional routing algorithm is similar to the prior art, and the determination process of the first switch chip can be easily understood by those skilled in the art based on the knowledge of the conventional routing algorithm, so that the detailed description is omitted here. The first switch chip may be any switch chip of the N switch chips.
The network switching condition may depend on the specific application scenario. For example, in some application scenarios, after determining the first switching chip, the intranet control module 104 may check whether the external port on the first switching chip used for sending the data packet has a fault, whether its packet loss rate exceeds a packet loss rate threshold, and/or whether its average queuing delay exceeds a delay threshold. When the external port has a fault, its packet loss rate exceeds the packet loss rate threshold, and/or its average queuing delay exceeds the delay threshold, the first switching chip is determined to satisfy the network switching condition, triggering the operation of selecting the second switching chip.
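The condition just described can be written down directly. The field names and threshold values below are assumptions for illustration, not values from the patent: the first chip triggers a switch when the external port for the packet has a fault, its packet loss rate exceeds a threshold, and/or its average queuing delay exceeds a delay threshold.

```python
def meets_switching_condition(port_stats,
                              loss_threshold=0.01,
                              delay_threshold_us=500.0):
    """Hypothetical check of the network switching condition, evaluated on
    the external port of the first chip used to send the packet.
    port_stats: dict with assumed keys 'faulty', 'loss_rate',
    'avg_queue_delay_us'; thresholds are illustrative defaults."""
    return (port_stats["faulty"]
            or port_stats["loss_rate"] > loss_threshold
            or port_stats["avg_queue_delay_us"] > delay_threshold_us)


healthy = {"faulty": False, "loss_rate": 0.001, "avg_queue_delay_us": 40.0}
congested = {"faulty": False, "loss_rate": 0.05, "avg_queue_delay_us": 40.0}
```

A healthy port leaves the first chip in place, while a fault, excessive loss, or excessive queuing delay on that port triggers selection of the second chip.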
In some exemplary embodiments, the M2 external ports 11 in each switching chip 101 are numbered in the same order, for example as ports D_1, D_2, D_3, ..., D_M2 from left to right, and the external ports with the same port number in each switching chip 101 may be connected to the same external port module 103. Based on this, when selecting the second switching chip, the intranet control module 104 may obtain the traffic loads of the external ports having the same port number on the other reachable switching chips, based on the port number of the external port on the first switching chip used for sending the data packet, and select as the second switching chip the switching chip whose traffic load meets a set requirement. Thus, when a certain network is faulty or seriously congested, packets in that network are switched to another lightly loaded network for transmission, which achieves load balance among different networks, reduces packet transmission delay, and ensures packet reachability.
The set requirement may be adapted to application needs; for example, it may be that the traffic load is the lowest, that the traffic load is below a set traffic load threshold, or that the traffic load falls within a certain range. Accordingly, the second switching chip may be the reachable switching chip whose same-numbered external port has the lowest traffic load, one whose same-numbered external port has a traffic load below the set threshold, or one whose same-numbered external port has a traffic load within a certain range, and so on.
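The second-chip selection can be sketched as follows, here under the "lowest traffic load" variant of the set requirement (the names and the callback signature are assumptions for illustration): the same port number is looked up on every reachable chip, and the chip whose same-numbered external port carries the least load wins.

```python
def select_second_chip(port_number, reachable_chips, port_load):
    """Hypothetical selection of the second switching chip.

    port_number: number of the external port on the first chip that would
    have sent the packet (same-numbered ports connect to the same external
    port module, hence to the same node device).
    port_load(chip, port_number) -> current traffic load of that port.
    """
    # "Set requirement" variant used here: lowest traffic load wins.
    return min(reachable_chips, key=lambda chip: port_load(chip, port_number))


loads = {("chipA", 3): 0.7, ("chipB", 3): 0.2, ("chipC", 3): 0.5}
second = select_second_chip(3, ["chipA", "chipB", "chipC"],
                            lambda chip, p: loads[(chip, p)])
```

Because same-numbered ports on all chips lead to the same external port module, the diverted packet still reaches the same node device, only over a less loaded network.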
In some further exemplary embodiments of the present application, as shown in fig. 1e, the at least one sharing module in the network switching device 100 may further include, but is not limited to: a clock module 105, a power module 106, and/or a heat sink module 107.
The clock module 105 is connected to the N switching chips 101, and is configured to provide a clock signal to the N switching chips 101. In the same network switching device 100, the clock module 105 is shared among the N switching chips 101, which is beneficial to achieving the maximum synchronization among different networks.
The power module 106 is connected to the N switching chips 101, and is configured to supply power to the N switching chips 101. The heat dissipation module 107 is connected to the N switching chips 101, and is configured to dissipate heat for the N switching chips 101. In the same network switching device 100, the power module 106 and the heat dissipation module 107 are shared among the N switching chips 101, which not only saves the internal space of the network switching device and reduces the implementation cost of the network switching device, but also greatly improves the energy efficiency.
Based on the network switching device provided by the above embodiments, a multi-rail network system can be constructed. As shown in fig. 2, the multi-rail network system 10 includes: at least one node device 200 and at least one network switching device 100. This embodiment does not limit the number of node devices 200 and network switching devices 100, which may be determined according to application requirements and network size.
As shown in fig. 1a, each network switching device 100 includes N switching chips, where N is a natural number and N is greater than or equal to 2; that is, each network switching device 100 includes at least two switching chips. In the network switching device 100, the N switching chips belong to N different networks S1 to SN.
Optionally, any of the networks S1 to SN may include a plurality of switching chips, which may be cascaded using a cascade technique to expand the network scale, forming a cascade hierarchy among the cascaded switching chips. On this basis, the N switching chips in each network switching device 100 belong to N different networks S1 to SN, and the cascade level of each switching chip in its own network is the same. Moreover, since the N switching chips in the same network switching device 100 occupy the same cascade level in their respective networks, that level can also be regarded as the cascade level of the network switching device 100 itself.
In addition, as shown in fig. 1a, each network switching device 100 further includes at least one module that can be shared by the N switching chips, referred to as a sharing module. These sharing modules are modules, such as clock modules and power modules, that can provide the same service for the N switching chips or meet the same requirements of the N switching chips. It should be noted that, to illustrate the architecture of the multi-rail network system 10 more clearly, the sharing module is shown for only some of the network switching devices 100 in fig. 2; the other network switching devices 100 also include sharing modules.
The N switch chips in each network switch device 100 may form at least one network port for connecting to some node devices in at least one node device 200, so that the node devices can access the corresponding network. For convenience of description, the number of network ports formed by the N switch chips in each network switch device 100, that is, the number of node devices connected to each network switch device 100, is denoted as M2, and M2 may be the same or different for different network switch devices 100. That is, the number of node apparatuses 200 connected to different network switching apparatuses 100 may be the same or different. Wherein M2 is a natural number, and M2 is less than or equal to the total number of at least one node device 200.
Wherein each node device 200 includes K network ports. Each node apparatus 200 is configured to connect to one of the at least one network switching apparatus 100. The connection between a node device 200 and a network switch device 100 mainly means that K network ports in the node device 200 are correspondingly connected with K switch chips in the network switch device 100, where K is a natural number and is greater than or equal to 1 and less than or equal to N. One network port is correspondingly connected with one exchange chip, and the corresponding relation between the network port and the exchange chip is not limited and can be flexibly set. For example, the network ports and the switch chip may be respectively numbered, and the network ports with the same number may be correspondingly connected to the switch chip, but the invention is not limited thereto.
In this embodiment, the implementation form of the node device 200 is not limited, and the node device 200 may be any device that can be connected to a switch chip and has K network ports, such as a workstation, a terminal device, a network user or a personal computer, and may also be a server, a printer or other network-connected devices.
In some alternative embodiments, K ≧ 2, i.e., node device 200 has at least two network ports. In an implementation form, the node device 200 may include a single multi-port network card, where the multi-port network card provides multiple network ports, or the node device 200 may also include multiple network cards (each network card may have one port or multiple ports), and the multiple network cards collectively provide multiple network ports.
In an alternative embodiment, K is equal to N, that is, each node device 200 includes N network ports, which are respectively connected to N switch chips in a network switch device in a one-to-one correspondence.
In addition, the at least one network switching device 100 may be interconnected according to a cascading hierarchy of network switching devices. Optionally, when the N switch chips 101 in a network switching device 100 occupy the same cascade level in their respective networks, that level can be regarded as the cascade level of the network switching device 100, and the at least one network switching device 100 is interconnected according to the cascade levels of the internal switch chips in their networks. It should be noted that in the multi-track network system 10, one or more network switching devices 100 may be deployed at the same cascade level.
The links interconnecting the network switching devices 100, and connecting the node devices 200 to the network switching devices 100, mainly carry the transmission of multiple parallel signals sent from the two ends of the link, and may be wired or wireless links. The medium of a wired link may be a multi-core optical fiber, a multi-core high-speed cable, or an optical fiber carrying WDM signals, among others. A wireless link refers to the spatial path through which electromagnetic waves propagate. The network switching devices 100, interconnected with one another and with the node devices 200, form the multi-track network system.
In this embodiment, the network switching device includes switching chips belonging to different networks, and the node device may access different networks, thereby constructing a network system containing different networks. For convenience of description, a network system constructed based on the network switching device of this embodiment may be referred to as a multi-track network system. In addition, the switching chips belonging to different networks within a network switching device can share some modules, which helps save space resources and reduce the volume of the network switching device, so the complexity of the multi-track network system is relatively low and its scale is relatively small.
In some exemplary embodiments of the present application, each node device 200 further includes a convergence/distribution (mux/de-mux) module. The convergence/distribution module is connected to the K network ports of its node device 200, and is configured to converge the signals from those K network ports and send them to the K switching chips in the network switching device connected to its node device 200, or to distribute the signals from those K switching chips to the K network ports of its node device 200.
In some exemplary embodiments of the present application, as shown in fig. 1b, the at least one sharing module in each network switching device includes: M2 external port modules. One end of each external port module is connected with one node device, and the other end is connected with the N switching chips in the network switching device to which the external port module belongs. The external port module is used to distribute signals from the node device connected to it to the N switching chips in its network switching device, or to converge signals from the N switching chips in its network switching device and send them to the node device connected to it.
Alternatively, as shown in FIG. 1c, each switch chip contains M2 external ports. The M2 external ports of each switch chip are connected with the M2 external port modules in the network switch device to which the switch chip belongs in a one-to-one correspondence manner.
In some exemplary embodiments of the present application, as shown in fig. 1d, the at least one sharing module in each network switching device further includes: an internal network control module. The internal network control module in each network switching device is connected with the N switching chips in the network switching device to which it belongs, and is used for controlling data exchange among those N switching chips.
Optionally, as shown in fig. 1c, each switch chip further comprises M1 internal ports. The N switch chips in each network switching device are interconnected through the M1 internal ports contained in each of them; M1 is a natural number, and M1 ≤ N.
In some exemplary embodiments of the present application, as shown in fig. 1e, the at least one sharing module in each network switching device further includes, but is not limited to: the clock module, the power module and/or the heat dissipation module.
The clock module is connected with the N switching chips and used for providing clock signals for the N switching chips. In the same network switching equipment, the N switching chips share the clock module, which is beneficial to the maximum synchronization among different networks.
The power module is connected with the N switch chips and used for supplying power to them. The heat dissipation module is connected with the N switch chips and used for dissipating their heat. In the same network switching device, the N switch chips share the power module and the heat dissipation module, which saves internal space of the network switching device, reduces its implementation cost, and can greatly improve energy efficiency.
The embodiment of the present application provides a traffic distribution method for a node device in the multi-track network system. As shown in fig. 3a, the method comprises:
301. Acquire the current data packet.
302. With traffic balancing as the goal, determine a target port selection parameter from the port selection parameters corresponding to the traffic distribution policy.
303. Based on the target port selection parameter, select a target network port from at least two network ports of the node device, where the at least two network ports access different networks respectively.
304. Send the current data packet, through the target network port, to the network accessed by the target network port.
In this embodiment, the node device has at least two network ports, and the at least two network ports access different networks respectively. For the node device, the data packet can be distributed to the corresponding network for transmission so as to reach the destination end. In order to distribute the data packets to different networks as uniformly as possible, the node devices may distribute the data packets for the purpose of traffic balancing.
In this embodiment, a packet distribution process is described in detail by taking a current packet as an example.
In this embodiment, a traffic distribution policy adopted by the node device may be preset, and a corresponding port selection parameter may be configured for the traffic distribution policy, where the port selection parameter is used for the node device to select a network used for transmitting a data packet under the corresponding traffic distribution policy.
In some application scenarios, the node device is a data source: it generates data packets itself and needs to send them to a destination, so when the node device generates a new data packet, the newly generated data packet may be taken as the current data packet to be distributed. In other application scenarios, the node device may serve as a relay or forwarding device; when it receives a data packet reported or sent by a downstream device, it may take the received data packet as the current data packet to be sent.
In any application scenario, after obtaining the current data packet, the node device may determine a target port selection parameter from port selection parameters corresponding to a traffic distribution policy with a purpose of traffic balancing, select a target network port from at least two network ports of the node device based on the target port selection parameter, and then send the current data packet to a network to which the target network port is accessed through the target network port.
In this embodiment, the node device selects a transmission network for the data packet with the purpose of traffic balancing by combining the traffic distribution policy and the port selection parameter corresponding to the traffic distribution policy, which can ensure that the traffic in each network is balanced, and is convenient for exerting the advantages of multiple networks.
The traffic distribution policies are different, and the corresponding port selection parameters are also different. Two examples are given below:
in one example, the traffic distribution policy is a packet-by-packet traffic distribution policy, and in this scenario, the port selection parameter corresponding to the traffic distribution policy is sequence numbers of at least two network ports. In this example, as shown in fig. 3b, a traffic distribution process includes:
311. Acquire the current data packet.
312. According to the sequence number of the network port that sent the previous data packet and the total number of the at least two network ports of the node device, determine an initial sequence number value from the sequence numbers of the at least two network ports as the target port selection parameter, where the at least two network ports access different networks respectively.
313. Starting from the network port whose sequence number equals the initial sequence number value, compare in turn whether the currently available traffic of each subsequent network port is greater than or equal to the length of the current data packet, and select as the target network port a network port whose currently available traffic is greater than or equal to the length of the current data packet.
314. Send the current data packet, through the target network port, to the network accessed by the target network port.
Optionally, one implementation of step 312 includes: determining the initial sequence number value according to the formula s = (t + 1) mod K, where s represents the initial sequence number value, t represents the sequence number of the network port that sent the previous data packet, and K represents the total number of the at least two network ports; K is a natural number.
In this embodiment, a packet-by-packet traffic distribution policy is adopted, and a target network port is selected based on a network port that sends a previous data packet and in combination with the current available traffic of each network port, which is beneficial to uniformly distributing data packets to different networks and realizing traffic balance.
Further, in combination with the network switching device provided in the foregoing embodiments, the same clock signal is used across the different networks, so clock synchronization between the networks can be maintained to the maximum extent, and different data packets of the same data stream do not arrive out of order when transmitted over different networks.
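As a concrete illustration, the packet-by-packet selection of steps 311 to 314 can be sketched as follows. This is a minimal sketch rather than the claimed implementation; the function name and the representation of per-port available traffic as a plain list are assumptions made for illustration.

```python
def select_port_packet_by_packet(prev_port, pkt_len, available):
    """Round-robin port selection: start scanning at s = (t + 1) mod K,
    where t is the port that sent the previous packet, and pick the
    first port whose currently available traffic can hold the packet."""
    K = len(available)                    # total number of network ports
    start = (prev_port + 1) % K           # initial sequence number value s
    for i in range(K):
        port = (start + i) % K            # scan ports cyclically from s
        if available[port] >= pkt_len:    # step 313: enough available traffic?
            return port
    return None                           # no port currently has room
```

For example, with three ports whose available traffic is [5, 20, 20] and a previous port 0, a packet of length 10 is sent on port 1, since scanning starts at port 1 and its available traffic already covers the packet.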
In another example, the traffic distribution policy is a flow-by-flow traffic distribution policy, and in this scenario, the port selection parameters corresponding to the traffic distribution policy are at least two hash functions. In this example, when determining the destination port selection parameter, the destination port selection parameter may be selected from at least two hash functions according to a difference degree of currently available traffic of at least two network ports. As shown in fig. 3c, a traffic distribution process includes:
321. Acquire the current data packet.
322. Determine the maximum available traffic and the minimum available traffic from the currently available traffic of the at least two network ports of the node device, where the at least two network ports access different networks respectively.
323. Judge whether the difference between the maximum available traffic and the minimum available traffic is greater than a set threshold; if so, execute step 324; otherwise, execute step 325.
324. Select, from the at least two hash functions, a hash function other than the one used for the previous data packet as the target hash function, and execute step 326.
325. Take the hash function used for the previous data packet as the target hash function, and execute step 326.
326. Select a target network port from the at least two network ports based on the target hash function, and execute step 327.
Optionally, the flow information X (e.g., the 5-tuple) corresponding to the current data packet may be hashed with the target hash function Hash_d(X); the hash result is taken as the number of the target network port, and the network port with that number is selected as the target network port.
327. Send the current data packet, through the target network port, to the network accessed by the target network port.
Optionally, one implementation of step 324 includes: determining the target hash function using the formula d = (c + 1) mod m, where d represents the index of the target hash function, c represents the index of the hash function used for the previous data packet, and m represents the total number of the at least two hash functions.
In this embodiment, multiple hash functions are used in combination with the traffic difference in the network. When the traffic difference is large, that is, when the difference between the maximum available traffic and the minimum available traffic is greater than the set threshold, a new hash function is selected for the current data packet, so that even if the current data packet and the previous data packet belong to the same data flow, the current data packet can be dispersed to a new network to achieve traffic balancing.
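A minimal sketch of this hash-switching idea (steps 321 to 327) follows. The patent does not specify the hash family, so a salted SHA-256 stands in for Hash_1 to Hash_m; the function name and parameters are illustrative assumptions.

```python
import hashlib

def select_port_flow_by_flow(flow_tuple, available, hash_idx, num_hashes, threshold):
    """Flow-by-flow selection: if the spread between the busiest and the
    lightest port exceeds `threshold`, switch to the next hash function
    (d = (c + 1) mod m) before hashing the flow's 5-tuple to a port."""
    K = len(available)
    if max(available) - min(available) > threshold:
        hash_idx = (hash_idx + 1) % num_hashes        # step 324: switch hash
    # Salted SHA-256 as an illustrative stand-in for the family Hash_1..Hash_m.
    digest = hashlib.sha256(f"{hash_idx}:{flow_tuple}".encode()).digest()
    return digest[0] % K, hash_idx                    # step 326: port number
```

Because the hash is deterministic for a given flow and hash index, packets of the same flow keep following the same port until an imbalance forces a hash switch.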
In the traffic distribution processes shown in fig. 3b and fig. 3c, a traffic counter may be configured for each network port of the node device to record the traffic information of that port, for example, the currently available traffic or the currently used traffic. A counter update period may be set for the traffic counters, so that when the current time is an integer multiple of the preset counter update period, the traffic counter of each network port is reset to its initial value, thereby updating the traffic counters. Optionally, the counter update period may be a value related to the transmission time of a data packet in the network. For example, the time required to transmit one data packet in each network may be measured empirically and set as the counter update period.
With reference to the traffic counter and the counter update period, as shown in fig. 3d, a packet-by-packet traffic distribution method includes:
331. Initialize the node device: set a traffic counter FC for each of the K network ports of the node device, where the traffic counter FC records the amount of data that the corresponding network port may still output; set the initial value of each traffic counter FC to Q, and set the period for restoring the initial value to T_a.
332. After the initialization operation is completed, randomly select a network port, numbered orig, as the first network port for sending a data packet, and go to step 333.
333. When a data packet of length L1 is received, send it from the network port numbered orig to the network accessed by that port, update the traffic counter of that port as FC_orig = FC_orig - L1, record the currently scheduled number s = orig, and go to step 334.
334. Judge whether the current time is an integer multiple of the period T_a for restoring the initial value; if so, go to step 338; if not, go to step 335.
335. Receive a data packet of length Lx and search, starting from number s, for the first network port t satisfying FC_t ≥ Lx; if such a network port t is found, go to step 337; if not, go to step 336.
336. Update the traffic counters of all network ports as FC = Q + FC, and go to step 334.
337. Send the data packet of length Lx from the network port numbered t to the network accessed by that port, update the traffic counter of that port as FC_t = FC_t - Lx, record the currently scheduled number s = (t + 1) mod K, and go to step 334.
338. Reset the traffic counters FC of all network ports to the initial value Q, randomly select a network port, numbered orig, as the network port for sending the next data packet, and go to step 334.
In this packet-by-packet distribution method, the network port for the current data packet is selected based on the currently scheduled network port, and a network port is re-selected at random at regular intervals, so traffic balance can be achieved to a certain extent and congestion caused by traffic accumulating in one network is avoided.
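The counter-based procedure of steps 331 to 338 can be sketched as the following class. It is a simplified illustration, not the claimed implementation: the class and method names are assumptions, and the periodic reset of step 338 is exposed as an explicit reset() method rather than a timer.

```python
import random

class PacketByPacketDistributor:
    """Packet-by-packet distribution with per-port traffic counters.
    FC[k] records how much data port k may still output; counters are
    refilled by Q when no port has room, and restored to Q each period."""

    def __init__(self, K, Q):
        self.K, self.Q = K, Q
        self.fc = [Q] * K                      # step 331: counters start at Q
        self.s = random.randrange(K)           # step 332: random first port

    def reset(self):
        """Step 338: restore all counters and re-pick a random start port."""
        self.fc = [self.Q] * self.K
        self.s = random.randrange(self.K)

    def send(self, length):
        """Steps 335 to 337: pick the first port (scanning from s) whose
        counter covers the packet; refill all counters if none does."""
        while True:
            for i in range(self.K):
                t = (self.s + i) % self.K
                if self.fc[t] >= length:
                    self.fc[t] -= length       # charge the chosen port
                    self.s = (t + 1) % self.K  # next scan starts after t
                    return t
            for k in range(self.K):            # step 336: refill all by Q
                self.fc[k] += self.Q
```

Starting from a random port, three consecutive 60-byte packets with Q = 100 land on three distinct ports, since each send drains a counter below 60 and advances the scan start to the next port.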
With reference to the traffic counter and the counter update period, as shown in fig. 3e, a method for distributing traffic flow by flow includes:
341. Initialize the node device: set a traffic counter FCN for each of the K network ports of the node device, where the traffic counter FCN records the amount of data output by the corresponding network port; set the period for restoring the traffic counters FCN to their initial value to T_b, label the m hash functions Hash_1 to Hash_m, and go to step 342. Here m is a natural number and m ≥ 2.
342. Judge whether the current time is an integer multiple of the period T_b for restoring the initial value; if so, go to step 349; if not, go to step 343.
343. Receive a data packet of length Ly and go to step 344.
344. Based on the flow information X (for example, the 5-tuple) corresponding to the data packet of length Ly, perform hashing using the hash function Hash_c(X), take the hash result as the number tar, and go to step 345.
Here, Hash_c(X) represents the currently scheduled hash function, that is, the hash function used when the previous data packet was sent; c represents the index of the currently scheduled hash function, c is a natural number, and 1 ≤ c ≤ m.
345. Obtain the maximum value and the minimum value among all the traffic counters FCN.
346. Judge whether the difference between the maximum value and the minimum value is greater than a set threshold. If the difference is greater than the set threshold, the loads of the different networks are already unbalanced; go to step 347. If the difference is less than or equal to the set threshold, the loads of the different networks are relatively balanced; go to step 348.
347. Replace the hash function according to the formula Hash_c = Hash_[(c + 1) mod m], and go back to step 344.
348. Send the data packet of length Ly from the network port numbered tar to the network accessed by that port, update FCN_tar = FCN_tar + Ly, and go to step 342.
Here, FCN_tar denotes the traffic counter of the network port numbered tar.
349. Reset the traffic counters FCN of all network ports to 0, and go to step 343.
In this flow-by-flow distribution method, based on multiple hash functions, data packets of the same data flow can be distributed to different networks when the network load is unbalanced, so traffic balance can be achieved to a certain extent and congestion caused by traffic accumulating in one network is avoided.
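Steps 341 to 349 can be sketched similarly. The hash family is again an assumed salted SHA-256 stand-in, the class name is illustrative, the periodic reset of step 349 is an explicit method, and the literal jump from step 347 back to step 344 is simplified to switching the hash function at most once per packet.

```python
import hashlib

class FlowByFlowDistributor:
    """Flow-by-flow distribution with per-port output counters FCN.
    When max(FCN) - min(FCN) exceeds `threshold`, the next hash function
    in the family Hash_1..Hash_m is scheduled: Hash_c = Hash_[(c+1) mod m]."""

    def __init__(self, K, m, threshold):
        self.K, self.m, self.threshold = K, m, threshold
        self.fcn = [0] * K        # step 341: output counters start at 0
        self.c = 0                # index of the currently scheduled hash

    def _hash(self, flow, idx):
        # Illustrative stand-in for the hash family Hash_1..Hash_m.
        return hashlib.sha256(f"{idx}:{flow}".encode()).digest()[0] % self.K

    def reset(self):
        """Step 349: restore all counters to 0 each period T_b."""
        self.fcn = [0] * self.K

    def send(self, flow, length):
        """Steps 344 to 348: hash the flow to a port, switching the hash
        function first if the counters show an unbalanced load."""
        if max(self.fcn) - min(self.fcn) > self.threshold:
            self.c = (self.c + 1) % self.m    # step 347: switch hash
        tar = self._hash(flow, self.c)        # step 344: hash flow info X
        self.fcn[tar] += length               # step 348: count the output
        return tar
```

Checking the imbalance before hashing is equivalent to the hash-check-rehash order of the steps, since the first hash result is discarded whenever the function is switched.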
For the network switching device in the foregoing embodiment, an embodiment of the present application provides a routing method. As shown in fig. 4a, the method comprises:
401. Based on a routing policy, determine, from the N switching chips of the network switching device, a first switching chip corresponding to the data packet to be sent, where N is a natural number and N ≥ 2.
402. When the first switching chip satisfies a set network switching condition, select a second switching chip from the other reachable switching chips, where the other reachable switching chips are the chips among the N switching chips, other than the first switching chip, from which the destination of the data packet to be sent is reachable.
403. Based on the internal network between the first switching chip and the second switching chip, control the second switching chip to send out the data packet.
The network switching device of this embodiment includes N switching chips that belong to different networks and are interconnected to form an internal network of the network switching device, where N is a natural number and N ≥ 2. For a data packet entering the network switching device, when the network switching condition is met, the internal network allows the packet to be switched from the network accessed by the first switching chip to the network accessed by the second switching chip. This interconnects the co-located switching chips, allows traffic to be switched freely among the different networks, achieves dynamic load balancing and high availability of the multi-track network system, fully exploits the multiple paths in the multi-track network system, and gives full play to its advantages.
In the above method embodiment, the network switching condition may depend on a specific application scenario. In some application scenarios, the network switch condition may be whether the current switch chip is malfunctioning and/or congested. Based on this, as shown in fig. 4b, a routing method includes:
411. Based on a routing policy, determine, from the N switching chips of the network switching device, a first switching chip corresponding to the data packet to be sent, where N is a natural number and N ≥ 2.
412. Judge whether the external port on the first switching chip used for sending the data packet has failed. If so, the first switching chip satisfies the network switching condition; execute steps 413 to 415. If not, execute step 416.
413. Based on the port number of the external port on the first switching chip used for sending the data packet, obtain the traffic loads of the external ports with the same port number on the other reachable switching chips.
414. According to the traffic loads of the external ports with the same port number on the other reachable switching chips, select, from the other reachable switching chips, a switching chip whose traffic load meets a set requirement as the second switching chip.
Optionally, the second switching chip may be the chip among the other reachable switching chips whose same-numbered external port carries the lowest traffic load, or a chip whose same-numbered external port carries a traffic load below a set traffic load threshold, or a chip whose same-numbered external port carries a traffic load within a certain range, and so on.
415. Based on the internal network between the first switching chip and the second switching chip, control the second switching chip to send out the data packet.
416. Control the first switching chip to send out the data packet.
Optionally, in step 412, it may instead be judged whether the packet loss rate of the external port on the first switching chip used for sending the data packet exceeds a packet loss rate threshold. If so, the first switching chip satisfies the network switching condition; execute steps 413 to 415. If not, execute step 416.
Optionally, in step 412, it may also be judged whether the average queuing delay of the external port on the first switching chip used for sending the data packet exceeds a delay threshold. If so, execute steps 413 to 415; if not, execute step 416.
Alternatively, in step 412, two or more of the above conditions may be combined for a comprehensive judgment.
In this embodiment, when a failure or severe congestion occurs in a certain network, the data packets in that network are switched to another, lightly loaded network for transmission, which achieves load balance among the different networks, reduces the transmission delay of the data packets, and ensures their reachability.
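Under the assumptions that per-chip state is available as a simple mapping and that the "set requirement" of step 414 is "lowest traffic load", steps 411 to 416 can be sketched as follows; the function name and data layout are illustrative, not part of the claims.

```python
def pick_egress_chip(first_chip, port_no, chips):
    """Fail-over routing decision (steps 411 to 416).

    chips maps chip id -> {"failed_ports": set of port numbers,
                           "load": per-port traffic load}.
    Returns the chip that should actually forward the packet."""
    if port_no not in chips[first_chip]["failed_ports"]:
        return first_chip                     # step 416: no switch needed
    # Steps 413 and 414: among the other reachable chips, pick the one
    # whose same-numbered external port carries the least traffic.
    candidates = [c for c in chips
                  if c != first_chip
                  and port_no not in chips[c]["failed_ports"]]
    if not candidates:
        return first_chip                     # no healthy alternative exists
    return min(candidates, key=lambda c: chips[c]["load"][port_no])
```

The same skeleton applies when the switching condition of step 412 is a packet loss rate or average queuing delay threshold instead of a hard port failure: only the first test changes.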
It should be noted that in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 301, 302, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
In addition to the foregoing network switching device, network system, and corresponding method embodiments, some embodiments of the present application further provide a node device, as shown in fig. 5, where the node device includes: k network ports 51, memory 52, and processor 53.
The K network ports 51 are used for accessing K different networks, K is a natural number and is more than or equal to 2.
The memory 52 is used for storing computer programs and may be configured to store other various data to support operations on the node devices. Examples of such data include instructions for any application or method operating on the node device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 52 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 53, coupled to the memory 52, for executing computer programs in the memory 52 for: acquiring a current data packet; determining a target port selection parameter from port selection parameters corresponding to a flow distribution strategy by aiming at flow balance; selecting a target network port from the K network ports based on the target port selection parameter; and sending the current data packet to a network accessed by the target network port through the target network port.
In an alternative embodiment, as shown in fig. 5, the node apparatus further includes: a convergence/distribution module 54.
The convergence/distribution module 54 is connected to the K network ports 51, and configured to converge and send signals from the K network ports 51 to the K switching chips in the network switching device connected to the node device, or distribute signals from the K switching chips in the network switching device to the K network ports 51. The network switching equipment comprises N switching chips, N is a natural number, N is more than or equal to 2, and K is less than or equal to N.
In this embodiment, the implementation form of the node device is not limited, and the node device may be any device that can be connected to the switch chip and has K network ports, for example, a workstation, a terminal device, a network user or a personal computer, and may also be a server, a printer or other network-connected devices.
In combination with the implementation form of the node device, in some optional embodiments, the node device may further include: communication components, displays, power components, audio components, and the like.
The communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
The display includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation.
The power supply component supplies power to the various components of the device in which it is located. The power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for that device.
The audio component may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive external audio signals when the device in which it is located is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in a memory or transmitted via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps or operations executable by the node device in the foregoing method embodiments.
Accordingly, the present application also provides another computer-readable storage medium storing a computer program which, when executed, implements the steps or operations executable by the network switching device in the foregoing method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both volatile and non-volatile, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (26)

1. A network switching device, comprising: N switching chips and at least one sharing module that can be shared by the N switching chips; the N switching chips belong to N different networks and form M2 network ports for connecting M2 node devices, so that the M2 node devices access the corresponding networks; wherein N and M2 are natural numbers, and N ≥ 2;
the at least one sharing module comprises: the intranet control module is connected with the N exchange chips, and the N exchange chips are mutually connected; the intranet control module is used for:
determining a first switching chip corresponding to a data packet to be sent from the N switching chips based on a routing strategy; when the first switching chip meets the set network switching condition, acquiring the traffic load of external ports with the same port number on other reachable switching chips based on the port number of the external port used for sending the data packet to be sent on the first switching chip;
selecting, according to the traffic load of the external ports with the same port number on the other reachable switching chips, a switching chip whose traffic load meets a set requirement from the other reachable switching chips as a second switching chip; the other reachable switching chips refer to the switching chips, among the N switching chips other than the first switching chip, from which the destination of the data packet to be sent is route-reachable;
and controlling the second switching chip to send the data packet to be sent out based on an internal network between the first switching chip and the second switching chip.
2. The network switching device of claim 1, wherein the N switching chips are cascaded in the same level in the respective networks to which they belong.
3. The network switching device of claim 1, wherein the at least one sharing module comprises: m2 external port modules;
one end of each external port module is connected with one node device, and the other end of each external port module is connected with the N switching chips, and the external port modules are used for distributing signals from the node devices connected with the external port modules to the N switching chips or converging the signals from the N switching chips and then sending the converged signals to the node devices connected with the external port modules.
4. The network switching device of claim 3, wherein each switch chip comprises M2 external ports, and the M2 external ports are connected with the M2 external port modules in a one-to-one correspondence.
5. The network switching device of claim 1, wherein each switching chip includes M1 internal ports; the N switching chips are interconnected through the M1 internal ports they respectively include; M1 is a natural number, and M1 ≤ N.
6. The network switching device of claim 1, wherein the intranet control module is specifically configured to:
and when an external port used for sending the data packet to be sent on the first switching chip has a fault, the packet loss rate exceeds a packet loss rate threshold, and/or the average queuing delay exceeds a delay threshold, determining that the first switching chip meets a network switching condition.
7. The network switching device of any of claims 1-6, wherein the at least one sharing module comprises:
the clock module is connected with the N switching chips and is used for providing clock signals for the N switching chips; and/or
The power supply module is connected with the N switching chips and used for supplying power to the N switching chips; and/or
The heat dissipation module is connected with the N switching chips and is used for dissipating heat from the N switching chips.
8. A network system, comprising: at least one node device and at least one network switching device;
each network switching device comprises N switching chips and at least one sharing module which can be shared by the N switching chips; the N switching chips belong to N different networks, and form M2 network ports, which are used to connect M2 node devices of the at least one node device, so that the M2 node devices access to corresponding networks; wherein N and M2 are natural numbers, and N is more than or equal to 2;
each node device comprises K network ports which are used for being correspondingly connected with K switching chips in one network switching device, wherein K is a natural number and is less than or equal to N;
wherein, the at least one network switching device is connected with each other according to the cascade level of the network switching device;
at least one sharing module in each network switching device comprises: an intranet control module; the intranet control module is connected with N switching chips in the network switching equipment to which the intranet control module belongs, and the N switching chips in the network switching equipment to which the intranet control module belongs are mutually connected; the intranet control module is used for:
determining a first switching chip corresponding to a data packet to be sent from the N switching chips based on a routing strategy; when the first switching chip meets the set network switching condition, acquiring the traffic load of external ports with the same port number on other reachable switching chips based on the port number of the external port used for sending the data packet to be sent on the first switching chip;
selecting, according to the traffic load of the external ports with the same port number on the other reachable switching chips, a switching chip whose traffic load meets a set requirement from the other reachable switching chips as a second switching chip; the other reachable switching chips refer to the switching chips, among the N switching chips other than the first switching chip, from which the destination of the data packet to be sent is route-reachable;
and controlling the second switching chip to send the data packet to be sent out based on an internal network between the first switching chip and the second switching chip.
9. The network system according to claim 8, wherein the N switching chips in each network switching device are cascaded in the same level in the network to which each switching chip belongs; the at least one network switching device is connected with each other according to the cascade level of the internal switching chip in the network.
10. The network system according to claim 8, wherein each node device further comprises a convergence/distribution module;
the convergence/distribution module is connected with K network ports in the node device to which the convergence/distribution module belongs, and is used for converging signals from the K network ports in the node device to which the convergence/distribution module belongs and then sending the converged signals to K switching chips in the network switching device connected with the node device to which the convergence/distribution module belongs, or distributing the signals from the K switching chips in the network switching device connected with the node device to the K network ports in the node device to which the convergence/distribution module belongs.
11. The network system of claim 8, wherein the at least one sharing module in each network switching device comprises: m2 external port modules;
one end of each external port module is connected with one node device, and the other end of each external port module is connected with N switching chips in the network switching device to which the external port module belongs, and the external port module is used for distributing signals from the node devices connected with the external port module to the N switching chips in the network switching device to which the external port module belongs, or gathering the signals from the N switching chips in the network switching device to which the external port module belongs and then sending the signals to the node devices connected with the external port module.
12. The network system according to claim 11, wherein each switch chip comprises M2 external ports, and the M2 external ports are connected to M2 external port modules in the network switch device to which the external ports belong in a one-to-one correspondence.
13. The network system of claim 8, wherein each switching chip includes M1 internal ports; the N switching chips in each network switching device are interconnected through the M1 internal ports they respectively include; M1 is a natural number, and M1 ≤ N.
14. The network system according to any one of claims 8-13, wherein at least one sharing module in each network switching device comprises:
the clock module is connected with the N internal switching chips and is used for providing clock signals for the N internal switching chips; and/or
The power supply module is connected with the N internal switching chips and is used for supplying power to the N internal switching chips; and/or
The heat dissipation module is connected with the N internal switching chips and is used for dissipating heat from the N internal switching chips.
15. A traffic distribution method, applicable to a node device including at least two network ports, wherein the at least two network ports of the node device are correspondingly connected to at least two switching chips in the network switching device according to any one of claims 1 to 7, the method comprising:
acquiring a current data packet;
determining a target port selection parameter from port selection parameters corresponding to a traffic distribution policy, with the aim of traffic balancing;
selecting a target network port from the at least two network ports based on the target port selection parameter;
and sending the current data packet to a network to which a switching chip connected with the target network port belongs through the target network port.
16. The method according to claim 15, wherein the port selection parameters corresponding to the traffic distribution policy are sequence numbers of the at least two network ports;
the determining a target port selection parameter from port selection parameters corresponding to a traffic distribution policy for the purpose of traffic balancing includes:
and determining an initial sequence number value from the sequence numbers of the at least two network ports according to the sequence number of the network port that sent the previous data packet and the total number of the at least two network ports, the initial sequence number value being used as the target port selection parameter.
17. The method of claim 16, wherein selecting a target network port from the at least two network ports based on the target port selection parameter comprises:
starting from the network port, among the at least two network ports, whose sequence number equals the initial sequence number value, sequentially checking whether the currently available traffic of each subsequent network port is greater than or equal to the length of the current data packet, and selecting the target network port from the network ports whose currently available traffic is greater than or equal to the length of the current data packet.
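For illustration only (not part of the claims), the round-robin scan described above can be sketched in Python; the function name, the `available` list, and the `None` fallback are all hypothetical:

```python
def select_target_port(available, start, packet_len):
    """Scan ports round-robin from `start` (claim 17 sketch).

    `available[i]` is the currently available traffic of port i.
    Return the first port whose available traffic is at least the
    packet length, or None if no port currently qualifies.
    """
    k = len(available)
    for offset in range(k):
        port = (start + offset) % k
        if available[port] >= packet_len:
            return port
    return None
```

A usage example: with available traffic `[100, 10, 500]` and a 50-byte packet, a scan starting at port 1 skips port 1 (only 10 available) and selects port 2.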
18. The method of claim 16, wherein determining an initial sequence number value from the sequence numbers of the at least two network ports according to the sequence number of the network port sending the previous packet and the total number of ports of the at least two network ports comprises:
determining the initial sequence number value according to the formula s = (t + 1) mod K;
wherein s represents the initial sequence number value, t represents the sequence number of the network port that sent the previous data packet, and K represents the total number of the at least two network ports.
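For illustration only, the formula in claim 18 amounts to a one-line rotation; the function name below is hypothetical:

```python
def initial_sequence_value(t, k):
    # s = (t + 1) mod K: the search for the next transmit port
    # starts just after the port that sent the previous packet,
    # wrapping back to port 0 after the last port
    return (t + 1) % k
```

With K = 4 ports, sending on port 3 makes the next search start at port 0.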
19. The method according to claim 15, wherein the port selection parameters corresponding to the traffic distribution policy are at least two hash functions;
the determining a target port selection parameter from port selection parameters corresponding to a traffic distribution policy for the purpose of traffic balancing includes:
and selecting, according to the degree of difference between the currently available traffic of the at least two network ports, a target hash function from the at least two hash functions as the target port selection parameter.
20. The method of claim 19, wherein selecting the target hash function from the at least two hash functions according to the difference between the currently available traffic of the at least two network ports comprises:
determining a maximum available traffic and a minimum available traffic from the currently available traffic of the at least two network ports;
if the difference between the maximum available traffic and the minimum available traffic is greater than a set threshold, selecting a hash function other than the hash function used for the previous data packet from the at least two hash functions as the target hash function;
and if the difference between the maximum available traffic and the minimum available traffic is less than or equal to the set threshold, using the hash function used for the previous data packet as the target hash function.
21. The method according to claim 20, wherein said selecting, as the target hash function, another hash function from the at least two hash functions other than the hash function used by the previous packet comprises:
determining the target hash function using the formula d = (c + 1) mod m;
wherein d represents the index of the target hash function;
c represents the index of the hash function used for the previous data packet;
and m represents the total number of the at least two hash functions.
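For illustration only, claims 20 and 21 together can be sketched as follows; the function name, the `available` list, and the threshold parameter are hypothetical stand-ins for the claimed quantities:

```python
def next_hash_index(available, prev_idx, m, threshold):
    """Pick the hash-function index for the next packet (claims 20-21 sketch).

    Rotate to a different hash function, d = (c + 1) mod m, only when
    the spread between the most and least loaded ports exceeds the
    threshold; otherwise keep the hash function used for the previous
    packet, so flows are not reshuffled while load is balanced.
    """
    if max(available) - min(available) > threshold:
        return (prev_idx + 1) % m
    return prev_idx
```

Keeping the same hash function while ports stay balanced preserves per-flow packet ordering; rotation is triggered only when imbalance indicates the current hash is concentrating traffic.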
22. The method of any one of claims 15-21, further comprising:
and when the current time is an integer multiple of a preset counter update period, resetting the traffic counters of the at least two network ports to initial values, wherein the traffic counters are used to record traffic information of the corresponding network ports.
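For illustration only, the periodic counter reset of claim 22 can be sketched as follows; the function name and the dict-based counter model are assumptions:

```python
def maybe_reset_counters(now, period, counters, initial=0):
    """Claim 22 sketch: periodically reset per-port traffic counters.

    `counters` maps port number -> accumulated traffic count. At every
    integer multiple of the update period, all counters return to
    their initial value so stale history does not bias port selection.
    """
    if now % period == 0:
        for port in counters:
            counters[port] = initial
    return counters
```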
23. A routing method, applicable to a network switching device comprising: N switching chips and at least one sharing module that can be shared by the N switching chips; the N switching chips belong to N different networks and form M2 network ports for connecting M2 node devices, so that the M2 node devices access the corresponding networks, and the N switching chips are connected with each other; wherein N and M2 are natural numbers, and N ≥ 2; the method comprising:
determining a first switching chip corresponding to a data packet to be sent from N switching chips of the network switching equipment based on a routing strategy;
when the first switching chip meets a set network switching condition, selecting a second switching chip from other reachable switching chips, wherein the other reachable switching chips are the switching chips, among the N switching chips other than the first switching chip, from which the destination of the data packet to be sent is route-reachable;
controlling the second switching chip to send the data packet to be sent out based on an internal network between the first switching chip and the second switching chip;
wherein the selecting a second switching chip from the other reachable switching chips comprises:
acquiring the traffic load of the external ports with the same port number on the other reachable switching chips, based on the port number of the external port used on the first switching chip for sending the data packet to be sent;
and selecting, according to the traffic load of the external ports with the same port number on the other reachable switching chips, the switching chip whose traffic load meets a set requirement from the other reachable switching chips as the second switching chip.
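For illustration only, the failover selection of claim 23 can be sketched as follows. The data model (`chips` mapping chip id to reachability and per-port load) and the choice of "least loaded" as the set requirement are assumptions, not taken from the patent:

```python
def select_second_chip(first_chip, chips, port_no):
    """Claim 23 sketch: pick a second switching chip for failover.

    Assumes the first chip has already met the network-switching
    condition. `chips` maps chip id -> {"reachable": bool,
    "load": {port_no: traffic_load}}. Among the other chips from
    which the destination is route-reachable, pick the one whose
    external port with the same port number carries the least load.
    """
    candidates = [c for c, info in chips.items()
                  if c != first_chip and info["reachable"]]
    return min(candidates, key=lambda c: chips[c]["load"][port_no])
```

Using the same external port number on the second chip means the packet still exits toward the same downstream link position, only through a different chip's network.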
24. The method of claim 23, wherein prior to selecting the second switch chip from the other reachable switch chips, the method further comprises:
and when an external port used for sending the data packet to be sent on the first switching chip has a fault, the packet loss rate exceeds a packet loss rate threshold, and/or the average queuing delay exceeds a delay threshold, determining that the first switching chip meets a network switching condition.
25. A node apparatus, comprising: k network ports, a memory and a processor;
the K network ports are used for being correspondingly connected to K switching chips in the network switching device of any one of claims 1 to 7, where K is a natural number and N ≥ K ≥ 2;
the memory for storing a computer program;
the processor, coupled with the memory, to execute the computer program to:
acquiring a current data packet;
determining a target port selection parameter from port selection parameters corresponding to a traffic distribution policy, with the aim of traffic balancing;
selecting a target network port from the K network ports based on the target port selection parameter;
and sending the current data packet to a network to which a switching chip connected with the target network port belongs through the target network port.
26. The node device of claim 25, further comprising a convergence/distribution module;
the convergence/distribution module is connected with the K network ports and is used for converging signals from the K network ports and sending them to the K switching chips in the network switching device connected to the node device, or distributing signals from the K switching chips in the network switching device to the K network ports.
CN201810271926.2A 2018-03-29 2018-03-29 Traffic distribution method, routing method, equipment and network system Active CN110324265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810271926.2A CN110324265B (en) 2018-03-29 2018-03-29 Traffic distribution method, routing method, equipment and network system


Publications (2)

Publication Number Publication Date
CN110324265A CN110324265A (en) 2019-10-11
CN110324265B true CN110324265B (en) 2021-09-07

Family

ID=68110886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810271926.2A Active CN110324265B (en) 2018-03-29 2018-03-29 Traffic distribution method, routing method, equipment and network system

Country Status (1)

Country Link
CN (1) CN110324265B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079627A (en) * 2020-08-14 2022-02-22 华为技术有限公司 Data transmission device and method
CN114697275B (en) * 2020-12-30 2023-05-12 深圳云天励飞技术股份有限公司 Data processing method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101404616A (en) * 2008-11-04 2009-04-08 北京大学深圳研究生院 Load balance grouping and switching structure and its construction method
CN101729424A (en) * 2009-12-16 2010-06-09 杭州华三通信技术有限公司 Flow forwarding method, devices and system
CN106302252A (en) * 2015-05-15 2017-01-04 华为技术有限公司 Data exchange system framework, the method sending data traffic and switch

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US7760717B2 (en) * 2005-10-25 2010-07-20 Brocade Communications Systems, Inc. Interface switch for use with fibre channel fabrics in storage area networks
CN104243357B (en) * 2014-09-02 2016-01-20 深圳市腾讯计算机系统有限公司 Switch, switching system, switching network chip assembly and forwarding chip assembly
US9977750B2 (en) * 2014-12-12 2018-05-22 Nxp Usa, Inc. Coherent memory interleaving with uniform latency
CN107493245B (en) * 2017-09-22 2020-04-24 锐捷网络股份有限公司 Board card of switch and data stream forwarding method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant