US8705540B2 - Network relay apparatus - Google Patents

Network relay apparatus

Info

Publication number
US8705540B2
Authority
US
United States
Prior art keywords
packet
mode
distributed processing
relay apparatus
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/582,901
Other languages
English (en)
Other versions
US20100103933A1 (en)
Inventor
Shinichi Akahane
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alaxala Networks Corp
Original Assignee
Alaxala Networks Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alaxala Networks Corp filed Critical Alaxala Networks Corp
Assigned to ALAXALA NETWORKS CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AKAHANE, SHINICHI
Publication of US20100103933A1 publication Critical patent/US20100103933A1/en
Application granted granted Critical
Publication of US8705540B2 publication Critical patent/US8705540B2/en

Classifications

    • H - ELECTRICITY
        • H04 - ELECTRIC COMMUNICATION TECHNIQUE
            • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L 12/00 - Data switching networks
                    • H04L 12/02 - Details
                        • H04L 12/10 - Current supply arrangements
                • H04L 49/00 - Packet switching elements
                    • H04L 49/10 - Packet switching elements characterised by the switching fabric construction
                        • H04L 49/101 - using crossbar or matrix
                    • H04L 49/20 - Support for services
                        • H04L 49/201 - Multicast operation; Broadcast operation
                    • H04L 49/35 - Switches specially adapted for specific applications
                        • H04L 49/354 - for supporting virtual local area networks [VLAN]
                    • H04L 49/60 - Software-defined switches
                        • H04L 49/602 - Multilayer or multiprotocol switching, e.g. IP switching

Definitions

  • the present invention relates to a network relay apparatus used for relay in a network.
  • the conventional design policy of the network relay apparatus focuses more on the improvement of processing performance than on the reduction of power consumption.
  • a network relay apparatus comprises: a plurality of distributed processing units configured to receive and send a packet from and to an external device; an integrated processing unit connected with the plurality of distributed processing units; and a mode selector configured to change over a processing mode of the network relay apparatus between a distributed processing mode and an integrated processing mode, based on at least either one of a load applied to the network relay apparatus and a packet type determined according to header information of the received packet, wherein (i) the distributed processing mode is a mode in which each of the distributed processing units performs destination search with each received packet and transfers the packet to the external device; and (ii) the integrated processing mode is a mode in which each of the distributed processing units transfers each received packet to the integrated processing unit without performing destination search with the received packet, and the integrated processing unit performs the destination search and transfers the packet to the external device via one of the distributed processing units.
  • the network relay apparatus changes over the processing mode between the integrated processing mode with the low power consumption and the distributed processing mode with the high processing performance according to the amount of load applied to the network relay apparatus or according to the packet type of the packet received by the network relay apparatus.
  • the network relay apparatus of this arrangement thus effectively reduces power consumption, while assuring the high processing performance.
  • the technique of the invention is not restrictively actualized by the network relay apparatus having any of the configurations and arrangements discussed above, but may be actualized by a diversity of other applications including a network relay method corresponding to the network relay apparatus, a computer program configured to attain the functions of such an apparatus or method, and a recording medium with such a computer program recorded therein.
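As a rough illustration of the mode changeover summarized above, the following Python sketch picks a processing mode from the applied load and the packet type. The function, the threshold, and the forced-mode table are hypothetical illustrations and are not taken from the patent.

```python
from enum import Enum

class Mode(Enum):
    INTEGRATED = "integrated"    # one integrated unit searches destinations; lower power
    DISTRIBUTED = "distributed"  # every distributed unit searches; higher performance

def select_mode(load_pps, packet_type, load_threshold_pps, forced_modes):
    """Pick a processing mode from the applied load and the packet type (sketch)."""
    # some packet types may be pinned to one mode regardless of load
    if packet_type in forced_modes:
        return forced_modes[packet_type]
    # otherwise switch on the amount of load applied to the apparatus
    return Mode.DISTRIBUTED if load_pps >= load_threshold_pps else Mode.INTEGRATED

forced = {"multicast": Mode.DISTRIBUTED}
print(select_mode(5_000, "unicast", 50_000, forced))    # Mode.INTEGRATED
print(select_mode(120_000, "unicast", 50_000, forced))  # Mode.DISTRIBUTED
```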
  • FIG. 1 is an explanatory view schematically illustrating the configuration of a network relay apparatus in a first embodiment of the invention
  • FIG. 2 is an explanatory view showing the network relay apparatus in the integrated processing mode
  • FIG. 3 is an explanatory view showing the network relay apparatus in the distributed processing mode
  • FIG. 4 is an explanatory view showing the schematic structure of the integrated processing unit
  • FIG. 5 is an explanatory view showing the schematic structure of the distributed processing unit
  • FIG. 6 is an explanatory view showing one example of the distributed/integrated processing switchover table
  • FIG. 7 is an explanatory view showing another example of the distributed/integrated processing switchover table
  • FIG. 8 is an explanatory view showing one example of the statistical information table
  • FIG. 9 is an explanatory view showing the structure of a frame FM adopted in the network relay apparatus.
  • FIG. 10 is a flowchart showing a series of processing performed at packet reception time
  • FIG. 11 is a flowchart showing a processing flow executed in the integrated processing mode
  • FIG. 12 is a flowchart showing a processing flow executed in the distributed processing mode
  • FIG. 13 is a flowchart showing the details of the packet transmission process of sending a packet to the external device
  • FIG. 14 is an explanatory view showing the details of the transmission order control performed by the packet order controller
  • FIG. 15 is a flowchart showing the details of the mode switchover process with respect to each flow type
  • FIG. 16 is an explanatory view schematically illustrating the configuration of a network relay apparatus in a second embodiment
  • FIG. 17 is an explanatory view schematically illustrating the configuration of a network relay apparatus in a third embodiment
  • FIG. 18 is an explanatory view showing the network relay apparatus in the high loading mode
  • FIG. 19 is a graph showing a changeover criterion between the low loading mode and the high loading mode.
  • FIG. 20 is an explanatory view showing one example of the distributed/integrated processing switchover table adopted in the third embodiment.
  • FIG. 1 is an explanatory view schematically illustrating the configuration of a network relay apparatus 10 in a first embodiment of the invention.
  • the network relay apparatus 10 of the embodiment is constructed as a LAN switch and has the functions of a bridge and a router.
  • the network relay apparatus 10 includes two integrated processing units 100 (IPU# 1 and IPU# 2 ), three distributed processing units 200 (DPU# 1 , DPU# 2 , and DPU# 3 ), and a crossbar switch 300 (CSW). Out of the two integrated processing units 100 , one integrated processing unit IPU# 1 is assigned as a master and the other integrated processing unit IPU# 2 is assigned as a backup.
  • the master integrated processing unit IPU# 1 searches pathway information with regard to each packet received from one of the distributed processing units 200 and performs protocol management, overall control, and synchronization and updating of tables.
  • the backup integrated processing unit IPU# 2 stands by as a backup system of the integrated processing unit IPU# 1 .
  • Each of the distributed processing units 200 has three physical lines (# 0 , # 1 , # 2 ) to externally receive and send packets.
  • the crossbar switch 300 relays a packet, in response to an instruction from any of the integrated processing units 100 and the distributed processing units 200 .
  • the two integrated processing units 100 and the three distributed processing units 200 are interconnected via the crossbar switch 300 .
  • the respective processing units 100 and 200 may be interconnected via a connection circuit other than the crossbar switch 300 .
  • the network relay apparatus 10 has two processing modes, an integrated processing mode and a distributed processing mode. As described in detail below, in the integrated processing mode, the distributed processing unit 200 receiving a packet does not perform destination search, but the integrated processing unit 100 performs the destination search. In the distributed processing mode, on the other hand, the distributed processing unit 200 receiving a packet performs the destination search.
  • FIG. 2 is an explanatory view showing the network relay apparatus 10 in the integrated processing mode.
  • a packet is received via the physical line # 0 of the distributed processing unit DPU# 1 and is sent out from the physical line # 2 of the distributed processing unit DPU# 2 to an external device.
  • the terms ‘packet input’ and ‘packet output’ respectively mean that the network relay apparatus 10 receives a packet via a physical line and that the network relay apparatus 10 sends out a packet via a physical line.
  • the distributed processing unit DPU# 1 does not perform destination search with regard to a received packet but transfers the received packet to the integrated processing unit IPU# 1 .
  • the integrated processing unit IPU# 1 receives the transferred packet and refers to a table (not shown) based on the pathway information of the received packet to perform the destination search with regard to the received packet.
  • the destination search specifies: (i) the distributed processing unit 200 having a physical line used for output of the received packet; and (ii) the physical line used for the packet output.
  • the integrated processing unit IPU# 1 subsequently updates header information included in the received packet and transfers the packet with the updated header information to the specified distributed processing unit DPU# 2 having the specified physical line used for the packet output.
  • the specified distributed processing unit DPU# 2 receives the transferred packet and outputs the received packet from the specified physical line # 2 , based on the header information included in the received packet.
  • the structures of the integrated processing unit 100 and the distributed processing unit 200 and the series of processing performed in the integrated processing mode will be discussed later in detail.
  • the integrated processing unit 100 performs the destination search with regard to the received packet, in place of the distributed processing unit DPU# 1 .
  • the distributed processing unit DPU# 1 is thus required to activate only part of its functions (for example, the function of transferring packets).
  • the integrated processing unit 100 may be designed to use a search circuit that requires less power for the destination search than the search circuit of the distributed processing unit 200 . In such an application, this arrangement of the embodiment desirably reduces the total power consumption of the whole network relay apparatus 10 .
  • FIG. 3 is an explanatory view showing the network relay apparatus 10 in the distributed processing mode.
  • the distributed processing unit DPU# 1 receiving a packet performs the destination search with regard to the received packet.
  • the distributed processing unit DPU# 1 subsequently updates header information included in the received packet and transfers the packet with the updated header information to the specified distributed processing unit DPU# 2 having the specified physical line used for the packet output.
  • the distributed processing unit DPU# 2 receives the transferred packet and outputs the received packet from the specified physical line # 2 , based on the header information included in the received packet.
  • the structure of the distributed processing unit 200 and the series of processing performed in the distributed processing mode will be discussed later in detail.
  • the integrated processing unit IPU# 1 performs the series of processing other than the destination search (for example, protocol management, overall control, and synchronization and updating of tables).
  • the distributed processing mode does not require transfer of an input packet via the integrated processing unit IPU# 1 and thus assures the high speed processing of the input packet.
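A minimal sketch of the two packet paths described above, assuming hypothetical DistributedUnit and IntegratedUnit classes and a shared destination table: in the distributed mode the receiving unit searches the destination locally, while in the integrated mode it defers the search to the integrated unit.

```python
class DistributedUnit:
    """Hypothetical DPU: either searches locally or forwards to the IPU."""
    def __init__(self, name, table):
        self.name, self.table = name, table   # table maps destination -> (unit, line)

    def receive(self, packet, mode, ipu):
        if mode == "distributed":
            unit, line = self.table[packet["dst"]]   # local destination search
        else:
            unit, line = ipu.search(packet)          # destination search done by the IPU
        return unit, line                            # egress DPU then sends on that line

class IntegratedUnit:
    """Hypothetical IPU: performs the destination search on behalf of the DPUs."""
    def __init__(self, table):
        self.table = table

    def search(self, packet):
        return self.table[packet["dst"]]

table = {"10.0.0.2": ("DPU#2", 2)}
ipu = IntegratedUnit(table)
dpu1 = DistributedUnit("DPU#1", table)
print(dpu1.receive({"dst": "10.0.0.2"}, "integrated", ipu))   # ('DPU#2', 2)
print(dpu1.receive({"dst": "10.0.0.2"}, "distributed", ipu))  # ('DPU#2', 2)
```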
  • Switchover of the processing mode between the integrated processing mode and the distributed processing mode is performed by a destination specifying process switchover module (discussed later) provided in the distributed processing unit 200 .
  • the details of this mode switchover process will be discussed later.
  • FIG. 4 is an explanatory view showing the schematic structure of the integrated processing unit 100 .
  • the integrated processing unit 100 includes a packet transfer processing module 110 , a destination specification module 130 , a device management module 140 , and a packet buffer 150 .
  • the packet buffer 150 is a buffer memory area set for temporary storage of packets.
  • the packet transfer processing module 110 has the function of transferring packets to the crossbar switch 300 and to the respective components included in the integrated processing unit 100 .
  • the packet transfer processing module 110 includes a buffer processing module 120 .
  • the buffer processing module 120 has a packet readout controller 122 and performs exchange of the header information with a destination specifying process switchover module 134 and input and output of packets from and to the packet buffer 150 .
  • the destination specification module 130 has the function of specifying the destination of an input packet.
  • the destination specification module 130 includes a destination specification table 132 and the destination specifying process switchover module 134 .
  • the destination specification table 132 is designed to store pathway information including a relay destination of each packet and is constructed in the form of, for example, a MAC table or a routing table.
  • the destination specifying process switchover module 134 searches the destination specification table 132 based on header information included in each input packet to specify a transfer destination of the input packet.
  • the device management module 140 has the management function of the network relay apparatus 10 .
  • the device management module 140 includes a protocol controller 142 , a failure monitor 144 , and a mode setting module 146 .
  • the protocol controller 142 performs control of protocols such as OSPF and RIP, creation of routing tables in dynamic routing control, and diverse layer 3-related processing including confirmation of existence among multiple network relay apparatuses 10 .
  • the failure monitor 144 monitors the conditions of the respective components included in the network relay apparatus 10 .
  • the mode setting module 146 monitors a statistical information table provided in the distributed processing unit 200 at regular intervals and updates the contents of a distributed/integrated processing switchover table provided in the distributed processing unit 200 according to the result of monitoring. This series of processing will be discussed later in detail.
  • FIG. 5 is an explanatory view showing the schematic structure of the distributed processing unit 200 .
  • the distributed processing unit 200 includes a packet transfer processing module 210 , a destination specification module 230 , a packet buffer 250 , a line IF (interface) module 260 , and a statistical information table 280 .
  • the packet buffer 250 has a distributed processing queue 252 and an integrated processing queue 254 .
  • the distributed processing queue 252 is set to accumulate packets that are to be processed in the distributed processing mode.
  • the integrated processing queue 254 is set to accumulate packets that are to be processed in the integrated processing mode.
  • the packet buffer 250 includes the distributed processing queue 252 and the integrated processing queue 254 as physical areas in this embodiment, although these queues may be provided as logical areas.
  • the packet transfer processing module 210 has the function of transferring packets to the crossbar switch 300 and to the respective components included in the distributed processing unit 200 .
  • the packet transfer processing module 210 includes a packet order controller 270 and a buffer processing module 220 .
  • the buffer processing module 220 has a packet readout controller 222 and a packet queuing processor 224 .
  • the packet readout controller 222 controls readout of packets from the distributed processing queue 252 and from the integrated processing queue 254 .
  • the packet queuing processor 224 accumulates each packet into a corresponding queue according to the determined processing type of the packet.
  • the packet queuing processor 224 accumulates packets into the distributed processing queue 252 in the distributed processing mode, while accumulating packets into the integrated processing queue 254 in the integrated processing mode.
  • the packet order controller 270 controls the packet readout controller 222 to read out a packet according to the determined processing type of the packet. The series of processing performed by the buffer processing module 220 and the packet order controller 270 will be discussed later in detail.
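The queue handling described above might be modeled roughly as follows; the class and attribute names are invented for illustration.

```python
from collections import deque

class PacketBuffer:
    """Hypothetical packet buffer holding the two processing queues."""
    def __init__(self):
        self.distributed_queue = deque()  # packets processed in the distributed mode
        self.integrated_queue = deque()   # packets processed in the integrated mode

class PacketQueuingProcessor:
    def __init__(self, buffer):
        self.buffer = buffer

    def enqueue(self, packet, processing_type):
        # the determined processing type selects the queue
        if processing_type == "distributed":
            self.buffer.distributed_queue.append(packet)
        else:
            self.buffer.integrated_queue.append(packet)

buf = PacketBuffer()
qp = PacketQueuingProcessor(buf)
qp.enqueue({"seq": 1}, "integrated")
qp.enqueue({"seq": 2}, "distributed")
print(len(buf.integrated_queue), len(buf.distributed_queue))  # 1 1
```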
  • the destination specification module 230 has the function of specifying the destination of an input packet.
  • the destination specification module 230 includes a destination specification table 232 , a destination specifying process switchover module 234 , and a distributed/integrated processing switchover table 236 .
  • the destination specification table 232 is designed to store pathway information including a relay destination of each packet and is constructed in the form of, for example, a MAC table or a routing table.
  • the distributed/integrated processing switchover table 236 is designed to identify whether each input packet is to be processed in the distributed processing mode or in the integrated processing mode. The details of these tables will be discussed later.
  • the destination specifying process switchover module 234 searches the distributed/integrated processing switchover table 236 with regard to each input packet to determine the processing type of the input packet.
  • the destination specifying process switchover module 234 further searches the destination specification table 232 based on header information included in the input packet to specify a transfer destination of the input packet.
  • the statistical information table 280 is designed to store statistical information, such as a flow rate of each input packet, and is used in a mode switchover process described later. The details of this table will be discussed later.
  • the line IF module 260 interfaces with the physical lines.
  • FIG. 6 is an explanatory view showing one example of the distributed/integrated processing switchover table 236 .
  • the distributed/integrated processing switchover table 236 includes a flow type FTY, layer 2 header information L 2 HD, layer 3 header information L 3 HD, layer 4 header information L 4 HD, and a processing mode DD.
  • the flow type FTY stores a code for unequivocally identifying the type of a flow.
  • the terminology ‘flow’ represents a set of packets sortable into a group according to the header information of the packets.
  • the layer 2 header information L 2 HD includes VLAN ID, UPRI, additional information L 2 OT, and layer 2 multicast classification L 2 UM.
  • the VLAN ID stores an identification code of a VLAN (virtual LAN or virtually grouped LAN) which the flow belongs to as one piece of flow identification information.
  • the UPRI stores a packet priority or ‘User Priority’ in the layer 2 .
  • the additional information L 2 OT stores, for example, ‘Type’ or information for identifying a protocol in the upper layer 3 .
  • the layer 2 multicast classification L 2 UM stores unicast/multicast specification in the layer 2 .
  • the layer 3 header information L 3 HD includes TOS, additional information L 3 OT, and layer 3 multicast classification L 3 UM.
  • the TOS stores ‘Type of Service’ or information used to specify the priority order of packet transmission as one piece of the flow identification information.
  • the additional information L 3 OT stores, for example, ‘Protocol’ or information for identifying a protocol in the upper layer 4 .
  • the layer 3 multicast classification L 3 UM stores unicast/multicast specification in the layer 3 .
  • the layer 4 header information L 4 HD stores, for example, ‘Port Number’ or information for identifying an application program in the upper layers 5 through 7 as one piece of the flow identification information.
  • the processing mode DD stores a processing type (integrated processing/distributed processing) determined with regard to each flow type FTY.
  • the update control of the processing mode DD or the mode switchover process will be described later in detail.
  • the conditions or the items set in the distributed/integrated processing switchover table 236 of this illustrated example are not restrictive but are only illustrative. Any arbitrary conditions or items may thus be set in the distributed/integrated processing switchover table 236 .
  • the distributed/integrated processing switchover table 236 may additionally include information regarding a physical line which a packet is input from.
  • the switchover of the processing mode DD is based on the amount of load applied to the network relay apparatus 10 (more specifically, the amount of packets received by the network relay apparatus 10 ). The switchover of the processing mode DD will be discussed later in detail.
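A possible in-memory model of such a switchover table, keyed by a few of the header fields listed above; the FlowKey fields and the example entries are hypothetical and only illustrate the lookup.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """Hypothetical flow key built from the header fields described above."""
    vlan_id: int
    upri: int          # layer 2 user priority
    tos: int           # layer 3 Type of Service
    l4_port: int       # layer 4 port number
    multicast: bool

# distributed/integrated processing switchover table (illustrative entries only)
switchover_table = {
    FlowKey(vlan_id=10, upri=0, tos=0, l4_port=80, multicast=False): "integrated",
    FlowKey(vlan_id=10, upri=7, tos=46, l4_port=5060, multicast=False): "distributed",
    FlowKey(vlan_id=20, upri=0, tos=0, l4_port=0, multicast=True): "distributed",
}

def lookup_processing_mode(key: FlowKey, default="integrated"):
    # unknown flows fall back to the low-power integrated mode in this sketch
    return switchover_table.get(key, default)

print(lookup_processing_mode(FlowKey(10, 7, 46, 5060, False)))  # distributed
```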
  • FIG. 7 is an explanatory view showing another example of the distributed/integrated processing switchover table 236 .
  • the switchover of the processing mode DD with regard to each flow type is based on the load of the flow type.
  • the processing mode is changeable between the integrated processing mode with the low power consumption and the distributed processing mode with the high processing performance according to the characteristic of each packet type or according to the load of each packet type.
  • the network relay apparatus of this embodiment thus desirably reduces the power consumption, while assuring the high processing performance.
  • FIG. 8 is an explanatory view showing one example of the statistical information table 280 .
  • the statistical information table 280 includes statistical information SI, an upper threshold UB, and a lower threshold LB with regard to each flow type.
  • the flow type FTY is described previously with reference to FIGS. 6 and 7 .
  • the statistical information SI includes a number of accumulated packets SPC, a number of accumulated bytes SBC, a previous arrival time SLT, an average number of packets SPPS, and an average number of bytes SBPS.
  • the number of accumulated packets SPC represents an integration value of the number of input packets with regard to each flow type FTY.
  • the number of accumulated bytes SBC represents an integration value of the number of bytes of the input packets with regard to each flow type FTY.
  • the previous arrival time SLT represents a time when a packet was input last time.
  • the average number of packets SPPS represents the number of packets input per second.
  • the average number of bytes SBPS represents the number of bytes input per second.
  • the upper threshold UB and the lower threshold LB represent an upper limit value and a lower limit value of the average number of packets SPPS and are used in the mode switchover process discussed later.
  • the average number of packets SPPS is used as an index representing the amount of load applied to the network relay apparatus 10 .
  • the average number of bytes SBPS may be used as the index representing the amount of the applied load.
  • the combination of the average number of packets SPPS with the average number of bytes SBPS may be used as the index representing the amount of the applied load.
  • the contents or the items set in the statistical information table 280 of the illustrated example are not restrictive but are only illustrative. Any arbitrary contents or items may thus be set in the statistical information table 280 .
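The per-flow statistics could be maintained along the following lines; the one-second averaging window and all names are assumptions made for this sketch.

```python
import time

class FlowStats:
    """Hypothetical per-flow counters mirroring SPC, SBC, SLT, SPPS, and SBPS."""
    def __init__(self):
        self.packets = 0          # accumulated packets (SPC)
        self.bytes = 0            # accumulated bytes (SBC)
        self.last_arrival = None  # previous arrival time (SLT)
        self.pps = 0.0            # average packets per second (SPPS)
        self.bps = 0.0            # average bytes per second (SBPS)
        self._window_start = time.monotonic()
        self._window_packets = 0
        self._window_bytes = 0

    def record(self, packet_len, now=None):
        now = time.monotonic() if now is None else now
        self.packets += 1
        self.bytes += packet_len
        self.last_arrival = now
        self._window_packets += 1
        self._window_bytes += packet_len
        elapsed = now - self._window_start
        if elapsed >= 1.0:  # recompute the per-second averages once per second
            self.pps = self._window_packets / elapsed
            self.bps = self._window_bytes / elapsed
            self._window_start = now
            self._window_packets = 0
            self._window_bytes = 0

stats = {"flow-1": FlowStats()}
stats["flow-1"].record(packet_len=1500)
```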
  • FIG. 9 is an explanatory view showing the structure of a frame FM adopted in the network relay apparatus 10 .
  • the frame FM includes an in-device header INH, a layer 2 header L 2 HD, a layer 3 header L 3 HD, a layer 4 header L 4 HD, and data DT.
  • the in-device header INH includes a processing identifier INI, a flow type INT, an in-device sequence number INS, a destination processing unit INA, a destination line IND, an in-device priority INP, and additional information INO.
  • the processing identifier INI is a code for identifying the processing mode of a packet between the integrated processing mode and the distributed processing mode.
  • the flow type INT is a code for identifying the flow type FTY of the packet.
  • the in-device sequence number INS represents a sequence number assigned by the distributed processing unit receiving an input packet.
  • the destination processing unit INA is a code for specifying a processing unit as a transfer destination of the packet.
  • the destination line IND is a code for specifying a physical line used for output of the packet.
  • the in-device priority INP represents the priority of the packet in the network relay apparatus 10 .
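A rough data-structure sketch of the frame FM and its in-device header INH, with field names mapped to the codes above; the Python types and defaults are assumptions.

```python
from dataclasses import dataclass

@dataclass
class InDeviceHeader:
    """Hypothetical in-device header mirroring the INH fields described above."""
    processing_id: str      # INI: 'integrated' or 'distributed'
    flow_type: int          # INT: flow type code
    sequence_number: int    # INS: assigned by the receiving distributed unit
    dest_unit: str          # INA: destination processing unit, e.g. 'DPU#2'
    dest_line: int          # IND: physical line used for output
    priority: int           # INP: in-device priority
    extra: bytes = b""      # INO: additional information

@dataclass
class Frame:
    in_device_header: InDeviceHeader
    l2_header: bytes
    l3_header: bytes
    l4_header: bytes
    data: bytes

hdr = InDeviceHeader("integrated", flow_type=3, sequence_number=42,
                     dest_unit="IPU#1", dest_line=2, priority=1)
frame = Frame(hdr, b"\x00" * 14, b"\x00" * 20, b"\x00" * 20, b"payload")
```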
  • the following describes the series of processing performed in the integrated processing mode ( FIG. 2 ) and in the distributed processing mode ( FIG. 3 ).
  • FIG. 10 is a flowchart showing a series of processing performed at packet reception time. It is assumed that a packet is received from the physical line # 0 of the distributed processing unit DPU# 1 ( FIGS. 2 and 3 ).
  • the line IF module 260 transfers the received packet to the buffer processing module 220 according to a packet processing priority included in the header of the packet (step S 100 ).
  • the buffer processing module 220 allocates a vacant in-device header INH to the received packet and accumulates the packet with the vacant in-device header INH into the packet buffer 250 (step S 102 ). Concurrently the buffer processing module 220 sends the header information (L 2 HD, L 3 HD, L 4 HD) of the received packet to the destination specifying process switchover module 234 (step S 104 ).
  • the destination specifying process switchover module 234 searches the distributed/integrated processing switchover table 236 by the received header information of the packet as a key (step S 106 ).
  • the destination specifying process switchover module 234 selects the processing mode DD representing the determined processing type and specifies the settings of the flow type FTY and the sequence number (step S 108 ).
  • the processing flow is then branched off to two separate processing flows of the respective processing modes (step S 110 ).
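The reception-time steps S 100 through S 110 might look roughly like the following; the flow key and the dictionary layout are invented for this sketch.

```python
def on_packet_reception(packet, packet_buffer, switchover_table):
    """Hypothetical reception-time processing (steps S100 through S110)."""
    # S102: allocate an in-device header and buffer the packet
    packet["in_device_header"] = {}
    packet_buffer.append(packet)
    # S104/S106: look up the switchover table by the packet's header information
    flow_key = (packet["vlan_id"], packet["tos"], packet["l4_port"])
    mode = switchover_table.get(flow_key, "integrated")
    # S108: record the selected processing mode and an in-device sequence number
    packet["in_device_header"]["mode"] = mode
    packet["in_device_header"]["sequence"] = len(packet_buffer)
    # S110: branch into the per-mode processing flow (FIG. 11 or FIG. 12)
    return mode

buffer, table = [], {(10, 0, 80): "distributed"}
pkt = {"vlan_id": 10, "tos": 0, "l4_port": 80}
print(on_packet_reception(pkt, buffer, table))  # distributed
```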
  • FIG. 11 is a flowchart showing a processing flow executed in the integrated processing mode.
  • the destination specifying process switchover module 234 of the distributed processing unit DPU# 1 ( FIG. 2 ) notifies the buffer processing module 220 of the processing mode DD, the flow type FTY, and the sequence number specified at step S 108 in the flowchart of FIG. 10 (step S 200 ).
  • the buffer processing module 220 reads out a corresponding packet specified by the notified information from the packet buffer 250 and updates the in-device header INH of the read-out packet.
  • the buffer processing module 220 respectively stores the setting of the processing mode DD, the setting of the flow type FTY, and the sequence number into the processing identifier INI, the flow type INT, and the in-device sequence number INS.
  • the buffer processing module 220 transfers the packet with the updated in-device header INH to the crossbar switch 300 (step S 202 ).
  • the crossbar switch 300 then transfers the packet with the updated in-device header INH to the processing unit (integrated processing unit IPU# 1 ) specified based on the destination processing unit INA included in the updated in-device header INH (step S 204 ).
  • the buffer processing module 120 of the integrated processing unit IPU# 1 ( FIG. 2 ) accumulates the received packet into the packet buffer 150 and sends the header information of the received packet to the destination specifying process switchover module 134 .
  • the destination specifying process switchover module 134 searches the destination specification table 132 by the received header information of the packet as a key, in order to specify items a) through e), including the processing priority in the device, the processing unit serving as the transfer destination of the packet, and the physical line used for output of the packet.
  • the destination specifying process switchover module 134 then notifies the buffer processing module 120 of the specification of these items (step S 206 ).
  • the buffer processing module 120 reads out a corresponding packet specified by the notified information from the packet buffer 150 and updates the in-device header INH of the read-out packet.
  • the buffer processing module 120 respectively stores the setting of the processing priority in the device, the setting of the processing unit as the transfer destination of the packet, and the setting of the physical line used for output of the packet into the in-device priority INP, the destination processing unit INA, and the destination line IND.
  • the buffer processing module 120 transfers the packet with the updated in-device header INH to the crossbar switch 300 (step S 208 ).
  • the crossbar switch 300 then transfers the packet with the updated in-device header INH to the processing unit (distributed processing unit DPU# 2 ) specified based on the destination processing unit INA included in the updated in-device header INH (step S 210 ).
  • the packet queuing processor 224 of the distributed processing unit DPU# 2 ( FIG. 2 ) stores the received packet into the integrated processing queue 254 in the packet buffer 250 (step S 212 ).
  • the packet queuing processor 224 then notifies the packet readout controller 222 of the status of ready for transmission (step S 214 ).
  • the processing flow then proceeds to a packet transmission process.
  • the integrated processing unit 100 collectively performs the search for the processing unit as the transfer destination and for the physical line used for output of the packet.
  • the distributed processing unit DPU# 1 receiving a packet thus simply transfers the received packet to the integrated processing unit IPU# 1 after determination of the processing type of the received packet.
  • FIG. 12 is a flowchart showing a processing flow executed in the distributed processing mode.
  • the destination specifying process switchover module 234 of the distributed processing unit DPU# 1 ( FIG. 3 ) searches the destination specification table 232 by the received header information of the packet as a key, in order to specify the respective items a) through e) explained previously with reference to FIG. 11 .
  • the destination specifying process switchover module 234 notifies the buffer processing module 220 of the specification of these items (step S 300 ).
  • the buffer processing module 220 reads out a corresponding packet specified by the notified information from the packet buffer 250 and updates the in-device header INH of the read-out packet as concretely described above with reference to FIG. 11 .
  • the buffer processing module 220 transfers the packet with the updated in-device header INH to the crossbar switch 300 (step S 302 ).
  • the crossbar switch 300 then transfers the packet with the updated in-device header INH to the processing unit (distributed processing unit DPU# 2 ) specified based on the destination processing unit INA included in the updated in-device header INH (step S 304 ).
  • the packet queuing processor 224 of the distributed processing unit DPU# 2 stores the received packet into the distributed processing queue 252 in the packet buffer 250 and sends the header information of the packet to the destination specifying process switchover module 234 (step S 306 ).
  • the destination specifying process switchover module 234 searches the destination specification table 232 by the received header information of the packet as a key, in order to specify the items a), b), and e) explained previously with reference to FIG. 11 and then notifies the buffer processing module 220 of the specification of these items (step S 308 ).
  • the buffer processing module 220 reads out a corresponding packet specified by the notified information from the packet buffer 250 and updates the in-device header INH of the read-out packet. The details of the update are described above with reference to FIG. 11 .
  • the buffer processing module 220 notifies the packet readout controller 222 of the status of ready for transmission (step S 310 ). The processing flow then proceeds to the packet transmission process.
  • the distributed processing unit 200 (distributed processing unit DPU# 1 ) receiving a packet performs the search for the processing unit as the transfer destination and for the physical line used for output of the packet.
  • FIG. 13 is a flowchart showing the details of the packet transmission process of sending a packet to the external device.
  • the packet order controller 270 of the distributed processing unit DPU# 2 ( FIGS. 2 and 3 ) sends a readout control signal to the packet readout controller 222 .
  • the packet readout controller 222 reads out a packet from a corresponding processing queue and sends the read-out packet to the line IF module 260 (step S 400 ).
  • the process of generating the readout control signal (hereafter may be referred to as ‘transmission order control’) will be discussed in detail below.
  • the line IF module 260 outputs the received packet from the physical line # 2 specified for output of the packet (step S 402 ).
  • FIG. 14 is an explanatory view showing the details of the transmission order control performed by the packet order controller 270 .
  • the transmission order control includes an integrated processing state CPM, a distributed processing transient state CDM, and a distributed processing state DPM.
  • the integrated processing state CPM shows the behavior of the packet order controller 270 in the integrated processing mode.
  • the packet order controller 270 gives a packet readout instruction to read out a packet from the integrated processing queue 254 , while inhibiting readout of a packet from the distributed processing queue 252 .
  • the packet readout instruction is given to the packet readout controller 222 as discussed previously with reference to FIG. 13 .
  • when the processing mode switchover notification ‘integrated processing to distributed processing’ is sent to the packet order controller 270 , the processing state is shifted from the integrated processing state CPM to the distributed processing transient state CDM.
  • the distributed processing transient state CDM shows the behavior of the packet order controller 270 in a transient state from the integrated processing mode to the distributed processing mode.
  • the packet order controller 270 preferentially performs readout of a packet from the integrated processing queue 254 .
  • the packet order controller 270 gives a packet readout instruction to read out a packet from the integrated processing queue 254 , while inhibiting readout of a packet from the distributed processing queue 252 .
  • upon satisfaction of either one of the two conditions given below, the processing state is shifted from the distributed processing transient state CDM to the distributed processing state DPM:
  • Condition 1: The readout and transmission, from the integrated processing queue 254 , of all the packets having smaller sequence numbers than the in-device sequence number INS of a first packet stored in the distributed processing queue 252 is completed;
  • Condition 2: The processing time exceeds a predetermined time-out period.
  • the distributed processing state DPM shows the behavior of the packet order controller 270 in the distributed processing mode.
  • the packet order controller 270 gives a packet readout instruction to read out a packet from the distributed processing queue 252 , while inhibiting readout of a packet from the integrated processing queue 254 .
  • when the processing mode switchover notification ‘distributed processing to integrated processing’ is sent to the packet order controller 270 , the processing state is shifted from the distributed processing state DPM to the integrated processing state CPM.
  • the presence of the transient state from the integrated processing mode to the distributed processing mode or the distributed processing transient state CDM effectively prevents inversion of the transmission order of packets that may arise due to the difference in processing speed between the integrated processing mode and the distributed processing mode.
  • the distributed processing mode generally has a higher processing speed than the integrated processing mode.
  • An integrated processing transient state or a transient state from the distributed processing mode to the integrated processing mode is thus not provided in this embodiment, although the integrated processing transient state may be provided according to the requirements.
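The three-state transmission order control, including the two shift conditions, could be sketched as follows; the timeout value, method names, and queue representation are assumptions made for illustration.

```python
import time
from collections import deque

class PacketOrderController:
    """Hypothetical transmission order control over the two processing queues."""
    def __init__(self, timeout_s=0.1):
        self.state = "CPM"            # integrated processing state
        self.timeout_s = timeout_s
        self._transient_since = None

    def notify_switchover(self, direction):
        if direction == "integrated_to_distributed":
            self.state, self._transient_since = "CDM", time.monotonic()
        elif direction == "distributed_to_integrated":
            self.state = "CPM"

    def next_packet(self, integrated_q: deque, distributed_q: deque):
        if self.state == "CDM":
            # Condition 1: all packets with smaller sequence numbers than the first
            # packet waiting in the distributed queue have left the integrated queue.
            head_seq = distributed_q[0]["seq"] if distributed_q else None
            drained = not integrated_q or (head_seq is not None
                                           and integrated_q[0]["seq"] >= head_seq)
            # Condition 2: the transient state has exceeded the time-out period.
            timed_out = time.monotonic() - self._transient_since > self.timeout_s
            if drained or timed_out:
                self.state = "DPM"
        if self.state == "DPM":
            return distributed_q.popleft() if distributed_q else None
        # CPM and CDM both read from the integrated processing queue
        return integrated_q.popleft() if integrated_q else None

ctl = PacketOrderController()
iq = deque([{"seq": 1}, {"seq": 2}])
dq = deque([{"seq": 3}])
ctl.notify_switchover("integrated_to_distributed")
print(ctl.next_packet(iq, dq)["seq"])  # 1: integrated packets drain first
```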
  • FIG. 15 is a flowchart showing the details of the mode switchover process with respect to each flow type.
  • the mode setting module 146 of the integrated processing unit IPU# 1 assigned as the master integrated processing unit ( FIG. 4 ) mainly takes charge of the processing of FIG. 15 .
  • the mode setting module 146 first checks the current processing mode (step S 500 ). According to a concrete procedure, the mode setting module 146 reads the setting of the processing mode DD in the distributed/integrated processing switchover table 236 ( FIGS. 6 and 7 ) of the distributed processing unit DPU# 1 (step S 502 ).
  • the mode setting module 146 compares the value of the statistical information in the statistical information table 280 ( FIG. 8 ) with the value of the upper threshold UB (step S 510 ). In the illustrated example of FIG. 8 , the average number of packets SPPS is compared with the value of the upper threshold UB. When it is determined at step S 512 that the value of the statistical information is less than the value of the upper threshold UB, the mode setting module 146 waits for a predetermined time period (step S 530 ) and returns the processing flow to step S 500 to continue the monitoring.
  • when it is determined at step S 512 that the value of the statistical information is not less than the value of the upper threshold UB, the mode setting module 146 updates the setting of the processing mode DD to the ‘distributed processing’ in the distributed/integrated processing switchover table 236 (step S 514 ).
  • the mode setting module 146 then sends the processing mode switchover notification ‘integrated processing to distributed processing’ to the packet order controllers 270 of the distributed processing units DPU# 1 , DPU# 2 , and DPU# 3 (step S 516 ).
  • the mode setting module 146 waits for the predetermined time period (step S 530 ) and returns the processing flow to step S 500 to continue the monitoring.
  • the mode setting module 146 compares the value of the statistical information in the statistical information table 280 with the value of the lower threshold LB (step S 520 ). In the illustrated example of FIG. 8 , the average number of packets SPPS is compared with the value of the lower threshold LB. When it is determined at step S 522 that the value of the statistical information is greater than the value of the lower threshold LB, the mode setting module 146 waits for the predetermined time period (step S 530 ) and returns the processing flow to step S 500 to continue the monitoring.
  • when it is determined at step S 522 that the value of the statistical information is not greater than the value of the lower threshold LB, the mode setting module 146 updates the setting of the processing mode DD to the ‘integrated processing’ in the distributed/integrated processing switchover table 236 (step S 524 ).
  • the mode setting module 146 then sends the processing mode switchover notification ‘distributed processing to integrated processing’ to the packet order controllers 270 of the distributed processing units DPU# 1 , DPU# 2 , and DPU# 3 (step S 526 ).
  • the mode setting module 146 waits for the predetermined time period (step S 530 ) and returns the processing flow to step S 500 to continue the monitoring.
  • the mode switchover process sets the processing mode in the distributed processing unit DPU# 1 , based on the contents of the distributed/integrated processing switchover table 236 shown in FIG. 6 .
  • the mode switchover process of this embodiment may also be performed with respect to each flow type to set the processing mode based on the contents of the distributed/integrated processing switchover table 236 shown in FIG. 7 .
  • the distributed/integrated processing switchover table 236 may be designed to unconditionally activate a preset processing mode (for example, the distributed processing mode) for packets belonging to a specific flow type (for example, a multicast flow or a broadcast flow), irrespective of the load level of the specific flow type.
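One monitoring pass of the mode switchover process of FIG. 15 might be sketched as below, with the hysteresis between the upper threshold UB and the lower threshold LB; the table layouts and names are hypothetical.

```python
def mode_switchover_step(flow, switchover_table, stats_table, notify):
    """One monitoring pass of a hypothetical mode setting module (FIG. 15 sketch)."""
    current = switchover_table[flow]          # S500/S502: read the current mode
    spps = stats_table[flow]["spps"]          # average packets per second
    upper = stats_table[flow]["upper"]        # upper threshold UB
    lower = stats_table[flow]["lower"]        # lower threshold LB

    if current == "integrated" and spps >= upper:         # S510/S512
        switchover_table[flow] = "distributed"            # S514
        notify("integrated_to_distributed")               # S516
    elif current == "distributed" and spps <= lower:      # S520/S522
        switchover_table[flow] = "integrated"             # S524
        notify("distributed_to_integrated")               # S526
    # otherwise wait for the next monitoring interval (S530)

table = {"flow-1": "integrated"}
stats = {"flow-1": {"spps": 90_000, "upper": 80_000, "lower": 20_000}}
mode_switchover_step("flow-1", table, stats, notify=print)  # integrated_to_distributed
print(table["flow-1"])  # distributed
```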
  • the mode setting module 146 works in cooperation with the destination specifying process switchover module 234 to attain the function of the mode selector in the claims of the invention. As clearly understood from the above explanation, it is preferable to change over the processing mode based on at least either one of the amount of load applied to the network relay apparatus and the flow type or the packet type.
  • the network relay apparatus of the first embodiment changes over the processing mode between the integrated processing mode with the low power consumption and the distributed processing mode with the high processing performance according to the amount of load applied to the network relay apparatus or according to the packet type of the packet received by the network relay apparatus.
  • the network relay apparatus of this arrangement desirably reduces the power consumption, while assuring the high processing performance.
  • FIG. 16 is an explanatory view schematically illustrating the configuration of a network relay apparatus 10 a in a second embodiment.
  • the primary difference from the configuration of the first embodiment shown in FIGS. 1 and 2 is that the network relay apparatus 10 a of the second embodiment has an overall distributed processing unit 200 a and protocol processing units 400 , in place of the integrated processing units 100 .
  • the protocol processing unit 400 performs the protocol management, the overall control, and the synchronization and updating of the tables, among the various functions of the integrated processing unit 100 discussed previously with reference to FIG. 1 .
  • the distributed processing unit DPU# 3 assigned as the overall distributed processing unit 200 a performs the destination search with regard to each packet, among the various functions of the integrated processing unit 100 discussed previously with reference to FIG. 1 .
  • the overall distributed processing unit 200 a accordingly functions as a sort of integrated processing unit.
  • the distributed processing unit DPU# 1 determines the processing type of an input packet.
  • the packet is transferred from the distributed processing unit DPU# 1 to the overall distributed processing unit 200 a via a crossbar switch CSW.
  • the distributed processing unit DPU# 3 receives the transferred packet and performs a series of processing similar to the processing performed by the integrated processing unit IPU# 1 as described previously with reference to FIG. 2 .
  • the operations of the network relay apparatus 10 a in the distributed processing mode are similar to those discussed previously with reference to FIG. 3 .
  • the network relay apparatus of the second embodiment effectively reduces the power consumption, while assuring the high processing performance, like the network relay apparatus of the first embodiment.
  • the overall distributed processing unit 200 a undertakes part of the functions performed by the integrated processing unit 100 of the first embodiment. This arrangement desirably lowers the total production cost of the network relay apparatus.
  • FIG. 17 is an explanatory view schematically illustrating the configuration of a network relay apparatus 10 b in a third embodiment.
  • the structures of the respective constituents (the integrated processing units 100 , the distributed processing units 200 , and the crossbar switch 300 ) in the network relay apparatus 10 b of the third embodiment are identical with those in the network relay apparatus 10 of the first embodiment.
  • the primary difference from the first embodiment is that the network relay apparatus 10 b of the third embodiment has a low loading mode and a high loading mode for the distributed processing.
  • FIG. 17 shows a series of operations in response to input of a multicast packet from the physical line # 0 of the distributed processing unit DPU# 1 .
  • the physical lines # 0 and # 1 of the distributed processing unit DPU# 2 and the physical lines # 1 and # 2 of the distributed processing unit DPU# 3 are specified as distribution destinations of the input multicast packet.
  • the distributed processing unit DPU# 1 first determines a replication type of an input multicast packet.
  • the replication type has two modes, the low loading mode and the high loading mode, and is determined according to the number of packets to be processed, as discussed later in detail.
  • the distributed processing unit DPU# 1 searches the pathway information to specify the distributed processing units and the physical lines as the transfer destinations of the input multicast packet.
  • the distributed processing unit DPU# 1 subsequently replicates the multicast packet to generate a packet replica, updates the header information of the generated packet replica, and transfers the packet replica with the updated header information to the distributed processing unit specified as the transfer destination.
  • the distributed processing unit DPU# 1 repeats such replication of the input multicast packet and transfer of the packet replica a specific number of times that corresponds to the number of the physical lines specified for output of the multicast packet (four times in the illustrated example of FIG. 17 ).
  • FIG. 18 is an explanatory view showing the network relay apparatus 10 b in the high loading mode.
  • the distributed processing unit DPU# 1 searches the pathway information to specify the distributed processing units as the transfer destinations of the input multicast packet. Retrieval of the physical lines used for output of the packet is not required in the high loading mode.
  • the distributed processing unit DPU# 1 transfers the input multicast packet to the crossbar switch CSW.
  • the crossbar switch CSW replicates the multicast packet to generate a packet replica and transfers the packet replica to the distribution processing unit specified as the transfer destination.
  • the distributed processing unit receiving the packet replica searches the pathway information to specify the physical lines used for output of the packet. Such replication of the multicast packet and output of the packet replica is repeated a specific number of times that corresponds to the number of physical lines specified for output of the multicast packet.
  • FIG. 19 is a graph showing a changeover criterion between the low loading mode and the high loading mode, with the amount of consumed power as ordinate and the load as abscissa.
  • the load is represented by the number of packets to be processed per unit time.
  • the number of packets to be processed per unit time is obtainable from, for example, information used for creation of a multicast table.
  • in the high loading mode, the crossbar switch CSW is activated in a replication mode.
  • the high loading mode accordingly requires high power consumption even for a small number of packets to be processed, but advantageously allows for high-speed packet replication by taking advantage of the characteristics of the crossbar switch CSW.
  • in the low loading mode ( FIG. 17 ), on the other hand, the replication of the multicast packet is performed by the distributed processing unit 200 (distributed processing unit DPU# 1 ).
  • the low loading mode accordingly has low power consumption when there are a small number of packets to be processed.
  • the disadvantage is that the distributed processing unit 200 has only limited processing performance and may thus be incapable of processing a large number of packets.
  • the changeover of the replication type between the low loading mode and the high loading mode is based on the load level computed from the number of packets input into the network relay apparatus 10 b . Any arbitrary criterion may be adopted for the changeover of the replication type. In one modification, the changeover of the replication type may be based on the load level computed from the number of packets output from the network relay apparatus 10 b.
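The changeover between the two replication modes could be approximated as a simple threshold on the packet rate; the threshold value and the function names are illustrative only.

```python
def select_replication_mode(packets_per_second, threshold_pps=10_000):
    """Hypothetical changeover criterion between the two replication modes (FIG. 19)."""
    # below the crossover the distributed unit replicates (low power, limited throughput);
    # above it the crossbar switch replicates (higher base power, higher throughput)
    return "high_loading" if packets_per_second > threshold_pps else "low_loading"

def replicate(packet, egress_lines, mode):
    if mode == "low_loading":
        # the ingress distributed unit makes one copy per specified output line
        return [dict(packet, dest=line) for line in egress_lines]
    # in the high loading mode the crossbar switch would fan the packet out instead;
    # here only the resulting copies are modeled
    return [dict(packet, dest=line, via="crossbar") for line in egress_lines]

lines = [("DPU#2", 0), ("DPU#2", 1), ("DPU#3", 1), ("DPU#3", 2)]
mode = select_replication_mode(packets_per_second=500)
print(mode, len(replicate({"payload": b"x"}, lines, mode)))  # low_loading 4
```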
  • FIG. 20 is an explanatory view showing one example of the distributed/integrated processing switchover table 236 adopted in the third embodiment. The details of this table are described previously with reference to FIG. 6 .
  • the changeover of the replication type between the high loading mode and the low loading mode is performed by a procedure similar to the mode switchover process described above with reference to FIG. 15 .
  • the network relay apparatus of the third embodiment effectively reduces the power consumption, while assuring the high processing performance, like the network relay apparatus of the first embodiment.
  • the replication type is changed over between the low loading mode with the low power consumption and the high loading mode with the high processing performance. This arrangement desirably reduces the power consumption of the network relay apparatus in the process of relaying multicast packets.
  • the processing mode is changed over between the integrated processing mode and the distributed processing mode without any differentiation between the functions of the network relay apparatus as a bridge in the layer 2 and as a router in the layer 3 .
  • This application of the changeover of the processing mode is, however, neither essential nor restrictive but may be modified arbitrarily.
  • in one modified structure, the distributed processing is unconditionally performed in the layer 2 , while the processing mode is changed over between the integrated processing mode and the distributed processing mode in the layer 3 .
  • the mode setting module is provided in each integrated processing unit.
  • the mode setting module is, however, not restrictively located in the integrated processing unit but may be provided at any arbitrary position.
  • the mode setting module may be provided in each distributed processing unit or may be provided as an independent component in the network relay apparatus.
  • the former modified structure of locating the mode setting module in each distributed processing unit desirably reduces the load of monitoring between the respective processing units.
  • the network relay apparatus of the third embodiment has the two replication modes (the low loading mode and the high loading mode), in addition to the two relay modes (the integrated processing mode and the distributed processing mode) as discussed above.
  • This application is, however, neither essential nor restrictive but may be modified arbitrarily.
  • the relay modes and the replication modes may be independently adopted in the network relay apparatus.
  • the network relay apparatus may be designed to have only the two replication modes.
  • the respective distributed processing units have the distributed/integrated processing switchover tables of the same contents.
  • This application is, however, neither essential nor restrictive but may be modified arbitrarily.
  • the respective distributed processing units may have the distributed/integrated processing switchover tables of different contents.
  • the processing type may be determined according to the amount of load applied to each distributed processing unit.
  • Each of the distributed processing units may be designed to have multiple different distributed/integrated processing switchover tables.
  • the processing type is changed over according to the amount of load applied to the network relay apparatus.
  • the criterion for changing over the processing type is, however, not restricted to the amount of load applied to the network relay apparatus but may be any arbitrary condition.
  • for example, in the event of some failure arising in the integrated processing unit during operation in the integrated processing mode, the processing mode may be changed over from the integrated processing mode to the distributed processing mode.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008-276586 2008-10-28
JP2008276586A JP4913110B2 (ja) 2008-10-28 2008-10-28 Network relay apparatus (ネットワーク中継装置)

Publications (2)

Publication Number Publication Date
US20100103933A1 (en) 2010-04-29
US8705540B2 (en) 2014-04-22

Family

ID=42117446

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/582,901 Expired - Fee Related US8705540B2 (en) 2008-10-28 2009-10-21 Network relay apparatus

Country Status (2)

Country Link
US (1) US8705540B2 (en)
JP (1) JP4913110B2 (ja)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012035639A1 (ja) * 2010-09-16 2012-03-22 Fujitsu Limited Data sharing system, terminal, and data sharing method
WO2016122562A1 (en) * 2015-01-30 2016-08-04 Hewlett Packard Enterprise Development Lp Replicating network communications
CN108307434B (zh) 2017-01-12 2023-04-07 马维尔以色列(M.I.S.L.)有限公司 用于流控制的方法和设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030072271A1 (en) * 2001-09-17 2003-04-17 Simmons Steve M. System and method for router data distribution
US7263091B1 (en) * 2002-05-23 2007-08-28 Juniper Networks, Inc. Scalable routing system
US7289503B1 (en) 2002-07-10 2007-10-30 Juniper Networks, Inc. Systems and methods for efficient multicast handling
US20080044181A1 (en) * 2006-08-21 2008-02-21 Juniper Networks, Inc. Multi-chassis router with multiplexed optical interconnects
US7518986B1 (en) * 2005-11-16 2009-04-14 Juniper Networks, Inc. Push-based hierarchical state propagation within a multi-chassis network device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10190715A (ja) * 1996-12-27 1998-07-21 Matsushita Electric Works Ltd Network switching system
JP3645733B2 (ja) * 1999-02-24 2005-05-11 Hitachi Ltd Network relay apparatus and network relay method
JP2002158709A (ja) * 2000-11-22 2002-05-31 Nec Corp Load distribution processing method, apparatus, and recording medium with program recorded thereon
JP3719222B2 (ja) * 2002-02-27 2005-11-24 NEC Corporation Packet processing system

Also Published As

Publication number Publication date
JP2010109426A (ja) 2010-05-13
US20100103933A1 (en) 2010-04-29
JP4913110B2 (ja) 2012-04-11

Legal Events

Date Code Title Description
AS Assignment

Owner name: ALAXALA NETWORKS CORPORATION,JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AKAHANE, SHINICHI;REEL/FRAME:023605/0213

Effective date: 20091104

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220422