NZ724695B2 - Built in alternate links within a switch - Google Patents


Info

Publication number
NZ724695B2
Authority
NZ
New Zealand
Prior art keywords
port
ports
data
alternate
primary
Prior art date
Application number
NZ724695A
Other versions
NZ724695A (en)
Inventor
John R Lagana
Aristito Lorenzo
Ronald M Plante
Mohammad H Raza
David G Stone
Original Assignee
Fiber Mountain Inc
Priority date
Filing date
Publication date
Application filed by Fiber Mountain Inc filed Critical Fiber Mountain Inc
Priority claimed from PCT/US2015/023077, published as WO 2015/148970 A1
Publication of NZ724695A
Publication of NZ724695B2


Abstract

The network switch architecture permits modifications to the network topology in real time without the need for manual intervention. In this architecture, a switching core is capable of switching data paths directly from the ingress or egress of the switching core to alternate destination ports in real time, either under software or hardware control.

Description

Patent Application for BUILT IN ALTERNATE LINKS WITHIN A SWITCH INVENTORS: Mohammad H. Raza, Cheshire, CT (US); David G. Stone, Irvine, CA (US); Aristito Lorenzo, Plantsville, CT (US); Ronald M. Plante, Prospect, CT (US); John R. Lagana, West Nyack, NY (US) CROSS-REFERENCE TO RELATED APPLICATIONS This application claims priority to co-pending U.S. Provisional Application No. 61/972,121, filed on March 28, 2014, entitled "Built in Alternate Links Within a Switch", which is incorporated herein in its entirety by reference.
BACKGROUND Field The present disclosure relates generally to network switches typically used in data centers, and more particularly to network switches providing additional port capabilities that become active in the event of a failure occurring in a primary port, or that may be used for additional purposes.
Description of the Related Art Current data network switch architectures have a finite number of ports 108 in the switching core, as seen in Fig. 1. A switch of a given size therefore uses all its available ports for switching data between the ports (primary ports 108), leaving no active ports available to compensate in case of a primary port 108 failure. A network switch consists of a number of interface ports 108, switch logic 106, and a control processor 102. Some network switches may also have dedicated packet processors to perform routing and other functions. A network switch may also have a Management Interface Port 104 that enables the switch to communicate with Management Controller 100, which configures the settings within the Network Switch 10.
Each port 108 connects to Switch Logic 106 via data path 112. In operation, Switch Logic 106 receives data from a particular port 108 and transfers or switches the data to an outgoing port 108 as defined by configuration settings from the Management Controller 100. Fig. 2 shows more details of a connection between two ports 108 within the Switch Logic 106. The basic function of a Network Switch 10 consists of receiving a physical layer data stream on an input port 108, extracting the data, and then transmitting that data out as a physical layer data stream on an output port. The two main blocks for this process within the Network Switch are transceiver ports 108 to external medium 118 and Switch Logic 106, which in turn contains a number of functional blocks.
A port 108 consists of a transceiver 132 and a connector 130. The transceiver 132 has a receiver, which receives the data from a remote end via external medium 118, and a transmitter, which transmits the data to a remote end via an external medium 118.
Examples of external mediums include wireless, Cat 6, Cat 6a, optical fiber, or other physical connection mediums. In a network switch, a port 108 receives a physical layer data signal from the external medium 118, which then converts the signal from the physical layer signal into an electrical data signal, separates the recovered timing information from the physical layer signal, and clocks the data via connection 112 into a Serializer/Deserializer 120 (SerDes) as a serial data stream. The SerDes 120 converts the serial data stream from the receiver into a parallel interface format for the next stage, a Media Access Control (MAC) layer 122. The MAC layer 122 is an interface between the Logical Link Control (LLC) sublayer and the network's physical layer, and provides the Data Link Layer functions including frame delimiting and identification, error checking, MAC addressing, and other functions. The frame delimiting and identification functions locate packet boundaries and extract the packets from the incoming data stream. The packet is parsed by the MAC layer 122 and the header fields are extracted and passed via interface bus 110 to the Central Processing Unit (CPU) 102, or a dedicated Packet Processor (not shown), which interprets the header information. In an Ethernet packet, for example, the header contains the source MAC address, destination MAC address, and other information needed to determine the packet type and destination of the packet. The Network Switch 10 is configured by the Management Controller 100, which communicates with the Management Interface Port 104 via control path 101 to exchange information, such as configuration information, alarm information, and status information. The Routing Tables 128 contain the information necessary to direct an incoming packet on a particular port 108 to an outgoing packet on a particular port 108.
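The MAC-layer header extraction described above can be illustrated with a minimal Python sketch. This is not the patent's implementation, only an example of parsing the Ethernet II header fields (destination MAC, source MAC, EtherType) that the MAC layer 122 passes to the CPU; the function name and example frame are illustrative.

```python
import struct

def parse_ethernet_header(frame: bytes) -> dict:
    """Extract the 14-byte Ethernet II header: destination MAC,
    source MAC, and EtherType, in network (big-endian) byte order."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac: bytes) -> str:
        return ":".join(f"{b:02x}" for b in mac)

    return {"dst_mac": fmt(dst), "src_mac": fmt(src), "ethertype": ethertype}

# Illustrative frame: broadcast destination, arbitrary source, IPv4 EtherType.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
header = parse_ethernet_header(frame)
```

The CPU (or packet processor) would use fields like `dst_mac` to consult the Routing Tables 128.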
The Routing Tables 128 may be determined by discovery protocol software within the Network Switch 10, or the CPU 102 may receive configuration information from the Management Controller 100 to set up particular routing table configurations. CPU 102, or the dedicated packet processor, looks up the output destination route for the packet and modifies the outgoing header if necessary; the Switch Fabric 124 then transfers the packet to an outgoing queue in the MAC 122. The outgoing MAC layer 122 formats the outgoing packet for transmission and performs such functions as generating the frame check sequence for the outgoing packet. The completed packet is then fed to the outgoing SerDes 120, which converts the parallel data stream into a serial data stream. The serial data stream is fed to the outgoing transceiver port 108, which converts the data stream into a physical layer signal, adds in the physical layer timing, and transmits the data signal out of port 108 to external medium 118.
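The routing-table lookup step above can be sketched in Python. This is a hypothetical model, not the patent's design: a learned table maps a destination MAC address to an outgoing port, and an unknown destination is flooded to all ports, a common switch behavior assumed here for illustration.

```python
class RoutingTable:
    """Hypothetical sketch of Routing Tables 128: destination MAC -> port."""

    def __init__(self):
        self._routes = {}  # destination MAC address -> outgoing port number

    def learn(self, mac: str, port: int) -> None:
        self._routes[mac] = port

    def lookup(self, dst_mac: str, flood_ports) -> list:
        # Known destination: forward to its single port.
        # Unknown destination: flood to every candidate port.
        port = self._routes.get(dst_mac)
        return [port] if port is not None else list(flood_ports)

table = RoutingTable()
table.learn("00:11:22:33:44:55", 3)
known = table.lookup("00:11:22:33:44:55", flood_ports=range(4))
unknown = table.lookup("aa:bb:cc:dd:ee:ff", flood_ports=range(4))
```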
As seen in Figs. 1 and 2, within current Network Switches 10, the number of SerDes 120, the number of MACs 122, the size of Routing Tables 128, the capability of the Switch Fabric 124, the CPU 102 processing power, the packet processing power, and/or some other design constraint results in the Network Switch being able to support only a finite number of ports 108.
Some network switches have dedicated standby ports, also called redundant ports, which can be used in the event of a primary port failure. Standby or redundant ports are intended to be manually configured for active use in the event of a failure in a primary port. A primary port failure can occur due to a failure in the switching core, the physical port transceivers, or a link connecting the primary port to the remote end. In any of these primary port failure cases, the network loses a connection path (i.e., a link) and therefore loses the ability to transmit all data between two end points in the network, unless an alternate or redundant path is established. However, a network architecture that relies on redundant and/or standby ports to be enabled in case of a failure of a primary port necessitates that such redundant or standby ports remain idle and do not carry data until needed. As a result, network data traffic throughput is still limited to the maximum number of active ports capable of being supported by the Network Switch.
Other network architectures refrain from utilizing all the available bandwidth of a Network Switch, so that in the event of a failure of a link, other ports in the Network Switch will have sufficient capacity available to handle the additional load from the failed link. However, this results in each Network Switch operating at less than maximum bandwidth, and requires additional Network Switches to support a full bandwidth capability.
A data center network architecture is generally considered a static configuration, such that once a data center network is built out, the main architecture does not change and relatively few changes are made to the data center network. This is because each architectural modification or change requires sending personnel to the data center to manually move components (or equipment) and/or to change interconnections between the components (or equipment) within the data center, or to reprogram equipment in the data center. Each architectural modification or change to the data center network incurs cost, sometimes significant cost, and increases the risk of errors in the new data center network architecture, and the risk of failures resulting from the architectural modification or change.
Because of these risks, in most cases architectural modifications or changes to a completed data center network are restricted wherever possible to only replacing failed components, making minor upgrades to components, adding minor new features or capabilities, or adding a few new connections. Generally, with such architectural modifications or changes, there is little change to the core data flow in the data center network.
BRIEF SUMMARY OF THE INVENTION The present disclosure provides network switches that can be incorporated into data center networks to simplify interconnection methodologies in the data center network, reduce the components (or equipment) needed in the data center network, and reduce the manpower required to maintain the data center network. The network switch according to one embodiment includes a set of ports, a switch logic unit, a data path interconnection unit, and a control unit. Each port within the set of ports is preferably configured to receive data from an external medium, and to transmit data to an external medium. The switch logic unit is capable of supporting a finite number of ports, wherein the number of ports in the set of ports is greater than the finite number of ports. The data path interconnection unit is connected to the set of ports by a set of port data paths and connected to the switch logic unit by a set of switch logic data paths, where the number of port data paths is equal to the number of ports in the set of ports, and the number of switch logic data paths is equal to the finite number of ports supported by the switch logic unit. The control unit is connected to the data path interconnection unit and the switch logic unit, where the control unit is configured to control the switch logic unit to switch data on one switch logic data path to another switch logic data path, and wherein the control unit is configured to control the data path interconnection unit such that data on one switch logic data path is directed to one or more ports in the set of ports.
BRIEF DESCRIPTION OF THE DRAWINGS The figures depict embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures illustrated herein may be employed without departing from the principles described herein, wherein: Fig. 1 is a block diagram of a current network switch architecture; Fig. 2 is a block diagram of a current network switch architecture detailing the functions within the Switch Logic; Fig. 3 is a block diagram of an exemplary embodiment of a network switch according to the present disclosure with multiplexor logic to select outgoing ports; Fig. 4 is a block diagram of an exemplary embodiment of a network switch according to the present disclosure with alternate ports configurable to permit implementation of additional ports; Fig. 5 is a block diagram of an exemplary embodiment of a network switch according to the present disclosure with alternate ports created by additional circuitry to permit implementation of additional ports; Fig. 6 is a block diagram of another exemplary embodiment of a network architecture according to the present disclosure with alternate ports configurable to permit parallel or bonded paths within the data center network; Fig. 7 is a block diagram of another exemplary embodiment of a network architecture according to the present disclosure with alternate ports configurable to permit automatic reconfiguring of the data center network; Fig. 8 is a block diagram of another exemplary embodiment of a network switch according to the present disclosure with alternate destination connections; Fig. 9 is a flow diagram for a method for automatically reconfiguring a data center network according to the present disclosure.
Fig. 10 is a block diagram of another exemplary embodiment of a network switch according to the present disclosure with alternate ports configurable to permit additional ports to be utilized as Test/Monitor ports; Fig. 11 is a block diagram of another exemplary embodiment of a network architecture according to the present disclosure with alternate ports configured as Test/Monitor ports connected to a Test/Monitor platform. DETAILED DESCRIPTION Referring to Fig. 3, an exemplary embodiment of a Network Switch according to the present disclosure is shown. In this embodiment, the Network Switch 20 includes CPU 102, Management Interface Port 104, Switch Logic 106, one or more multiplexers 300, 302, one or more primary ports 108, and one or more alternate ports 402. As noted above, each port 108, 402 includes a transceiver and a connector to connect to an external medium. The Network Switch 20 is configured so that the number of physical ports on the Network Switch 20 can be increased to support one or more alternate ports 402 while maintaining the same number of primary ports 108 on Switch Logic 106. In this embodiment, the data traffic on data path 112 from Switch Logic 106 can be switched by Multiplexor (MUX) 300 or 302 onto two or more physical output ports 108 or 402. In this embodiment, the electrical signals on data paths 112 from Switch Logic 106 are passed through MUX 300 or 302 which, under the control of CPU 102 (or dedicated hardware), selects a particular port or ports from the number of primary ports 108 or alternate ports 402 from which to transmit and receive data. For example, for a three port multiplexor 302, the CPU 102 can select any one of the three ports in which to transmit and receive the data traffic via data path 304. The port 108 or 402 selected by the MUX 300 or 302 connects to Switch Logic 106 via a data path 112.
The Switch Logic receives data traffic from port 108 or 402 and transfers the data traffic to an outgoing port 108 or 402 as defined by the configuration settings from the CPU 102. As noted above, Management Controller 100 can provide the CPU with information to control the multiplexers 300, 302 and the Switch Logic 106 via the Management Interface Port 104. Further, the Routing Tables may also contain the information used to direct incoming packets on a particular port 108 or 402 to an outgoing packet on a particular port 108 or 402.
In the embodiment of Fig. 3, the Network Switch 20 has a plurality of primary ports 108 and alternate ports 402, each connected to Switch Logic 106 via an associated MUX 300 or 302 and data paths 112 and 304. Each MUX 300 or 302 may have two or more outputs that enable data traffic on data path 112 to be multiplexed to primary ports 108 or alternate ports 402. The multiplexors are controlled by CPU 102 via Control Bus 110 to select one or more of the paths 304 to connect to active port 108 or 402. Thus, using the multiplexers enables the Network Switch 20 to set many ports 108 or 402 as active. For example, the CPU 102 can automatically program the one or more multiplexors 300, 302 to switch the transmission path from a primary port 108 to the alternate port 402 in the case of a dual MUX 300, or, in the case of a multiple port MUX 302, to one of the alternate ports 402. The Network Switch 20 can have as many standby or inactive ports as can be supported by the multiplexer logic. In a different embodiment, the multiplexor switchover may be performed by dedicated hardware instead of CPU 102.
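The CPU-controlled multiplexer selection above can be modeled with a small Python sketch. The class and port labels are hypothetical illustrations, not the patent's hardware: one switch-logic data path fans out to a primary port and one or more alternate ports, and the CPU chooses which physical port carries the traffic.

```python
class PortMultiplexer:
    """Hypothetical model of MUX 300/302: one data path 112 fanned out to
    a primary port 108 and alternate ports 402; the CPU selects one."""

    def __init__(self, primary: str, alternates):
        self.ports = [primary] + list(alternates)
        self.active = primary  # by default the primary port carries traffic

    def select(self, port: str) -> None:
        # Only ports physically wired to this multiplexer may be selected.
        if port not in self.ports:
            raise ValueError(f"port {port} not wired to this multiplexer")
        self.active = port

# A three-port MUX (one primary, two alternates), as in the example above.
mux = PortMultiplexer(primary="108A", alternates=["402A", "402B"])
mux.select("402A")  # CPU switches the transmission path to an alternate port
```

Switching back to the primary port after a repair is simply `mux.select("108A")` again, mirroring the switch-back behavior described below for Fig. 3.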
In some embodiments, the alternate ports may be of the same type and speed rating as the primary ports. In other embodiments, the alternate ports may be of a different type and speed rating than the primary ports. For example, a primary port 108 may be implemented using a Cat 6a 10GE interface while the alternate port 402 might be a 10GB optical interface. In another example, a primary port 108 may be implemented using a 25GB optical interface while the alternate port 402 may be a Cat 6a 10GE interface. In the latter case, the bandwidth of the alternate port would limit the actual data rate of the data stream transmitted via the slower path.
In the case where the CPU 102 has switched from the primary port 108 to an alternate port 402, the CPU 102 can be instructed to automatically program the multiplexor 300 or 302 to switch the transmission path back from the active alternate port 402 to the primary port 108 via the Management Controller 100 and the Management Interface Port 104.
Referring now to Fig. 4, another exemplary embodiment of a Network Switch according to the present disclosure is shown. In this embodiment, the Network Switch 30 includes CPU 102, Management Interface Port 104, Switch Logic 406, one or more primary ports 108, and one or more alternate ports 402. As noted above, each port 108, 402 includes a transceiver and a connector to connect to an external medium. The Switch Logic 406 of this embodiment can support a greater number of primary ports 108 and alternate ports 402 than the number of Media Access Control (MAC) addresses available in Switch Logic 106. In this embodiment, the Switch Logic 406 contains a port switch 404 which, under the control of CPU 102, selects a particular port or ports 108 or 402 to feed into SerDes 120 and then into MAC 122. CPU 102, or the dedicated packet processor, looks up the output destination route for the packet and modifies the outgoing header if necessary; the Switch Fabric 124 then transfers the packet to an outgoing queue in the MAC 122. By consolidating the transceiver ports, Switch Fabric 124 can support all the data paths 112 and data paths 400, and the MACs 122 can be switched between the primary ports 108 and the alternate ports 402 to create a primary port 108 and alternate port 402 application. For example, if port 108A fails, the Network Switch 30 can enable port 402A as an active alternate port. In another embodiment, there may be a SerDes 120 associated with each port and the port switch 404 selects the output from all the SerDes 120 to feed into the MAC 122.
In other embodiments, Switch Logic 106 may have the capability of coupling multiple Switch Logic 106 units together using expansion ports (not shown) on Switch Logic 106. Such expansion ports can in some cases be repurposed to provide alternate ports 402 for switchover capabilities from the primary port 108. Depending upon the capabilities within Switch Logic 106 and the configuration of expansion ports, alternate ports 402 may potentially be set as additional primary ports, or the alternate ports 402 may remain as alternate ports that can be configured by the CPU 102 as active ports in the event a primary port 108 is removed from an active state.
Referring now to Fig. 5, another exemplary embodiment of a Network Switch according to the present disclosure is shown. In this embodiment, the Network Switch 40 includes CPU 102, Management Interface Port 104, Switch Logic 106, Serializer/Deserializer (SerDes) 500, one or more primary ports 108, and one or more alternate ports 402. As noted above, each port 108, 402 includes a transceiver and a connector to connect to an external medium. By adding SerDes 500, which is external to the Switch Logic 106, the bandwidth of the Switch Logic 106 can be extended to support the alternate ports 402, while the Serializer/Deserializer within the Switch Logic 106 supports the primary ports 108. The external SerDes 500 can be added into the Network Switch 40 and connected to the Switch Logic 106 via bus 502.
In another embodiment, additional physical ports exist for creating multiple paths to a single destination to form a larger bandwidth pipe. Referring to Fig. 6, paths 602A and 602B originate in ports 402A and 402B and connect Switch 204 to Server 200C. In a parallel path configuration, port 402A and/or port 402B can be set active such that, along with port 108B, the ports can transmit and receive data traffic independent of each other.
In another embodiment, additional physical ports exist for bonding multiple ports together to form a larger bandwidth pipe with a single logical path. Again referring to Fig. 6, in a bonded configuration, port 402A and/or port 402B can be set active such that, along with port 108B, the ports can be bonded. This implements a single data path between Switch 204 and Server 200C that is now the sum of the bandwidths of path 600, path 602A and/or path 602B. The network sees these multiple paths instead as a single larger bandwidth connection.
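The bonded-path bandwidth arithmetic above can be written out as a one-line sketch. The path names come from Fig. 6; the 10 Gb/s rate per path is an illustrative assumption, not a figure from the patent.

```python
# Bonded configuration: the logical path's bandwidth is the sum of the
# member paths' bandwidths. Rates (bits/s) are assumed for illustration.
member_links = {"600": 10e9, "602A": 10e9, "602B": 10e9}
bonded_bandwidth = sum(member_links.values())  # single logical path capacity
```

With three 10 Gb/s members, the network sees one 30 Gb/s connection between Switch 204 and Server 200C.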
It is noted that each of the primary ports 108 being active from a switching core perspective means that the primary ports 108 have both an active transmit path and an active receive path. The alternate ports 402 may be set as inactive ports, where the alternate ports 402 are not transmitting or receiving data, so that they can act as standby or redundant ports. For example, in the event of failure of a primary port 108, the CPU 102 can be instructed by the Management Controller 100 via Management Interface Port 104 to set a failed primary port 108 as inactive and to activate an alternate port 402 to function as a primary port. This alternate port 402 may connect to the same endpoint device as the failed primary port or may have a completely different route through the data center network to an entirely different endpoint. In the event the failed primary port is repaired or replaced, the CPU can then be instructed by the Management Controller 100 via Management Interface Port 104 to set the replaced or repaired primary port 108 back as active, and to set the alternate port 402 as inactive. As another example, in the event an alternate port 402 functioning as a primary port 108 fails, the CPU can then be instructed by the Management Controller 100 via Management Interface Port 104 to set another alternate port 402 active as a primary port, route data traffic to that alternate port 402, and set the failed alternate port to inactive.
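The failover sequence just described (mark the failed primary inactive, activate an alternate in its place) can be sketched as a minimal state model in Python. Class and port names are hypothetical; this only illustrates the bookkeeping, not the hardware switchover.

```python
class SwitchPorts:
    """Hypothetical failover sketch: on primary-port failure, the failed
    port is set inactive and the next available alternate port takes over."""

    def __init__(self, primaries, alternates):
        self.active = set(primaries)   # ports currently transmitting/receiving
        self.standby = list(alternates)  # inactive alternate (standby) ports

    def fail_over(self, failed_primary: str):
        self.active.discard(failed_primary)  # failed primary goes inactive
        if self.standby:
            replacement = self.standby.pop(0)  # alternate becomes a primary
            self.active.add(replacement)
            return replacement
        return None  # no alternate available to take over

ports = SwitchPorts(primaries=["108A", "108B"], alternates=["402A", "402B"])
took_over = ports.fail_over("108A")
```

Restoring a repaired primary would reverse the bookkeeping: re-add it to `active` and return the alternate to `standby`.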
It should be noted that for optical fiber applications, this implementation method does not use optical couplers, such that there is no loss of optical power from the input port to the output port.
While the active primary ports 108 and active alternate ports 402 will be capable of transmitting and receiving data, any standby primary port 108 and any standby alternate port 402 can be configured to maintain an active link status, such as sending Keep Alive signals to the remote end and monitoring for Loss of Signal (LOS) or other alarm and status indicators, ensuring the path is available for active use when required.
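The standby-port readiness check implied above can be sketched as a small predicate. The status field names are hypothetical; the idea is simply that a standby port is eligible for activation only while its link is up, keep-alives are answered, and no LOS alarm is raised.

```python
def port_ready(status: dict) -> bool:
    """Hypothetical readiness check for a standby port: link up,
    keep-alive answered, and no Loss of Signal (LOS) alarm pending."""
    return (
        status.get("link_up", False)
        and status.get("keepalive_ok", False)
        and not status.get("los_alarm", True)  # missing status counts as alarmed
    )

# A healthy standby port versus one whose keep-alives have stopped.
healthy = {"link_up": True, "keepalive_ok": True, "los_alarm": False}
stale = {"link_up": True, "keepalive_ok": False, "los_alarm": False}
```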
All ports 108 and 402 can be configured and monitored by CPU 102 to track the physical connection and maintain per port status for the Management Controller.
Reconfigurable Network The present disclosure also provides methods for automatically reconfiguring Network Switches in data center networks. Fig. 7 provides an exemplary embodiment of Network Switch 20, 30 or 40 (referred to in this section as Network Switch 204) according to the present disclosure incorporated into a data center network. In this embodiment, the capability exists to create a software controlled touchless reconfigurable network, as shown in Fig. 7, where the Management Control Software 100 can modify the routing tables of Switch 204 to create alternate routes within the network. This differs from a redundant path concept that provides an alternate path to the same endpoint or to the same endpoint application. In this case the implementation routes data traffic to new destinations or provides additional parallel paths to an existing destination to increase bandwidth on demand without the need for manual intervention, with the proviso that the physical connections between the endpoints are pre-established. Once the physical connections have been made between the endpoints, a Management Controller 100 can reconfigure the network without requiring personnel to manually reconnect the interconnections.
An exemplary method is provided for automatically reconfiguring a data center network that has been previously constructed with known physical connections between the various network devices commonly deployed within a data center network, such as servers 200, storage devices 206, interconnects 202, and one or more Network Switches 204 of the present disclosure. The discovery of the logical and physical interconnections between the network devices, and the discovery of the end to end path and interconnections between devices, is achieved through the Management Controller 100 using known network discovery tools, such as exchanging conventional network discovery data packets, such as address resolution protocol (ARP) packets, broadcast, and multicast packets, between all the network devices (or links) within the data center network. The Management Controller 100 can exchange conventional network discovery data packets with each network device through data paths 101 to discover the network topology.
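The topology-discovery step above can be illustrated with a minimal Python sketch. This is not the patent's discovery protocol: it merely assumes each device reports which remote device sits behind each local port (as ARP/broadcast exchanges would reveal), and the controller folds those reports into an adjacency map. Device and port names are illustrative.

```python
def build_topology(neighbor_reports: dict) -> dict:
    """Fold per-device neighbor reports into an adjacency map.

    neighbor_reports: {device: [(local_port, remote_device), ...]}
    returns:          {device: {local_port: remote_device}}
    """
    topology = {}
    for device, links in neighbor_reports.items():
        for local_port, remote in links:
            topology.setdefault(device, {})[local_port] = remote
    return topology

# Illustrative reports gathered over data paths 101.
reports = {
    "Switch204": [(1, "Server200A"), (2, "Storage206")],
    "Server200A": [(0, "Switch204")],
}
topo = build_topology(reports)
```

From such a map, the Management Controller could derive the Routing Tables described next.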
Through the discovery process, the Network Switch 204 Routing Tables can be created by the Management Controller 100 and used by the Network Switch 204 to route data traffic to endpoint destinations within the data center network, and to destinations outside the data center network. With a known data center network configuration, the Management Controller 100 can then monitor data traffic within the data center network.
The exemplary data center architecture using the Network Switch architectures according to the present disclosure provides a data center network implementation that can use the primary ports and/or alternate ports to automatically reconfigure the data center network. Referring again to Fig. 7, the Network Switch 204 has the capacity to support additional ports as described above. The Management Controller 100 can automatically instruct the CPU 102 in the Network Switch 204 to set either one or more primary ports 108 and/or one or more alternate ports 402 as active to transmit and receive data in the data center network. By setting up these additional routes, data traffic can be switched from one path, e.g., to destination "A", to a different path, e.g., to destination "B" (seen in Fig. 7), permitting data to be sent to different endpoints. Referring to Fig. 8, the Network Switch 204 has been configured to set alternate ports 402A-402D as primary ports that are connected to endpoint destinations A-E within the data center network. To configure the Network Switch 204 in this way, the Management Controller 100 can instruct the CPU 102 in the Network Switch 204 to set alternate ports 402A-402D as active primary ports to transmit and receive data in the data center network. By setting up these additional routes, data traffic can be switched from one path, e.g., to destination "A", to a different path, e.g., to destination "E" (seen in Fig. 8), permitting data to be sent to different endpoints.
Referring to Fig. 9, as the Management Controller 100 monitors the data traffic within the data center network it determines if the data traffic has changed such that an alternate port should be activated, or the network is to be reconfigured (at step 900). If the Management Controller detects that a primary port 108 has failed, the Management Controller 100 instructs the Network Switch 204 to set the failed primary port 108 inactive (at step 902) and set an alternate port 402 active so that the alternate port takes over as a primary port to transmit and receive data traffic (at step 904). If the Management Controller 100 detects that the network should be reconfigured to, for example, connect the switch to a different endpoint on a particular link, the Management Controller 100 instructs the Network Switch 204 to set one or more alternate ports 402 active (at step 906), and set one or more primary ports 108 inactive so that the alternate port takes over as a primary port to transmit and receive data traffic to different endpoints (at step 908). If the Management Controller 100 detects that the network should be reconfigured to, for example, address additional bandwidth needed on a particular link, the Management Controller 100 instructs the Network Switch 204 to create a parallel path (at step 910) by setting one or more alternate ports 402 active (at step 906), so that data traffic can be transmitted and received across the multiple paths in parallel to the same endpoint. If the Management Controller 100 detects that the network should be reconfigured to, for example, address additional bandwidth with a single logical connection needed on a particular link, the Management Controller 100 instructs the Network Switch 204 to create a bonded path (at step 912) by setting one or more alternate ports 402 active (at step 906), and then synchronize the primary ports 108 to the alternate port such that data traffic can be transmitted and received across the multiple paths in parallel as a single data path to the same endpoint (at step 914).
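The Fig. 9 decision flow can be summarized as a small dispatch function in Python. The event names and action strings are hypothetical labels for the steps cited above; the step numbers in the comments refer to the flow diagram.

```python
def reconfigure(event: str) -> list:
    """Hypothetical mapping of the Fig. 9 decision flow to ordered actions."""
    if event == "primary_failed":
        # Steps 902, 904: deactivate the failed primary, activate an alternate.
        return ["deactivate_primary", "activate_alternate"]
    if event == "new_endpoint":
        # Steps 906, 908: alternate port takes over toward a new endpoint.
        return ["activate_alternate", "deactivate_primary"]
    if event == "need_bandwidth_parallel":
        # Steps 910, 906: add a parallel path to the same endpoint.
        return ["create_parallel_path", "activate_alternate"]
    if event == "need_bandwidth_bonded":
        # Steps 912, 906, 914: bond paths into one logical connection.
        return ["create_bonded_path", "activate_alternate", "synchronize_ports"]
    return []  # step 900: no change detected, keep monitoring

actions = reconfigure("primary_failed")
```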
Network Switch With Built in Test/Monitoring Ports The present disclosure also provides methods for automatically incorporating a Test/Monitoring platform into a communication path in data center networks. Fig. 10 provides an exemplary embodiment of Network Switch 20, 30 or 40 (referred to in this section as Network Switch 204) according to the present disclosure that supports incorporating a Test/Monitoring platform into a communication path of a data center network. Fig. 11 provides an exemplary embodiment of a data center network with a Network Switch 204 that incorporates the Test/Monitoring platform into a communication path of the data center network. Generally, the Network Switch 204 permits the creation of multiple network paths in parallel which can pull data traffic directly off the Network Switch 204 and forward the transmit and receive data traffic to Test/Monitor platform 218, with zero latency added to the original communication path.
In this exemplary embodiment, the Management Controller 100 can instruct the CPU 102 of the Network Switch 204 to couple the transmit and receive paths of data traffic associated with a primary port 108 to alternate ports which are connected to Test/Monitor Platform 218 within the data center network. As a result, the Network Switch 204 according to the present disclosure removes the need to manually move path connections for test and monitoring purposes or modifying the packet headers.
As an example, the Network Switch 204 architecture according to this embodiment creates three paths associated with one primary port 108, e.g., primary port 108C. In this exemplary embodiment, data traffic on primary port 108C can be routed by Switch Logic 106 to any other primary port or alternate port set active as part of a communication path, and the data traffic on primary port 108C is to be tested/monitored.
To test/monitor data traffic on primary port 108C, the Management Controller 100 can instruct CPU 102 of Network Switch 204 to create two Test/Monitor port paths 604 and 606, which are based on the primary receive communication path 600 and transmit communication path 602 of port 108C. While maintaining the primary communication path between the primary port 108C and any other primary port, the Switch Logic also couples the receive communication path 600 to transmit communication path 604 connected to alternate port 402A, and couples the transmit communication path 602 to transmit communication path 606 connected to alternate port 402B. The alternate ports 402A and 402B are connected to the Test/Monitor platform 218 via communication paths 608 and 610. This configuration creates a test connection bridge which duplicates the original point to point path onto three separate paths: the original path from source to destination, the test monitor path from the source to the destination mirrored over to the test connection port A, and the monitor path from the destination to the source return path mirrored over to the test connection port B, while adding zero latency to the original path.
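The test connection bridge just described can be sketched in Python as a mirroring step: the original packet is delivered unmodified, and a copy is queued toward the monitor port. Class and port names are hypothetical; real mirroring happens in the switch hardware, so this only illustrates the duplication logic.

```python
class MirrorBridge:
    """Sketch of the test connection bridge: traffic on a primary port's
    receive ('rx') and transmit ('tx') paths is copied to two alternate
    ports feeding a Test/Monitor platform, leaving the original path intact."""

    def __init__(self, rx_mirror_port: str, tx_mirror_port: str):
        self.mirror_ports = {"rx": rx_mirror_port, "tx": tx_mirror_port}
        self.captured = {"rx": [], "tx": []}  # copies sent to the monitor

    def forward(self, direction: str, packet: bytes, deliver) -> None:
        deliver(packet)                          # original path, unmodified
        self.captured[direction].append(packet)  # duplicate to the mirror port

# Mirror port 108C's receive path to 402A and transmit path to 402B.
bridge = MirrorBridge(rx_mirror_port="402A", tx_mirror_port="402B")
delivered = []
bridge.forward("rx", b"hello", delivered.append)
```

The original destination still receives every packet; the monitor merely gets a copy, which is why no headers change and no latency is added on the primary path.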
Thus, no physical connections need to be moved, nor do headers need to be modified, such that in this data center network monitoring architecture the path connections can be passed unmodified to the Test/Monitor Platform 218.
It should be noted that while the embodiment described above shows communications from primary port 108C being routed to alternate ports 402A and 402B, any primary port 108 can be mapped to the transmit communication paths 604 and 606 on any alternate port 402, which can then be connected to input ports on the Test/Monitor Platform 218. In addition, rather than switching both the receive communication path 600 and transmit communication path 602 from a primary port 108 to one or more alternate ports 402 for test/monitoring purposes, the Switch Logic 106 may be instructed by the Management Controller 100 to only switch receive communication path 600 or transmit communication path 602 to one alternate port 402. In another embodiment, rather than switch all the data traffic from the primary port 108 to the alternate port 402, Switch Logic 106 may be instructed by Management Controller 100 to switch only certain types of data or packets based on defined criteria sent by the Management Controller 100 to the CPU 102 of the Network Switch 204.
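The criteria-based variant in the preceding paragraph can be sketched as a simple forwarding step in which every packet follows the primary path, and only packets matching a management-supplied predicate are additionally copied to the alternate port. The packet fields and the example predicate below are assumptions for illustration; the disclosure does not specify the criteria format.

```python
# Illustrative sketch of criteria-based mirroring: the primary path
# always carries the packet; only packets matching a predicate supplied
# by the management plane are copied to the alternate/test port.
# Packet fields and the predicate are hypothetical.

def forward(packet, primary_out, mirror_out, criteria):
    primary_out.append(packet)      # primary path always gets the packet
    if criteria(packet):            # management-supplied filter
        mirror_out.append(packet)   # copy to the alternate/test port

primary, mirror = [], []
is_tcp80 = lambda p: p.get("proto") == "tcp" and p.get("dport") == 80

for pkt in [{"proto": "tcp", "dport": 80}, {"proto": "udp", "dport": 53}]:
    forward(pkt, primary, mirror, is_tcp80)

assert len(primary) == 2  # all traffic stays on the primary path
assert len(mirror) == 1   # only matching traffic reaches the test port
```

Because the copy happens after the primary append, filtering mirrors traffic selectively without ever dropping or delaying packets on the original path, matching the zero-impact behavior described above.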
The resulting architecture therefore permits dedicated test monitoring paths to be built into the switch without requiring physically moving network connections or introducing latency into the network.
It should be noted that for optical fiber applications, this implementation method does not use optical couplers, such that there is no loss of optical power from the input port to the output port.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or computer program. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a system. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, systems and computer programs according to embodiments of the present disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
It will be understood that various modifications can be made to the embodiments of the present disclosure without departing from the spirit and scope thereof. Therefore, the above description should not be construed as limiting the disclosure, but merely as embodiments thereof. Those skilled in the art will envision other modifications within the scope and spirit of the invention as defined by the claims appended hereto.

Claims (15)

WE CLAIM :
1. A data center switch, comprising: a set of primary ports, wherein each port within the set of primary ports includes a transceiver and a connector to connect with an external data transmission medium; a set of alternate ports, wherein each port within the set of alternate ports includes a transceiver and a connector to connect with an external data transmission medium; a switch logic unit configured to interface with a finite number of ports, wherein the number of ports in the set of primary ports plus the number of ports in the set of alternate ports is greater than the finite number of ports; a data path interconnection unit physically connected to the set of primary ports and the set of alternate ports by a set of port data paths and physically connected to the switch logic unit by a set of switch logic data paths, wherein the number of port data paths in the set of port data paths is equal to the number of primary ports plus the number of alternate ports, and wherein the number of switch logic data paths in the set of switch logic data paths is equal to the finite number of ports interfacing with the switch logic unit; and a control unit operatively connected to the data path interconnection unit and the switch logic unit, wherein the control unit is configured to control the switch logic unit to switch data on one switch logic data path to another switch logic data path, wherein the control unit is configured to control the data path interconnection unit when the primary ports are set to an active state, such that: data received at the primary ports and transferred to one or more port data paths is directed to one switch logic data path; or data on one switch logic data path is directed to one or more port data paths and transmitted by one or more primary ports; and wherein the control unit is configured to control the data path interconnection unit so that the alternate ports are set to an active state and to control the data path interconnection unit such that when a primary port becomes inoperable the inoperable primary port is removed from the active state and one of the alternate ports is set to an active state, such that: data received at the active alternate port and transferred to one or more port data paths is directed to one switch logic data path; or data on one switch logic data path is directed to one or more port data paths and transmitted by the active alternate ports.
2. The data center switch according to claim 1, wherein the data path interconnection unit ses one or more multiplexers.
3. The data center switch according to claim 2, wherein at least one primary port and at least one alternate port are selectable by controlling the multiplexers.
4. The data center switch according to claim 1, further comprising one or more media access control layers, wherein at least one alternate port is enabled by the one or more media access control layers.
5. The data center switch according to claim 1, further comprising one or more serializer / deserializer blocks, wherein at least one alternate port is enabled by the one or more serializer / deserializer blocks.
6. The data center switch according to claim 1, wherein multiple ports in the set of primary ports and the set of alternate ports are configured to transmit and receive in parallel over multiple paths to a single end destination.
7. The data center switch according to claim 1, wherein multiple ports in the set of primary ports and the set of alternate ports are configured to transmit and receive in parallel over multiple paths configured as a single bonded path to a single end destination.
8. The data center switch according to claim 1, wherein a port in the set of primary ports or a port in the set of alternate ports is configured to receive data from an external medium of a first medium type, wherein the received data passes through the data path interconnection unit to the switch logic unit, wherein the switch logic converts the received data into a format for transmission onto a second data medium type, and wherein the converted data passes through the data path interconnection unit to a port in the set of primary ports or a port in the set of alternate ports capable of transmitting data onto an external medium of the second data medium type.
9. The data center switch according to claim 8, wherein the first medium type comprises an electrical medium type or an optical medium type.
10. The data center switch according to claim 8, wherein the second medium type comprises an electrical medium type or an optical medium type.
11. The data center switch according to claim 1, wherein the control unit can cause at least one port in the set of alternate ports to be used as a test/monitor port.
12. The data center switch according to claim 11, wherein the control unit can couple the transmit path from one port in the set of primary ports or the set of alternate ports to the port connected to the test/monitor port with no loss of signal quality or additional latency.
13. The data center switch according to claim 11, wherein the control unit can couple the receive path from one port in the set of primary ports or the set of alternate ports to the port connected to the test/monitor port with no loss of signal quality or additional latency.
14. The data center switch ing to claim 1, wherein the control unit can cause at least one port in the set of alternate ports to be used as a standby or redundant port to a primary port.
15. The data center switch according to claim 1, wherein the control unit can cause at least one port in the set of alternate ports to be used as a port to alternate destinations in the data center network. Fiber Mountain, Inc. By the Attorneys for the Applicant SPRUSON & FERGUSON Per: 27955959_1
NZ724695A 2014-03-28 2015-03-27 Built in alternate links within a switch NZ724695B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201461972121P 2014-03-28 2014-03-28
US61/972,121 2014-03-28
PCT/US2015/023077 WO2015148970A1 (en) 2014-03-28 2015-03-27 Built in alternate links within a switch

Publications (2)

Publication Number Publication Date
NZ724695A NZ724695A (en) 2021-09-24
NZ724695B2 true NZ724695B2 (en) 2022-01-06


Similar Documents

Publication Publication Date Title
US11121959B2 (en) Built in alternate links within a switch
AU2015283976B2 (en) Data center path switch with improved path interconnection architecture
US20200259713A1 (en) Transparent auto-negotiation of ethernet
CA2645995C (en) Network based endoscopic surgical system
US20080261641A1 (en) Redundant wireless base station
JP2010510741A (en) Communication system having master / slave structure
US20160323037A1 (en) Electro-optical signal transmission
US7660239B2 (en) Network data re-routing
EP1995148B1 (en) Transmission system for rail vehicles using two redundant bus systems
JP2016535498A (en) Data transmission system providing improved resiliency
US11381419B2 (en) Communication network
JP2009017190A (en) Sonet/sdh apparatus
WO2011061881A1 (en) Transmission system, transmission method, and communication apparatus
CN102215134B (en) Hot standby switcher of IP (Internet Protocol) code stream
CN106533771A (en) Network device and control information transmission method
NZ724695B2 (en) Built in alternate links within a switch
JP5712586B2 (en) Multiple radio apparatus and line control method by link aggregation
KR102197916B1 (en) Apparatus for duplexing data
KR102188479B1 (en) Apparatus for duplexing data
CN103840952A (en) Method used for MAC black hole prevention and corresponding distributed dual homed nodes