US20120327771A1 - Compact load balanced switching structures for packet based communication networks - Google Patents
- Publication number
- US20120327771A1 (application US 13/586,115)
- Authority
- US
- United States
- Prior art keywords
- packet
- data packets
- data
- switching
- switching fabric
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/101—Packet switching elements characterised by the switching fabric construction using crossbar or matrix
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3018—Input queuing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
Definitions
- The invention relates to the field of communications and more particularly to a scalable architecture for packet-based communication networking.
- Telecommunications networks have evolved from the earliest networks having few users with plain old telephone service (POTS) to networks in operation today interconnecting hundreds of millions of users with a wide variety of services including, for example, telephony, Internet, streaming video, and MPEG music. Central to these networks is the requirement for a switching fabric allowing different users to be connected either together or to a service provider. Supporting an increase in the number of users, connections and bandwidth are networks based upon segmentation, transmission, routing, detection and reconstruction of a signal. The segmentation results in a message being divided into segments, referred to as packets, and such networks are referred to as packet-switched networks.
- From the viewpoint of users, this process is transparent provided that the telecommunications network acts in a manner such that packetization, and all other processes, occur so that the user has available the services and information as required and "on demand."
- The user's perception of this "on demand" service varies substantially depending upon the service used. For example, when downloading most information via the Internet, a small delay is acceptable for text and photographs but not for streamed video unless a sufficient memory buffer exists.
- Amongst the most sensitive services is telephony, as the human perception of delay in voice is extremely acute. The result is that network providers prioritize packets according to information content, with priority information included as part of the header of a packet.
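The prioritization just described can be sketched with a small priority queue that drains delay-sensitive traffic first. This is an illustrative sketch only: the service names, priority values, and class names below are assumptions, not taken from the patent.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Illustrative priority levels (assumed): lower value = more delay-sensitive.
PRIORITY = {"voice": 0, "video": 1, "web": 2}

@dataclass(order=True)
class Packet:
    priority: int                       # read from the packet header
    seq: int                            # tie-breaker preserving arrival order
    payload: str = field(compare=False) # excluded from ordering

class PriorityEgressQueue:
    """Drains delay-sensitive packets (e.g. telephony) before bulk traffic."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def push(self, service: str, payload: str) -> None:
        heapq.heappush(self._heap,
                       Packet(PRIORITY[service], next(self._counter), payload))

    def pop(self) -> str:
        return heapq.heappop(self._heap).payload

q = PriorityEgressQueue()
q.push("web", "page")
q.push("voice", "hello")
q.push("video", "frame")
order = [q.pop(), q.pop(), q.pop()]
# Voice drains first, then video, then web, regardless of arrival order.
```

The `seq` tie-breaker keeps packets of equal priority in arrival order, which matters for the mis-sequencing concerns discussed later.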
- The switching fabric of current telecommunications packet networks is a massive mesh of large electronic cross-connect switches, interconnected generally by very high speed optical networks exploiting dense wavelength division multiplexing to provide interconnection paths offering tens of gigabits per second of transmission. Within this mesh are a limited number of optical switches, which generally provide protection switching and relatively slow allocation of bandwidth to accommodate demand.
- But the demands from users for increased services, increased bandwidth and flexible services are causing network operators to seek an alternative architecture. The alternative is "agile" networks: widely distributed implementations of packet switching, as necessary to provide dynamic routing/bandwidth very close to users and with rapidly shifting patterns as they access different services. Agility, to the network operators, implies the ability to rapidly deploy bandwidth on demand at fine granularity. Helping them in this is the evolution of access networks, which have to date been electrical at rates up to a few megabits per second but are now being replaced with optical approaches (often referred to as fiber-to-the-home or FTTH) with data rates of tens to hundreds of megabits per second to customers, and roadmaps to gigabit rates per subscriber.
- As the network evolves, and services become more flexible and expansive, speeds increase such that the network provider increasingly faces three problems: delay (the time taken to route packets across the network, where excessive delay in any single packet of a message prevents the message being completed), mis-sequencing (a mis-sequenced packet causes delay at the user, as the message cannot be completed until that packet arrives), and losses (packets lost to blocked connections within the network must be retransmitted, again causing delay).
- It is therefore desirable to address these issues within the network with a physical switching fabric. The invention disclosed provides such an architecture for distributed packet switching, wherein the fabric acts to balance the traffic load on different paths and network elements within the distributed packet switch. In doing so, the disclosed invention additionally removes the requirement for rapid reconfiguration of the packet switches, which has the added benefit of allowing the deployment of optical switches within the network that are slower and smaller than their electrical counterparts.
- In accordance with the invention there is provided a switching node for routing data packets arriving at the switching node within a communications network.
- The switching node contains a plurality of input ports, each of which receives data packets addressed to it from the broader communications network.
- Within the switching node are multiple memory switches, each implemented as a plurality of first memory queues, for storing packet data, coupled between a first switch matrix, for switching packet data into a memory queue of the plurality of first memory queues for storage, and a second switch matrix, for switching packet data retrieved from a memory queue of the plurality of first memory queues.
- The multiple memory switches are then coupled to a third switching matrix, which connects on one side to the plurality of input ports and on the other to the plurality of memory switches.
- The multiple memory switches are also coupled to a fourth switching matrix, with the plurality of memory switches on one side and the plurality of output ports on the other.
- At least one of the third and fourth switching matrices is implemented with a second set of multiple memory queues coupled between a fifth switch matrix and a sixth switch matrix.
- Packets of data arriving at the switching node are sequenced within the memory queues and memory switches, with the packets of data then being routed appropriately between the inputs and outputs using the multiple switching matrices.
- As a result, the switching node can meet all of the demands of the network provider in terms of quality of service, flexibility of provisioning to a user's varied demands for services, prioritizing of packet data switching based upon predetermined packet priorities, and dynamic bandwidth allocation between input and output ports.
- The control approach allows this to be achieved in an architecture where the loading of activities such as switching, memory queuing, etc. is balanced across the node.
- In another embodiment of the invention, the use of multiple memory queues and memory switches allows the switching node to store packet data having a lower priority in an earlier stage of the multi-stage memory queue.
- Additionally, the matrices coupled to the memory queues may be spatial switches, time division multiplexing switches, or a combination thereof.
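The memory-switch building block summarized above (a first switch matrix feeding parallel memory queues, drained by a second switch matrix) can be sketched functionally as follows. The shortest-queue policy used here is an assumed illustration of load balancing, not the patent's specified control algorithm, and all names are hypothetical.

```python
from collections import deque

class MemorySwitch:
    """A first switch matrix distributing into N parallel memory queues,
    drained by a second switch matrix acting as a concentrator."""
    def __init__(self, n_queues: int):
        self.queues = [deque() for _ in range(n_queues)]

    def distribute(self, packet) -> int:
        """First matrix: store the packet in the currently least-loaded queue."""
        i = min(range(len(self.queues)), key=lambda k: len(self.queues[k]))
        self.queues[i].append(packet)
        return i

    def concentrate(self):
        """Second matrix: retrieve the head packet of the first non-empty queue
        (a simple FIFO service discipline, assumed for illustration)."""
        for q in self.queues:
            if q:
                return q.popleft()
        return None

ms = MemorySwitch(4)
for p in range(8):
    ms.distribute(p)
loads = [len(q) for q in ms.queues]  # shortest-queue spreads 8 packets evenly
```

With eight packets over four queues, the shortest-queue rule leaves every queue holding exactly two packets, which is the balancing behaviour the node's control approach aims for.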
- FIG. 1 illustrates a prior art approach to packet switching using a centralized shared memory switch with queues.
- FIG. 2 illustrates a prior art packet switch using a three-stage Clos-like network.
- FIG. 3A illustrates a first embodiment of the invention wherein the load-balanced switch is implemented with an input queued crossbar switch with route and select switches.
- FIG. 3B illustrates a second embodiment of the invention wherein the load-balanced switch is implemented with an output queued crossbar switch and has the routing controller segmented with a control segment per switching stage.
- FIG. 4A illustrates a third embodiment of the invention wherein the load-balanced switch is implemented in a manner mimicking a three stage Clos fabric where the external links operate at the same speed as the internal links.
- FIG. 4B illustrates a fourth embodiment of the invention wherein the load-balanced switch is implemented in a manner mimicking a three stage Clos fabric but wherein the switch matrices and shuffle networks are of reduced functionality.
- FIG. 5 illustrates a fifth embodiment of the invention wherein the load balanced switch is implemented in a manner mimicking a three stage Clos fabric where the external links operate at twice the speed of the internal links.
- Referring to FIG. 1, shown is a prior art approach to a packet switch using a single stage of memory queues.
- A plurality of input ports 101 are connected to physical links within a communications network (not shown). These input ports 101 are coupled to an input multiplexer 102, which multiplexes the plurality of input packet data streams into a single data stream.
- The single data stream is then transported to a 1:N distribution switch 103, which is coupled to N parallel memory queues 104, each memory queue 104 allowing packets of data to be stored until retrieved.
- The N parallel memory queues 104 are in turn connected to an N:1 concentrator switch 105 that reads from the memory queues 104.
- The output data stream of the concentrator switch 105 is then connected to a demultiplexing switch 106, which in turn connects to a plurality of output ports 107.
- A packet of data arriving at input port 101a of the switching fabric, being one of the plurality of input ports 101, is multiplexed by the multiplexing switch 102 onto the common communications path prior to propagating within the distribution switch 103.
- The packet of data from input port 101a then propagates to one of the memory queues 104.
- The packet is then stored prior to being retrieved by the concentrator switch 105 and then being routed by the demultiplexer switch 106 to the appropriate output port 107b, being one of the plurality of output ports 107.
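The FIG. 1 datapath can be modeled end to end as follows. The round-robin choice of memory queue, the port labels, and the packet representation are assumptions made for this sketch; the figure itself does not prescribe a distribution policy.

```python
from collections import deque

class SingleStageSwitch:
    """Model of FIG. 1: inputs multiplexed onto one stream (102), a 1:N
    distribution switch (103) into N parallel memory queues (104), an N:1
    concentrator (105), then a demultiplexer (106) to output ports (107)."""
    def __init__(self, n_queues: int, output_ports):
        self.queues = [deque() for _ in range(n_queues)]
        self.output_ports = {p: [] for p in output_ports}
        self.rr = 0  # round-robin pointer for the distribution switch (assumed)

    def ingress(self, packet: dict) -> None:
        # Multiplexer 102 + distribution switch 103: store until retrieved.
        self.queues[self.rr].append(packet)
        self.rr = (self.rr + 1) % len(self.queues)

    def egress_one(self) -> None:
        # Concentrator 105 reads one stored packet; demultiplexer 106
        # delivers it to the output port named in its header.
        for q in self.queues:
            if q:
                pkt = q.popleft()
                self.output_ports[pkt["dst"]].append(pkt)
                return

switch = SingleStageSwitch(4, ["107a", "107b"])
switch.ingress({"src": "101a", "dst": "107b", "data": "x"})
switch.egress_one()
delivered = switch.output_ports["107b"]
```

A packet injected at 101a and addressed to 107b traverses the shared queue stage and emerges only at the addressed output, mirroring the walkthrough above.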
- Now referring to FIG. 2, shown is a prior art implementation of a packet switch based upon a three-stage Clos architecture.
- Here a packet of data arrives at one of the N input ports 201 of one of the plurality of first stage routing switches 202.
- Assuming that there are R such first stage routing matrices 202, each having M output ports, the data received is time-stamped, its header read, and an identifier of the target output port communicated to the packet switch controller 210.
- The controller determines the routing through the switching node and causes the packet of data to be routed to the appropriate output port of the first stage routing matrix 202 for transport to the subsequent section of the packet switching node.
- When transported, the packet of data propagates through a first perfect shuffle network 203 comprising R×M paths, wherein it addresses one of the M second stage switching matrices 204, which are N×N crosspoint switches.
- The packet switch controller 210 routes the packet of data within the second stage switching matrix 204 for transport to the third stage switch matrix 206. From the appropriate output port of the second stage switch matrix 204, it is routed via a second perfect shuffle network 205 to the specified third stage switching matrix 206. Within the third stage switching matrix 206, the packet is routed directly to an output port 207 of the switching node and transported via the wider communications network.
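A controller's path computation through such a three-stage Clos fabric can be sketched as below. Because any middle-stage switch reaches any third-stage switch, the middle stage is a free choice; the random selection here is a common load-spreading heuristic assumed for illustration, not controller 210's actual policy.

```python
import random

def clos_path(in_port: int, out_port: int, N: int, R: int, M: int, rng=random):
    """Pick a path through an R x M x R Clos fabric with N ports per edge
    switch: (first-stage switch, middle switch, third-stage switch).
    in_port and out_port are global port indices in [0, R*N)."""
    first = in_port // N        # which first stage routing switch (202)
    third = out_port // N       # which third stage switch matrix (206)
    middle = rng.randrange(M)   # free choice among second stage matrices (204)
    return first, middle, third

# Illustrative fabric: 8 edge switches of 4 ports each, 8 middle switches.
path = clos_path(in_port=5, out_port=17, N=4, R=8, M=8)
```

The edge-switch indices are forced by the port numbering; only the middle index is the controller's to optimize, which is exactly the degree of freedom a load-balanced design exploits.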
- Referring to FIG. 3A, an exemplary first embodiment of the invention is shown in the form of a compact load balanced crossbar packet switch with queued input ports.
- Here a packet of data is incident at one of the input ports 301 of the packet switching node.
- The header of the packet is read and communicated to the packet switching node controller 315, which defines the appropriate routing of the packet through the node.
- The packet switching controller 315 communicates routing data to the first stage switch matrix 303, comprising a first N×N crossbar switch with memory queues. This is implemented using 1:N distribution switches 302, a perfect shuffle 313, a plurality of memory queues 316, and N:1 concentrator switches 304.
- The packet of data exits the first stage switching matrix 303 on a link connecting to a second stage switch matrix 305 determined by the packet switching node controller 315.
- The second stage switch matrix 305 is constructed from 1:M distribution switches 306, M memory queues 307, and M:1 concentrator switches 308.
- The packet of data is routed by the distribution switch 306 to one of the memory queues 307, wherein it is stored pending extraction under the control of the packet switching node controller 315.
- The data is extracted from one of the plurality of memory queues 307 and fed forward using the concentrator switch 308.
- Upon arrival at the third switch stage 309, the packet of data is routed to an output port using a second N×N crossbar switch, implemented again using 1:N distribution switches 310, a perfect shuffle 314, and N:1 concentrator switches 311, whereupon it is available at output port 312 for transport to the wider communications network.
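The load-balancing idea in this stage, spreading successive packets over the M intermediate memory queues 307, can be sketched with a per-input round-robin pointer. The round-robin policy is an assumption used to illustrate even spreading; the patent leaves the policy to the packet switching node controller.

```python
class LoadBalancedStage:
    """Distribution switch (cf. 306) spreading each input's packets evenly
    over M intermediate memory queues (cf. 307), round-robin assumed."""
    def __init__(self, m: int):
        self.m = m
        self.pointer = {}                    # per-input round-robin state
        self.queues = [[] for _ in range(m)]

    def route(self, in_port: str, packet) -> int:
        """Store the packet and return the queue index chosen for it."""
        i = self.pointer.get(in_port, 0)
        self.queues[i].append((in_port, packet))
        self.pointer[in_port] = (i + 1) % self.m
        return i

stage = LoadBalancedStage(m=3)
picks = [stage.route("301a", n) for n in range(6)]
# A burst from one input visits queues 0,1,2,0,1,2: no queue is overloaded.
```

Even a worst-case burst from a single input port loads each intermediate queue equally, which is what lets the fabric avoid the hotspots a statically routed crossbar would suffer.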
- Referring to FIG. 3B, the exemplary first embodiment is again shown in the form of a compact load balanced crossbar packet switch, but now with queued output ports.
- As the packet of data is routed through the first switch matrix 3030, it passes through the 1:N distribution switches 3020, a perfect shuffle 3130, and N:1 concentrator switches 3040. When routed via the third switch matrix 3090, the packet of data passes through the 1:N distribution switches 3100, the second perfect shuffle 3140, the memory queues 3160, and N:1 concentrator switches 3110.
- Optionally, the first stage switching matrix 3030 and the third stage switching matrix 3090 are implemented with different matrix design architectures, which optionally include memory queues in one or the other.
- The packet switching controller 3150 is shown as three control sections 3150A, 3150B and 3150C, each of which interfaces to a switch stage of the switching node as well as communicating with the others to provide overall control of the node.
- Two controller sections are optionally combined if the switching matrices are located such that combining them is beneficial.
- Referring to FIG. 4A, a simplified architectural diagram of a third embodiment of the invention is shown in the form of a compact load balanced three stage Clos network wherein the Clos stages operate at the same line data rate as the input and output ports.
- A packet of data is incident at one of the N input ports 411 of a packet switching node.
- A header of the packet of data is read and communicated to a packet switching node controller (not shown), which defines a routing of the packet through the node.
- The packet switching node controller communicates the routing data to a first stage switch matrix 401, comprising a first concentrator switch 406 and a first memory switch element comprising a first distribution switch 407, a plurality of first memory queues 408 and a second concentrator switch 409.
- The packet of data is routed to a second distribution switch 410, which feeds the packet of data forward to a first perfect shuffle network 404.
- The first switching stage 401 performs a grooming of packets to sequence them and route them to a second stage switch matrix 402.
- Within the second stage switch matrix 402, the packet of data is again shuffled with other arriving packets and stored within memory queues awaiting transport to a third switch stage.
- The second stage switch matrix 402 feeds the packet of data forward to a second perfect shuffle network 405.
- After being routed through the perfect shuffle 405, the packet of data arrives at the third switch stage and enters a third stage switch 403.
- Within the third stage switch 403, the packet of data is again sequenced with other arriving packets to create output data streams stored within memory queues awaiting transport to the communications network.
- The third stage switch 403 feeds the packet of data forward to an output port 412 of the switching node.
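The sequencing step performed at the second and third stages, releasing packets toward an output in their original order even though they took different internal paths, can be sketched with a timestamp-ordered reorder buffer (packets are time-stamped on arrival, as in the FIG. 2 description). The buffer structure and its method names are assumptions for illustration.

```python
import heapq

class Resequencer:
    """Holds packets that arrive out of order and releases them strictly
    in timestamp order, as the output-facing memory queues must."""
    def __init__(self):
        self._heap = []
        self._next = 0   # next expected timestamp

    def arrive(self, ts: int, payload) -> None:
        heapq.heappush(self._heap, (ts, payload))

    def release(self):
        """Emit every packet that is now in sequence; hold the rest."""
        out = []
        while self._heap and self._heap[0][0] == self._next:
            out.append(heapq.heappop(self._heap)[1])
            self._next += 1
        return out

r = Resequencer()
r.arrive(1, "b")
r.arrive(2, "c")
first = r.release()   # timestamp 0 still missing: nothing releasable yet
r.arrive(0, "a")
rest = r.release()    # 0, 1, 2 now all present and in sequence
```

This is why load balancing across internal paths does not translate into mis-sequencing at the user: late packets are simply held until the gap ahead of them closes.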
- Referring to FIG. 4B, shown is an alternate embodiment of the compact load balanced three stage Clos network wherein the Clos stages operate at the same line data rate as the input and output ports, but which exploits switching elements with reduced complexity.
- The packet switching algorithm for the packet switch node controller can be implemented such that it grooms packets of data and routes them grouped according to output port; it is also possible to adjust the algorithm such that it handles the reduced complexity within the first and second shuffle networks.
- The reduced-complexity first shuffle network between the first switch stage 4010 and second switch stage 4020 is implemented with 1:(N-1) distribution switches 4100, shuffle network 4040, and (N-1):1 concentrator switches 4130.
- The second shuffle network between the second switch stage 4020 and third switch stage 4030 is implemented with 1:(N-1) distribution switches 4140, shuffle network 4050, and (N-1):1 concentrator switches 4150.
- The memory queues 4080 are shown as constructed from three segments in series: 4080A, 4080B and 4080C.
- The memory segments may be assigned to store data packets with predetermined associations, these including, but not being limited to:
- packets destined for adjacent output ports, assigned to a dedicated output stage memory switch;
- packet data for packets stored within different memory queues, assigned to a dedicated intermediate memory sector serving those queues;
- packet data associated with packets from adjacent input ports, assigned to a dedicated input stage memory sector;
- data fields arranged so as to provide a transposed interconnection between the input and intermediate stages; and
- data fields arranged so as to provide a transposed interconnection between the intermediate and output stages.
- Optionally, the switching matrices 401, 402 and 403 of FIG. 4A, and 4010, 4020 and 4030 of FIG. 4B, are implemented with different matrix architectures and/or designs, optionally including memory queues.
- Referring to FIG. 5, a simplified architectural diagram of a fifth embodiment of the invention is shown in the form of a compact load balanced three stage Clos network wherein the Clos stages operate at half the data rate of the input and output ports.
- A packet of data arrives at one of the N input ports 512 of a packet switching node.
- A header of the packet of data is read and communicated to a packet switching node controller (not shown), which determines routing for the packet through the node.
- The packet switching node controller communicates routing data to a first stage switch matrix 501, which comprises a first concentrator switch 506 and a first memory switch element comprising a first distribution switch 507, a plurality of memory queues 508 and a second concentrator switch 509.
- The packet of data is routed to a second distribution switch 510, which feeds the packet of data forward to a first perfect shuffle network 504.
- The first switching stage 501 performs a grooming of packets to sequence them and route them to a second stage switch matrix 502.
- Within the second stage switch matrix 502, the packet of data is again shuffled with other arriving packets and stored within memory queues awaiting transport to a third switch stage.
- The second stage switch matrix 502 feeds the packet of data forward to a second perfect shuffle network 505.
- After being routed through the perfect shuffle 505, the packet of data arrives at the third switch stage and enters a third stage switch 503.
- Within the third stage switch 503, the packet of data is again sequenced with other arriving packets to create output data streams stored within memory queues awaiting transport to the communications network.
- The third stage switch 503 feeds the packet of data forward to an output port 511 of the switching node.
- Optionally, the switching matrices 501, 502 and 503 are implemented with different matrix architectures and/or designs, optionally including memory queues.
- In this embodiment the core switching fabric operates at a substantially lower frequency, thereby facilitating implementation of the switching fabric.
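The capacity bookkeeping behind running the core at half rate can be checked directly: if aggregate ingress load is to be carried, halving the internal link rate requires doubling the number of internal paths (equivalently, external links at twice the internal link speed, as in FIG. 5). The concrete rates below are illustrative, not taken from the patent.

```python
def internal_paths_needed(n_ports: int, port_rate_mbps: int,
                          internal_rate_mbps: int) -> int:
    """Minimum internal paths so aggregate internal capacity matches
    the total ingress load (ceiling division on integer Mb/s rates)."""
    total_load = n_ports * port_rate_mbps
    return -(-total_load // internal_rate_mbps)

# Illustrative: 16 ports at 10 Gb/s (10000 Mb/s) over a half-rate core
# whose internal links run at 5 Gb/s.
paths_half_rate = internal_paths_needed(16, 10000, 5000)
paths_full_rate = internal_paths_needed(16, 10000, 10000)
```

Doubling the path count is the price of the slower core; the benefit, as noted above, is that each element of the fabric can be built from substantially lower-frequency (and hence simpler, possibly optical) devices.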
- In the embodiments above, the switching matrices are depicted as spatial switches operating on relatively long timescales.
- Alternatively, the switch matrices may be implemented with devices which operate at high speed and can be reconfigured as required for each and every time slot associated with a packet of data.
- Such matrices are usually referred to as time division multiplexing (TDM) switches.
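A TDM switch in this sense applies a possibly different input-to-output permutation in every packet time slot. The fixed repeating schedule below is an assumption used for illustration; a real controller would compute the schedule from traffic demands.

```python
class TDMSwitch:
    """Applies a (possibly different) input->output permutation on each
    time slot, cycling through a precomputed schedule."""
    def __init__(self, schedule):
        self.schedule = schedule   # list of permutations, one per slot
        self.slot = 0

    def switch(self, inputs):
        """Route one slot's worth of packets: inputs[i] -> outputs[perm[i]]."""
        perm = self.schedule[self.slot % len(self.schedule)]
        self.slot += 1
        outputs = [None] * len(inputs)
        for i, packet in enumerate(inputs):
            outputs[perm[i]] = packet
        return outputs

# Two-slot schedule on a 3x3 fabric: identity, then a rotation.
tdm = TDMSwitch([(0, 1, 2), (1, 2, 0)])
slot0 = tdm.switch(["a", "b", "c"])
slot1 = tdm.switch(["a", "b", "c"])
```

Cycling through all permutations over successive slots is what lets a TDM fabric emulate a full spatial crossbar with far fewer simultaneously configured crosspoints.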
- The multiple stages of memory switching can further be operated synchronously or asynchronously.
- The multiple stages of the switching node can also be distributed, with each of the plurality of switching stages under localised clock control.
- In such a distributed arrangement, the shuffle networks would be transmission links rather than local interconnections.
- The architecture is technology-independent and can equally be photonic or electronic, though the choice may be weighted by their specific tradeoffs.
- Photonic switches are suited to smaller switching fabrics supporting very high throughput with typically limited memory queuing, whilst electronic switches support long-holding queues and large fabrics but tend to struggle at supporting high speeds, as the conventional silicon platform must first be replaced with silicon-germanium or gallium arsenide, which offer fewer design options for the building blocks of the switching node.
- The controller may optionally be implemented to include polling elements, allowing it to provide additional control of the spatially separated memory switches such that they can be considered in operation as a single large switch matrix.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
A switching node is disclosed for the routing of packetized data, employing a multi-stage packet based routing fabric combined with a plurality of memory switches employing memory queues. The switching node allows reduced throughput delays, dynamic provisioning of bandwidth, and packet prioritization.
Description
- The invention relates to the field of communications and more particularly to a scaleable architecture for packet based communication networking.
- Telecommunications networks have evolved from the earliest networks having few users with plain old telephone service (POTS) to networks in operation today interconnecting hundreds of millions of users with a wide variety of services including for example telephony, Internet, streaming video, and MPEG music. Central to these networks is the requirement for a switching fabric allowing different users to be connected either together or to a service provider. Supporting an increase in a number of users, connections and bandwidth are networks based upon segmentation, transmission, routing, detection and reconstruction of a signal. The segmentation results in a message being divided into segments—referred to as packets, and such networks being packet switched networks.
- From a viewpoint of users, this process is transparent provided that the telecommunications network acts in a manner such that the packetization, and all other processes occur in a manner such that the user has available the services and information as required and “on demand.” The users perception of this “on demand” service varies substantially depending upon the service used. For example, when downloading most information via the Internet, a small delay is acceptable for text and photographs but not for streamed video unless sufficient memory buffer exists. Amongst the most sensitive services is telephony as the human perception of delay in voice is extremely acute. The result is that network providers prioritize packets according to information content, priority information included as part of the header of a packet.
- The switching fabric of current telecommunications packet networks is a massive mesh of large electronic cross-connect switches interconnected generally by very high speed optical networks exploiting dense wavelength division multiplexing to provide interconnection paths offering tens of gigabit per second transmission. Within this mesh are a limited number of optical switches which generally provide protection switching and relatively slow allocation of bandwidth to accommodate demand.
- But the demands from users for increased services, increased bandwidth and flexible services are causing the network operators to seek an alternative architecture. The alternative is “agile” networks which are widely distributed implementations of packet switching, as necessary to provide dynamic routing/bandwidth very close to users and with rapidly shifting patterns as they access different services. Agility to the network operators implies the ability to rapidly deploy bandwidth on demand at fine granularity. Helping them in this is the evolution of access networks which have to date been electrical at rates up to a few megabits per second but are now being replaced with optical approaches (often referred to as fiber-to-the-home or FTTH) with data rates of tens to hundreds of megabits per second to customers, and roadmaps to even gigabit rates per subscriber.
- As the network evolves, and services become more flexible and expansive, speeds increase such that the network provider is increasingly focused to three problems:
-
- Delay—the time taken to route packets across the network, where excessive delay in any single packet of a message prevents the message being completed
- Mis-Sequencing—the mis-sequencing of packets through the network causes delays at the user as until the mis-sequenced packet arrives the message cannot be completed
- Losses—the loss of packets due to blocked connections within the network causes delays as the lost packets must be retransmitted across the network.
- It is therefore desirable within the network to address these issues with a physical switching fabric. The invention disclosed provides such an architecture for the distributed packet switching wherein the fabric acts to balance the traffic load on different paths and network elements within the distributed packet switch. In doing so the disclosed invention removes additionally the requirement for rapid reconfiguration of the packet switches, which has the added benefit of allowing the deployment of optical switches within the network which are slower and smaller than their electrical counterparts.
- In accordance with the invention there is provided a switching node in respect of routing data packets arriving at the switching node within a communications network. The switching node contains a plurality of input ports each of which receives data packets addressed to it from the broader communications network. Within the switching node are multiple memory switches which are implemented by a combination of a plurality of memory queues, for storing the packet data therein, coupled to a first switch matrix for switching of packet data for storage within a memory queue of the plurality of first memory queues, and a second switch matrix for switching of packet data retrieved from within a memory queue of the plurality of first memory queues.
- The multiple memory switches are then coupled to a third switching matrix, which is coupled on one side to the plurality of input ports and the plurality of memory switches on the other. The multiple memory switches are then coupled to a fourth switching matrix coupled such that on the one side are the plurality of memory switches and on the other the plurality of output ports.
- At least one of the third or fourth switching matrix is implemented with a second set of multiple memory queues which are coupled between a fifth switch matrix and sixth switch matrix. In this invention the packets of data arriving at the switching node are sequenced within the memory queues and memory switches with the packets of data then being routed appropriately between the input and outputs using the multiple switching matrices.
- As a result the switching node can meet all of the demands of the network provider in terms of quality of service, flexibility of provisioning to a users varied demands for services, and prioritizing packet data switching based upon predetermined priorities of the packets and the dynamic bandwidth allocation between input and output ports. The control approach allows this to be achieved in an architecture where the loading of activities such as switching, memory queuing etc is balanced across the node.
- In another embodiment of the invention the use of multiple memory queues and memory switches allows the switching node to store packet data having a lower priority in an earlier stage of the multi-stage memory queue. Additionally the matrices coupled to the memory queues may be spatial switches, time division multiplexing switches, or a combination thereof.
- Exemplary embodiments of the invention will now be described in conjunction with the following drawings, in which:
-
FIG. 1 illustrates a prior art approach to packet switching using a centralized shared memory switch with queues. -
FIG. 2 illustrates a prior art packet switch using a three-stage Close-like network. -
FIG. 3A illustrates a first embodiment of the invention wherein the load-balanced switch is implemented with an input queued crossbar switch with route and select switches. -
FIG. 3B illustrates a second embodiment of the invention wherein the load-balanced switch is implemented with an output queued crossbar switch and has the routing controller segmented with a control segment per switching stage. -
FIG. 4A illustrates a third embodiment of the invention wherein the load-balanced switch is implemented in a manner mimicking a three stage Clos fabric where the external links operate at the same speed as the internal links. -
FIG. 4B illustrates a fourth embodiment of the invention wherein the load-balanced switch is implemented in a manner mimicking a three stage Clos fabric but wherein the switch matrices and shuffle networks are reduced functionality. -
FIG. 5 illustrates a fifth embodiment of the invention wherein the load balanced switch is implemented in a manner mimicking a three stage Clos fabric where the external links operate at the twice the speed of the internal links. - Referring to
FIG. 1 a, shown is a prior art approach to a packet switch using a single stage of memory queues. A plurality ofinput ports 101 are connected to physical links within a communications network (not shown). Theseinput ports 101 are coupled to aninput multiplexer 102, which multiplexes the plurality of input packet data streams to a single data stream. The single data stream is then transported to a 1:N distribution switch 103, which is coupled to Nparallel memory queues 104, eachmemory queue 104 allowing packets of data to be stored until retrieved. - The N
parallel memory queues 104 are in turn connected to an N:1concentrator switch 105 that reads from thememory queues 104. The output data stream of theconcentrator switch 105 is then connected to a demultiplexing switch 106 which in turn connects to a plurality ofoutput ports 107. - A packet of data arriving at
input port 101 a of the switching fabric, being one of the plurality ofinput ports 101 is multiplexed by themultiplexing switch 102 to the common communications path prior to propagating within thedistribution switch 103. The packet of data frominput port 101 a then propagates to one of thememory queues 104. The packet is then stored prior to being retrieved by theconcentrator switch 105 and then being routed by the demultiplexer switch 106 to theappropriate output port 107 b, being one of the plurality ofoutput ports 107. - Now referring to
FIG. 2, shown is a prior art implementation of a packet switch based upon a three-stage Clos architecture. Here a packet of data arrives at one of the N input ports 201 of one of the plurality of first stage routing switches 202. Assuming that there are R such first stage routing matrices 202, each having M output ports, the data received is time-stamped, its header read and an identifier of the target output port communicated to the packet switch controller 210. This determines the routing through the switching node and causes the packet of data to be routed to the appropriate output port of the first stage routing matrix 202 for transport to the subsequent section of the packet switching node. When transported, the packet of data propagates through a first perfect shuffle network 203 comprising R×M paths, wherein it addresses one of the M second stage switching matrices 204, which are N×N crosspoint switches. - The packet switch controller 210 routes the packet of data within the second stage switching matrix 204 for transport to the third stage switch matrix 206. From the appropriate output port of the second stage switch matrix 204, it is routed via a second perfect shuffle network 205 to the specified third stage switching matrix 206. Within the third stage switching matrix 206, the packet is routed directly to an output port 207 of the switching node and transported via the wider communications network. - Referring to
FIG. 3A, an exemplary first embodiment of the invention is shown in the form of a compact load balanced crossbar packet switch with queued input ports. Here a packet of data is incident at one of the input ports 301 of the packet switching node. The header of the packet is read and communicated to the packet switching node controller 315 which defines the appropriate routing of the packet through the node. The packet switching controller 315 communicates routing data to the first stage switch matrix 303 comprising a first N×N crossbar switch with memory queues. This is implemented using 1:N distribution switches 302, a perfect shuffle 313, a plurality of memory queues 316 and N:1 concentrator switches 304. The packet of data exits the first stage switching matrix 303 on a link connecting a second stage switch matrix 305 determined by the packet switching node controller 315. - The second stage switch matrix 305 is constructed from 1:M distribution switches 306, M memory queues 307, and M:1 concentrator switches 308. The packet of data is routed by the distribution switch 306 to one of the memory queues 307 wherein it is stored pending extraction under the control of the packet switching node controller 315. When required for transport to the third switching stage 309 of the switching node, the data is extracted from one of the plurality of memory queues 307 and fed forward using the concentrator switch 308. - Upon arrival at the third switch stage 309, the packet of data is routed to an output port using a second N×N crossbar switch implemented again using 1:N distribution switches 310, a perfect shuffle 314 and N:1 concentrator switches 311, whereupon it is available at output port 312 for transport to the wider communications network. - Referring to
FIG. 3B, the exemplary first embodiment is again shown in the form of a compact load balanced crossbar packet switch but now with queued output ports. Hence, when the packet of data is routed through the first switch matrix 3030 it passes through the 1:N distribution switches 3020, a perfect shuffle 3130, and N:1 concentrator switches 3040. It is when routed via the third switch matrix 3090 that the packet of data passes through the 1:N distribution switches 3100, the second perfect shuffle 3140, the memory queues 3160 and N:1 concentrator switches 3110. - Alternatively, the first stage switching matrix 3030 and the third stage switching matrix 3090 are implemented with different matrix design architectures which optionally include memory queues in one or the other. - Additionally the
packet switching controller 3150 is shown as three control sections. - Referring to
FIG. 4A, a simplified architectural diagram of a second embodiment of the invention is shown in the form of a compact load balanced three stage Clos network wherein the Clos stages operate at a same line data rate as an input port and an output port. Here a packet of data is incident at one of N input ports 411 of a packet switching node. A header of the packet of data is read and communicated to a packet switching node controller (not shown) which defines a routing of the packet through the node. The packet switching node controller communicates the routing data to a first stage switch matrix 401 comprising a first concentrator switch 406, a first memory switch element comprising a first distribution switch 407, a plurality of first memory queues 408 and a first concentrator switch 409. - From the output port of the first concentrator switch 409, the packet of data is routed to a second distribution switch 410 which feeds the packet of data forward to a first perfect shuffle network 404. In use, the first switching stage 401 performs a grooming of packets to sequence them and route them to a second stage switch matrix 402. - Within the second stage switch matrix 402, the packet of data is again shuffled with other arriving packets and stored within memory queues awaiting transport to a third switch stage. The second stage switch matrix 402 feeds the packet of data forward to a second perfect shuffle network 405. - After being routed through the perfect shuffle 405, the packet of data arrives at the third switch stage and enters a third stage switch 403. Here the packet of data is again sequenced with other arriving packets to create output data streams stored within memory queues awaiting transport to the communications network. The third stage switch 403 feeds the packet of data forward to an output port 412 of the switching node. - Referring to
FIG. 4B, an alternate embodiment of the compact load balanced three stage Clos network is shown, wherein the Clos stages operate at a same line data rate as an input port and an output port, but which exploits switching elements with reduced complexity. As the packet switch algorithm for the packet switch node controller can be implemented such that it grooms packets of data and routes them such that they are grouped according to output port, it is also possible to adjust the algorithm such that it handles reduced complexity within the first and second shuffle networks. - In FIG. 4B the reduced-complexity first shuffle network between the first switch stage 4010 and second switch stage 4020 is implemented with 1:(N-1) distribution switches 4100, shuffle network 4040 and (N-1):1 concentrator switches 4130. Similarly the second shuffle network between the second switch stage 4020 and third switch stage 4030 is implemented with 1:(N-1) distribution switches 4140, shuffle network 4050 and (N-1):1 concentrator switches 4150. - Additionally the memory queues 4080 are shown as constructed from three segments in series, 4080A, 4080B and 4080C. Optionally the memory segments may be assigned to store data packets with predetermined associations, these including, but not being limited to: packets destined for adjacent output ports and assigned to a dedicated output stage memory switch; packet data for packets stored within different memory queues which is assigned to a dedicated intermediate memory sector serving those queues; packet data associated with packets with adjacent input ports and assigned to a dedicated input stage memory sector; data fields arranged so as to provide a transposed interconnection between the input and intermediate stages; and data fields arranged so as to provide a transposed interconnection between the intermediate and output stages. - Alternatively to perform similar functionality, the switching
matrices 401, 402 and 403 of FIG. 4A and 4010, 4020 and 4030 of FIG. 4B are implemented with different matrix architectures and/or designs, optionally including memory queues. - Now referring to
FIG. 5, a simplified architectural diagram of a third embodiment of the invention is shown in the form of a compact load balanced three stage Clos network wherein the Clos stages operate at half the data rate of an input port and an output port. A packet of data arrives at one of N input ports 512 of a packet switching node. A header of the packet of data is read and communicated to a packet switching node controller (not shown), which determines routing for the packet through the node. The packet switching node controller communicates routing data to a first stage switch matrix 501, which comprises a first concentrator switch 506, a first memory switch element comprising a first distribution switch 507, a plurality of memory queues 508 and a first concentrator switch 509. - From the output port of the first concentrator switch 509, the packet of data is routed to a second distribution switch 510 which feeds the packet of data forward to a first perfect shuffle network 504. In use, the first switching stage 501 performs a grooming of packets to sequence them and route them to a second stage switch matrix 502. - Within the second stage switch matrix 502 the packet of data is again shuffled with other arriving packets and stored within memory queues awaiting transport to a third switch stage. The second stage switch matrix 502 feeds the packet of data forward to a second perfect shuffle network 505. - After being routed through the perfect shuffle 505, the packet of data arrives at the third switch stage and enters a third stage switch 503. Here the packet of data is again sequenced with other arriving packets to create output data streams stored within memory queues awaiting transport to the communications network. The third stage switch 503 feeds the packet of data forward to an output port 511 of the switching node. - Alternatively to perform similar functionality, the switching
matrices 501, 502 and 503 are implemented with different matrix architectures and/or designs, optionally including memory queues. - Advantageously, in the embodiment of
FIG. 5, the core switching fabric operates at a substantially lower frequency, thereby facilitating implementation of this switching fabric. - As described in the embodiments of the invention with reference to
FIGS. 3 through 5, the switching matrices are depicted as spatial switches operating on relatively long timescales. However, in alternate embodiments of the invention the switch matrices may be implemented with devices which operate at high speed and can be reconfigured as required for each and every time slot associated with a packet of data. Such matrices are usually referred to as time division multiplexing (TDM) switches. - Within the embodiments outlined, the multiple stages of memory switching can further be operated synchronously or asynchronously. With an asynchronous approach to a switching node, the multiple stages of the switching node can be distributed, with each one of the plurality of switching stages under localised clock control. In this case the shuffle networks would be transmission links rather than local interconnections.
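The TDM mode of operation described above can be illustrated with a small sketch. The names and the per-slot cyclic schedule below are illustrative assumptions, not taken from the patent: an N×N crosspoint matrix is given a fresh permutation (its crosspoint configuration) on every time slot.

```python
# Hypothetical sketch of a TDM switch: an N x N crosspoint matrix whose
# input-to-output permutation is replaced on every time slot.

N = 4  # number of ports (illustrative)

def cyclic_schedule(n):
    """Yield one permutation per time slot: output j receives input (j + t) mod n."""
    t = 0
    while True:
        yield [(j + t) % n for j in range(n)]  # perm[j] = input feeding output j
        t += 1

def switch_slot(inputs, perm):
    """Route one time slot's worth of packets through the crosspoint matrix."""
    return [inputs[perm[j]] for j in range(len(perm))]

sched = cyclic_schedule(N)
slot0 = switch_slot(["a", "b", "c", "d"], next(sched))  # identity mapping on slot 0
slot1 = switch_slot(["a", "b", "c", "d"], next(sched))  # rotated by one on slot 1
print(slot0, slot1)
```

A spatial switch, by contrast, would hold one such permutation fixed for many consecutive slots.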
- In respect of the technology used to implement the invention, the architecture is independent and can be equally photonic or electronic, though each is weighted by its specific tradeoffs. Generally, photonic switches are suited to smaller switching fabrics supporting very high throughput with typically limited memory queuing, whilst electronic switches support long-held queues and large fabrics but tend to struggle at supporting high speeds, as the conventional silicon platform must first be replaced with silicon-germanium or gallium arsenide, which offer fewer design options for the building blocks of the switching node.
- In respect of the packet switching node controller, this may optionally be implemented to include polling elements, allowing the controller to provide additional control of the spatially separated memory switches such that they can be considered in operation as a single large switch matrix.
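One way such a polling controller could operate is sketched below; the class and function names are assumptions for illustration, not the patent's. The controller polls the queue occupancies of spatially separated memory switches and grants one read per output port per cycle, so the distributed queues behave collectively like a single large switch matrix.

```python
# Illustrative sketch (hypothetical names): a controller polls distributed
# memory switches and grants at most one read per output port per cycle.

from collections import deque

class MemorySwitch:
    """A spatially separated memory switch holding one queue per output port."""
    def __init__(self, n_outputs):
        self.queues = [deque() for _ in range(n_outputs)]
    def occupancy(self, port):
        return len(self.queues[port])
    def read(self, port):
        return self.queues[port].popleft()

def poll_and_grant(switches, n_outputs):
    """For each output port, poll all switches and grant a read to the fullest one."""
    granted = []
    for port in range(n_outputs):
        best = max(switches, key=lambda s: s.occupancy(port))
        granted.append(best.read(port) if best.occupancy(port) else None)
    return granted

a, b = MemorySwitch(2), MemorySwitch(2)
a.queues[0].extend(["p1", "p2"])   # two packets for output 0 held in switch a
b.queues[1].append("p3")           # one packet for output 1 held in switch b
print(poll_and_grant([a, b], 2))   # grants one packet per output port
```

The longest-queue-first grant rule here is one plausible policy; any arbitration that sees all queues at once gives the controller the single-large-matrix view described above.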
- Numerous other embodiments may be envisaged without departing from the spirit or scope of the invention.
Claims (5)
1. A switching node comprising:
a plurality of input ports for receiving data packets;
a plurality of output ports for providing output data from the switching node;
a switching fabric for routing the data packets received over the plurality of input ports to the plurality of output ports as output data, wherein the switching fabric routes a first packet flow of first data packets, received at a first input port and destined for a first output port, into a plurality of packet subflows through the switching fabric, wherein the first data packets of the first packet flow have a first packet flow sequence, wherein the switching fabric routes each first data packet into each packet subflow of the plurality of packet subflows according to a distribution sequence; and
at least one memory switch comprising a plurality of memory queues coupled for storing of packet data therein, for each packet subflow:
the switching fabric routing to a respective memory queue for temporary storage, first data packets which have been routed into the packet subflow and other data packets comprising:
data packets received over at least one other input port different from the first input port; and
data packets destined for at least one other output port different from the first output port,
the switching fabric routing the first data packets of other packet subflows different from the packet subflow away from the respective memory queue,
wherein the switching fabric combines first data packets of each packet subflow, after temporary storage within the at least one memory switch, with use of the distribution sequence to reconstruct the first packet flow comprising the first data packets in the first packet flow sequence.
2. A switching node according to claim 1 wherein the switching fabric routes from the first input port to the first output port each packet of the packet flow through correspondingly similar groups of components each comprising a same number of switches and a same number of memory queues.
3. A switching node according to claim 1 wherein for each packet subflow the switching fabric routes said other data packets to the respective memory queue for temporary storage with said first data packets which have been routed into the packet subflow to provide load balancing within the switching node.
4. A switching node comprising:
a plurality of input ports for receiving data packets;
a plurality of output ports for providing output data from the switching node;
a switching fabric for routing data packets received over the plurality of input ports to the plurality of output ports as output data, wherein the switching fabric routes for each input port-output port pair:
a respective packet flow of data packets, received at the input port of the pair and destined for the output port of the pair, into a respective plurality of packet subflows through the switching fabric, wherein the data packets of the respective packet flow have a respective packet flow sequence, wherein the switching fabric routes each data packet of the respective packet flow into each packet subflow of the respective plurality of packet subflows of the respective packet flow according to a respective distribution sequence; and
at least one memory switch comprising a plurality of memory queues coupled for storing of packet data therein, for each respective packet flow:
the switching fabric, for each respective packet subflow of the respective plurality of packet subflows, routing to a respective memory queue for temporary storage:
the data packets of the respective packet flow which have been routed into the packet subflow and other data packets comprising:
data packets received over at least one other input port different from the input port of the pair for which the switching fabric routes the respective packet flow; and
data packets destined for at least one other output port different from the output port of the pair for which the switching fabric routes the respective packet flow,
the switching fabric routing the data packets of other packet subflows of the respective plurality of packet subflows which are other than the packet subflow of the respective packet subflow away from the respective memory queue,
wherein, for each pair, the switching fabric combines data packets of each packet subflow of the respective plurality of packet subflows, after temporary storage within the at least one memory switch, with use of the respective distribution sequence to reconstruct the respective packet flow comprising the data packets in the respective packet flow sequence.
5. A switching node according to claim 4 wherein for each respective packet subflow the switching fabric routes said other data packets to the respective memory queue for temporary storage with the data packets of the respective packet flow which have been routed into the respective packet subflow to provide load balancing within the switching node.
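The sequence-preserving behaviour recited in claims 1 and 4 can be illustrated with a minimal sketch. Round-robin is used here as one possible distribution sequence, and the function names are assumptions for illustration: a packet flow is sprayed over K subflows, each subflow is queued independently, and reading the queues back in the same sequence reconstructs the flow in its original packet order.

```python
# Minimal sketch of claim 1's mechanism: distribute a flow into subflows by a
# distribution sequence, queue each subflow, then recombine using the same
# sequence so the original packet order is restored.

from collections import deque

def distribute(packets, k):
    """Spray packets into k subflow queues using a round-robin distribution sequence."""
    queues = [deque() for _ in range(k)]
    for i, pkt in enumerate(packets):
        queues[i % k].append(pkt)
    return queues

def reconstruct(queues, total):
    """Read the subflow queues back in the same round-robin sequence."""
    k = len(queues)
    return [queues[i % k].popleft() for i in range(total)]

flow = [f"pkt{i}" for i in range(7)]
queues = distribute(flow, k=3)
assert reconstruct(queues, len(flow)) == flow  # packet order preserved end to end
```

In the claimed node the queues would also hold "other data packets" from other ports (the load-balancing of claims 3 and 5); what matters for order recovery is only that each subflow's packets are recombined under the same distribution sequence that created them.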
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/586,115 US20120327771A1 (en) | 2004-12-17 | 2012-08-15 | Compact load balanced switching structures for packet based communication networks |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US63648504P | 2004-12-17 | 2004-12-17 | |
PCT/CA2005/001913 WO2006063459A1 (en) | 2004-12-17 | 2005-12-19 | Compact load balanced switching structures for packet based communication networks |
US79329808A | 2008-06-27 | 2008-06-27 | |
US13/586,115 US20120327771A1 (en) | 2004-12-17 | 2012-08-15 | Compact load balanced switching structures for packet based communication networks |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2005/001913 Continuation WO2006063459A1 (en) | 2004-12-17 | 2005-12-19 | Compact load balanced switching structures for packet based communication networks |
US79329808A Continuation | 2004-12-17 | 2008-06-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120327771A1 true US20120327771A1 (en) | 2012-12-27 |
Family
ID=36587497
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/793,298 Expired - Fee Related US8254390B2 (en) | 2004-12-17 | 2005-12-19 | Compact load balanced switching structures for packet based communication networks |
US13/586,115 Abandoned US20120327771A1 (en) | 2004-12-17 | 2012-08-15 | Compact load balanced switching structures for packet based communication networks |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/793,298 Expired - Fee Related US8254390B2 (en) | 2004-12-17 | 2005-12-19 | Compact load balanced switching structures for packet based communication networks |
Country Status (4)
Country | Link |
---|---|
US (2) | US8254390B2 (en) |
EP (1) | EP1832060A4 (en) |
CA (1) | CA2590686C (en) |
WO (1) | WO2006063459A1 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4966206B2 (en) * | 2005-02-04 | 2012-07-04 | レベル スリー コミュニケーションズ,エルエルシー | Ethernet-based system and method for improving network routing |
US8064467B2 (en) | 2005-02-04 | 2011-11-22 | Level 3 Communications, Llc | Systems and methods for network routing in a multiple backbone network architecture |
US9426092B2 (en) * | 2006-02-03 | 2016-08-23 | Level 3 Communications Llc | System and method for switching traffic through a network |
US8005162B2 (en) * | 2007-04-20 | 2011-08-23 | Microelectronics Technology, Inc. | Dynamic digital pre-distortion system |
US8687629B1 (en) * | 2009-11-18 | 2014-04-01 | Juniper Networks, Inc. | Fabric virtualization for packet and circuit switching |
JP5471627B2 (en) * | 2010-03-09 | 2014-04-16 | 富士通株式会社 | Network device, edge router and packet communication system |
JP2012175357A (en) * | 2011-02-21 | 2012-09-10 | Mitsubishi Electric Corp | Input buffer type switch and input device |
US8958418B2 (en) * | 2011-05-20 | 2015-02-17 | Cisco Technology, Inc. | Frame handling within multi-stage switching fabrics |
US9166928B2 (en) * | 2011-09-30 | 2015-10-20 | The Hong Kong University Of Science And Technology | Scalable 3-stage crossbar switch |
BR112014007795A2 (en) * | 2011-10-05 | 2017-04-18 | Nec Corp | load reduction system and load reduction system |
CN104518975B (en) * | 2013-09-27 | 2018-06-26 | 方正宽带网络服务股份有限公司 | A kind of route device |
WO2015053665A1 (en) * | 2013-10-07 | 2015-04-16 | Telefonaktiebolaget L M Ericsson (Publ) | Downlink flow management |
DE102013019643A1 (en) * | 2013-11-22 | 2015-05-28 | Siemens Aktiengesellschaft | Two-stage crossbar distributor and method of operation |
EP3238386B1 (en) * | 2014-12-24 | 2020-03-04 | Intel Corporation | Apparatus and method for routing data in a switch |
US10499125B2 (en) * | 2016-12-14 | 2019-12-03 | Chin-Tau Lea | TASA: a TDM ASA-based optical packet switch |
DE102018206780A1 (en) * | 2018-05-02 | 2019-11-07 | Volkswagen Aktiengesellschaft | Method and computer program for transmitting a data packet, method and computer program for receiving a data packet, communication unit and motor vehicle with communication unit |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010043610A1 (en) * | 2000-02-08 | 2001-11-22 | Mario Nemirovsky | Queueing system for processors in packet routing operations |
US20020064154A1 (en) * | 2000-11-29 | 2002-05-30 | Vishal Sharma | High-speed parallel cross bar switch |
US20020064156A1 (en) * | 2000-04-20 | 2002-05-30 | Cyriel Minkenberg | Switching arrangement and method |
US20030021269A1 (en) * | 2001-07-25 | 2003-01-30 | International Business Machines Corporation | Sequence-preserving deep-packet processing in a multiprocessor system |
US20040240437A1 (en) * | 2003-05-14 | 2004-12-02 | Fraser Alexander G. | Switching network |
US20050100035A1 (en) * | 2003-11-11 | 2005-05-12 | Avici Systems, Inc. | Adaptive source routing and packet processing |
US20060165070A1 (en) * | 2002-04-17 | 2006-07-27 | Hall Trevor J | Packet switching |
US20070030845A1 (en) * | 2003-09-29 | 2007-02-08 | British Telecommunications Public Ltd., Co. | Channel assignment process |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5287346A (en) * | 1991-10-16 | 1994-02-15 | Carnegie Mellon University | Packet switch |
US5355372A (en) * | 1992-08-19 | 1994-10-11 | Nec Usa, Inc. | Threshold-based load balancing in ATM switches with parallel switch planes related applications |
US5862138A (en) * | 1996-07-15 | 1999-01-19 | Northern Telecom Limited | Adaptive routing in a multiple network communication system |
DE69841486D1 (en) | 1997-05-31 | 2010-03-25 | Texas Instruments Inc | Improved packet switching |
FR2771573B1 (en) * | 1997-11-27 | 2001-10-19 | Alsthom Cge Alkatel | PACKET SWITCHING ELEMENT WITH BUFFER MEMORIES |
US6307852B1 (en) * | 1998-04-09 | 2001-10-23 | Nortel Networks Limited | Rotator switch data path structures |
JP2000295279A (en) * | 1999-04-02 | 2000-10-20 | Nec Corp | Packet switch |
US6643294B1 (en) * | 1999-12-09 | 2003-11-04 | Verizon Laboratories Inc. | Distributed control merged buffer ATM switch |
US6907041B1 (en) * | 2000-03-07 | 2005-06-14 | Cisco Technology, Inc. | Communications interconnection network with distributed resequencing |
WO2002078252A2 (en) * | 2001-03-22 | 2002-10-03 | Siemens Aktiengesellschaft | Electronic switching circuit and method for a communication interface having a cut-through buffer memory |
US7190695B2 (en) * | 2001-09-28 | 2007-03-13 | Lucent Technologies Inc. | Flexible application of mapping algorithms within a packet distributor |
US7317730B1 (en) * | 2001-10-13 | 2008-01-08 | Greenfield Networks, Inc. | Queueing architecture and load balancing for parallel packet processing in communication networks |
US6967951B2 (en) * | 2002-01-11 | 2005-11-22 | Internet Machines Corp. | System for reordering sequenced based packets in a switching network |
US7586909B1 (en) * | 2002-03-06 | 2009-09-08 | Agere Systems Inc. | Striping algorithm for switching fabric |
AU2003225284A1 (en) * | 2002-05-02 | 2003-11-17 | Ciena Corporation | Distribution stage for enabling efficient expansion of a switching network |
US7486678B1 (en) * | 2002-07-03 | 2009-02-03 | Greenfield Networks | Multi-slice network processor |
US20040008674A1 (en) | 2002-07-08 | 2004-01-15 | Michel Dubois | Digital cross connect switch matrix mapping method and system |
US7397794B1 (en) * | 2002-11-21 | 2008-07-08 | Juniper Networks, Inc. | Systems and methods for implementing virtual switch planes in a physical switch fabric |
ATE389997T1 (en) | 2002-12-16 | 2008-04-15 | Alcatel Lucent | MULTI-CHANNEL NETWORK NODE AND METHOD FOR TRANSMITTING/ROUTING THE DATA |
-
2005
- 2005-12-19 WO PCT/CA2005/001913 patent/WO2006063459A1/en active Application Filing
- 2005-12-19 CA CA2590686A patent/CA2590686C/en not_active Expired - Fee Related
- 2005-12-19 EP EP05820897A patent/EP1832060A4/en not_active Withdrawn
- 2005-12-19 US US11/793,298 patent/US8254390B2/en not_active Expired - Fee Related
-
2012
- 2012-08-15 US US13/586,115 patent/US20120327771A1/en not_active Abandoned
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010043610A1 (en) * | 2000-02-08 | 2001-11-22 | Mario Nemirovsky | Queueing system for processors in packet routing operations |
US20020064156A1 (en) * | 2000-04-20 | 2002-05-30 | Cyriel Minkenberg | Switching arrangement and method |
US20020064154A1 (en) * | 2000-11-29 | 2002-05-30 | Vishal Sharma | High-speed parallel cross bar switch |
US20030021269A1 (en) * | 2001-07-25 | 2003-01-30 | International Business Machines Corporation | Sequence-preserving deep-packet processing in a multiprocessor system |
US20060165070A1 (en) * | 2002-04-17 | 2006-07-27 | Hall Trevor J | Packet switching |
US20040240437A1 (en) * | 2003-05-14 | 2004-12-02 | Fraser Alexander G. | Switching network |
US20070030845A1 (en) * | 2003-09-29 | 2007-02-08 | British Telecommunications Public Ltd., Co. | Channel assignment process |
US20050100035A1 (en) * | 2003-11-11 | 2005-05-12 | Avici Systems, Inc. | Adaptive source routing and packet processing |
Also Published As
Publication number | Publication date |
---|---|
US8254390B2 (en) | 2012-08-28 |
CA2590686A1 (en) | 2006-06-22 |
CA2590686C (en) | 2013-05-21 |
US20080267204A1 (en) | 2008-10-30 |
WO2006063459A1 (en) | 2006-06-22 |
EP1832060A1 (en) | 2007-09-12 |
EP1832060A4 (en) | 2009-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8254390B2 (en) | Compact load balanced switching structures for packet based communication networks | |
US11632606B2 (en) | Data center network having optical permutors | |
Ramamirtham et al. | Time sliced optical burst switching | |
US7773608B2 (en) | Port-to-port, non-blocking, scalable optical router architecture and method for routing optical traffic | |
Iyer et al. | Analysis of a packet switch with memories running slower than the line-rate | |
Danielsen et al. | Wavelength conversion in optical packet switching | |
Keslassy et al. | Scaling internet routers using optics | |
US7162155B2 (en) | Optical packet switching apparatus and methods | |
US7397808B2 (en) | Parallel switching architecture for multiple input/output | |
Xiong et al. | Design and analysis of optical burst-switched networks | |
US20040091198A1 (en) | Modular photonic switch with wavelength conversion | |
Baldi et al. | Fractional lambda switching principles of operation and performance issues | |
JP2002325087A (en) | Unblocked switch system, its switching method and program | |
Yang et al. | Combined input and output all-optical variable buffered switch architecture for future optical routers | |
US10499125B2 (en) | TASA: a TDM ASA-based optical packet switch | |
Bernasconi et al. | Architecture of an integrated router interconnected spectrally (IRIS) | |
Zhou et al. | How practical is optical packet switching in core networks? | |
Shalmany et al. | On the choice of all-optical switches for optical networking | |
Keslassy et al. | Scaling internet routers using optics (extended version) | |
Yang et al. | New optical switching fabric architecture incorporating load balanced parallel rapidly switching all-optical variable delay buffer arrays | |
Hunter | Switching systems | |
Engbersen | Prizma switch technology | |
Rodelgo-Lacruz et al. | Load balanced distributed schedulers for WASPNET Optical packet switches maintaining packet order | |
Wang et al. | Load balanced two-stage switches using arrayed waveguide grating routers | |
Paredes et al. | A Load-Balanced Agile All-Photonic Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HALL, TREVOR, CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ONECHIP PHOTONICS INC.;REEL/FRAME:028795/0527 Effective date: 20080624 Owner name: ONECHIP PHOTONICS INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HALL, TREVOR;PAREDES, SOFIA;TAEBI, SAREH;REEL/FRAME:028795/0414 Effective date: 20060201 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |