WO1999023853A1 - Multiport interfaces for a network using inverse multiplexed ip switched flows - Google Patents


Info

Publication number
WO1999023853A1
Authority
WO
WIPO (PCT)
Prior art keywords
ports
sub
flow
node
interface
Prior art date
Application number
PCT/US1998/023535
Other languages
French (fr)
Inventor
Thomas A. Decanio
Thomas Lyon
Peter Newman
Greg Minshall
Robert Hinden
Fong Ching Liaw
Eric Hoffman
Original Assignee
Nokia Ip Inc.
Priority date
Filing date
Publication date
Application filed by Nokia Ip Inc. filed Critical Nokia Ip Inc.
Priority to AU13077/99A priority Critical patent/AU1307799A/en
Publication of WO1999023853A1 publication Critical patent/WO1999023853A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • H04Q SELECTING
    • H04Q11/00 Selecting arrangements for multiplex systems
    • H04Q11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478 Provisions for broadband connections
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5603 Access techniques
    • H04L2012/5609 Topology
    • H04L2012/561 Star, e.g. cross-connect, concentrator, subscriber group equipment, remote electronics
    • H04L2012/5614 User Network Interface
    • H04L2012/5617 Virtual LANs; Emulation of LANs
    • H04L2012/5629 Admission control
    • H04L2012/563 Signalling, e.g. protocols, reference model
    • H04L2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L2012/5665 Interaction of ATM with other protocols
    • H04L2012/5667 IP over ATM

Definitions

  • the present invention relates to the field of network communications.
  • a specific embodiment of the invention relates to improving the bandwidth of communications between the network interfaces on two adjacent "IP switched routers."
  • LAN Local area network
  • LAN switches have conventionally been used as a quick, relatively inexpensive way to relieve congestion on shared-media LAN segments, managing traffic and allocating bandwidth within a LAN more effectively than shared-media hubs or simple bridges.
  • LAN switches operate as datalink layer (layer 2 of the OSI reference model) packet-forwarding hardware engines, dealing with media access control (MAC) addresses and performing simple table look-up functions.
  • MAC media access control
  • Routers, which operate at the network layer (layer 3 of the OSI reference model), are still required to solve these types of problems.
  • fast switching technology is overwhelming the capabilities of current routers, creating router bottlenecks.
  • the traditional IP packet-forwarding device on which the Internet is based, the IP router, is showing signs of inadequacy.
  • routers are often expensive, complex, and of limited throughput, as compared to emerging switching technology.
  • IP routers need to operate faster and cost less.
  • multiple paths are available between two routers and a router may switch data between one or more paths by making a decision at layer 3 of the OSI reference model.
  • the present invention designates multiple ports of an IP switched router for communication as a single interface with a second adjacent IP switched router to provide increased bandwidth between the network interfaces on the IP switched routers.
  • the multiple designated ports are monitored by both IP switched routers for communications from the other. Data flows are queued up and inverse multiplexed over the multiple ports to optimize the available bandwidth.
  • the inverse multiplexing is done at layer 2 of the OSI reference model, so that layer 3 does not have to know about any reallocation among the multiple ports.
  • the present invention provides a method for transmitting packets over a multiport interface between an upstream node and a downstream node in a network, where the downstream node is downstream from the upstream node.
  • the method includes the steps of establishing a multiport interface that includes multiple sub-ports between the upstream node and the downstream node, receiving a packet at the downstream node, and performing a flow classification at the downstream node on the packet to determine whether the packet belongs to a specified flow that should be redirected in the upstream node to the multiport interface.
  • the method also includes the steps of selecting a free label for one of the multiple sub-ports at the downstream node, and informing the upstream node that future packets belonging to the specified flow should be sent with the selected free label attached.
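The downstream-node steps above (classify the flow, pick a free label on a sub-port, notify the upstream node) can be sketched in outline. This is a minimal illustration under stated assumptions, not the patented implementation: every name (`DownstreamNode`, `should_switch`, the toy classification rule, the label ranges) is invented for the example.

```python
# Sketch of the downstream node's side of the method described above:
# classify a received packet, pick a free label (VPI/VCI) on one of the
# multiport interface's sub-ports, and tell the upstream node to use it.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    sub_port: int
    vpi: int
    vci: int

class DownstreamNode:
    def __init__(self, sub_ports):
        # one pool of free (vpi, vci) labels per sub-port of the interface
        # (the range 32..63 is an arbitrary choice for the sketch)
        self.free_labels = {p: [(0, vci) for vci in range(32, 64)]
                            for p in sub_ports}
        self.redirected = {}  # flow_id -> Label

    def should_switch(self, flow_id):
        # flow classification is a local policy decision; as a toy rule,
        # switch everything except short query-like traffic on port 53
        return flow_id[2] != 53

    def handle_packet(self, flow_id):
        """Return a Label if the flow is (newly or already) redirected."""
        if flow_id in self.redirected:
            return self.redirected[flow_id]
        if not self.should_switch(flow_id):
            return None  # keep forwarding hop-by-hop at layer 3
        # pick the sub-port with the most free labels (any policy would do)
        port = max(self.free_labels, key=lambda p: len(self.free_labels[p]))
        vpi, vci = self.free_labels[port].pop()
        label = Label(port, vpi, vci)
        self.redirected[flow_id] = label
        return label  # an IFMP redirect carrying this label goes upstream
```

A flow identifier is modeled here simply as a `(src, dst, port)` tuple standing in for the header fields that characterize the flow.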
  • the present invention provides a computer program product that enables dynamic shifting between routing and switching in a network having an upstream node and a downstream node.
  • the downstream node is downstream from the upstream node.
  • the computer program product includes computer-readable code that establishes a multiport interface which includes multiple sub-ports between the upstream node and the downstream node, and computer-readable code that performs a flow classification on a packet at the downstream node to determine whether the packet belongs to a specified flow that should be redirected in the upstream node to the multiport interface.
  • the computer program product also includes computer-readable code that selects a free label for one of the multiple sub-ports at the downstream node, computer-readable code that informs the upstream node that future packets belonging to the specified flow should be sent with the selected free label attached, and a tangible medium that stores the computer-readable codes.
  • FIG. 1 is a diagram illustrating multiple port connections between two "IP switched routers," in accordance with a specific embodiment of the present invention
  • Fig. 2 is a diagram illustrating the queuing of multiple flows for the multiple ports, in accordance with the specific embodiment of the present invention
  • Fig. 3 illustrates one of the many network configurations possible in accordance with the present invention
  • Fig. 4a is a system block diagram of a typical computer system 151 that may be used as switch controller 12a in basic switching unit 12 (as shown in Fig. 1) to execute a specific embodiment of the system software of the present invention
  • Fig. 4b is a general block diagram of an architecture of an ATM switch 12b (the example shows a 16-port switch) that may be used as the switching hardware engine of a basic switching unit according to an embodiment of the present invention
  • Fig. 5a is a simplified diagram generally illustrating the initialization procedure in each system node according to an embodiment of the present invention
  • Fig. 5b is a simplified diagram that generally illustrates the operation of a system node according to an embodiment of the present invention
  • Fig. 6a is a diagram generally illustrating the steps involved in labelling a flow in a system node according to an embodiment of the present invention
  • Fig. 6b is a diagram generally illustrating the steps involved in switching a flow in a basic switching unit according to an embodiment of the present invention
  • Fig. 6c is a diagram generally illustrating the steps involved in forwarding a packet in a system node according to an embodiment of the present invention
  • Fig. 7a is a simplified diagram generally illustrating the multiport interface establishing procedure in a basic switching unit according to an embodiment of the present invention
  • Fig. 7b is a diagram generally illustrating some of the steps involved in determining whether a flow should be switched in basic switching units according to an embodiment of the present invention
  • Fig. 7c is a diagram generally illustrating the steps involved in labelling a flow in the upstream link for a designated multiport interface in a basic switching unit, such as shown by label flow step 366 of Fig. 7b according to an embodiment of the present invention.
  • Fig. 7d is a simplified diagram that generally illustrates some of the steps of the operation of the basic switching unit according to the specific embodiment of the present invention.

DESCRIPTION OF THE SPECIFIC EMBODIMENTS
  • the present invention provides for a multiport interface made of two or more sub-ports used as a single interface, with flow-by-flow inverse multiplexing, to provide at layer 2 of the OSI reference model very high speed trunking capability between "IP switched routers," also referred to as "basic switching units."
  • the multiport interface increases the effective bandwidth for transmitting packets in a network.
  • the method and apparatus will find particular utility, and is illustrated herein, as applied to the high-throughput flow-based transmission of IP packets capable of carrying voice, video, and data signals over a local area network (LAN), metropolitan area network (MAN), wide area network (WAN), the Internet, or the like, but the invention is not so limited.
  • the invention will find use in a wide variety of applications where it is desired to transmit packets over a network.
  • Fig. 1 illustrates two switch controllers 12a and 14a, respectively coupled to switching engines 12b and 14b.
  • Each corresponding pair of switch controller and switching engine forms what is referred to as an "IP switched router" or a "basic switching unit".
  • basic switching unit 12 includes switch controller 12a and switching engine 12b
  • basic switching unit 14 includes switch controller 14a and switching engine 14b
  • the basic switching unit of the system via system software installed on its switch controller dynamically provides both layer 2 switching functionality as well as layer 3 routing and packet forwarding functionality.
  • the switching engine, which utilizes conventional and currently available asynchronous transfer mode (ATM) switching hardware, is an ATM switch.
  • the ATM switching hardware providing the switching engine of the basic switching unit operates at the datalink layer (layer 2 of the OSI reference model). Any of the software normally associated with the ATM switch that is above the ATM Adaptation Layer type 5 (AAL-5) is completely removed. Thus, the signalling, any existing routing protocol, and any LAN emulation server or address resolution servers, etc. are removed.
  • AAL-5 ATM Adaptation Layer type 5
  • FDDI Fiber Distributed Data Interface
  • the switch controller is a computer having multiple network adapter or network interface cards (NICs) connected to the switching engine via multiport interface 18.
  • System software is installed in basic switching unit, more particularly in the computer serving as switch controller.
  • the switching engine serves to perform high-speed switching functions when required by the basic switching unit, as determined by the system software.
  • the switching capability of the switching system is limited only by the hardware used in the switching engine. Accordingly, the present embodiment of the invention is able to take advantage of the high-speed, high capacity, high bandwidth capabilities of ATM technology.
  • In addition to performing standard connectionless IP routing functions at layer 3, the switch controller also makes flow classification decisions for packets on a local basis, as described generally below.
  • each of these basic switching units can also communicate with other nodes in a network or other networks or servers via ports 16, for example.
  • a trunk interface between the two basic switching units may be used.
  • the switching engine of each basic switching unit has multiple physical ports, each being capable of being connected to a variety of devices, including for example data terminal equipment (DTE), data communication equipment (DCE), servers, switches, gateways, etc.
  • DTE data terminal equipment
  • DCE data communication equipment
  • One of these multiple ports, for example port 1, is used to provide the communication link between the switch controller and the switching engine.
  • Two or more of these multiple ports may be used as a trunk interface to form a single multiport interface 18.
  • ports 8, 9, 10 and 12 of the basic switching unit 12 may be designated to be sub-ports of multiport interface 18, as described further below.
  • Multiport interface 18 is created by combining several ports of the switching engine of a basic switching unit so that they appear as a single interface to the switch controller of the basic switching unit.
  • Fig. 2 illustrates an example of flow-by-flow inverse multiplexing across the multiple sub-ports of multiport interface 18.
  • flows 1-12 are shown being allocated to different queues for ports 8, 9, 10 and 12 of switching engine 12b. More specifically in this example, flows 4, 5 and 8 have been allocated to sub-port 8; flows 3, 9 and 11 have been allocated to sub-port 9; flows 2 and 7 have been allocated to sub-port 10; and flows 1, 6, 10 and 12 have been allocated to sub-port 12.
  • the bandwidth can be maximized by evenly spreading the flows across the multiple sub-ports to the extent possible.
  • Fig. 3 illustrates one of the many network configurations possible in accordance with the present invention. Of course, many alternate configurations are possible.
  • multiport interface 18 could be used between basic switching units 12 and 14 as shown in Fig. 3, according to the present invention.
  • Basic switching units, switch gateway units, and system software allow users to build flexible IP network topologies targeted at the workgroup, campus, and WAN environments, providing a high-performance, scalable solution to current campus backbone congestion problems.
  • Fig. 3 shows a simplified diagram of a high performance workgroup environment in which several host computers 145 are connected via ATM links 133m to multiple basic switching units 12 and 14, which each connect to a switch gateway unit 121 that connects to a LAN 135 with user devices 141.
  • a first basic switching unit 12 connects to a second basic switching unit 14 via multiport interface 18, such as seen in Fig. 1.
  • host computers 145 equipped with ATM NICs are installed with a subset of the system software, enabling the TCP/IP hosts to connect directly to a basic switching unit.
  • the first and second basic switching units 12 and 14 connect to switch gateway unit 121 via ATM links 133e (155 Mbps) and 1337 (25 Mbps) respectively.
  • Connection of the first and second basic switching units 12 and 14 to switch gateway unit 121 via an Ethernet (e.g., 10BaseT) or FDDI link 139 enables users of host computers 145 to communicate with user devices 141 attached to LAN 135.
  • User devices 141 may be PCs, terminals, or workstations having appropriate NICs 143 to connect to any Ethernet or FDDI LAN 135.
  • the workgroup of host computers is thereby seamlessly integrated with the rest of the campus network.
  • a "switch gateway unit” which is similar to a basic switching unit without a switching engine, includes a gateway switch controller and IFMP software installed on the gateway switch controller, in accordance with a specific embodiment.
  • the gateway switch controller includes multiple network adaptors or NICs, and an ATM NIC.
  • Switch gateway unit serves as an access device to enable connection of existing LAN and backbone environments to a network of basic switching units.
  • the NICs of the switch gateway unit may be of different types, such as for example Ethernet NICs, Fast Ethernet NICs, FDDI NICs, and others, or any combination of the preceding.
  • the use of particular types of NICs depends on the types of existing LAN and backbone environments to which switch gateway unit provides access.
  • networks utilizing the present invention may also include high performance host computers, workstations, or servers that are appropriately equipped.
  • a subset of the IFMP software can be installed on a host computer, workstation, or server equipped with an appropriate ATM NIC to enable a host to connect directly to a basic switching unit.
  • system software on a switch controller of a basic switching unit can create and delete multiport interfaces and then direct the switching engine to switch flows on the multiport interface, implementing the inverse multiplexing of flows over multiport interface 18. The system software adds complete IP routing functionality on top of the ATM switching hardware, in place of any existing conventional ATM switch control software, to control the ATM switch such that flows are appropriately multiplexed over the sub-ports of interface 18. The present system is therefore capable of moving between network-layer IP routing when needed and high-throughput datalink-layer flow switching over interface 18 when possible, in order to achieve high-speed, high-capacity packet transmission in an efficient manner without the problem of router bottlenecks.
  • the packet throughput between their attached network interfaces may reach millions of IP packets-per-second, which is an order of magnitude faster than with traditional IP routers.
  • IFMP Ipsilon Flow Management Protocol
  • a system node such as a basic switching unit, switch gateway unit, or host computer/server/workstation
  • a flow is a sequence of packets sent from a particular source to a particular (unicast or multicast) destination that are related in terms of their routing and any local handling policy they may require.
  • the present invention efficiently permits different types of flows to be handled differently, depending on the type of flow, and enables the inverse multiplexing of different flows over different sub-ports of multiport interface 18, depending on the bandwidth available on each designated sub-port.
  • Some types of flows may be handled by mapping them into individual ATM connections using the ATM switching engine to perform high speed switching of the packets over multiport interface 18. Flows such as for example those carrying real-time traffic, those with quality of service requirements, or those likely to have a long holding time, may be configured to be switched whenever possible.
  • Other types of flows such as for example short duration flows or database queries, may be handled by connectionless IP routing.
  • a particular flow of packets may be associated with a particular ATM label (i.e., a virtual path identifier (VPI) and virtual channel identifier (VCI)). It is assumed that virtual channels are unidirectional so an ATM label of an incoming direction of each link is owned by the input port to which it is connected. Each direction of transmission on a link is treated separately. Of course, flows travelling in each direction are handled by the system separately but in a similar manner.
  • VPI virtual path identifier
  • VCI virtual channel identifier
  • Flow classification is a local policy decision.
  • the system node transmits the IP packet via the default channel.
  • the node also classifies the IP packet as belonging to a particular flow, and accordingly decides whether future packets belonging to the same flow should preferably be switched directly in the ATM switching engine or continue to be forwarded hop-by-hop by the router software in the node. If a decision to switch a flow of packets is made, the flow must first be labelled. To label a flow, the node selects for that flow an available label (VPI/VCI) of the input port on which the packet was received.
  • the node which has made the decision to label the flow then stores the label, flow identifier, and a lifetime, and then sends an IFMP redirect message upstream to the previous node from which the packet came.
  • the flow identifier contains the set of header fields that characterize the flow.
  • the lifetime specifies the length of time for which the redirection is valid. Unless the flow state is refreshed, the association between the flow and the label is deleted upon the expiration of the lifetime. If the lifetime expires before the flow state is refreshed, further packets belonging to the flow are transmitted on the default forwarding channel between the adjacent nodes.
  • a flow state is refreshed by sending upstream a redirect message having the same label and flow identifier as the original and having another lifetime.
  • the redirect message requests the upstream node to transmit all further packets that have matching characteristics to those identified in the flow identifier via the virtual channel specified by the label.
  • the redirection decision is also a local decision handled by the upstream node, whereas the flow classification decision is a local decision handled by the downstream node. Accordingly, even if a downstream node requests redirection of a particular flow of packets, the upstream node may decide to accept or ignore the request for redirection.
  • redirect messages are not acknowledged. Rather, the first packet arriving on the new virtual channel serves to indicate that the redirection request has been accepted.
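The lifetime and refresh behaviour described above can be sketched as a small flow table. The class and method names are illustrative; the essential behaviour from the text is that a refresh is simply another redirect with the same label and flow identifier, and an expired entry falls back to the default channel.

```python
# Sketch of flow-state lifetimes: a redirect installs (label, flow_id,
# lifetime); unless refreshed, the association expires and packets are
# again sent on the default forwarding channel.
import time

class FlowTable:
    def __init__(self):
        self.entries = {}  # flow_id -> (label, expiry_time)

    def redirect(self, flow_id, label, lifetime_s, now=None):
        now = time.monotonic() if now is None else now
        self.entries[flow_id] = (label, now + lifetime_s)

    # a refresh is just another redirect with the same label and flow_id
    refresh = redirect

    def lookup(self, flow_id, now=None):
        """Return the label if still valid, else None (default channel)."""
        now = time.monotonic() if now is None else now
        entry = self.entries.get(flow_id)
        if entry is None or now >= entry[1]:
            self.entries.pop(flow_id, None)  # expired: delete association
            return None
        return entry[0]
```

The explicit `now` parameter just makes the sketch testable without waiting for real time to pass.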
  • Different encapsulations are used for the transmission of IP packets that belong to particular labelled flows on an ATM data link, depending on the different flow type of the flows. In the present example of Fig. 2, twelve types of encapsulations are used to transmit IP packets belonging to the twelve types of flows to be multiplexed over multiport interface 18.
  • a system node such as a basic switching unit may utilize the General Switch Management Protocol (GSMP, also described in detail in U.S. patent application no. 08/597,520) to establish communication over the ATM link between the switch controller and the ATM hardware switching engine of the basic switching unit, thereby enabling flow-by-flow multiplexed layer 2 switching when possible and layer 3 IP routing and packet forwarding when necessary.
  • GSMP is a general purpose, asymmetric protocol to control the switching engine, e.g., the ATM switch. That is, the switch controller acts as the master with the ATM switch as the slave.
  • GSMP runs on a virtual channel established at initialization across the ATM link between the switch controller and the ATM switch.
  • a single switch controller may use multiple instantiations of GSMP over separate virtual channels to control multiple ATM switches.
  • GSMP adjacency protocol which is used to synchronize state across the ATM link between the switch controller and the ATM switch, to discover the identity of the entity at the other end of the link, and to detect changes in the identity of that entity.
  • GSMP allows the switch controller to establish and release connections across the ATM switch, add and delete leaves on a point-to-multipoint connection, manage switch ports, request configuration information, and request statistics (such as the level of traffic on each port).
  • GSMP also allows the ATM switch to inform the switch controller of events such as a link going down.
  • a switch controller may use GSMP to configure multiport interface 18 and to direct the switching engine to switch flows on the multiport interface 18 such that the switching engine distributes the flows across the individual sub-ports designated by the configuration. Creation and deletion of multiport interfaces is done at the switch controller with configuration information being stored in, for example, non-volatile memory in the switch controller.
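The master/slave relationship described above can be sketched as a request/response exchange in which only the controller initiates configuration requests. The message names (`create_multiport`, `delete_multiport`) are invented for this sketch and are not the actual GSMP message set.

```python
# Hedged sketch of GSMP-style control: the switch controller is the
# master and issues requests; the ATM switch is the slave and only
# answers. Message names here are illustrative, not real GSMP messages.

class SwitchSlave:
    def __init__(self, n_ports):
        self.ports = list(range(1, n_ports + 1))
        self.multiport_ifaces = {}  # iface_id -> list of sub-ports

    def handle(self, msg):
        op, args = msg
        if op == "create_multiport":
            iface_id, sub_ports = args
            assert set(sub_ports) <= set(self.ports)  # ports must exist
            self.multiport_ifaces[iface_id] = list(sub_ports)
            return ("ack", iface_id)
        if op == "delete_multiport":
            self.multiport_ifaces.pop(args, None)
            return ("ack", args)
        return ("error", "unknown request")

class ControllerMaster:
    def __init__(self, switch):
        self.switch = switch  # the slave never initiates requests

    def create_interface(self, iface_id, sub_ports):
        # configuration would also be saved to non-volatile memory here
        return self.switch.handle(("create_multiport", (iface_id, sub_ports)))
```

For example, the controller could group ports 8, 9, 10 and 12 of a 16-port switch into interface 18, mirroring Fig. 1.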
  • the bandwidth and performance of the multiport interface approaches that of a single interface having a bandwidth equal to the sum of that of the individual sub-ports. For example, as seen in Fig. 2, combining four OC3 (155 Mbps) interfaces into a multiport interface 18 creates an equivalent interface having a virtual bandwidth of an OC12 (622 Mbps) connection.
  • the multiport interface 18 is useful in relieving network congestion that might result in trunk connections between basic switching units, such as when traffic from many LANs (e.g., Fast Ethernets) on one of the basic switching units might cross a trunk before reaching an important server or server farm.
  • the present invention can also be used to allocate bandwidth across multiple ports where other communication methods are used.
  • making multiple ports invisible to layer 3 of the OSI reference model reduces the overhead and time required to perform transmissions and to reconfigure as necessary.
  • Switching engine 12b is assumed to contain multiple ports, where each physical port is a combination of an input port and an output port.
  • ATM cells arrive at the ATM switch from an external communication link on incoming virtual channels at an input port, and depart from the ATM switch to an external communication link on outgoing virtual channels from an output port.
  • virtual channels on a port or link are referenced by their VPI/VCI.
  • a virtual channel connection across an ATM switch is formed by connecting an incoming virtual channel (or root) to one or more outgoing virtual channels (or branches). Virtual channel connections are referenced by the input port on which they arrive and the VPI/VCI of their incoming virtual channel.
  • each port has a hardware look-up table indexed by the VPI/VCI of the incoming ATM cell, and entries in the tables are controlled by a local control processor in the switch.
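The per-input-port lookup described above can be sketched as a table mapping an incoming VPI/VCI to one or more outgoing branches; a point-to-multipoint connection simply has several branches. This is a software stand-in for the hardware table, with invented names.

```python
# Sketch of the per-port hardware lookup table: indexed by the incoming
# cell's (VPI, VCI), each entry lists the (output port, VPI, VCI)
# branches of the virtual channel connection. Entries are installed by
# the switch's control processor.

class PortTable:
    def __init__(self):
        self.table = {}  # (vpi, vci) -> list of (out_port, out_vpi, out_vci)

    def connect(self, vpi, vci, branches):
        """Install or update the entry for an incoming virtual channel."""
        self.table[(vpi, vci)] = list(branches)

    def switch_cell(self, vpi, vci):
        """Return the outgoing branches for a cell, or [] if unconnected."""
        return self.table.get((vpi, vci), [])

tbl = PortTable()
tbl.connect(0, 42, [(9, 0, 77)])             # point-to-point connection
tbl.connect(0, 43, [(8, 0, 5), (10, 0, 6)])  # point-to-multipoint (2 leaves)
```

Because virtual channels are unidirectional, each input port owns the labels of its incoming direction, so each physical port gets its own such table.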
  • Fig. 4a is a system block diagram of a typical computer system 151 that may be used as switch controller 12a in basic switching unit 12 (as shown in Fig. 1) to execute the system software of the present invention.
  • Fig. 4a also illustrates an example of the computer system that may be used as switch gateway controller of switch gateway unit 121 (as shown in Fig. 3), as well as serving as an example of a typical computer which may be used as a host computer/server/workstation loaded with a subset of the IFMP software.
  • a monitor, screen, and keyboard are added for the host.
  • computer system 151 includes subsystems such as a central processor 169, system memory 171, I/O controller 173, fixed disk 179, network interface 181, and read-only memory (ROM) 183.
  • the computer system 151 optionally includes monitor 153, keyboard 159, display adapter 175, and removable disk 177, for the host.
  • Arrows such as 185 represent the system bus architecture of computer system 151. However, these arrows are illustrative of any interconnection scheme serving to link the subsystems.
  • a local bus could be utilized to connect central processor 169 to system memory 171 and ROM 183.
  • Configuration information for creation of multiport interfaces may be stored, for example, on ROM 183.
  • Other computer systems suitable for use with the present invention may include additional or fewer subsystems.
  • another computer system could include more than one processor 169 (i.e., a multi-processor system) or a cache memory.
  • the computer used as the switch controller can be a standard Intel-based central processing unit (CPU) machine equipped with a standard peripheral component interconnect (PCI) bus, as well as with an ATM network adapter or network interface card (NIC).
  • the computer is connected to the ATM switch via a 155 Megabits per second (Mbps) ATM link using the ATM NIC.
  • the system software is installed on fixed disk 179 which is the hard drive of the computer.
  • the system software may be stored on a CD-ROM, floppy disk, tape, or other tangible media that stores computer-readable code.
  • Computer system 151 shown in Fig. 4a is but an example of a computer system suitable for use (as the switch controller of a basic switching unit, as the switch gateway controller of a switch gateway unit, or as a host computer/server/workstation) with the present invention.
  • switch gateway unit may be equipped with multiple other NICs to enable connection to various types of LANs.
  • Other NICs or alternative adaptors for different types of LAN backbones may be utilized in switch gateway unit. For example, SMC 10M/100M Ethernet NIC or FDDI NIC may be used.
  • Table 1 provides a list of commercially available components which are useful in operation of the controller, according to the above embodiments. It will be apparent to those of skill in the art that the components listed in Table 1 are merely representative of those which may be used in association with the inventions herein and are provided for the purpose of facilitating assembly of a device in accordance with one particular embodiment of the invention. A wide variety of components readily known to those of skill in the art could readily be substituted or functionality could be combined or separated.
  • the ATM switch hardware provides the switching engine 12b of basic switching unit 12, in accordance with a specific embodiment.
  • the ATM switching engine utilizes vendor-independent ATM switching hardware.
  • the ATM switching engine according to the present invention does not rely on any of its usual connection-oriented ATM routing and signaling software (SSCOP, Q.2931, UNI 3.0/3.1, and P-NNI). Rather, any ATM protocols and software are completely discarded, and the basic switching unit relies on the system software to create and delete multiport interfaces and to control the ATM switching engine for inverse multiplexing of flows.
  • the system software is described in detail later.
  • FIG. 4b is a general block diagram of an architecture of an ATM switch 12b (the example shows a 16-port switch) that may be used as the switching hardware engine of a basic switching unit according to an embodiment of the present invention.
  • commercially available ATM switches also may operate as the switching engine of the basic switching unit according to other embodiments of the present invention.
  • the main functional components of switching hardware 12b include a switch core, a microcontroller complex, and a transceiver subassembly.
  • the switch core performs the layer 2 switching
  • the microcontroller complex provides the system control for the ATM switch
  • the transceiver subassembly provides for the interface and basic transmission and reception of signals from the physical layer.
  • the switch core is based on the MMC Networks ATMS 2000 ATM Switch Chip Set which includes White chip 200, Grey chip 202, MBUF chips 204, Port Interface Device (PIF) chips 206, and common data memory 208.
  • the switch core also may optionally include VC Activity Detector 210, and Early Packet Discard function 212. Packet counters also are included but not shown.
  • White chip 200 provides configuration control and status.
  • Grey chip 202 is responsible for direct addressing and data transfer with the switch tables.
  • MBUF chips 204 are responsible for movement of cell traffic between PIF chips 206 and the common data memory 208. Common data memory 208 is used as cell buffering within the switch.
  • PIF chips 206 manage transfer of data between the MBUF chips to and from the switch port hardware.
  • VC Activity Detector 210, which includes a memory element, provides information on every active virtual channel. Early Packet Discard 212 provides the ability to discard certain ATM cells as needed. Packet counters provide the switch with the ability to count all packets passing all input and output ports. Buses 214, 215, 216, 217, and 218 provide the interface between the various components of the switch.
  • the microcontroller complex includes a central processing unit (CPU) 230, dynamic random access memory (DRAM) 232, read only memory (ROM) 234, flash memory 236, DRAM controller 238, Dual Universal Asynchronous Receiver-Transmitter (DUART) ports 240 and 242, and external timer 244.
  • CPU 230 acts as the microcontroller.
  • ROM 234 acts as the local boot ROM and includes the entire switch code image, basic low-level operating system functionality, and diagnostics.
  • DRAM 232 provides conventional random access memory functions
  • DRAM controller 238 (which may be implemented by a field programmable gate array (FPGA) device or the like) provides refresh control for DRAM 232.
  • FPGA field programmable gate array
  • Flash memory 236 is accessible by the microcontroller for hardware revision control, serial number identification, and various control codes for manufacturability and tracking.
  • DUART Ports 240 and 242 are provided as interfaces to communications resources for diagnostic, monitoring, and other purposes.
  • External timer 244 interrupts CPU 230 as required.
  • Transceiver subassembly includes physical interface devices 246, located between PIF chips 206 and physical transceivers (not shown). Interface devices 246 perform processing of the data stream, and implement the ATM physical layer.
  • the components of the switch may be on a printed circuit board that may reside on a rack for mounting or for setting on a desktop, depending on the chassis that may be used.
  • Table 2 provides a list of commercially available components which are useful in operation of the switching engine, according to the above specific embodiment. It will be apparent to those of skill in the art that the components listed in Table 2 are merely representative of those which may be used in association with the inventions herein and are provided for the purpose of facilitating assembly of a device in accordance with a particular embodiment of the invention. A wide variety of components or available switches readily known to those of skill in the art could readily be substituted or functionality could be combined or separated.
  • Flash memory standard flash memory
  • DRAM controller standard FPGA, ASIC, etc.
  • IFMP is a protocol for instructing an adjacent node to attach a layer 2 "label" to a specified "flow" of packets.
  • a flow is a sequence of packets sent from a particular source to a particular destination(s) that are related in terms of their routing and logical handling policy required.
  • the label (VPI/VCI) specifies a virtual channel and allows cached routing information for that flow to be efficiently accessed.
  • the label also allows further packets belonging to the specified flow to be switched at layer 2 rather than routed at layer 3. That is, if both upstream and downstream links redirect a flow at a particular node in the network, that particular node may switch the flow at the datalink layer, rather than route and forward the flow at the network layer.
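The redirect bookkeeping described above can be sketched in a few lines. All names and data structures below are illustrative inventions for exposition, not the disclosed implementation; a label is modelled as a (VPI, VCI) pair.

```python
# Sketch of IFMP-style flow labelling (illustrative names only). A
# "flow" is keyed by its routing-relevant header fields; a label is a
# (VPI, VCI) pair identifying a virtual channel on a link.

class FlowTable:
    def __init__(self):
        self.upstream = {}    # flow_key -> label on the upstream link
        self.downstream = {}  # flow_key -> label on the downstream link

    def redirect_upstream(self, flow_key, label):
        """Record an IFMP redirect for the upstream link."""
        self.upstream[flow_key] = label

    def redirect_downstream(self, flow_key, label):
        """Record the label chosen by the downstream node."""
        self.downstream[flow_key] = label

    def can_switch(self, flow_key):
        """Layer 2 switching is possible only when both links label the flow."""
        return flow_key in self.upstream and flow_key in self.downstream

flows = FlowTable()
key = ("10.0.0.1", "10.0.0.2")         # hypothetical source/destination pair
flows.redirect_upstream(key, (0, 42))  # VPI 0, VCI 42 chosen on upstream link
assert not flows.can_switch(key)       # still routed at layer 3
flows.redirect_downstream(key, (0, 57))
assert flows.can_switch(key)           # now switchable at layer 2
```

The point of the sketch is the final condition: only when both the upstream and downstream links have redirected the flow can the node cut the flow through at the datalink layer.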
  • Fig. 5a is a simplified diagram generally illustrating the initialization procedure in each system node according to an embodiment of the present invention.
  • Upon system startup at step 260, each system node establishes default virtual channels on all ports in step 262. Then at step 264 each system node waits for packets to arrive on any port.
  • Fig. 5b is a simplified diagram that generally illustrates the operation of a system node dynamically shifting between layer 3 routing and layer 2 switching according to the present invention.
  • a packet arrives on a port of the system node at step 266. If the packet is received on a default virtual channel (step 268), the system node performs a flow classification on the packet at step 270.
  • Flow classification involves determining whether the packet belongs to a type of flow.
  • the system node determines whether that flow to which the packet belongs should preferably be switched. If the system node determines that the flow should be switched, the system node labels the flow in step 274 then proceeds to forward the packet in step 276.
  • After forwarding the packet, the system node waits for a packet to arrive in step 282. Once a packet arrives, the system node returns to step 266. If the system node determines at step 268 that the packet did not arrive on the default virtual channel, the system node does not perform flow classification at step 270 on the packet. When a packet arrives on an alternate virtual channel, the packet belongs to a flow that has already been labelled. Accordingly, if the flow is also labelled downstream (step 278), the system node switches the flow in step 280. Switching the flow involves making a connection within the switch between the label of the upstream link and the label of the downstream link. After switching the flow in step 280, the system node at step 276 forwards the packet downstream.
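The route-versus-switch decision of Fig. 5b can be sketched as a small dispatch function. Everything here is a toy model with assumed names (for instance, the upstream redirect is treated as implying the downstream-label check), not the actual system software.

```python
# Toy sketch of the Fig. 5b packet-arrival procedure. The Node class,
# its policy, and the downstream-label modelling are assumptions made
# for illustration.

DEFAULT_VC = (0, 15)   # assumed label for the default virtual channel

class Node:
    def __init__(self):
        self.redirected = set()  # flows labelled via IFMP redirect
        self.switched = set()    # flows cut through at layer 2

    def should_switch(self, flow):
        return True              # toy policy: switch every classified flow

    def label_flow(self, flow):
        self.redirected.add(flow)          # step 274

    def labelled_downstream(self, flow):
        # Toy simplification: treat a redirected flow as also labelled
        # on the downstream link.
        return flow in self.redirected

    def switch_flow(self, flow):
        self.switched.add(flow)            # step 280

def handle_packet(node, vc, flow):
    if vc == DEFAULT_VC:                   # step 268: default channel?
        if node.should_switch(flow):       # steps 270/272: classify flow
            node.label_flow(flow)          # step 274: redirect upstream
        return "forward"                   # step 276
    if node.labelled_downstream(flow):     # step 278
        node.switch_flow(flow)             # step 280: layer 2 connection
    return "forward"                       # step 276

n = Node()
handle_packet(n, DEFAULT_VC, "flowA")  # first packet: classified, redirected
handle_packet(n, (0, 42), "flowA")     # later packet arrives on labelled VC
assert "flowA" in n.switched           # now switched entirely at layer 2
```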
  • Fig. 6a is a diagram generally illustrating the steps involved in labelling a flow in the upstream link of a system node, such as shown by label flow step 274 of Fig. 5b.
  • the system node labels a flow as shown in steps 290, 292, 300 and 276 of Fig. 6a.
  • the label flow step begins (step 290)
  • the system node selects a free label x on the upstream link in step 292.
  • the system node then sends an IFMP redirect message on the upstream link in step 300 (as indicated by dotted line 293).
  • the system node then forwards the packet in step 276.
  • labelling a flow is also illustrated by steps 294, 296, and 298.
  • the basic switching unit selects a free label x on the upstream link in step 292.
  • the switch controller of the basic switching unit selects a temporary label x' on the control port of the switch controller in step 294.
  • the switch controller then sends to the hardware switching engine a GSMP message to map label x on the upstream link to label x' on the control port.
  • the switch controller then waits in step 298 until a GSMP acknowledge message is received from the hardware switching engine that indicates that the mapping is successful.
  • the basic switching unit sends an IFMP redirect message on the upstream link in step 300.
  • the system node returns to step 276 as shown in Fig. 5b.
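The Fig. 6a sequence (steps 292, 294, 296, 298, 300) can be sketched as follows; the message name and the modelling of the GSMP acknowledgment as a table lookup are assumptions for illustration, not the real binary protocol encodings.

```python
# Hypothetical sketch of the label-flow sequence inside a basic
# switching unit. The GSMP map and its acknowledgment are modelled as
# a dictionary update and lookup.

def label_flow(free_labels, control_port_labels, switch_table):
    x = free_labels.pop(0)                # step 292: free label x, upstream link
    x_prime = control_port_labels.pop(0)  # step 294: temporary label x'
    switch_table[x] = x_prime             # step 296: GSMP map x -> x'
    acked = switch_table.get(x) == x_prime  # step 298: GSMP ack (modelled)
    if not acked:
        raise RuntimeError("GSMP mapping failed")
    return ("IFMP_REDIRECT", x)           # step 300: redirect sent upstream

table = {}
msg = label_flow([100, 101], [900], table)
assert msg == ("IFMP_REDIRECT", 100)
assert table[100] == 900   # flow initially delivered to the control port
```

The temporary mapping to the control port lets the controller keep receiving (and routing) the flow until the downstream label is known.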
  • Fig. 6b is a diagram generally illustrating the steps involved in switching a flow in a basic switching unit, such as shown by switch flow step 280 of Fig. 5b.
  • switch flow step 280 of Fig. 5b.
  • the switch controller in the basic switching unit sends at step 312 a GSMP message to map label x on the upstream link to the label y on the downstream link.
  • Label y is the label which the node downstream to the basic switching unit has assigned to the flow.
  • this downstream node has labelled the flow in the manner specified by Figs. 5b and 6a, with the free label y being selected in step 292.
  • After step 312, the switch controller in the basic switching unit waits in step 314 for a GSMP acknowledge message from the hardware switching engine in the basic switching unit to indicate that the mapping is successful. The flow is thereby switched in layer 2 entirely within the hardware switching engine in the basic switching unit. Then the basic switching unit proceeds to forward the packet in step 276.
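The Fig. 6b switch-flow step can be sketched in the same toy style. The switch table is modelled as a dictionary and the GSMP acknowledgment as a lookup; both are illustrative assumptions, not the hardware interface.

```python
# Sketch of the Fig. 6b switch-flow step: once the downstream node has
# assigned label y, upstream label x is remapped from the temporary
# control-port label to y, so cells cut through in hardware.

def switch_flow(switch_table, x, y):
    switch_table[x] = y              # step 312: GSMP map upstream x -> y
    return switch_table.get(x) == y  # step 314: GSMP ack (modelled)

table = {100: "x_prime"}         # flow currently mapped to the control port
assert switch_flow(table, 100, 200)
assert table[100] == 200         # layer 2 path now bypasses the controller
```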
  • Fig. 6c is a diagram generally illustrating the steps involved in forwarding a packet in a system node, such as shown by forward packet step 276 of Fig. 5b.
  • a system node at step 318 starts the forward packet procedure. If the flow to which the packet belongs is not labelled on the downstream link (step 320), then the system node sends the packet on the default virtual channel on the downstream link in step 322 and then goes to a wait state 282 to wait for arrival of packets.
  • If the flow is labelled on the downstream link, the system node checks at step 326 whether the lifetime for the redirection of that flow has expired. If the lifetime has not expired, then the system node sends the packet on the virtual channel labelled in the IFMP redirect message at step 328 and then goes to wait state 282. If the lifetime has expired, then the system node automatically deletes the flow redirection at step 330. The system node then proceeds to send the packet on the default channel (step 322) and returns to the wait state of step 282 as shown in Fig. 5b.
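The Fig. 6c forwarding decision, including the redirect lifetime check, can be sketched as follows. The redirect table layout and timer handling are assumptions for illustration.

```python
# Sketch of the Fig. 6c forward-packet procedure. redirects maps a
# flow to (label, expiry_time); expired redirections are deleted and
# traffic falls back to the default virtual channel.

def forward(packet_flow, redirects, now):
    entry = redirects.get(packet_flow)
    if entry is None:
        return "default_vc"          # step 322: no downstream label
    label, expires = entry
    if now < expires:                # step 326: lifetime not expired
        return label                 # step 328: send on labelled VC
    del redirects[packet_flow]       # step 330: delete the redirection
    return "default_vc"              # step 322: fall back to default

r = {"flowA": ((0, 42), 60.0)}
assert forward("flowA", r, now=10.0) == (0, 42)
assert forward("flowA", r, now=61.0) == "default_vc"
assert "flowA" not in r              # redirection was deleted
assert forward("flowB", r, now=10.0) == "default_vc"
```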
  • When a packet is forwarded or switched from an upstream node and received at the downstream node, the downstream node proceeds to forward the packet traffic on the chosen label.
  • multiport interfaces are created and deleted by the switch controller of a basic switching unit, where configuration information may be stored in non-volatile memory in the basic switching unit.
  • Fig. 7a is a simplified diagram generally illustrating the multiport interface establishing procedure in a basic switching unit according to an embodiment of the present invention. It is noted that both the upstream and downstream basic switching units are configured for the same multiport interfaces. Configuring a multiport interface may be achieved in the specific embodiment by a command to define the multiport interface.
  • the switch controller of the upstream basic switching unit defines a multiport interface 18 by a command (e.g., define mpif 8 9 10 12) which designates, in this example, ports 8, 9, 10 and 12 to be sub-ports of multiport interface 18.
  • a command e.g., define mpif 8 9 10 12
  • Such a command may number the resulting multiport interface as port 8, distinctly designated as a multiport interface (e.g., as ips0_8).
  • Managing the multiport interface may be achieved with other commands to show the multiport interface, and to delete the multiport interface.
  • the switching engine may return an acknowledgment message indicating the successful definition of the multiport interface (e.g., multiport interface 8 successfully defined).
  • Using interface numbers that correspond to real ports on the switching engine allows additional ports of the engine to be added as sub-ports of the multiport interface as bandwidth requirements increase, without the need to change the configuration of the switch controller.
  • Configuration of multiport interfaces requires that the switching engine and the switch controller re-initialize (step 356) their communication in order to exchange the new list of available interfaces. This re-initialization typically may be performed by rebooting. After re-initialization, at step 264 each basic switching unit waits for packets to arrive on any port.
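The multiport-interface definition command described above can be sketched as a small parser. The exact CLI syntax, the ips0_8 naming scheme, and the returned structure are taken from the examples in the text; the parser itself is a hypothetical illustration.

```python
# Sketch of parsing a "define mpif" configuration command, e.g.
# "define mpif 8 9 10 12". The resulting interface is numbered after
# its first sub-port (ips0_8 in the text's example).

def define_mpif(command):
    parts = command.split()
    if parts[:2] != ["define", "mpif"]:
        raise ValueError("expected: define mpif <port> <port> ...")
    sub_ports = [int(p) for p in parts[2:]]
    return {"name": "ips0_%d" % sub_ports[0], "sub_ports": sub_ports}

mpif = define_mpif("define mpif 8 9 10 12")
assert mpif["name"] == "ips0_8"
assert mpif["sub_ports"] == [8, 9, 10, 12]
```

After such a definition, both the upstream and downstream units must be configured identically and re-initialize their GSMP session so the new interface list is exchanged.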
  • Fig. 7b is a diagram generally illustrating some of the steps involved in determining whether a flow should be switched in basic switching units according to an embodiment of the present invention.
  • Fig. 7b continues at step 272.
  • a decision is made (step 360 in Fig. 7b) whether that flow is to be switched onto a defined multiport interface.
  • If the flow is to be switched onto a defined multiport interface, then that flow is labelled (step 366) for the defined multiport interface and then forwarded (step 276). After forwarding of the packet in step 276, the process continues from step 282 of Fig. 5b. If the determination in step 272 is that the flow should not be switched at all, then the packet is merely forwarded per step 276. If the determination in step 360 is that the flow should be switched but not onto a defined multiport interface, then the switch controller labels that flow for the desired port (step 274) in the usual manner (as described for Fig. 6a), then forwards the packet in step 276.
  • Fig. 7c is a diagram generally illustrating the steps involved in labelling a flow in the upstream link for a designated multiport interface in a basic switching unit, such as shown by label flow step 366 of Fig. 7b according to an embodiment of the present invention.
  • label flow step begins (step 290)
  • the level of traffic on the sub-ports of the designated multiport interface is determined in step 376.
  • this determination may occur through the use of GSMP configuration messages sent by the switch controller that request statistics (e.g., the level of traffic on the sub-ports) from the switching engine, which sends an appropriate response message.
  • a label on the upstream link for the sub-port having the lowest level of traffic is selected in step 380. If all available sub-ports are running at full line rate, then the new flow is added to the particular sub-port with the shortest queue of waiting traffic at the given priority level. If all sub-ports are running at less than the full line rate but are running at equal outgoing cell rates, then the lowest numbered port will carry the flow, in accordance with a specific embodiment. If there is failure of the sub-port (determined at step 384), steps 376 and 380 are repeated. If there is no failure of the sub-port, then the switch controller selects a free label x on the upstream link on the sub-port in step 292.
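The sub-port selection policy of step 380 can be sketched as follows. The statistics format (cell rate plus queue length per sub-port) and the FULL_RATE threshold are illustrative assumptions; the real values would come from GSMP statistics messages.

```python
# Sketch of the step 380 policy: prefer the sub-port with the lowest
# traffic; if all sub-ports run at full line rate, pick the shortest
# queue; ties go to the lowest numbered port.

FULL_RATE = 100   # assumed line rate, in arbitrary units

def pick_subport(stats):
    """stats: {port_number: (outgoing_cell_rate, queue_length)}."""
    if all(rate >= FULL_RATE for rate, _ in stats.values()):
        # All at line rate: shortest waiting queue wins.
        return min(stats, key=lambda p: (stats[p][1], p))
    # Otherwise: lowest outgoing cell rate wins, ties to lowest port.
    return min(stats, key=lambda p: (stats[p][0], p))

assert pick_subport({8: (40, 0), 9: (10, 0), 10: (70, 0)}) == 9
assert pick_subport({8: (50, 0), 9: (50, 0)}) == 8        # equal rates
assert pick_subport({8: (100, 7), 9: (100, 2)}) == 9      # all full: queue
```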
  • the switch controller of the basic switching unit selects a temporary label x' on the sub-port in step 388.
  • the switch controller then sends to the switching engine a GSMP message to map label x on the upstream link to label x' on the sub-port.
  • the switch controller then waits in step 298 until a GSMP acknowledge message is received from the switching engine indicating that the mapping is successful.
  • the basic switching unit sends an IFMP redirect message on the upstream link in step 300.
  • the basic switching unit returns to step 276 of Fig. 5b.
  • Fig. 7d is a simplified diagram that generally illustrates some of the steps of the operation of the basic switching unit according to the specific embodiment of the present invention.
  • the basic switching unit does not switch the flow but rather forwards the packet downstream in step 276 and proceeds with step 282 of Fig. 5b.
  • the upstream basic switching unit 12 switches the flow for the designated port in step 280.
  • switching the flow involves making a connection within the switch between the label of the upstream link and the label of the downstream link.
  • step 396 determines that the flow label is for a multiport interface
  • the upstream basic switching unit 12 switches the flow for the designated multiport interface in step 280.
  • the basic switching unit 12 at step 276 forwards the packet downstream.
  • When a packet is forwarded or switched from an upstream node and received at the downstream node, the downstream node proceeds to forward the packet traffic on the chosen label from all sub-ports of the multiport interface.
  • inverse multiplexing across sub-ports of a multiport interface may be accomplished on a flow-by-flow basis with the net result being that traffic is distributed fairly evenly across the sub-ports.
  • inverse multiplexing across sub-ports of a multiport interface may alternatively be accomplished on a flow-by-flow basis in another manner, with the net result that traffic is distributed across the sub-ports such that flows are not balanced across all the specified sub-ports; instead, the current sub-port is fully loaded with flows before a flow is added to the next sub-port.
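The two distribution policies just described, balanced versus fill-first (the latter being the behavior of weighted multiport interfaces), can be contrasted in a short sketch. Function names, per-port flow counts, and the capacity limit are illustrative assumptions.

```python
# Sketch contrasting the two flow-distribution policies: spread flows
# evenly across sub-ports, or fill the current sub-port before moving
# to the next one.

def assign_balanced(flow_counts):
    """Add the new flow to the least-loaded sub-port (ties: lowest port)."""
    port = min(flow_counts, key=lambda p: (flow_counts[p], p))
    flow_counts[port] += 1
    return port

def assign_fill_first(flow_counts, capacity):
    """Fill sub-ports in numeric order before spilling to the next."""
    for port in sorted(flow_counts):
        if flow_counts[port] < capacity:
            flow_counts[port] += 1
            return port
    raise RuntimeError("all sub-ports full")

counts = {8: 0, 9: 0, 10: 0}
assert [assign_balanced(counts) for _ in range(4)] == [8, 9, 10, 8]

counts = {8: 0, 9: 0, 10: 0}
assert [assign_fill_first(counts, 2) for _ in range(5)] == [8, 8, 9, 9, 10]
```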
  • the present specific embodiment is achieved by using weighted multiport interfaces, which are configured in a similar manner as multiport interfaces.
  • both the upstream and downstream basic switching units are each configured for the same weighted multiport interfaces.
  • Configuring a weighted multiport interface may be achieved in the specific embodiment by a command to define the weighted multiport interface.
  • the switch controller of the upstream basic switching unit defines a weighted multiport interface by a command (e.g., define wmpif 8 9 10 12) which designates, in this example, ports 8, 9, 10 and 12 to be sub-ports of the weighted multiport interface.
  • a command e.g., define wmpif 8 9 10 12
  • managing the weighted multiport interface may be achieved with other commands to show the weighted multiport interface, and to delete the weighted multiport interface.
  • VPIs virtual path interfaces
  • WAN wide area network
  • a multiport interface is first created and the basic switching unit configured, in a similar manner as described for Fig. 7a.
  • a multiport interface is created by the command (e.g., define mpif 8 9 10 11) which designates, in this example, ports 8, 9, 10 and 11 to be sub-ports of multiport interface numbered as port 8.
  • the switching engine may return an acknowledgment message indicating the successful definition of the multiport interface (e.g., multiport interface 8 successfully defined).
  • a virtual path interface is configured in this specific embodiment by a command to define the virtual path interface.
  • the switch controller of the upstream basic switching unit defines a virtual path interface by a command (e.g., define vpif 8 5 17).
  • a command e.g., define vpif 8 5 17
  • the above commands combine the virtual path interface 5 on ports 8-11 into a virtual path interface numbered 17.
  • the same VPI number should be used on all sub-ports of the created multiport interface to carry traffic across the WAN.
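The combined vpif-over-mpif configuration described above can be sketched as building one virtual path entry per sub-port, all sharing the same VPI number so the traffic can be carried across the WAN. Command numbers and field names are illustrative, taken from the example in the text.

```python
# Sketch of combining a virtual path interface with a multiport
# interface: VPI 5 on each of sub-ports 8-11 is gathered into one
# virtual path interface numbered 17 (the text's example).

def define_vpif(mpif_sub_ports, vpi, interface_number):
    # Every sub-port must carry the same VPI number across the WAN.
    return {port: {"vpi": vpi, "vpif": interface_number}
            for port in mpif_sub_ports}

vpif = define_vpif([8, 9, 10, 11], vpi=5, interface_number=17)
assert sorted(vpif) == [8, 9, 10, 11]
assert all(entry["vpi"] == 5 for entry in vpif.values())
assert all(entry["vpif"] == 17 for entry in vpif.values())
```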
  • Managing the virtual path interfaces may be achieved with other commands to show the virtual path interface, and to delete the virtual path interface.
  • the switching engine may return an acknowledgment message indicating the successful definition of the virtual path interface (e.g., virtual path interface 17 on port 8 vpi 5 successfully defined).
  • Configuration of a virtual path interface combined with a multiport interface requires that the switching engine and the switch controller re-initialize (e.g., by rebooting) their communication in order to exchange the new list of available interfaces.
  • the source code of the system software (© Copyright, Unpublished Work, Ipsilon Networks, Inc., All Rights Reserved) for use on the switch controller of a basic switching unit is included as Appendix I.
  • Appendix I includes the system software for configuration and operation of multiport interfaces, flow characterization and direction on sub-ports, interfacing with IFMP and GSMP protocols, routing and forwarding, device drivers, operating system interfaces, as well as drivers and modules.
  • the inventions claimed herein provide an improved method and apparatus for transmitting packets over a network by inverse multiplexing IP switched flows over a multiport interface between basic switching units. It is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments will be apparent to those of skill in the art upon reviewing the above description. By way of example, the inventions herein have been illustrated primarily with regard to transmission of IP packets capable of carrying voice, video, image, facsimile, and data signals, but they are not so limited. By way of further example, the invention has been illustrated in conjunction with specific components and operating speeds, but the invention is not so limited. The scope of the inventions should, therefore, be determined not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled, by one of ordinary skill in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method and apparatus for dynamically shifting between switching and routing packets efficiently to provide high packet throughput. The present invention designates multiple ports of an IP switched router for communication as a single interface with a second adjacent IP switched router to provide increased bandwidth between the network interfaces on the IP switched routers. The multiple designated ports are monitored by both IP switched routers for communications from the other. Data flows are queued up and inverse multiplexed over the multiple ports to optimize the available bandwidth. The inverse multiplexing is done at layer 2 of the OSI reference model, so that layer 3 does not have to know about any reallocation among the multiple ports.

Description

MULTIPORT INTERFACES FOR A NETWORK USING INVERSE MULTIPLEXED IP SWITCHED FLOWS
CROSS-REFERENCE TO RELATED APPLICATIONS This application claims priority from commonly-owned U.S. provisional patent application no. 60/030,348 filed on November 6, 1996, the disclosure of which is herein incorporated by reference for all purposes. This application is also a continuation-in-part application of commonly-assigned U.S. patent application no. 08/792,183 filed on January 30, 1997, which is a continuation-in-part of U.S. patent application no. 08/597,520 filed on January 31, 1996, the disclosures of which are herein incorporated by reference for all purposes.
COPYRIGHT NOTICE A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION The present invention relates to the field of network communications. In particular, a specific embodiment of the invention relates to improving the bandwidth of communications between the network interfaces on two adjacent "IP switched routers. " Local area network (LAN) switches have been conventionally used as a quick, relatively inexpensive way to relieve congestion on shared-media LAN segments to more effectively manage traffic and allocate bandwidth within a LAN than shared-media hubs or simple bridges. LAN switches operate as datalink layer (layer 2 of the OSI reference model) packet-forwarding hardware engines, dealing with media access control (MAC) addresses and performing simple table look-up functions. Although switch-based networks are able to offer greater throughput, they continue to suffer from problems such as broadcast flooding and poor security. Routers, which operate at the network-layer (layer 3 of the OSI reference model), are still required to solve these types of problems. However, fast switching technology is overwhelming the capabilities of current routers, creating router bottlenecks. The traditional IP packet-forwarding device on which the Internet is based, the IP router, is showing signs of inadequacy. In addition, routers are often expensive, complex, and of limited throughput, as compared to emerging switching technology. To support the increased traffic demand of large enterprise-wide networks and the Internet, IP routers need to operate faster and cost less. In current routers, multiple paths are available between two routers and a router may switch data between one or more paths by making a decision at layer 3 of the OSI reference model. These traditional routers achieve throughput in the hundreds of thousands of packets-per-second range. 
However, as the need for even greater throughput increases and advanced functionalities required by more types of traffic are enabled in IP, traditional IP routers will not suffice as packet-forwarding devices, especially since these routers are often limited by their processor-intensive designs.
From the above, it is seen that another approach for avoiding bottlenecks and increasing packet throughput between nodes is needed.
SUMMARY OF THE INVENTION The present invention designates multiple ports of an IP switched router for communication as a single interface with a second adjacent IP switched router to provide increased bandwidth between the network interfaces on the IP switched routers. The multiple designated ports are monitored by both IP switched routers for communications from the other. Data flows are queued up and inverse multiplexed over the multiple ports to optimize the available bandwidth. The inverse multiplexing is done at layer 2 of the OSI reference model, so that layer 3 does not have to know about any reallocation among the multiple ports.
According to an embodiment, the present invention provides a method for transmitting packets over a multiport interface between an upstream node and a downstream node in a network, where the downstream node is downstream from the upstream node. The method includes the steps of establishing a multiport interface that includes multiple sub-ports between the upstream node and the downstream node, receiving a packet at the downstream node, and performing a flow classification at the downstream node on the packet to determine whether the packet belongs to a specified flow that should be redirected in the upstream node to the multiport interface. The method also includes the steps of selecting a free label for one of the multiple sub-ports at the downstream node, and informing the upstream node that future packets belonging to the specified flow should be sent with the selected free label attached.
According to another embodiment, the present invention provides a computer program product that enables dynamic shifting between routing and switching in a network having an upstream node and a downstream node. The downstream node is downstream from the upstream node. The computer program product includes computer- readable code that establishes a multiport interface which includes multiple sub-ports between the upstream node and the downstream node, and computer-readable code that performs a flow classification on a packet at the downstream node to determine whether the packet belongs to a specified flow that should be redirected in the upstream node to the multiport interface. The computer program product also includes computer-readable code that selects a free label for one of the multiple sub-ports at the downstream node, computer-readable code that informs the upstream node that future packets belonging to the specified flow should be sent with the selected first free label attached, and a tangible medium that stores the computer-readable codes.
These and other embodiments of the present invention, as well as its advantages and features, are described in more detail in conjunction with the text below and attached figures.
BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a diagram illustrating multiple port connections between two "IP switched routers, " in accordance with a specific embodiment of the present invention;
Fig. 2 is a diagram illustrating the queuing of multiple flows for the multiple ports, in accordance with the specific embodiment of the present invention;
Fig. 3 illustrates one of the many network configurations possible in accordance with the present invention; Fig. 4a is a system block diagram of a typical computer system 151 that may be used as switch controller 12a in basic switching unit 12 (as shown in Fig. 1) to execute a specific embodiment of the system software of the present invention;
Fig. 4b is a general block diagram of an architecture of an ATM switch 3 (the example shows a 16-port switch) that may be used as the switching hardware engine of a basic switching unit according to an embodiment of the present invention;
Fig. 5a is a simplified diagram generally illustrating the initialization procedure in each system node according to an embodiment of the present invention; Fig. 5b is a simplified diagram that generally illustrates the operation of a system node according to an embodiment of the present invention;
Fig. 6a is a diagram generally illustrating the steps involved in labelling a flow in a system node according to an embodiment of the present invention;
Fig. 6b is a diagram generally illustrating the steps involved in switching a flow in a basic switching unit according to an embodiment of the present invention;
Fig. 6c is a diagram generally illustrating the steps involved in forwarding a packet in a system node according to an embodiment of the present invention;
Fig. 7a is a simplified diagram generally illustrating the multiport interface establishing procedure in a basic switching unit according to an embodiment of the present invention;
Fig. 7b is a diagram generally illustrating some of the steps involved in determining whether a flow should be switched in basic switching units according to an embodiment of the present invention;
Fig. 7c is a diagram generally illustrating the steps involved in labelling a flow in the upstream link for a designated multiport interface in a basic switching unit, such as shown by label flow step 366 of Fig. 7b according to an embodiment of the present invention; and
Fig. 7d is a simplified diagram that generally illustrates some of the steps of the operation of the basic switching unit according to the specific embodiment of the present invention. DESCRIPTION OF THE SPECIFIC EMBODIMENTS CONTENTS
I. General
A. Inverse Multiplexed Transmission of Flow Labelled Packets B. Flow Classification, IFMP, GSMP in the Specific Embodiment
II. System Hardware
A. Controller Hardware
B. Switching Hardware
III. System Software Functionality A. Configuration of Multiport Interfaces
B. Flow Distribution on Multiport Interfaces
IV. Conclusion
I. General The present invention provides for a multiport interface made of two or more sub-ports used as a single interface, with flow-by-flow inverse multiplexing, to provide at layer 2 of the OSI reference model very high speed trunking capability between "IP switched routers," also referred to as "basic switching units." The multiport interface increases the effective bandwidth for transmitting packets in a network. The method and apparatus will find particular utility and is illustrated herein as it is applied in the high throughput flow-based transmission of IP packets capable of carrying voice, video, and data signals over a local area network (LAN), metropolitan area networks (MAN), wide area network (WAN), Internet, or the like, but the invention is not so limited. The invention will find use in a wide variety of applications where it is desired to transmit packets over a network.
A. Inverse Multiplexed Transmission of Flow Labelled Packets
In accordance with specific embodiments of the present invention, Fig. 1 illustrates two switch controllers 12a and 14a, coupled to switching engines 12b and 14b, respectively. Each corresponding pair of switch controller and switching engine forms what is referred to as an "IP switched router" or a "basic switching unit" (basic switching unit 12 includes switch controller 12a and switching engine 12b, and basic switching unit 14 includes switch controller 14a and switching engine 14b), a specific embodiment of which is described in further detail below. Although referred to as a "switching" unit, it should be recognized that the basic switching unit of the system, via system software installed on its switch controller, dynamically provides both layer 2 switching functionality and layer 3 routing and packet forwarding functionality. In this particular example, the switching engine, which utilizes conventional and currently available asynchronous transfer mode (ATM) switching hardware, is an ATM switch. The ATM switching hardware providing the switching engine of the basic switching unit operates at the datalink layer (layer 2 of the OSI reference model). Any of the software normally associated with the ATM switch that is above the ATM Adaptation Layer type 5 (AAL-5) is completely removed. Thus, the signalling, any existing routing protocol, and any LAN emulation servers or address resolution servers, etc. are removed. Of course, other switching technologies, such as fast packet switching, frame relay, 100BaseT Fast Ethernet, Gigabit Ethernet, or Fiber Distributed Data Interface (FDDI), may be used to provide the switching engine of the basic switching unit, depending on the application. The switch controller is a computer having multiple network adapters or network interface cards (NICs) connected to the switching engine via multiport interface 18.
System software is installed in the basic switching unit, more particularly in the computer serving as the switch controller. The switching engine serves to perform high-speed switching functions when required by the basic switching unit, as determined by the system software. The switching capability of the switching system is limited only by the hardware used in the switching engine. Accordingly, the present embodiment of the invention is able to take advantage of the high-speed, high-capacity, high-bandwidth capabilities of ATM technology. In addition to performing standard connectionless IP routing functions at layer 3, the switch controller also makes flow classification decisions for packets on a local basis, as described generally below.
In accordance with a specific embodiment of the present invention, each of these basic switching units can also communicate with other nodes in a network or other networks or servers via ports 16, for example. In one possible network configuration, a trunk interface between the two basic switching units may be used. The switching engine of each basic switching unit has multiple physical ports, each being capable of being connected to a variety of devices, including for example data terminal equipment (DTE), data communication equipment (DCE), servers, switches, gateways, etc. One of these multiple ports, for example port 1, is used to provide the communication link between the switch controller and the switching engine. Two or more of these multiple ports may be used as a trunk interface to form a single multiport interface 18. For example, ports 8, 9, 10 and 12 of the basic switching unit 12 may be designated to be sub-ports of multiport interface 18, as described further below. Multiport interface 18 is created by combining several ports of the switching engine of a basic switching unit and having it appear as a single interface to the switch controller of the basic switching unit.
Fig. 2 illustrates an example of flow-by-flow inverse multiplexing across the multiple sub-ports of multiport interface 18. By way of illustration, flows 1-12 are shown being allocated to different queues for ports 8, 9, 10 and 12 of switching engine 12b. More specifically in this example, flows 4, 5 and 8 have been allocated to sub-port 8; flows 3, 9 and 11 have been allocated to sub-port 9; flows 2 and 7 have been allocated to sub-port 10; and flows 1, 6, 10 and 12 have been allocated to sub-port 12. The bandwidth can be maximized by evenly spreading the flows across the multiple sub-ports to the extent possible. In addition, there is an optimization between (a) the multiport interface or trunk 18 between basic switching units 12 and 14, and (b) the multiple connections to other nodes. If the bandwidth required on other ports 16 to other nodes increases, one of the ports allocated to multiport interface 18 could be reallocated to communicate with another node. Conversely, if the bandwidth of traffic to other nodes decreases, more ports could be allocated to the multiport trunk interface 18 between basic switching units 12 and 14. With specific embodiments of the present invention, various network configurations may be implemented to provide end-to-end seamless IP traffic flow, with the network configurations featuring high bandwidth and high throughput between network interfaces on basic switching units 12 and 14 via the flow-by-flow inverse multiplexing over multiport interface 18 established between basic switching units 12 and 14. For example, Fig. 3 illustrates one of the many network configurations possible in accordance with the present invention. Of course, many alternate configurations are possible. In one embodiment, multiport interface 18 could be used between basic switching units 12 and 14 as shown in Fig. 3, according to the present invention.
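By way of illustration only, the flow-by-flow allocation described above may be sketched as follows. The least-loaded allocation policy, the class names, and the treatment of flow counts as a load measure are illustrative assumptions, not limitations of the described embodiment; the sub-port numbers mirror the Fig. 2 example.

```python
# Hypothetical sketch of flow-by-flow inverse multiplexing: each new flow
# is pinned to one sub-port of the multiport interface (preserving cell
# ordering within a flow) while load spreads evenly across the sub-ports.
# The "fewest flows wins" policy below is an assumption for illustration.

class MultiportInterface:
    def __init__(self, sub_ports):
        # flows currently assigned to each sub-port
        self.assignments = {port: [] for port in sub_ports}

    def allocate(self, flow_id):
        # choose the sub-port carrying the fewest flows (ties -> lowest port)
        port = min(self.assignments,
                   key=lambda p: (len(self.assignments[p]), p))
        self.assignments[port].append(flow_id)
        return port

    def release(self, flow_id):
        # a sub-port drained of flows could be reallocated to another node
        for flows in self.assignments.values():
            if flow_id in flows:
                flows.remove(flow_id)
                return

trunk = MultiportInterface([8, 9, 10, 12])
ports = [trunk.allocate(f) for f in range(1, 13)]  # twelve flows, as in Fig. 2
```

With twelve flows and four sub-ports, the policy leaves three flows on each sub-port, approximating the even spread described above.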
Basic switching units, switch gateway units, and system software allow users to build flexible IP network topologies targeted at workgroup, campus, and WAN environments, providing a high-performance, scalable solution to current campus backbone congestion problems.
More specifically, Fig. 3 shows a simplified diagram of a high performance workgroup environment in which several host computers 145 are connected via ATM links 133m to multiple basic switching units 12 and 14, which each connect to a switch gateway unit 121 that connects to a LAN 135 with user devices 141. In this configuration, a first basic switching unit 12 connects to a second basic switching unit 14 via multiport interface 18, such as seen in Fig. 1. Multiple host computers 145 connect to the first basic switching unit 12 via respective 155 Mbps ATM links 133x (where x = 2 to 5) through respective ATM NICs 147. In addition, multiple host computers 145 connect to the second basic switching unit 14 via respective 25 Mbps ATM links 133y (where y = 8 to 10) through respective ATM NICs 149. As discussed above, host computers 145 equipped with ATM NICs are installed with a subset of the system software, enabling the TCP/IP hosts to connect directly to a basic switching unit. The first and second basic switching units 12 and 14 connect to switch gateway unit 121 via ATM links 1336 (155 Mbps) and 1337 (25 Mbps) respectively. Connection of the first and second basic switching units 12 and 14 to switch gateway unit 121 via an Ethernet (e.g., 10BaseT) or FDDI link 139 enables users of host computers 145 to communicate with user devices 141 attached to LAN 135. User devices 141 may be PCs, terminals, or workstations having appropriate NICs 143 to connect to an Ethernet or FDDI LAN 135. The workgroup of host computers is thereby seamlessly integrated with the rest of the campus network.
It is noted that a "switch gateway unit," which is similar to a basic switching unit without a switching engine, includes a gateway switch controller and IFMP software installed on the gateway switch controller, in accordance with a specific embodiment. The gateway switch controller includes multiple network adapters or NICs, and an ATM NIC. The switch gateway unit serves as an access device to enable connection of existing LAN and backbone environments to a network of basic switching units. Accordingly, the NICs of the switch gateway unit may be of different types, such as Ethernet NICs, Fast Ethernet NICs, FDDI NICs, and others, or any combination of the preceding. Of course, the use of particular types of NICs depends on the types of existing LAN and backbone environments to which the switch gateway unit provides access. It is recognized that multiple LANs may be connected to a switch gateway unit. The ATM NIC allows the switch gateway unit to connect via an ATM link to a basic switching unit. Of course, other ones of the multiple NICs may also be ATM NICs to provide a connection from the switch gateway unit to another switch gateway unit. In addition to basic switching units and switch gateway units, networks utilizing the present invention may also include high performance host computers, workstations, or servers that are appropriately equipped. In particular, a subset of the IFMP software can be installed on a host computer, workstation, or server equipped with an appropriate ATM NIC to enable the host to connect directly to a basic switching unit.
According to specific embodiments of the present invention, system software on the switch controller of a basic switching unit can create and delete multiport interfaces and then direct the switching engine to switch flows on the multiport interface, implementing the inverse multiplexing of flows over multiport interface 18. The system software adds complete IP routing functionality on top of the ATM switching hardware, in place of any existing conventional ATM switch control software, and controls the ATM switch such that the flows are appropriately multiplexed over the sub-ports of interface 18. Therefore, the present system is capable of moving between network layer IP routing when needed and high throughput datalink layer flow switching over interface 18 when possible, in order to achieve high speed, high capacity packet transmission in an efficient manner without the problem of router bottlenecks. Using a multiport interface 18 between adjacent IP switched routers, the packet throughput between their attached network interfaces may reach millions of IP packets per second, an order of magnitude faster than with traditional IP routers.
B. Flow Classification, IFMP, GSMP in the Specific Embodiment

Using the Ipsilon Flow Management Protocol (IFMP), which is described in further detail in commonly-assigned U.S. patent application no. 08/597,520, a system node (such as a basic switching unit, switch gateway unit, or host computer/server/workstation) can classify IP packets as belonging to a "flow" of similar packets based on certain common characteristics. A flow is a sequence of packets sent from a particular source to a particular (unicast or multicast) destination that are related in terms of their routing and any local handling policy they may require. The present invention efficiently permits different types of flows to be handled differently, depending on the type of flow, and enables the inverse multiplexing of different flows over different sub-ports of multiport interface 18, depending on the bandwidth available on each designated sub-port. Some types of flows may be handled by mapping them into individual ATM connections, using the ATM switching engine to perform high speed switching of the packets over multiport interface 18. Flows such as those carrying real-time traffic, those with quality of service requirements, or those likely to have a long holding time, may be configured to be switched whenever possible. Other types of flows, such as short duration flows or database queries, may be handled by connectionless IP routing. A particular flow of packets may be associated with a particular ATM label (i.e., a virtual path identifier (VPI) and virtual channel identifier (VCI)). It is assumed that virtual channels are unidirectional, so an ATM label of the incoming direction of each link is owned by the input port to which it is connected. Each direction of transmission on a link is treated separately. Of course, flows travelling in each direction are handled by the system separately but in a similar manner.
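By way of illustration, the grouping of packets into a flow based on common header fields may be sketched as follows. The particular field set chosen and the policy of switching only certain protocols are hypothetical local choices, not the actual IFMP classification rules.

```python
# Hedged sketch of local flow classification: the flow identifier groups
# packets by header fields (here source, destination, protocol, and ports;
# the exact field set is a local policy choice), and the classifier tags
# flows worth layer 2 switching versus hop-by-hop IP forwarding.

# protocols assumed long-lived enough to be worth switching (illustrative)
SWITCHABLE_PROTOCOLS = {"tcp"}

def flow_identifier(packet):
    # the set of header fields that characterize the flow
    return (packet["src"], packet["dst"], packet["proto"],
            packet.get("sport"), packet.get("dport"))

def should_switch(packet):
    # local policy: switch long-holding-time traffic, route short queries
    return packet["proto"] in SWITCHABLE_PROTOCOLS

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp",
       "sport": 1025, "dport": 80}
```

Two packets with the same identifier tuple belong to the same flow and therefore follow the same label, while the switch-or-route decision remains purely local to the classifying node.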
Flow classification is a local policy decision. When an IP packet is received by a system node, the system node transmits the IP packet via the default channel. The node also classifies the IP packet as belonging to a particular flow, and accordingly decides whether future packets belonging to the same flow should preferably be switched directly in the ATM switching engine or continue to be forwarded hop-by-hop by the router software in the node. If a decision to switch a flow of packets is made, the flow must first be labelled. To label a flow, the node selects for that flow an available label (VPI/VCI) of the input port on which the packet was received. The node which has made the decision to label the flow then stores the label, flow identifier, and a lifetime, and sends an IFMP redirect message upstream to the previous node from which the packet came. The flow identifier contains the set of header fields that characterize the flow. The lifetime specifies the length of time for which the redirection is valid. Unless the flow state is refreshed, the association between the flow and the label is deleted upon the expiration of the lifetime. Expiration of the lifetime before the flow state is refreshed results in further packets belonging to the flow being transmitted on the default forwarding channel between the adjacent nodes. A flow state is refreshed by sending upstream a redirect message having the same label and flow identifier as the original and having another lifetime. The redirect message requests the upstream node to transmit all further packets that have characteristics matching those identified in the flow identifier via the virtual channel specified by the label. The redirection decision is a local decision handled by the upstream node, whereas the flow classification decision is a local decision handled by the downstream node.
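By way of illustration, the per-flow redirection state described above (label, flow identifier, and lifetime, with refresh and expiry) may be modelled as follows. The class name, the use of seconds, and the explicit clock parameter are illustrative assumptions made for testability, not part of the IFMP message formats.

```python
# Illustrative model of the state a node keeps after labelling a flow:
# (label, flow identifier, lifetime). If the state is not refreshed before
# the lifetime expires, the association is deleted and further packets fall
# back to the default forwarding channel between the adjacent nodes.

class FlowRedirection:
    def __init__(self, label, flow_id, lifetime, now):
        self.label = label        # VPI/VCI selected on the input port
        self.flow_id = flow_id    # header fields characterizing the flow
        self.expires = now + lifetime

    def refresh(self, lifetime, now):
        # a redirect with the same label and flow id extends the association
        self.expires = now + lifetime

    def channel(self, now):
        # labelled virtual channel while valid, default channel once expired
        return self.label if now < self.expires else "default"

r = FlowRedirection(label=(0, 42), flow_id=("10.0.0.1", "10.0.0.2"),
                    lifetime=60, now=0)
```

A refresh issued before expiry simply pushes the expiration time forward, matching the described behaviour of a redirect message carrying the same label and flow identifier with a new lifetime.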
Accordingly, even if a downstream node requests redirection of a particular flow of packets, the upstream node may decide to accept or ignore the request for redirection. In addition, redirect messages are not acknowledged. Rather, the first packet arriving on the new virtual channel serves to indicate that the redirection request has been accepted. Different encapsulations are used for the transmission of IP packets that belong to particular labelled flows on an ATM data link, depending on the different flow type of the flows. In the present example of Fig. 2, twelve types of encapsulations are used to transmit IP packets belonging to the twelve types of flows to be multiplexed over multiport interface 18.
In addition to using IFMP to classify and redirect flows, a system node such as a basic switching unit may utilize the General Switch Management Protocol (GSMP, also described in detail in U.S. patent application no. 08/597,520) to establish communication over the ATM link between the switch controller and the ATM hardware switching engine of the basic switching unit, and thereby enable flow-by-flow multiplexed layer 2 switching when possible and layer 3 IP routing and packet forwarding when necessary. In particular, GSMP is a general purpose, asymmetric protocol used to control the switching engine, e.g., the ATM switch. That is, the switch controller acts as the master with the ATM switch as the slave. GSMP runs on a virtual channel established at initialization across the ATM link between the switch controller and the ATM switch. A single switch controller may use multiple instantiations of GSMP over separate virtual channels to control multiple ATM switches. Also included in GSMP is a GSMP adjacency protocol, which is used to synchronize state across the ATM link between the switch controller and the ATM switch, to discover the identity of the entity at the other end of the link, and to detect changes in the identity of that entity.
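By way of illustration, the master-slave asymmetry of GSMP may be sketched as follows. The message names and fields shown are illustrative stand-ins for the actual GSMP message formats, and the direct method call stands in for the control virtual channel across the ATM link.

```python
# Minimal sketch of GSMP's asymmetry: the switch controller (master) issues
# connection-management requests and the ATM switch (slave) only answers.
# Message shapes here are assumptions, not the GSMP wire format.

class AtmSwitchSlave:
    def __init__(self):
        self.connections = {}  # (in_port, vpi_vci) -> (out_port, vpi_vci)

    def handle(self, msg):
        if msg["op"] == "add_branch":
            self.connections[msg["in"]] = msg["out"]
            return {"ack": True}
        if msg["op"] == "delete":
            self.connections.pop(msg["in"], None)
            return {"ack": True}
        return {"ack": False}       # unknown request

class SwitchControllerMaster:
    def __init__(self, switch):
        self.switch = switch        # stands in for the control virtual channel

    def map_labels(self, in_port, x, out_port, y):
        # e.g. map label x on the upstream link to label y downstream
        reply = self.switch.handle(
            {"op": "add_branch", "in": (in_port, x), "out": (out_port, y)})
        return reply["ack"]

ctl = SwitchControllerMaster(AtmSwitchSlave())
ok = ctl.map_labels(in_port=8, x=(0, 51), out_port=9, y=(0, 77))
```

The slave never initiates a mapping of its own; it only acknowledges (or refuses) requests, which is the essential point of the asymmetric protocol described above.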
GSMP allows the switch controller to establish and release connections across the ATM switch, add and delete leaves on a point-to-multipoint connection, manage switch ports, request configuration information, and request statistics (such as the level of traffic on each port). GSMP also allows the ATM switch to inform the switch controller of events such as a link going down. In accordance with a specific embodiment of the present invention, a switch controller may use GSMP to configure multiport interface 18 and to direct the switching engine to switch flows on the multiport interface 18 such that the switching engine distributes the flows across the individual sub-ports designated by the configuration. Creation and deletion of multiport interfaces is done at the switch controller, with configuration information being stored in, for example, non-volatile memory in the switch controller. The bandwidth and performance of the multiport interface approaches that of a single interface having a bandwidth equal to the sum of that of the individual sub-ports. For example, as seen in Fig. 2, combining four OC3 (155 Mbps) interfaces into a multiport interface 18 creates an equivalent interface having a virtual bandwidth of an OC12 (622 Mbps) connection. The multiport interface 18 is useful in relieving network congestion that might otherwise result on trunk connections between basic switching units, such as when traffic from many LANs (e.g., Fast Ethernets) on one of the basic switching units might cross a trunk before reaching an important server or server farm.
Although shown in connection with distributing multiple flows as a preferred embodiment, the present invention can also be used to allocate bandwidth across multiple ports where other communication methods are used. In this situation, the ability to use multiple ports invisibly to layer 3 of the OSI reference model provides advantages by reducing the overhead and time required to perform the transmissions and reconfigure as necessary.
Switching engine 12b is assumed to contain multiple ports, where each physical port is a combination of an input port and an output port. ATM cells arrive at the ATM switch from an external communication link on incoming virtual channels at an input port, and depart from the ATM switch to an external communication link on outgoing virtual channels from an output port. As mentioned earlier, virtual channels on a port or link are referenced by their VPI/VCI. A virtual channel connection across an ATM switch is formed by connecting an incoming virtual channel (or root) to one or more outgoing virtual channels (or branches). Virtual channel connections are referenced by the input port on which they arrive and the VPI/VCI of their incoming virtual channel. In the switch, each port has a hardware look-up table indexed by the VPI/VCI of the incoming ATM cell, and entries in the tables are controlled by a local control processor in the switch.
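By way of illustration, the per-input-port look-up table described above may be modelled as follows. A dictionary stands in for the hardware table managed by the local control processor, and the labels and port numbers are illustrative.

```python
# Sketch of the per-input-port hardware look-up table: each entry is indexed
# by the incoming cell's VPI/VCI and yields one or more outgoing branches
# (point-to-multipoint), each an (output port, VPI/VCI) pair.

class PortLookupTable:
    def __init__(self):
        self.entries = {}  # (vpi, vci) -> list of (out_port, (vpi, vci))

    def connect(self, in_label, branches):
        # connect an incoming virtual channel (root) to outgoing branches
        self.entries[in_label] = list(branches)

    def route_cell(self, vpi, vci):
        # the switch replicates the cell to every branch of the connection
        return self.entries.get((vpi, vci), [])

table = PortLookupTable()
table.connect((0, 32), [(9, (0, 40)), (12, (0, 41))])  # one root, two branches
out = table.route_cell(0, 32)
```

A cell arriving on an unknown VPI/VCI yields no branches, reflecting that a virtual channel connection must first be installed by the controller before cells can be switched on it.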
II. System Hardware
A. Controller Hardware
Fig. 4a is a system block diagram of a typical computer system 151 that may be used as switch controller 12a in basic switching unit 12 (as shown in Fig. 1) to execute the system software of the present invention. Fig. 4a also illustrates an example of the computer system that may be used as switch gateway controller of switch gateway unit 121 (as shown in Fig. 3), as well as serving as an example of a typical computer which may be used as a host computer/server/workstation loaded with a subset of the IFMP software. Of course, it is recognized that other elements such as a monitor, screen, and keyboard are added for the host.
As shown in Fig. 4a, computer system 151 includes subsystems such as a central processor 169, system memory 171, I/O controller 173, fixed disk 179, network interface 181, and read-only memory (ROM) 183. Of course, the computer system 151 optionally includes monitor 153, keyboard 159, display adapter 175, and removable disk 177, for the host. Arrows such as 185 represent the system bus architecture of computer system 151. However, these arrows are illustrative of any interconnection scheme serving to link the subsystems. For example, a local bus could be utilized to connect central processor 169 to system memory 171 and ROM 183. Configuration information for creation of multiport interfaces may be stored, for example, on ROM 183. Other computer systems suitable for use with the present invention may include additional or fewer subsystems. For example, another computer system could include more than one processor 169 (i.e., a multi-processor system) or a cache memory.
In an embodiment of the invention, the computer used as the switch controller can be a standard Intel-based central processing unit (CPU) machine equipped with a standard peripheral component interconnect (PCI) bus, as well as with an ATM network adapter or network interface card (NIC). The computer is connected to the ATM switch via a 155 Megabits per second (Mbps) ATM link using the ATM NIC. In this embodiment, the system software is installed on fixed disk 179, which is the hard drive of the computer. As recognized by those of ordinary skill in the art, the system software may be stored on a CD-ROM, floppy disk, tape, or other tangible media that stores computer-readable code.
Computer system 151 shown in Fig. 4a is but an example of a computer system suitable for use (as the switch controller of a basic switching unit, as the switch gateway controller of a switch gateway unit, or as a host computer/server/workstation) with the present invention. Other configurations of subsystems suitable for use with the present invention will be readily apparent to one of ordinary skill in the art. In addition, switch gateway unit may be equipped with multiple other NICs to enable connection to various types of LANs. Other NICs or alternative adaptors for different types of LAN backbones may be utilized in switch gateway unit. For example, SMC 10M/100M Ethernet NIC or FDDI NIC may be used.
Without in any way limiting the scope of the invention, Table 1 provides a list of commercially available components which are useful in operation of the controller, according to the above embodiments. It will be apparent to those of skill in the art that the components listed in Table 1 are merely representative of those which may be used in association with the inventions herein and are provided for the purpose of facilitating assembly of a device in accordance with one particular embodiment of the invention. A wide variety of components readily known to those of skill in the art could readily be substituted or functionality could be combined or separated.
Table 1: Controller Components

Microprocessor: Intel Pentium 133 MHz processor
System memory: 16 Mbyte RAM / 256K cache memory
Motherboard: Intel Endeavor motherboard
ATM NIC: Zeitnet PCI ATM NIC (155 Mbps)
Fixed or hard disk: 500 Mbyte IDE disk
Drives: standard floppy, CD-ROM drive
Power supply: standard power supply
Chassis: standard chassis
B. Switching Hardware

As discussed above, the ATM switch hardware provides the switching engine 12b of basic switching unit 12, in accordance with a specific embodiment. The ATM switching engine utilizes vendor-independent ATM switching hardware. However, the ATM switching engine according to the present invention does not rely on any of its usual connection-oriented ATM routing and signaling software (SSCOP, Q.2931, UNI 3.0/3.1, and P-NNI). Rather, any ATM protocols and software are completely discarded, and the basic switching unit relies on the system software to create and delete multiport interfaces and to control the ATM switching engine for inverse multiplexing of flows. The system software is described in detail later.
Separately available ATM components may be assembled into a typical ATM switch architecture. For example, Fig. 4b is a general block diagram of an architecture of an ATM switch 12b (the example shows a 16-port switch) that may be used as the switching hardware engine of a basic switching unit according to an embodiment of the present invention. However, commercially available ATM switches also may operate as the switching engine of the basic switching unit according to other embodiments of the present invention. The main functional components of switching hardware 12b include a switch core, a microcontroller complex, and a transceiver subassembly. Generally, the switch core performs the layer 2 switching, the microcontroller complex provides the system control for the ATM switch, and the transceiver subassembly provides for the interface and basic transmission and reception of signals from the physical layer. In the present example, the switch core is based on the MMC Networks ATMS 2000 ATM Switch Chip Set which includes White chip 200, Grey chip 202, MBUF chips 204, Port Interface Device (PIF) chips 206, and common data memory 208. The switch core also may optionally include VC Activity Detector 210, and Early Packet Discard function 212. Packet counters also are included but not shown. White chip 200 provides configuration control and status. In addition to communicating with White chip 200 for status and control, Grey chip 202 is responsible for direct addressing and data transfer with the switch tables. MBUF chips 204 are responsible for movement of cell traffic between PIF chips 206 and the common data memory 208. Common data memory 208 is used as cell buffering within the switch. PIF chips 206 manage transfer of data between the MBUF chips to and from the switch port hardware. VC Activity Detector 210 which includes a memory element provides information on every active virtual channel. 
Early Packet Discard 212 provides the ability to discard certain ATM cells as needed. Packet counters provide the switch with the ability to count all packets passing all input and output ports. Buses 214, 215, 216, 217, and 218 provide the interface between the various components of the switch. The microcontroller complex includes a central processing unit (CPU) 230, dynamic random access memory (DRAM) 232, read only memory (ROM) 234, flash memory 236, DRAM controller 238, Dual Universal Asynchronous Receiver-Transmitter (DUART) ports 240 and 242, and external timer 244. CPU 230 acts as the microcontroller. ROM 234 acts as the local boot ROM and includes the entire switch code image, basic low-level operating system functionality, and diagnostics. DRAM 232 provides conventional random access memory functions, and DRAM controller 238 (which may be implemented by a field programmable gate array (FPGA) device or the like) provides refresh control for DRAM 232. Flash memory 236 is accessible by the microcontroller for hardware revision control, serial number identification, and various control codes for manufacturability and tracking. DUART ports 240 and 242 are provided as interfaces to communications resources for diagnostic, monitoring, and other purposes. External timer 244 interrupts CPU 230 as required. The transceiver subassembly includes physical interface devices 246, located between PIF chips 206 and physical transceivers (not shown). Interface devices 246 perform processing of the data stream, and implement the ATM physical layer. Of course, the components of the switch may be on a printed circuit board that may reside on a rack for mounting or for setting on a desktop, depending on the chassis that may be used.
Without in any way limiting the scope of the invention, Table 2 provides a list of commercially available components which are useful in operation of the switching engine, according to the above specific embodiment. It will be apparent to those of skill in the art that the components listed in Table 2 are merely representative of those which may be used in association with the inventions herein and are provided for the purpose of facilitating assembly of a device in accordance with a particular embodiment of the invention. A wide variety of components or available switches readily known to those of skill in the art could readily be substituted or functionality could be combined or separated.
Table 2: Switch Components
SWITCH CORE
Core chip set: MMC Networks ATMS 2000 ATM Switch Chip Set (White chip, Grey chip, MBUF chips, PIF chips)
Common data memory: standard memory modules
Packet counters: standard counters

MICROCONTROLLER COMPLEX
CPU: Intel 960CA/CF/HX
DRAM: standard DRAM modules
ROM: standard ROM
Flash memory: standard flash memory
DRAM controller: standard FPGA, ASIC, etc.
DUART: 16552 DUART
External timer: standard timer

TRANSCEIVER SUBASSEMBLY
Physical interface: PMC-Sierra PM5346
III. System Software Functionality
As generally described above in accordance with the specific embodiment, IFMP is a protocol for instructing an adjacent node to attach a layer 2 "label" to a specified "flow" of packets. A flow is a sequence of packets sent from a particular source to a particular destination(s) that are related in terms of their routing and the local handling policy they require. The label (VPI/VCI) specifies a virtual channel and allows cached routing information for that flow to be efficiently accessed. The label also allows further packets belonging to the specified flow to be switched at layer 2 rather than routed at layer 3. That is, if both upstream and downstream links redirect a flow at a particular node in the network, that particular node may switch the flow at the datalink layer, rather than route and forward the flow at the network layer.
Fig. 5a is a simplified diagram generally illustrating the initialization procedure in each system node according to an embodiment of the present invention.
Upon system startup at step 260, each system node establishes default virtual channels on all ports in step 262. Then at step 264 each system node waits for packets to arrive on any port.
Fig. 5b is a simplified diagram that generally illustrates the operation of a system node dynamically shifting between layer 3 routing and layer 2 switching according to the present invention. After initialization, a packet arrives on a port of the system node at step 266. If the packet is received on a default virtual channel (step 268), the system node performs a flow classification on the packet at step 270. Flow classification involves determining whether the packet belongs to a type of flow. At step 272, the system node determines whether the flow to which the packet belongs should preferably be switched. If the system node determines that the flow should be switched, the system node labels the flow in step 274 and then proceeds to forward the packet in step 276. After forwarding the packet, the system node waits for a packet to arrive in step 282. Once a packet arrives, the system node returns to step 266. If the system node determines at step 268 that the packet did not arrive on the default virtual channel, the system node does not perform flow classification at step 270 on the packet. When a packet arrives on an alternate virtual channel, the packet belongs to a flow that has already been labelled. Accordingly, if the flow is also labelled downstream (step 278), the system node switches the flow in step 280. Switching the flow involves making a connection within the switch between the label of the upstream link and the label of the downstream link. After switching the flow in step 280, the system node at step 276 forwards the packet downstream. If the flow is not labelled downstream (step 278), the system node does not switch the flow but rather forwards the packet downstream in step 276. Of course, it is recognized that only a system node that is a basic switching unit performs step 280. Other system nodes (e.g., a switch gateway unit or host) operate as shown in Fig. 5b but do not perform step 280, since the result of step 278 is no for a switch gateway unit or a host (as these types of system nodes have no downstream link).

Fig. 6a is a diagram generally illustrating the steps involved in labelling a flow in the upstream link of a system node, such as shown by label flow step 274 of Fig. 5b. For a system node that is a switch gateway unit or a host, the system node labels a flow as shown in steps 290, 292, 300 and 276 of Fig. 6a. When the label flow step begins (step 290), the system node selects a free label x on the upstream link in step 292. The system node then sends an IFMP redirect message on the upstream link in step 300 (as indicated by dotted line 293). The system node then forwards the packet in step 276. For a system node that is a basic switching unit, labelling a flow is also illustrated by steps 294, 296, and 298. When the label flow step begins (step 290), the basic switching unit selects a free label x on the upstream link in step 292. The switch controller of the basic switching unit then selects a temporary label x' on the control port of the switch controller in step 294. At step 296, the switch controller then sends to the hardware switching engine a GSMP message to map label x on the upstream link to label x' on the control port. The switch controller then waits in step 298 until a GSMP acknowledge message is received from the hardware switching engine indicating that the mapping is successful. Upon receiving acknowledgement, the basic switching unit sends an IFMP redirect message on the upstream link in step 300. After step 300, the system node returns to step 276 as shown in Fig. 5b.
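By way of illustration, the packet-handling decision procedure of Fig. 5b described above may be sketched as follows. The predicates are supplied as parameters because classification, switching, and labelling are local policy decisions; the step numbers from the figure appear as comments, and the returned action list is purely an illustrative trace.

```python
# Hedged walk-through of Fig. 5b as code: a packet on the default channel is
# flow-classified and possibly labelled; a packet on an alternate channel
# belongs to an already-labelled flow and is switched only if the flow is
# labelled downstream as well. Every path ends by forwarding the packet.

def handle_packet(on_default_vc, should_switch, labelled_downstream, actions):
    """Return the ordered list of steps the node performs (illustrative)."""
    if on_default_vc:                     # step 268
        actions.append("classify")        # step 270
        if should_switch:                 # step 272
            actions.append("label")       # step 274
    else:
        if labelled_downstream:           # step 278
            actions.append("switch")      # step 280 (basic switching unit only)
    actions.append("forward")             # step 276
    return actions

steps = handle_packet(on_default_vc=False, should_switch=True,
                      labelled_downstream=True, actions=[])
```

A switch gateway unit or host, having no downstream link, never takes the "switch" branch, matching the observation above that only a basic switching unit performs step 280.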
Fig. 6b is a diagram generally illustrating the steps involved in switching a flow in a basic switching unit, such as shown by switch flow step 280 of Fig. 5b. As mentioned above, only system nodes that are basic switching units may perform the switch flow step. When the switch flow procedure starts in step 310, the switch controller in the basic switching unit sends at step 312 a GSMP message to map label x on the upstream link to the label y on the downstream link. Label y is the label which the node downstream to the basic switching unit has assigned to the flow. Of course, this downstream node has labelled the flow in the manner specified by Figs. 5b and 6a, with the free label y being selected in step 292. After step 312, the switch controller in the basic switching unit waits in step 314 for a GSMP acknowledge message from the hardware switching engine in the basic switching unit to indicate that the mapping is successful. The flow is thereby switched in layer 2 entirely within the hardware switching engine in the basic switching unit. Then the basic switching unit proceeds to forward the packet in step 276.
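The switch-flow step reduces to a single label mapping. The following sketch assumes a hypothetical map_label method standing in for the GSMP message and acknowledge exchange; it is not the actual implementation.

```python
# Sketch of the Fig. 6b switch-flow steps (310-314).

def switch_flow(switching_engine, x_upstream, y_downstream):
    """Map upstream label x to downstream label y so the flow is
    thereafter switched entirely in layer 2 by the hardware engine."""
    ack = switching_engine.map_label(x_upstream, y_downstream)  # step 312
    if not ack:                     # step 314: GSMP acknowledge expected
        raise RuntimeError("GSMP mapping failed")
    return (x_upstream, y_downstream)
```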
Fig. 6c is a diagram generally illustrating the steps involved in forwarding a packet in a system node, such as shown by forward packet step 276 of Fig. 5b. A system node at step 318 starts the forward packet procedure. If the flow to which the packet belongs is not labelled on the downstream link (step 320), then the system node sends the packet on the default virtual channel on the downstream link in step 322 and then goes to a wait state 282 to wait for arrival of packets. However, if the flow to which the packet belongs is labelled on the downstream link, indicating that the system node previously received an IFMP redirect message to label that flow for a lifetime, then the system node checks at step 326 if the lifetime for the redirection of that flow has expired. If the lifetime has not expired, then the system node sends the packet on the labelled virtual channel in the IFMP redirect message at step 328 and then goes to wait state 282. If the lifetime has expired, then the system node automatically deletes the flow redirection at step 330. The system node then proceeds to send the packet on the default channel (step 322) and returns to the wait state of step 282 as shown in Fig. 5b.
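The forward-packet procedure with its lifetime check can be sketched as follows. The redirect record layout and the send_on_vc callback are illustrative assumptions for the purpose of this sketch.

```python
# Sketch of the Fig. 6c forward-packet steps (318-330).

def forward_packet(packet, redirect, send_on_vc, now, default_vc=0):
    """Send on the labelled VC while the redirect lifetime holds (step 328);
    on expiry delete the redirection (step 330) and use the default VC."""
    if redirect is not None and now < redirect["expires"]:  # steps 320, 326
        send_on_vc(redirect["label"], packet)               # step 328
        return redirect                                     # redirection kept
    # Flow never labelled (step 320) or lifetime expired (step 330).
    send_on_vc(default_vc, packet)                          # step 322
    return None                                             # redirection deleted
```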
When a packet is forwarded or switched from an upstream node and received at the downstream node, the downstream node proceeds to forward the packet traffic on the chosen label.
A. Configuration of Multiport Interfaces
In accordance with the specific embodiment of the present invention, multiport interfaces are created and deleted by the switch controller of a basic switching unit, where configuration information may be stored in non-volatile memory in the basic switching unit. Fig. 7a is a simplified diagram generally illustrating the multiport interface establishing procedure in a basic switching unit according to an embodiment of the present invention. It is noted that both the upstream and downstream basic switching units are configured for the same multiport interfaces. Configuring a multiport interface may be achieved in the specific embodiment by a command to define the multiport interface.
Specifically, the switch controller of the upstream basic switching unit (Fig. 1) defines a multiport interface 18 by a command (e.g., define mpif 8 9 10 12) which designates, in this example, ports 8, 9, 10 and 12 to be sub-ports of multiport interface 18. Such a command may number the resulting multiport interface as port 8, distinctly designated as a multiport interface (e.g., as ips0_8). Managing the multiport interface may be achieved with other commands to show the multiport interface and to delete the multiport interface. After receiving the command to define the multiport interface, the switching engine may return an acknowledgment message indicating the successful definition of the multiport interface (e.g., multiport interface 8 successfully defined). The use of interface numbers that correspond to real ports on the switching engine allows additional ports of the engine to be added as sub-ports of the multiport interface as bandwidth requirements increase, without the need to change the configuration of the switch controller. Configuration of multiport interfaces requires that the switching engine and the switch controller re-initialize (step 356) their communication in order to exchange the new list of available interfaces. This re-initialization typically may be performed by rebooting. After re-initialization, at step 264 each basic switching unit waits for packets to arrive on any port.
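The define command described above can be sketched as a small parser. The record layout and the ips0_<n> naming are assumptions drawn from the example in the text, not the actual configuration software.

```python
# Sketch: parse a 'define mpif' command into a multiport-interface record,
# numbering the interface after the first listed sub-port (e.g., port 8).

def define_mpif(command):
    words = command.split()
    if words[:2] != ["define", "mpif"]:
        raise ValueError("not a define mpif command")
    sub_ports = [int(w) for w in words[2:]]
    return {"name": "ips0_%d" % sub_ports[0],   # e.g., ips0_8
            "number": sub_ports[0],
            "sub_ports": sub_ports}
```

Because the interface number is a real port of the engine, adding a sub-port later only extends the sub_ports list without renumbering the interface.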
B. Flow Distribution on Multiport Interfaces
The operation of basic switching units having a multiport interface therebetween generally follows the simplified diagram of Fig. 5b, according to an embodiment of the present invention, with some differences as discussed below. Fig. 7b is a diagram generally illustrating some of the steps involved in determining whether a flow should be switched in basic switching units according to an embodiment of the present invention. Starting from step 270 of Fig. 5b, Fig. 7b continues at step 272. In particular, after a determination that the flow should be switched (step 272 in Fig. 5b) is made, a decision is made (step 360 in Fig. 7b) whether that flow is to be switched onto a defined multiport interface. If the flow is to be switched onto a defined multiport interface, then that flow is labelled (step 366) for the defined multiport interface and then forwarded (step 276). After forwarding of the packet in step 276, the process continues from step 282 of Fig. 5b. If the determination in step 272 is that the flow should not be switched at all, then the packet is merely forwarded per step 276. If the determination in step 360 is that the flow should be switched but not onto a defined multiport interface, then the switch controller labels that flow for the desired port (step 274) in the usual manner (as described for Fig. 6a), then forwards the packet in step 276.
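The three-way branching of Fig. 7b can be sketched as follows; the predicate names are illustrative assumptions standing in for steps 272 and 360.

```python
# Sketch of the Fig. 7b decision: forward only, label for a multiport
# interface (step 366), or label for an ordinary port (step 274).

def classify_switch_target(flow, should_switch, multiport_for):
    if not should_switch(flow):          # step 272: route only, forward (276)
        return ("forward", None)
    mpif = multiport_for(flow)           # step 360
    if mpif is not None:
        return ("label_multiport", mpif) # step 366
    return ("label_port", None)          # step 274, as in Fig. 6a
```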
Fig. 7c is a diagram generally illustrating the steps involved in labelling a flow in the upstream link for a designated multiport interface in a basic switching unit, such as shown by label flow step 366 of Fig. 7b according to an embodiment of the present invention. When the label flow step begins (step 290), the level of traffic on the sub-ports of the designated multiport interface is determined in step 376. In a specific embodiment, this determination may occur through the use of GSMP configuration messages sent by the switch controller that request statistics (e.g., the level of traffic on the sub-ports) from the switching engine, which sends an appropriate response message. Then, a label on the upstream link for the sub-port having the lowest level of traffic (fewest outgoing cells in a pre-defined measured period) is selected in step 380. If all available sub-ports are running at full line rate, then the new flow is added to the particular sub-port with the shortest queue of waiting traffic at the given priority level. If all sub-ports are running at less than the full line rate but are running at equal outgoing cell rates, then the lowest numbered port will carry the flow, in accordance with a specific embodiment. If there is failure of the sub-port (determined at step 384), steps 376 and 380 are repeated. If there is no failure of the sub-port, then the switch controller selects a free label x on the upstream link on the sub-port in step 292. The switch controller of the basic switching unit then selects a temporary label x' on the sub-port in step 388. At step 390, the switch controller then sends to the switching engine a GSMP message to map label x on the upstream link to label x' on the sub-port. The switch controller then waits in step 298 until a GSMP acknowledge message is received from the switching engine indicating that the mapping is successful. 
Upon receiving this acknowledgement, the basic switching unit sends an IFMP redirect message on the upstream link in step 300. After step 300, the basic switching unit returns to step 276 of Fig. 5b.
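The sub-port selection rules of Fig. 7c (lowest traffic wins, ties fall to the lowest-numbered port, full-rate ports compete on queue length, failed ports are skipped) can be sketched as below. The statistics layout is an assumption; in the described embodiment the values would come from GSMP statistics messages to the switching engine.

```python
# Sketch of steps 376, 380, and 384 of Fig. 7c: pick the sub-port that
# will carry a new flow on the designated multiport interface.

def select_sub_port(stats, full_line_rate):
    """stats maps sub-port number -> {'cells': outgoing cells in the measured
    period, 'queue': waiting traffic at the given priority, 'failed': bool}."""
    live = {p: s for p, s in stats.items() if not s["failed"]}  # step 384
    if all(s["cells"] >= full_line_rate for s in live.values()):
        # Every sub-port at full line rate: shortest queue wins.
        return min(live, key=lambda p: (live[p]["queue"], p))
    # Otherwise fewest outgoing cells wins; min() breaks ties on the
    # port number, so equal rates fall to the lowest-numbered port.
    return min(live, key=lambda p: (live[p]["cells"], p))
```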
Fig. 7d is a simplified diagram that generally illustrates some of the steps of the operation of the basic switching unit according to the specific embodiment of the present invention. As seen in Fig. 7d, if the flow is not labelled downstream (step 278), the basic switching unit does not switch the flow but rather forwards the packet downstream in step 276 and proceeds with step 282 of Fig. 5b. If the flow is labelled downstream and switch controller 12a of the upstream basic switching unit 12 in step 396 determines that the flow label is not for a multiport interface, then the upstream basic switching unit 12 switches the flow for the designated port in step 280. As mentioned earlier, switching the flow involves making a connection within the switch between the label of the upstream link and the label of the downstream link. If the flow is labelled downstream and switch controller 12a of the upstream basic switching unit 12 in step 396 determines that the flow label is for a multiport interface, then the upstream basic switching unit 12 switches the flow for the designated multiport interface in step 280. After accordingly switching the flow using the appropriate label determined downstream (for a particular port if the flow label is not for a multiport interface, or for a particular sub-port if the flow label is for a multiport interface) in step 280, the basic switching unit 12 at step 276 forwards the packet downstream.
When a packet is forwarded or switched from an upstream node and received at the downstream node, the downstream node proceeds to forward the packet traffic on the chosen label from all sub-ports of the multiport interface.
As seen in the above description for a specific embodiment, inverse multiplexing across sub-ports of a multiport interface may be accomplished on a flow-by-flow basis, with the net result being that traffic is distributed fairly evenly across the sub-ports. In accordance with another specific embodiment, inverse multiplexing across sub-ports of a multiport interface may be accomplished on a flow-by-flow basis in another manner, with the net result being that traffic is distributed across the sub-ports such that the flow is not balanced across all the specified sub-ports; instead, the current sub-port is fully loaded with flows before a flow is added to the next sub-port. In particular, the present specific embodiment is achieved by using weighted multiport interfaces, which are configured in a similar manner as multiport interfaces. It is noted that both the upstream and downstream basic switching units are each configured for the same weighted multiport interfaces. Configuring a weighted multiport interface may be achieved in the specific embodiment by a command to define the weighted multiport interface. Specifically, the switch controller of the upstream basic switching unit (Fig. 1) defines a weighted multiport interface by a command (e.g., define wmpif 8 9 10 12) which designates, in this example, ports 8, 9, 10 and 12 to be sub-ports of the weighted multiport interface. Similarly, managing the weighted multiport interface may be achieved with other commands to show the weighted multiport interface and to delete the weighted multiport interface. In this example, when sub-port 8 of the weighted multiport interface is running at full line rate, then sub-port 9 will be opened for queuing of flows. When sub-port 9 is running at the full line rate, then sub-port 10 will be opened for queuing of flows, and so forth. 
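The weighted (fill-first) policy just described can be sketched as below; the cell_rate accessor is an illustrative assumption standing in for the engine's per-port statistics.

```python
# Sketch of the weighted multiport interface policy: fill the current
# sub-port to full line rate before opening the next one for flows.

def select_weighted_sub_port(sub_ports, cell_rate, full_line_rate):
    for port in sub_ports:               # e.g., [8, 9, 10, 12], in order
        if cell_rate(port) < full_line_rate:
            return port                  # first sub-port with headroom
    return sub_ports[-1]                 # every sub-port full: keep the last
```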
According to yet another specific embodiment, the use of virtual path interfaces (VPIs) may be combined with multiport interfaces to provide extremely high bandwidth tunneled across a wide area network (WAN). More specifically, this may be accomplished by configuring a virtual path interface that refers to a multiport interface as the carrier of the virtual path provided by the WAN. It is noted that both the upstream and downstream basic switching units are configured for the same virtual path interface referring to the same multiport interface. In accordance with the specific embodiment of the present invention, a multiport interface is first created and the basic switching unit configured, in a similar manner as described for Fig. 7a. As an example, a multiport interface is created by the command (e.g., define mpif 8 9 10 11) which designates, in this example, ports 8, 9, 10 and 11 to be sub-ports of multiport interface numbered as port 8. After receiving the command to define the multiport interface, the switching engine may return an acknowledgment message indicating the successful definition of the multiport interface (e.g., multiport interface 8 successfully defined). Then, a virtual path interface is configured in this specific embodiment by a command to define the virtual path interface. Specifically, the switch controller of the upstream basic switching unit (Fig. 1) defines a virtual path interface 17 by a command (e.g., define vpif 8 5 17) which designates, in this example, a virtual path interface 5 on the ports 8, 9, 10 and 11 (sub-ports of the multiport interface numbered as port 8). The above commands combine the virtual path interface 5 on ports 8-11 into a virtual path interface numbered 17. In the present specific embodiment, the same VPI number should be used on all sub-ports of the created multiport interface to carry traffic across the WAN. 
Managing the virtual path interfaces may be achieved with other commands to show the virtual path interface and to delete the virtual path interface. After receiving the command to define the virtual path interface combined with the multiport interface, the switching engine may return an acknowledgment message indicating the successful definition of the virtual path interface (e.g., virtual path interface 17 on port 8 vpi 5 successfully defined). Configuration of a virtual path interface combined with a multiport interface requires that the switching engine and the switch controller re-initialize (e.g., by rebooting) their communication in order to exchange the new list of available interfaces.
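The combination of a VPI with a multiport interface can be sketched as a small configuration builder. The record layout is an illustrative assumption; the key point, taken from the text, is that the same VPI number is applied on every sub-port to carry the tunneled traffic across the WAN.

```python
# Sketch: build a virtual path interface that rides on all sub-ports of a
# multiport interface, carrying the same VPI on each sub-port.

def define_vpif(mpif, vpi, vpif_number):
    return {"number": vpif_number,       # e.g., virtual path interface 17
            "vpi": vpi,                  # e.g., VPI 5 on every sub-port
            "carriers": [(port, vpi) for port in mpif["sub_ports"]]}
```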
In accordance with a specific embodiment of the present invention, the source code of the system software (© Copyright, Unpublished Work, Ipsilon Networks, Inc., All Rights Reserved) for use on the switch controller of a basic switching unit is included as Appendix I. In particular, Appendix I includes the system software for configuration and operation of multiport interfaces, flow characterization and direction on sub-ports, interfacing with IFMP and GSMP protocols, routing and forwarding, device drivers, operating system interfaces, as well as drivers and modules.
IV. Conclusion
The inventions claimed herein provide an improved method and apparatus for transmitting packets over a network by multiplexing IP switched flows over a multiport interface between basic switching units. It is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments will be apparent to those of skill in the art upon reviewing the above description. By way of example, the inventions herein have been illustrated primarily with regard to transmission of IP packets capable of carrying voice, video, image, facsimile, and data signals, but they are not so limited. By way of further example, the invention has been illustrated in conjunction with specific components and operating speeds, but the invention is not so limited. The scope of the inventions should, therefore, be determined not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled, by one of ordinary skill in the art.

Claims

WHAT IS CLAIMED IS:
1. A method for transmitting packets over a multiport interface between an upstream node and a downstream node in a network, said downstream node being downstream from said upstream node, said method comprising the steps of:
establishing a multiport interface comprising a plurality of sub-ports between said upstream node and said downstream node;
receiving a packet at said downstream node;
performing a flow classification at said downstream node on said packet to determine whether said packet belongs to a specified flow that should be redirected in the upstream node to said multiport interface;
selecting a free label for one of said plurality of sub-ports at said downstream node;
informing said upstream node that future packets belonging to said specified flow should be sent with said selected free label attached.

2. The method of claim 1 wherein said upstream and downstream nodes use ATM.

3. The method of claim 2 wherein said free label comprises a VPI/VCI.

4. The method of claim 1 wherein said network comprises a local area computer network.

5. The method of claim 1 wherein said network comprises a wide area network (WAN).

6. The method of claim 1 further comprising the step of:
determining said one of said plurality of sub-ports for which said free label is selected.

7. The method of claim 6 wherein a lowest numbered sub-port of said plurality of sub-ports is selected as said one of said plurality of sub-ports when all of said plurality of sub-ports are running at equal outgoing cell rates.
8. The method of claim 6 wherein said one of said plurality of sub-ports has the lowest outgoing cell rate of all of said plurality of sub-ports.
9. The method of claim 6 wherein said one of said plurality of sub-ports has the shortest queue of waiting traffic at a given priority level among all of said plurality of sub-ports.
10. The method of claim 6 wherein said determining step includes ensuring that said one of said plurality of sub-ports is not a sub-port experiencing failure.
11. The method of claim 6 wherein said informing step is performed by IFMP software that enables communication between said upstream and downstream node, and said determining step uses GSMP software.
12. The method of claim 6 wherein said one of said plurality of sub-ports is not yet fully loaded with flows and each of the remaining of said plurality of sub-ports either is fully loaded with flows or is loaded with no flows.
13. The method of claim 6 further comprising the step of: configuring a virtual path interface that refers to said multiport interface, and wherein each of said plurality of sub-ports uses said virtual path interface.
14. A computer program product that enables dynamic shifting between routing and switching in a network having an upstream node and a downstream node downstream from said upstream node, said computer program product comprising: computer-readable code that establishes a multiport interface comprising a plurality of sub-ports between said upstream node and said downstream node; computer-readable code that performs a flow classification on a packet at said downstream node to determine whether said packet belongs to a specified flow that should be redirected in said upstream node to said multiport interface; computer-readable code that selects a free label for one of said plurality of sub-ports at said downstream node; computer-readable code that informs said upstream node that future packets belonging to said specified flow should be sent with said selected free label attached; and a tangible medium that stores the computer-readable codes.
15. The computer program product of claim 14, wherein said tangible media comprises a hard disk on a computer.
16. The computer program product of claim 14, wherein said tangible media is selected from a group consisting of CD-ROM, tape, floppy disk, and the like.
17. The computer program product of claim 14 wherein said computer-readable codes are installed on a computer attached to a switching hardware engine.
18. The computer program product of claim 17 wherein said computer attached to a switching hardware engine is an IP switched router.
19. The computer program product of claim 18 wherein said switching hardware engine utilizes asynchronous transfer mode (ATM) switching technology.
20. The computer program product of claim 19 wherein said flow classification uses VPI/VCI as labels.
21. The computer program product of claim 19 wherein said switching hardware engine utilizes a switching technology selected from a group consisting of FDDI, Ethernet, Fast Ethernet, Gigabit Ethernet, frame relay, and fast packet switching.
PCT/US1998/023535 1997-11-05 1998-11-04 Multiport interfaces for a network using inverse multiplexed ip switched flows WO1999023853A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU13077/99A AU1307799A (en) 1997-11-05 1998-11-04 Multiport interfaces for a network using inverse multiplexed ip switched flows

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US96505597A 1997-11-05 1997-11-05
US08/965,055 1997-11-05

Publications (1)

Publication Number Publication Date
WO1999023853A1 true WO1999023853A1 (en) 1999-05-14

Family

ID=25509372

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/023535 WO1999023853A1 (en) 1997-11-05 1998-11-04 Multiport interfaces for a network using inverse multiplexed ip switched flows

Country Status (2)

Country Link
AU (1) AU1307799A (en)
WO (1) WO1999023853A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001015383A1 (en) * 1999-08-25 2001-03-01 Nils Marchant A switch and a distributed network protection switching system incorporating said switch
SG81299A1 (en) * 1998-09-02 2001-06-19 Ibm Virtual client to gateway connection over multiple physical connections
EP1330084A1 (en) * 2002-01-22 2003-07-23 Nippon Telegraph and Telephone Corporation Capacity variable link apparatus and capacity variable link setting method
EP1718008A3 (en) * 2005-04-28 2006-12-20 Fujitsu Ten Limited Gateway apparatus and routing method
WO2007102068A2 (en) 2006-03-06 2007-09-13 Nokia Corporation Aggregation of vci routing tables

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6782091B1 (en) 2000-10-13 2004-08-24 Dunning Iii Emerson C Virtual call distribution system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995027384A1 (en) * 1994-03-30 1995-10-12 Gpt Limited B-isdn access
US5568479A (en) * 1993-06-15 1996-10-22 Fujitsu Limited System of controlling miscellaneous means associated with exchange
WO1997028505A1 (en) * 1996-01-31 1997-08-07 Ipsilon Networks, Inc. Improved method and apparatus for dynamically shifting between routing and switching packets in a transmission network


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG81299A1 (en) * 1998-09-02 2001-06-19 Ibm Virtual client to gateway connection over multiple physical connections
WO2001015383A1 (en) * 1999-08-25 2001-03-01 Nils Marchant A switch and a distributed network protection switching system incorporating said switch
EP1330084A1 (en) * 2002-01-22 2003-07-23 Nippon Telegraph and Telephone Corporation Capacity variable link apparatus and capacity variable link setting method
US7912087B2 (en) 2002-01-22 2011-03-22 Nippon Telegraph And Telephone Corporation Capacity variable link apparatus and capacity variable link setting method
EP1718008A3 (en) * 2005-04-28 2006-12-20 Fujitsu Ten Limited Gateway apparatus and routing method
US7787479B2 (en) 2005-04-28 2010-08-31 Fujitsu Ten Limited Gateway apparatus and routing method
WO2007102068A2 (en) 2006-03-06 2007-09-13 Nokia Corporation Aggregation of vci routing tables
EP1992126A2 (en) * 2006-03-06 2008-11-19 Nokia Corporation Aggregation of vci routing tables
EP1992126A4 (en) * 2006-03-06 2011-01-26 Nokia Corp Aggregation of vci routing tables
US8743865B2 (en) 2006-03-06 2014-06-03 Nokia Corporation Aggregation of VCI routing tables

Also Published As

Publication number Publication date
AU1307799A (en) 1999-05-24

Similar Documents

Publication Publication Date Title
US5892924A (en) Method and apparatus for dynamically shifting between routing and switching packets in a transmission network
US5444702A (en) Virtual network using asynchronous transfer mode
US6781994B1 (en) Distributing ATM cells to output ports based upon destination information using ATM switch core and IP forwarding
US5920705A (en) Method and apparatus for dynamically shifting between routing and switching packets in a transmission network
Newman ATM local area networks
US7151744B2 (en) Multi-service queuing method and apparatus that provides exhaustive arbitration, load balancing, and support for rapid port failover
CA2231758C (en) Improved system for routing packet switched traffic
US6826196B1 (en) Method and apparatus to allow connection establishment over diverse link types
US6345051B1 (en) Method and apparatus for multiplexing of multiple users on the same virtual circuit
US6385204B1 (en) Network architecture and call processing system
US20040015590A1 (en) Network interconnection apparatus, network node apparatus, and packet transfer method for high speed, large capacity inter-network communication
WO1997028505A9 (en) Improved method and apparatus for dynamically shifting between routing and switching packets in a transmission network
JPH08223181A (en) Atm exchange and inter-network connection device
JP2962276B2 (en) Session management system and connection management system in ATM connectionless communication network
JPH08237279A (en) Traffic controller
WO2000056113A1 (en) Internet protocol switch and method
WO1999023853A1 (en) Multiport interfaces for a network using inverse multiplexed ip switched flows
JP3124926B2 (en) Virtual LAN method
Cisco Configuring the ATM Router Module Interfaces
Cisco Configuring ATM Router Module Interfaces
Cisco Configuring ATM Router Module Interfaces
Cisco Configuring the ATM Router Module Interfaces
Cisco Configuring ATM Router Module Interfaces
Cisco ATM Commands
Cisco ATM Commands

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: KR

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase