WO2000019680A2 - System and method for optimizing traffic intensity on a network using traffic classes - Google Patents

System and method for optimizing traffic intensity on a network using traffic classes

Info

Publication number
WO2000019680A2
Authority
WO
WIPO (PCT)
Prior art keywords
network
interior
node
link
traffic
Prior art date
Application number
PCT/US1999/021684
Other languages
English (en)
Other versions
WO2000019680A3 (fr)
Inventor
Tod Mcnamara
Original Assignee
Tod Mcnamara
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tod Mcnamara
Priority to AU62551/99A
Publication of WO2000019680A2
Publication of WO2000019680A3


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/04 Interdomain routing, e.g. hierarchical routing

Definitions

  • the present invention relates to interconnectivity of computing machinery and in particular to moving information among a plurality of networked computers.
  • Modularized/layered solutions or "protocols” are known which permit computer systems to communicate, regardless of connection method or vendor-specific hardware implementation, or to permit different networks to communicate or be "internetworked.”
  • Known systems provide for connectivity in and among networks of computerized equipment, and address the problems associated with interconnectivity.
  • Layering in known systems divides the task of interconnection and communication into pieces (layers), wherein each layer solves a piece of the problem or provides a particular function and is interfaced to adjacent layers.
  • Each of the layers is responsible for providing a service to ensure that the communication is properly effected. Examples of some services provided by the various layers are error detection, error recovery, and routing among many communication paths. All the layers in conjunction present the overall communication protocol. It is generally well accepted in the art of internetworking that modularizing in layers with well defined functional interfaces, divides and effectively reduces the complexity of the connectivity problem and leads to a more flexible and extensible solution.
  • the ISO open systems interconnection (OSI) model is a seven-layer model, illustrated in FIG. 1.
  • the OSI model provides a standard for describing a network and facilitating computer communications.
  • the OSI model defines the layers and units of information that pass along a network. As illustrated, data from an application or process running on a first host (HOST A) moves down the model network layers to a Physical layer.
  • the Physical layer defines the physical connection which transmits raw bits across a communication channel to another host (HOST B) and up corresponding layers to a process running thereon.
  • OSI while defining a model or framework in which standards and protocols can be developed at each layer, allows for a flexible approach for implementation of the model.
  • OSI and other layered computer network communications standards are well known and described in detail in the Handbook of Computer-Communication Standards by William Stallings.
  • TCP/IP are two protocols that are part of a protocol suite or family of protocols layered and designed to connect computer systems that use different operating systems and network technologies.
  • TCP/IP which provides a common set of protocols for invocation on dissimilar interconnected systems, is illustrated and mapped in FIG. la to analogous layers of the OSI model.
  • TCP/IP is described in detail in INTERNETWORKING WITH TCP/IP, VOLUME I, by Douglas E. Comer, published by Prentice-Hall Inc., 1995, and/or TCP/IP ILLUSTRATED, VOLUME I, by W. Richard Stevens, published by Addison-Wesley, 1994, which are incorporated herein by reference.
  • TCP/IP is a four layer protocol suite which facilitates the interconnection of two or more computer systems on the same or different networks.
  • TCP/IP is a requirement for interoperability.
  • the four layers comprise two independent protocols: TCP which can be used to access applications on other systems within a single network; and IP which permits identification of source and destination addresses for communication between systems on different networks.
  • TCP/IP application or process data communicated via TCP/IP is "packetized" as it passes down layers through the protocol suite.
  • the original process data first has an information block called a TCP Header prefatorily appended thereto in a TCP layer, to form a TCP packet.
  • the TCP Header contains information to assure that the data travels from point to point reliably without picking up errors or getting lost.
  • An IP layer repacketizes the TCP packet into an IP packet, by adding an IP Header which contains information needed to get the packet to a destination node.
  • the IP packet is further packetized, such as in the ANSI/IEEE 802 local area network protocol, with an additional Logical Link Control (LLC) Header, to form an LLC Protocol Data Unit (LLCPDU).
  • the LLCPDU is "framed" for transmission by addition of a Media Access Control Header and Trailer, to form a MAC Frame for communication between two TCP/IP facilities.
  • a considerable amount of "baggage" in the form of headers and trailer information is added to data which is transmitted between facilities using a layered protocol suite, such as TCP/IP and other layered protocols known in the art. Many additional bits are added at the various layers and must be processed for ultimate transmission across a communication channel at the physical layer. At its destination, the transmitted frame must be unpacketized according to embedded instructions and passed upward through the protocol layers to its receiving application or process. In addition to the substantial increase in the amount of information that must be transmitted as a result of packetization in layered protocols, there is a significant amount of processing overhead associated with packetizing data for network and inter-network transmission.
  • repeaters merely passively amplified signals passing from one network cable segment to the next. While repeaters increased the physical distances over which network data could be transmitted, they did not contribute to any increase in network bandwidth.
  • Bridges effectively replaced repeaters for extending the size and scope of networks. Bridges addressed optimization of connectivity and, to an extent, enhanced network bandwidth. In contrast to repeaters, bridges effectively isolated network segments by actually recreating a packet of signals as it is forwarded in a single network. Bridges are comprised of input and output ports, and maintain tables which map physical addresses to particular ports of the bridge. The tables are based on Data Link Layer (OSI Model level 2) information in each data packet header. The bridge maps an incoming packet for forwarding to a bridge output port based on the packet's destination address. Bridges, like Ethernet interfaces, employ collision avoidance mechanisms at their ports, so they can enhance bandwidth by ensuring that simultaneous transmissions in isolated bridged segments do not collide. Forwarding via bridges, however, introduces substantial delays or latencies in network transmissions as the packets are processed for forwarding. Also, memory requirements for maintenance of tables in bridges become substantial as traffic and the number of nodes in a network increase.
  • Bridges topologically configured on a single level to connect network segments may actually negatively impact bandwidth.
  • Data traversing the network from a source in segment #1 to a destination in segment # 4 must pass through intermediate segments #2 and #3. This effectively reduces the bandwidth available to systems residing on segments #2 and #3.
  • a solution to this effective reduction in bandwidth was introduced with the concept of a network "backbone," as illustrated in Fig. 3b.
  • Routers operate on the Network Layer information (OSI Model level 3, IP packet level in TCP/IP) and therefore facilitate transmission of information among and between different subnet protocols. Isolation of subnets via routers localizes collisions and simplifies the implementation of subnet broadcasts. Routers enabled the configuration of complex network topologies while enhancing bandwidth and facilitating interconnectivity. However, known routers, like bridges, require large amounts of memory to maintain routing tables, and disadvantageously introduce latencies in the transmission of information as it is processed at the appropriate protocol stack layer. Complexities in network configurations led to the implementation of hierarchical network topologies, and created the need for flexibility in reconfiguring existing networks.
  • Hubs essentially receive the wiring/interconnections for all of the systems or nodes configured in a subnet (i.e. one node per hub port), and eliminate the daisy-chaining of connections between systems in a network. Hubs can be centrally located, such as in a network cabinet or telephone closet, such that patching between hubs or subnets can be easily implemented.
  • Switches have been developed more recently, and are increasingly more popular than hubs. Switches, as opposed to hubs, actually process the network traffic or packets and, like bridges, switches maintain tables which map physical addresses to particular ports of the switch.
  • the switch tables are based on Data Link Layer (OSI Model level 2) information in each data packet header so that incoming packets are forwarded to a switch port based on the packet's destination address.
  • Switches are effectively multiport bridges, typically with enhanced capabilities that permit them to function as routers.
  • Typical switches have fast backplanes for receiving signals from nodes and either use a matrix of connections between every port connection possibility, or a central memory table repository, to effect store and forward operations for network traffic. Switches, like bridges and routers, introduce latency in network communications.
  • Communication among internetworked computers (generally referred to hereinafter, irrespective of the physical links, as "telecommunications") is, in many implementations, based on the concept of switching. In telecommunications generally, switching determines which path a data stream takes as it traverses the network(s) from a source node to a destination node.
  • Routers and switches which connect networks using the same Transport Layer protocols but different Network Layer protocols, provide "connectionless" data transfers.
  • packets in connectionless router/switch implementations contain the address of their destination and therefore do not require a logical connection established between transferring nodes. It should be noted that with the TCP/IP protocol suite the destination node verifies that the packet is complete and correct, and requests re-transmission if necessary. TCP/IP can be used over connectionless or connection-oriented environments.
  • Routers and switches connected in Wide Area Networks contribute to possibly the single most severe network issue - limited backbone scaling - in contemporary internetworks (such as the Internet).
  • This problem is sometimes referred to as the "N-1 problem."
  • the problem arises from the fact that each independent aggregate entity, i.e. subnet or "domain", controls the allocation of sub-network (IP) addresses. Consequently, once inter-connected to the backbone, fine gradient subnetwork level detail populates the forwarding table of every backbone switch or router (the terms "switch" and "router" are used effectively interchangeably hereinafter).
  • routers/switches employ traffic optimization algorithms that are typically based on the concept of directing traffic to the shortest path first. Such "shortest-path-first" router models tend to have the opposite of the desired effect, in that they lead to undesirable congestion.
  • the network will tend to congest the nodes/links with the highest connectivity, at the center of a network, first. This is primarily due to the fact that the shortest-path-first algorithm is based on a two dimensional model. Accordingly, the most connected nodes will have the shortest paths to the most nodes, which will make them the most likely nodes to be selected by each independent node space implementing its respective shortest-path-first optimization algorithm. Since each node selects a path independent of other nodes and what they are selecting, a link will congest before that congestion is recognized and the routers determine another route. In addition, since each router typically has a mechanism to exchange feedback about a congested node, each router will spin off to calculate another path to choose, all at substantially the same time.
  • ATM Asynchronous Transfer Mode
  • ATM is a hardware specific implementation comprising ATM switches that support two kinds of interfaces: user-network interfaces (UNI) and network-node interfaces (NNI). UNIs involve one type of ATM cell or information format, and connect ATM end-systems, such as hosts, routers, etc., to an ATM switch. NNIs involve a second type of ATM cell and generally connect an ATM switch to an ATM switch. Virtual circuits are set up across an ATM network to effect the connections for making data transfers.
  • UNI user-network interfaces
  • NNI network-node interfaces
  • Two types of virtual circuits can be set up in ATM networks: virtual paths, which are identified in a cell by virtual path identifiers (VPI), and virtual channels, which are identified by virtual channel identifiers (VCI). VPI and VCI are only significant across a particular ATM link, and are remapped as appropriate at each switch.
  • each ATM switch receives a cell across a link according to a VCI or VPI value
  • Each switch maintains a local translation table in order to look up the outgoing port(s) of the connection and to insert a new VCI/VPI value.
  • the ATM switch then retransmits the cell on the outgoing link with the new connection (VCI/VPI) identifiers.
  • the hardware specific implementation of ATM presents numerous disadvantages apart from its complexity. As with conventional (non-ATM) routers and switches, ATM switches must deconstruct and reconstruct information (cells) as the information traverses the network, which introduces significant latencies.
  • a network implements a concept of orthogonal directional traffic classes which are identified as, but are not limited to: interior traffic, interior to exterior traffic (source traffic), exterior to interior traffic (destination traffic), and transitory traffic.
  • classified traffic transits networks of the present invention, which comprise an "ordered" (i.e. numbered) set of Network Entities ("NE" or elements), commonly referred to and including links, switches, and stations.
  • Each NE in the network according to the invention is "ordered” based on a network "center” which is functionally determined by an NE's connectedness (i.e. the quality and quantity of connections), and by its centeredness (i.e. how close it is to the center of the network).
  • An assigned numeric address ("host number") designated during ordering specifies the "relative" location of each element, and provides information both with respect to the node's "centeredness" and "connectedness" (i.e., expressed as "relative" to the "center" of an Ordered Network). Regardless of the size of the domain (control area subnet), the "relative" location of any NE, e.g. a host or switch or subnet, is readily discerned by one quantity, e.g., the host number, as assigned according to the methodology of the present invention.
  • topologically static switching and mapping are used in place of currently used routing protocols, to thereby simplify identification of directionality and of flow.
  • Each multi-domain network, subdivided into subnets or "control areas," uses a distributed map instead of a forwarding table to determine forwarding links. Consequently, this table does not expand when the exterior inter-networks expand. This table expands only when the locally connected network (i.e. subnet) expands. If the local network happens to be a backbone, then this table will increase only as domains directly connected to the local backbone domain increase.
  • the map-based approach as found in "Ordered Networking" according to the invention, maintains only locally relevant mapping information for data forwarding. Therefore, memory is never cached in or out as the overall network expands.
  • relative addressing gives each inter-connected control area independent control of access connectivity scaling.
  • Aggregating small independent domains within a backbone of interconnected domains into an area allows for scaling data traffic and resources at a backbone. This requires nothing within each of the smaller domains and these smaller domains are effectively unaware of the scaling. Only the backbone controls the scaling and has finer control over backbone resources.
  • By inter-connecting domains of the same approximate size within an area and then inter-connecting this area with other areas of the same approximate size the areas scale the inter-area backbone connections to approximately the same level. This allows the backbone to scale inter-area connectivity to approximately balance traffic or data flow.
  • the aggregation and scaling of arbitrary sets of inter-connected network entities facilitates reductions in backbone, local, and large scale network resource utilization.
  • map servers can be introduced which are used for both interior and exterior "name to relative address resolution". Relative naming eliminates the need for an absolute naming authority and thereby further increases the flexibility of trafficking.
  • Ordered Networking architecture involves network objects and support servers to provide inter-networking communication between network entities both local and remote.
  • Network objects which are distributed at each node, include a SWITCH object and a LINK object.
  • SWITCH and LINK use the same control mechanism regardless of an object's function, position, or particular data structure.
  • objects support two access levels for inter-object control communications: named object access and directed object access. Named object access allows communication between network entities without knowledge of relative addresses, while directed object access allows network objects to communicate using relative network addresses. Since forwarding requires distributed data structures populated for transmitting addressed data between network entities, the named object mechanism allows network entities to communicate before populating these data structures throughout the network.
  • the directed mechanism utilizes the forwarding path.
  • the named mechanism requires thread processing at each forwarding network element or switch, while the directed mechanism requires no processing above the hardware-forwarding or driver-forwarding component.
  • Either mechanism processes Query, Check, Announce, Set and Response control requests. These messages allow co-ordination between all distributed data structures within an Ordered Network.
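As a rough illustration of the two access levels and the five control requests, the sketch below models a control message that can be addressed either by object name (named access) or by relative address (directed access). The class and field names are assumptions made for illustration, not structures taken from the patent.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RequestType(Enum):
    # The five control requests coordinated across distributed data structures.
    QUERY = 1
    CHECK = 2
    ANNOUNCE = 3
    SET = 4
    RESPONSE = 5

@dataclass
class ControlMessage:
    request: RequestType
    # Named object access: identify the target by object name only,
    # usable before relative addresses have been populated.
    object_name: Optional[str] = None
    # Directed object access: identify the target by relative network address,
    # usable once the forwarding data structures are populated.
    relative_address: Optional[int] = None
    payload: bytes = b""

    def is_named(self) -> bool:
        return self.object_name is not None and self.relative_address is None
```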
  • Support servers in the architecture according to the invention include: an inter-Domain Map Server or service (DMS); a Domain Name Server or service (DNS, as known in the art); an Interior Map Server or service (IMS); and an Interior Name Server or service (INS).
  • DMS inter-Domain Map Server or service
  • DNS Domain Name Server or service
  • IMS Interior Map Server or service
  • INS Interior Name Server or service
  • the support servers generally provide a communication support function for proper Ordered Network operation.
  • the IMS is a mapping service provided by the switch that is typically the lowest number in an area or domain.
  • the IMS determines the topology of the region and distributes that topology to individual switches to load their respective path switch matrixes.
  • the DNS is known to be located in edge switches for performing exterior to interior name resolution and network address translation for exterior IP or ON domains.
  • the DMS in each edge node is designated to perform a mapping function for exterior domains to determine both relative domain name paths and interior to exterior network address translation for exterior IP or ON domains.
  • Ordered Networking is implemented according to a methodology that initially determines link sets in a domain. From the link sets, a map establishing the Ordered Network topology is generated by the IMS. A path switch matrix for each node is then generated from the map, and is distributed among the nodes in the domain. That is, every node is loaded with a path switch matrix. Each path switch matrix is loaded with different data and represents a topological map of the entire domain from each router's perspective. The path switch matrix is generated as a function of the four traffic classes (i.e. interior traffic, interior to exterior traffic (source traffic), exterior to interior traffic (destination traffic), and transitory traffic). In operation, the map server (IMS) effectively determines which path network traffic will take. The path matrix located in each node takes the source address, the destination address and the traffic class path and uses them to determine which link to forward the traffic on.
  • Servers (within a domain) query each possible path between a "source" and the intended "destination" for data flow traffic information; they then determine which path has the greatest capacity. Once that path is identified, the corresponding path switch matrices of each switch along that optimum path are loaded by the server. The servers then return information back to the source, namely, a "relative" address for the destination, and data flow along the path commences.
  • Ordered Domains simplify inter-domain communication by presenting a "reduced complexity view” to domains that are “exterior” domains. This simplified view collapses the "interior” domain complexity into a “single apparent switch element” and thereby allows for data reductions in inter-domain routing.
  • the "N- l problem” is effectively eliminated by allowing a single apparent neuvork element to represent an interior of any arbitrary size.
  • Ordered Networking effectively creates an abstract "object layer” (by treating all Network Entities as similarly addressed objects), which can be readily extended and applied to groups, named processes, and identities that come into existence in the future.
  • any two entities communicating define the meaning of a "relative" address and that definition can be expanded in the future without significantly affecting any of the algorithms, methods, and existing implementations of the Ordered Network.
  • the abstract layer is like an overlay which, when applied over disparate elements, renders them apparently (and functionally) uniform.
  • the network servers thereby control and select specific paths for traffic.
  • abstract objects for links, areas, and domains allow for uniform control and collection of this distributed data. Introducing abstract objects also facilitates network controls over and above the physical media layer. Accordingly, pre-allocation of bandwidth and predictable latency can be achieved over networks, e.g. Ethernet, currently lacking those characteristics at the media layer.
  • mapping methods according to the invention simplify network management and control, as well as provide for full integration with ATM, Ethernet, point to point, satellite, or any of various physical media, without the need for complex protocols or special applications.
  • Fig. 1 is a block diagram of an OSI model network protocol stack as known in the art
  • Fig. 1a is a block diagram of a TCP/IP protocol stack as known in the art, as compared to the OSI model
  • Fig. 2 is a diagrammatic representation of packetization of information according to the TCP/IP protocol as known in the art
  • Fig. 3a is a diagrammatic representation of a segmented network with segments interconnected by bridges, as known in the art
  • Fig. 3b is a diagrammatic representation of a segmented network with segments connected to a backbone, as known in the art
  • Fig. 4 shows how various types of network traffic are classified according to the present invention
  • Fig. 5 shows the steps for ordering routers according to the illustrative embodiment of the present invention
  • Fig. 6 shows a networking example with a router connecting two separate networks together
  • Fig. 7 shows the network example of Fig. 6 with a plurality of hosts on each network
  • Fig. 8 shows the network example of Fig. 6 connected to a larger network with multiple routers
  • Fig. 9 shows how the network example of Fig. 8 is designated according to the illustrative embodiment of the present invention.
  • Fig. 10 shows an example network with links designated according to the illustrative embodiment of the present invention
  • Fig. 11 is a flowchart of the steps performed for routing a packet through an ON (Ordered Network)
  • Fig. 12 is a sample three-dimensional matrix for selecting paths according to the illustrative embodiment
  • Fig. 13 is another sample three-dimensional matrix for selecting paths according to the illustrative embodiment
  • Fig. 14 is another sample three-dimensional matrix for selecting paths according to the illustrative embodiment
  • Fig. 15 is yet another sample three-dimensional matrix for selecting paths according to the illustrative embodiment
  • Fig. 16 is a flowchart of propagating node updating according to the present invention
  • Fig. 17 illustrates standard IP Inter-domain communication elements
  • Fig. 18 illustrates a loosely coupled, network centered, inter-domain communication model according to the present invention
  • Fig. 19 illustrates a relative appearance of ordered domains according to the present invention
  • Fig. 20 illustrates a perspective of another domain, from Domain NEW;
  • Fig. 21 illustrates the INS query resolution processing for two hosts connected to the same router on the same links
  • Fig. 22 shows the router's INS response to the query shown in Fig. 21;
  • Fig. 23 shows an INS Database Structure for an isolated router according to the illustrative embodiment
  • Fig. 24 shows an ordered domain to demonstrate the structure and processing of INS within a more complex ordered network
  • Fig. 25 shows an INS database according to the illustrative embodiment for the domain shown in Fig. 24;
  • Fig. 26 is a block diagram showing how networked disk servers for routers are connected by network links;
  • Fig. 27 is a block diagram showing how network traffic is reduced if the memory requirements for the router are fully contained within the router.
  • Fig. 28 is a block diagram of layers of network support functions of a typical network
  • Fig. 29 is a block diagram of the components of an ON Switch according to the illustrative embodiment
  • Fig. 30 is a block diagram of a minimal network with two hosts;
  • Fig. 31 expands on the network of Fig. 30 to show a simple network with many hosts;
  • Fig. 32 is a block diagram of a simple switched network with one switch and two links with many hosts;
  • Fig. 33 is a block diagram of a network with multiple forwarding switches connected by multiple physical links and many hosts.
  • Fig. 34 is a block diagram of the elements of the illustrative embodiment for controlling a router.
  • the present invention is implemented in the context of networked and/or internetworked computing machinery, as known in the art.
  • a method and apparatus is provided which effectively classifies network traffic, and optimizes network traffic flow in deference to and as a function of those classifications. By dividing data into four classes, several optimizations are possible.
  • Implementing and manipulating traffic based on traffic classes markedly improves network performance, as will be described below. These classes are illustrated in Fig. 4, relative to a network domain 40.
  • the classes include interior traffic 42, transitory traffic 44, interior to exterior traffic 46, and exterior to interior traffic 48.
  • the best routing technique for transitory traffic 44 would be the shortest path around the edges of the network domain 40. Since all other local traffic will tend to be directed to the interior of the network, or directed out to a specific external domain, routing transitory traffic 44 around the edges of a domain will tend to minimize its impact on interior traffic (which is routed shortest path first). In fact, orthogonal routing of these two traffic classes, according to the invention, can markedly improve throughput in a network. There are distinct differences between the final two classes, interior to exterior traffic 46 and exterior to interior traffic 48, which is why they are differentiated according to the invention. Within an organization, traffic tends to be balanced between hosts and heavily imbalanced between hosts and servers. Traffic between an organization and the outside world will tend to be imbalanced heavily on the exterior to interior path.
  • traffic flow through each node would improve, because the effects of each class on another class would tend to be minimized. Within a class, however, the basic algorithmic flaw would tend to congest the most connected routes first.
  • these traffic classes or algorithms will be referred to as base class algorithms.
  • the model developed here can easily identify each traffic class based solely on source and destination addresses.
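Because interiority of the source and destination addresses is all that is needed, the classification can be sketched as a simple two-flag decision. The enum and function names below are assumptions for illustration only.

```python
from enum import Enum

class TrafficClass(Enum):
    INTERIOR = 1              # interior source, interior destination
    INTERIOR_TO_EXTERIOR = 2  # source traffic
    EXTERIOR_TO_INTERIOR = 3  # destination traffic
    TRANSITORY = 4            # only passes through the domain

def classify(src_is_interior: bool, dst_is_interior: bool) -> TrafficClass:
    """Derive the base class solely from source/destination locality."""
    if src_is_interior and dst_is_interior:
        return TrafficClass.INTERIOR
    if src_is_interior:
        return TrafficClass.INTERIOR_TO_EXTERIOR
    if dst_is_interior:
        return TrafficClass.EXTERIOR_TO_INTERIOR
    return TrafficClass.TRANSITORY
```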
  • Multi-dimensional route selection algorithms use these four algorithms as the base algorithms for path selection when other information is not available. Therefore, each router should use each of the four algorithms to calculate base class topological routes. These routes should be entered into a table for class based path selection. These table entries are mainly used for class selection when other information is not available.
  • In order to determine direction within a network, the network must be ordered. Any devices composing the network must have addressing controlled by the network. Currently, each network element has independent control of addressing characteristics. If the network had control over the assignment of network addressing, the assignment could be done to minimize routing costs, simplify lookups, and provide tighter security. Currently, each piece of equipment in a network dictates characteristics to the network. This creates chaos within the network.
  • The first step finds the router with the most connections to other routers and with no exterior domain connections, step 200, in order to define a "center". If there is more than one router to choose from, check the routers that connect to these routers, see how many connections to center candidate routers the second tier has, and pick the router that has the most interior connections. This will be the center node for an Ordered Network. If there are still multiple contenders, check to see which router has the most 3rd level interior connections. In counting interior connections, do not count links connected to edge routers. This weights interior links over exterior links. Ordering is shown in step 202 and commences from the center node.
  • The ordering of routers or NEs in this illustrative embodiment is from the most connected to the least connected.
  • the exterior domain connection routers are numbered starting with the router with the most interior domain connections first, followed by the router with the next most interior domain connections, etc., as shown in steps 206-210.
  • This numbering sequence identifies the most connected interior routers by low numbers followed by the least connected interior routers, and finally the highest numbered routers are exterior domain routers. This number also has the following properties: the lower the number of a router the greater the influence on interior traffic; and conversely the higher the number of a router the greater the influence on transitory traffic. It should be appreciated that the numbering sequence direction is somewhat arbitrary, in that one can instead number from high interior numbers to low exterior numbers. The importance is the sequencing and not the numeric direction of the sequence.
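As a rough illustration of the ordering just described, the sketch below assigns low numbers to the most connected interior routers and the highest numbers to edge routers. The graph representation, the function name order_routers, and the simple degree-based ordering are assumptions for illustration; the second- and third-tier tie-breaking described above is omitted.

```python
def order_routers(neighbors: dict[str, set[str]], edge_routers: set[str]) -> dict[str, int]:
    """Assign ordinal numbers: low numbers to the most connected interior
    routers, highest numbers to exterior (edge) routers.

    neighbors maps a router name to the set of routers it links to;
    edge_routers is the subset with exterior domain connections.
    """
    def interior_degree(router: str) -> int:
        # Links to edge routers are not counted, weighting interior links.
        return sum(1 for n in neighbors[router] if n not in edge_routers)

    interior = [r for r in neighbors if r not in edge_routers]
    # Interior routers: most connected first (the "center" gets number 1).
    interior.sort(key=interior_degree, reverse=True)
    # Edge routers last, ordered by how many interior connections they have.
    edges = sorted(edge_routers, key=interior_degree, reverse=True)
    return {r: i + 1 for i, r in enumerate(interior + edges)}
```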
  • Transitory Traffic 44 is routed through routers selected from the highest numbered routers among the shortest path candidates.
  • Interior to Exterior Directional Traffic 46 is routed from the lowest number to higher number routers among the shortest path candidates.
  • Exterior to Interior Directional Traffic 48 is routed from the highest number to lower number routers among the shortest path candidates.
  • Interior Traffic 42 is routed with routers of relatively equal numeric values from among the shortest path candidates.
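The four directional rules above can be read as a tie-break applied to the set of equal-cost shortest-path candidates. The sketch below is one hedged interpretation, assuming each candidate path is a list of router numbers produced by the ordering step; the specific tie-break metrics are illustrative simplifications, not the patent's algorithm.

```python
# Base class numbering as used later in the relative address format:
# 1 - Interior, 2 - Interior to Exterior, 3 - Exterior to Interior, 4 - Transitory.
def select_path(candidates: list[list[int]], base_class: int) -> list[int]:
    """Pick one path from equal-cost shortest-path candidates.

    Each candidate is a list of router numbers assigned by the ordering step.
    """
    if base_class == 4:   # Transitory: favor the highest numbered (edge) routers.
        return max(candidates, key=lambda p: max(p))
    if base_class == 2:   # Interior to exterior: favor paths climbing to high numbers.
        return max(candidates, key=lambda p: sum(p))
    if base_class == 3:   # Exterior to interior: favor paths descending to low numbers.
        return min(candidates, key=lambda p: sum(p))
    # Interior (1): favor routers of relatively equal, central numeric values.
    return min(candidates, key=lambda p: max(p) - min(p))
```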
  • each router looks like a box with some connections extending from it. Each connection is connected to multiple computers. If the router can tell which connection the host is connected to by the host number portion of an address, it doesn't need to know a network number to forward data traffic. Any connected host could conceivably be numbered as if it appeared on the same network that is connected to the same router. In addition, since all hosts in a domain are unique regardless of the network they are attached to, the only quantity required to uniquely identify a host would be the host number. The routers, however, must know that a specific host is attached to a specific network link. This is required so that, if two hosts are attached to different network links but the same router, the router can correctly forward the data to the other network.
  • Fig. 6 shows a single router 50 connecting two different networks.
  • all networks on an individual router are identified by a quantity called a link number.
  • For example, network 52 is on link1 and network 54 is on link2.
  • As shown in Fig. 7, no additional information beyond the host number would be required to know that a host was on link1.
  • by numbering the hosts on link2 with a numerically higher number than that of link1, sequentially up to the maximum number of hosts on link2, you may uniquely identify hosts on either link1 or link2 by the range that the host number falls into. If this process is continued for all numeric links, from 1 to the maximum number of links on a given router, all hosts on a specific router would fall within a specific host number range for that router.
  • If router1 (50) had a data packet with a specific host number, the range of the number would be enough to uniquely identify the network link to forward the data packet onto, as shown in Fig. 7. If router1 (50) has hosts 56 numbered from 1 to total hosts, and the next router started numbering hosts on its network links in the same way but with a number numerically greater than router1's, it is possible to uniquely identify the router that a specific host is attached to by host number alone. In other words, if hosts on each sequenced router are uniquely numbered such that the hosts on the next router are sequentially numbered higher than the previous router, all hosts, routers, and links (networks) will be uniquely identified by the range that a host number falls into. No other quantity would be required to identify the network entities associated with a specific host.
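A hedged sketch of this range-based identification: given host number ranges assigned sequentially across links and routers, the owning router and forwarding link are recovered from the host number alone by a range lookup. The class name, the table layout, and the specific ranges are hypothetical (chosen to loosely match the example developed below).

```python
import bisect

class HostRanges:
    """Map host numbers to (router, link) purely by sequential numbering ranges."""

    def __init__(self, ranges: list[tuple[int, int, int]]):
        # ranges: (first_host_number, router_number, link_number),
        # sorted by first_host_number; each range runs up to the next entry.
        self._starts = [r[0] for r in ranges]
        self._ranges = ranges

    def locate(self, host_number: int) -> tuple[int, int]:
        i = bisect.bisect_right(self._starts, host_number) - 1
        _, router, link = self._ranges[i]
        return router, link

# Hypothetical ranges: router 1 link 1 holds hosts 11-20, link 2 holds 21-30;
# router 2's links continue with numerically higher host numbers.
table = HostRanges([(11, 1, 1), (21, 1, 2), (31, 2, 3)])
assert table.locate(25) == (1, 2)   # host 25 is on router 1, link 2
assert table.locate(50) == (2, 3)   # host 50 is on router 2, link 3
```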
  • Fig. 10 shows an example of an ordered network with numbered router links according to the previously described steps.
  • each GR# represents a GRACE node
  • R# is an IP router
  • each L# is a network link.
  • Network links are numbered router-relative and not network domain unique.
  • Each arrow is a connection to a domain (either internal or external). Notice the centered characteristic of the lower numbered nodes, as relative addresses according to the invention are constructed. Detailing the sequencing will help explain orthogonal data traffic classes and the simplified path switching explained in following sections.
  • An auto-configuration algorithm can be implemented in order to facilitate ordered sequencing, as described.
  • Normal IP addressing uses absolute, arbitrary, authoritative, and universally unique addressing.
  • Each network connection within any inter-connected network domains has a unique IP address.
  • each network entity has a unique name as well.
  • Two unique identifiers within a domain of influence are redundant and create configuration complexity. One unique identifier would allow relative domain usage of the other redundant identifier.
  • IP addressing could be made network entity relative instead of fixed. This would require network address translation across domain name space boundaries, but with proper handling relative addressing would simplify addressing within a network domain by significant levels.
  • the illustrative embodiment implements relative addressing, which uses a standard four octet IP address.
  • the present invention is not limited to such address structures.
  • a domain number and a unique host number are required. Simply put, if each domain within an interconnected network fabric had a unique relative number and each host within the destination domain had a unique identifier, these quantities allow selection of paths to destination domains and end stations. Since the numeric quantity of hosts within a domain is limited and the number of domains within a universe can be made limited, these quantities would allow space for relative path elements within a structure substantially like an IP address.
  • the domain numbers are stored within a relative IP address in the higher bits. Since the host number is the last part of the relative IP address that must be constant as the relative IP address passes through network elements, the host number is stored in the lower ordered bits.
  • the fixed address portions of an Ordered Network relative address (effectively using the construct of an IP address) are: DomainNumber.0.0.HostNumber.
  • the quantities between the two fixed numbers represent path relative values filled in by the inter-connecting network fabric as the data passes across connecting domains. In reality, even for large networks within highly connected domains, there are enough relative bit positions to allow complex path designations.
  • Few bit positions are required for path selection, based on the fact that there are four base classes (i.e. two bit positions) and a fixed upper limit on the number of links to any individual router, usually under 64 (i.e. 6 bit positions).
  • the other zero quantities are used to route between hosts within the interior network domain. Once the data gets into a router that the host is attached to, the non-host part is masked away. This means all hosts within a domain appear to IP within the domain as if they were connected to the same physical network.
  • the address appearance will vary based on the connectivity of the two quantities.
  • the quantities appear as: 0.0.linknumber.hostnumber. Therefore the lookup for the router has been reduced to a direct index into an ordered array of quantities based on link number for this type of forwarding. This can be implemented in hardware, as can the masking of the local IP address. Compatibility with standard IP on the end hosts is assured because to the two end hosts they appear on differing networks. End stations check the destination IP network address for a match with the source host's network to determine if it is a local address or not.
  • If the network addresses match, the hosts communicate together without a router; if the two addresses are different, the end stations send the IP data to their default router. This simplifies router lookup for this type of forwarding.
  • an additional number is added to the IP address: 0.baseclasspath.linknumber.hostnumber.
  • Traffic Base Classes are numbered: 1 - Interior, 2 - Interior to Exterior, 3 - Exterior to Interior, 4 - Transitory.
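The four-octet layouts described above (DomainNumber.0.0.HostNumber for the fixed ends, 0.0.linknumber.hostnumber for interior forwarding, and 0.baseclasspath.linknumber.hostnumber when a base class is carried) can be handled with plain octet packing. A minimal sketch, assuming each field fits in a single octet; the function names are illustrative.

```python
def pack(o1: int, o2: int, o3: int, o4: int) -> int:
    """Pack four octets into a 32-bit relative address (highest octet first)."""
    return (o1 << 24) | (o2 << 16) | (o3 << 8) | o4

def unpack(addr: int) -> tuple[int, int, int, int]:
    return (addr >> 24) & 0xFF, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF

# DomainNumber.0.0.HostNumber: only the fixed ends survive end to end.
exterior = pack(8, 0, 0, 50)

# 0.baseclasspath.linknumber.hostnumber: interior form carrying the base class
# (1 Interior, 2 Interior to Exterior, 3 Exterior to Interior, 4 Transitory).
interior = pack(0, 1, 3, 50)
_, base_class, link, host = unpack(interior)
assert (base_class, link, host) == (1, 3, 50)

# At the destination router, the non-host part is masked away before delivery.
local = interior & 0xFF          # 0.0.0.50
```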
  • Each router has two links, numbered LI (52 and 64) and L2 (54 and 66).
  • Each link on each router has 10 hosts:
  • Link 1 hosts are numbered H11 (56a) to H20 (56j).
  • Link 2 hosts are numbered H21 (58a) to H30 (58j).
  • The steps for forwarding traffic are outlined in Fig. 11.
  • Host 11 (56a) needs to communicate with Host 50 (70j).
  • Host 11 (56a) queried DNS for Host 50's (70j) IP address, which is 0.0.3.50.
  • Host 11 (56a) delivers data destined for Host 50 to Router 1 (50), with source address 0.0.0.11 and destination address 0.0.3.50.
  • Router 1 (50) first looks at the destination address, host part only, to determine where to send it as follows:
  • the packet is forwarded unaltered until it reaches the destination router, where only the non-host part gets masked off prior to local delivery to the destination.
  • Router 1 indexes into a switch matrix using the source router number, 1, and the destination router number determined algorithmically from the destination host number, 2.
  • the base class is used as a third dimension, path. At that location is the interface number on which to forward the data based on Interior class shortest path first, link 1 (52). This has been previously filled by a topological path propagation or from router based calculation of base class paths between each node, according to the present invention.
  • Router 2 (62) receives the forwarded data from router 1 (50).
  • Router 2 looks at the destination Host number and determines that it is local on interface (link) 3.
  • Router 2 (62) masks the address down to the host number field and forwards the data to Host 50 (70j) on interface (link) 3: source - 0.0.2.11 and destination - 0.0.0.50.
  • Implementation of these steps is easily reduced to direct indexing into topologically static tables of values described hereinafter with reference to path switch matrices. This results in significant levels of efficiencies over current implementations of routers.
  • One reason for this efficiency is the address directly indexes to the forwarding link rather than requiring walking unordered forwarding tables. If the example had intermediate routers, these routers would use source and destination host numbers to determine source and destination routers. Then source router number, destination router number, and base class path class would be used as direct indexes into an array to determine the interface to forward the data packet on. This is a direct delivery.
  • the array is different at every node in the path and can be thought of as a directional topological map of the internal domain. This array changes only on permanent topological changes.
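A sketch of this direct-index forwarding, loosely following the Host 11 to Host 50 example above: three lookups replace any walk of an unordered forwarding table. The dictionary-based matrix, the router_for_host range check, and the particular host ranges are assumptions for illustration only.

```python
# Path switch matrix at one node: psm[source_router][dest_router][base_class] -> forwarding link.
# A value of 0 can mark an orthogonal (off-path) entry, as described later.
psm = {
    1: {2: {1: 1}},   # at Router 1: interior traffic from router 1 to router 2 leaves on link 1
}

def router_for_host(host_number: int) -> int:
    # Range check: hosts up to 30 attach to router 1, 31-50 to router 2 (illustrative ranges).
    return 1 if host_number <= 30 else 2

def forward_link(src_host: int, dst_host: int, base_class: int) -> int:
    """Direct index: no table walking, just three lookups."""
    src_router = router_for_host(src_host)
    dst_router = router_for_host(dst_host)
    return psm[src_router][dst_router][base_class]

assert forward_link(11, 50, 1) == 1   # Host 11 -> Host 50, interior class, out link 1
```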
  • All external communication from a local domain to an external domain occurs within edge routers.
  • network address translation is added to handle communication to and from the outside world.
  • the elements of the internal IP address representation to forward to a correct edge router will be considered without examining the details of edge router address translation.
  • the address translation is detailed hereinafter.
  • Each instance of an exterior domain exit point is sequenced. As with all other quantities, the sequencing is from most influential to least influential. Select the lowest numbered edge router (the one with the most internal connections) and count the number of links that connect to outside domains. Number these sequentially, 1 to last on the node. Move to the second most internally connected edge router and number each exterior link starting at a number greater than the previous node's. Continue until all edge routers have been numbered.
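A small sketch of this exterior link sequencing: edge routers are visited from most to least internally connected, and their exterior links receive consecutive numbers across the whole domain. The input tuple format and the example counts are assumptions.

```python
def number_exterior_links(edge_routers: list[tuple[str, int, int]]) -> dict[str, list[int]]:
    """edge_routers: (router name, count of interior links, count of exterior links).

    Returns the exterior link numbers assigned to each edge router, numbered
    consecutively starting at the most internally connected edge router.
    """
    ordered = sorted(edge_routers, key=lambda r: r[1], reverse=True)
    numbering, next_number = {}, 1
    for name, _, exterior_count in ordered:
        numbering[name] = list(range(next_number, next_number + exterior_count))
        next_number += exterior_count
    return numbering

# e.g. GR8 has 3 interior links and 2 exterior links, GR9 has 2 and 1 (hypothetical counts):
print(number_exterior_links([("GR9", 2, 1), ("GR8", 3, 2)]))
# {'GR8': [1, 2], 'GR9': [3]}
```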
  • IP addresses would look like: Source - 1.4.1.temphostnum; Destination - 8.4.2.temphostnum. The temphost numbers are assigned by the domain map server, and are used for translation.
  • An illustrative embodiment for ordered domain routing is implemented using a path switch matrix that is populated with multiple network topological paths for all routers within a domain.
  • This path switch matrix is reloaded for permanent topological changes only.
  • Temporary topological changes, such as down routers, do not cause the path switch matrix to be reloaded.
  • Permanent topological changes would propagate from the interior most router to the exterior most router.
  • the interior router's path switch matrices could be loaded in many ways
  • a computer within the network could be used to store topological information and create media (flash memory, disk files, etc.) whenever a change was required within the network.
  • a computer could be used as an external path server and propagate path data to each node using the distribution algorithm detailed herein
  • the routers may run a standard interior gateway protocol, with the base class variants described, to populate the switch matrixes locally. This choice most closely integrates with the previous networking model; however, it may not be optimal.
  • every node is loaded with a path switch matrix.
  • Each path switch matrix is loaded with different data and represents a topological map of the entire domain from each router's perspective
  • One axis represents the source host's connecting router
  • Another axis represents the destination host's connecting router
  • the path axis represents four base class algorithmic paths, and potentially optional paths selected for specific traffic by an exterior path selection server
  • the element stored at the location selected by the three dimensions is the link interface number on the current router that the data should be forwarded on.
  • the source axis is determined by the source host number, range checked to determine the source router
  • the destination axis is determined by the destination host number, range checked to determine the destination router.
  • the path is determined from the path portion of the destination address.
  • the shortest paths between R4 and R5 are: R4,L1 to R1,L3 to R5, and R4,L4 to GR8,L3 to R5.
  • the shortest paths between R4 and GR9 are: R4,L1 to R1,L6 to GR9, and R4, L2, to R3, L4 to GR9.
  • One path choice between the two different destination pairs would go through node R1. Using normal shortest path, this path would be selected by both destination pairs. But because of base class algorithmic differences, the interior to exterior class selects the shortest path with higher numbered nodes and the interior class selects the shortest path with lower numbered nodes. The R4 to R5 traffic would have selected path R4,L1 to R1,L3 to R5.
  • the R4 to GR9 traffic would have selected path R4,L2 to R3,L4 to GR9.
  • FIGs. 12-15 show how the switch matrix, source and destination routers, and base classes are populated to facilitate this path selection capability. Both paths are shown with the path switch matrix populated for each node. The switch matrix uses the three indexes to quickly select the proper link for forwarding.
  • Appendix A is an example analysis of the network shown in Fig. 10 according to the illustrative embodiment, along with the resulting PSM (path switch matrix) for each node.
  • PSM path switch matrix
  • the source and destination pair in combination with the PSM entry indicates an orthogonal path or a directed path.
  • the path switch matrix at the orthogonal node for the source and destination address would have a zero in it.
  • the link that entered the orthogonal node provides the switch with the information necessary to select a forwarding path toward the original path.
  • the original source and destination addresses provide indexes into the PSM as detailed.
  • the path switch matrix could be loaded with optional paths in addition to the base class paths.
  • the optional paths would allow selection based on different quality of service (QOS) types. If the optional paths are populated dynamically, these paths could be used with an exterior path selection node to dynamically load balance routes.
  • QOS quality of service
  • Memory required by the matrix is minimal compared to the code size of standard protocols as the following table demonstrates.
  • Table 1 Memory required by path switch matrix according to the present invention.
  • networks can be expanded by adding connected domains, this would allow expansion without increasing the memory at every router when capacity within an area is reached.
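Table 1 is not reproduced here, so the following hedged calculation only illustrates the order of magnitude implied by a dense three-dimensional matrix: routers x routers x paths entries. The per-entry size and the example counts are assumptions.

```python
def psm_bytes(num_routers: int, num_paths: int, bytes_per_entry: int = 1) -> int:
    """Memory for one node's path switch matrix: a dense 3-D array of link numbers."""
    return num_routers * num_routers * num_paths * bytes_per_entry

# e.g. a 64-router domain with the 4 base class paths plus 4 optional QOS paths:
print(psm_bytes(64, 8))   # 32768 bytes, i.e. 32 KB per node
```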
  • the requirements for using a universal route distribution algorithm are:
  • Each link address must be configurable under the control of software or hardware but not arbitrarily fixed. Current "Established protocols" require fixed and arbitrary addressing assigned not to facilitate routing but under control of an authority.
  • Each router node must be capable of responding to a ring packet from the source node on one of the destination node's links to the source. There must be an inbound and an outbound path capable of reaching the source. Bi-directional communication over the same link is NOT required. This allows for unbalanced loading.
  • Each router's lowest level code should process responses and be capable of getting into the driver queue ahead of any data not already in the process of being transmitted. This is essential to get the most simplicity in implementation. If the ring/ring response sequence is affected by queued data, the "normal data flow status" capability will be limited.
  • the characteristics of the universal route distribution algorithm according to the present invention include: Determines routes using a ring/ring response sequence and no other protocol.
  • Broadcast is not used for the distribution of routes between nodes. All communications are directed. This reduces the amount of unnecessary information and only differences from existing information are delivered.
  • Topological state changes are not handled by the same mechanism as permanent topological changes.
  • Topological state changes disable selection of certain routes within a set of routes at each node. Topological state changes must be populated throughout the network in a coordinated way that does not create unnecessary packets, provides node route calculation as soon as possible, and does not rely on broadcast. There are various methods of distributing topological state information to routers that all suffer from the same basic drawbacks. If two routers detect topological state changes in neighbors in differing parts of a network, topological state information may be incorrectly gathered because of collisions of state packets. These are caused by the fact that the topological state changes are broadcast and because each node is then responsible for recalculating only one possible path between any two nodes.
  • If each node maintained multiple sets of paths for each source and destination that used differing links, when a link went down somewhere else in the net, all that the node would need to know would be the affected paths.
  • the node would disable those paths and route data to an accessible path from its list of predetermined sets of paths.
  • If the link change was a transient condition, when the link came back up, the previously known paths could be re-enabled without re-calculation of the route from scratch. Only permanent changes in topology would necessitate redistribution of topological state information and recalculation of routes.
  • Because this model of the network has a predetermined structure, designated from the greatest connecting, interior node out to the most remote and least connected node, a coordinated method of disseminating topological change information can be used. Previously invalid routes should not be used, nor should unnecessary packets be generated. Understanding this algorithm is easier by looking at how initial routing information would be populated by a newly installed network.
  • All topological state models determine the nodes across a link on a particular node by ringing that interface and getting a response that identifies the router on that link, in this case by router number. This is done for each interface on a node.
  • the set of links and associated nodes pairs for a specific node will be called a linkset.
  • a linkset is a complete collection of link-node pairs associated with a particular node.
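A linkset, as defined above, can be represented as a plain collection of link/neighbor pairs. The field and method names below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class LinkSet:
    """Complete collection of link-node pairs associated with one node."""
    node: int                                                    # router number of the owning node
    pairs: list[tuple[int, int]] = field(default_factory=list)   # (link number, neighbor router)

    def add(self, link: int, neighbor: int) -> None:
        self.pairs.append((link, neighbor))

# e.g. node 2 reaches node 1 over link 1 and node 3 over link 2 (hypothetical topology):
ls = LinkSet(node=2)
ls.add(1, 1)
ls.add(2, 3)
```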
  • At startup, no topological data is known. On node 1, each link is rung, querying the node(s) on the other side of the link, step 220 of Fig. 16. These responses are collected into a "ring check links" packet and directed to node 2, step 222.
  • Node 2 checks the contents of the "ring check links” packet from node 1 against a current topology table of linksets, step 224. Since there are no previous linksets (startup condition), node 2 adds the link set to the topology change table and sees if there are any routes that can be calculated.
  • Node 2 now rings each individual interface starting with the lowest numbered link to the highest numbered link and assembles this data into a linkset, step 226.
  • This linkset is added to the "ring check links" packet and the packet is directed back to node 1, step 230, and forwarded to node 3, step 228.
  • Node 1 adds the new link information to node l's topology change table and starts calculating routes, step 232.
  • node 2 is doing the same thing and node 3 is ringing each of node 3's interfaces, step 226.
  • each node is progressively populated with new link information allowing each node to calculate routes as soon as possible, step 232.
  • the "ring check links" packet is sent back through each node to node 1 (steps 230 and 232). This allows node 1 to verify that all links were traversed and all responses were properly determined, step 234. The last packet must have an entry with a linkset for each router node in the network. In addition, if any node gets a later "ring check links" packet with data that it may have missed during an earlier propagation, all new routes should be added in a sorted way based on the first node to the last node numerically.
  • This technique generates more packets than it needs to for two reasons. It allows each node to begin route calculations as early as possible, and it minimizes the handshaking between each node because missed packet information is echoed in later packets. In addition, it does not require broadcast or multicast to be available on any link.
  • node 1 receives the topological state change packet from the last node, node 1 sends a "ring check links done" message directed to the last node, step 234.
  • the last node is set up to repeat the last packet until node 1 sends this termination handshake.
  • the last node reflects the final "ring check links" packet back to node 1, each node upon receipt enables data forwarding on its interfaces, step 236.
  • Nodes are initialized in three stages: control data only enabled; interior and control data enabled; and exterior and all data traffic enabled. This sequence assures that all routes are populated with basic routes at the same time. This method has additional benefits when applying topological changes to an operational network. It minimizes the number of nodes operating on old topological data and maximizes the number of paths that will be populated with correct data. It also allows independent routing among old nodes and new topological nodes for the longest possible time.
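The startup propagation of steps 220 through 236 can be sketched as a walk over nodes in numeric order, accumulating each node's linkset into the "ring check links" packet. Message transport, the reflection back toward node 1, the termination handshake, and the staged enabling of traffic are only noted in comments; the function name and data shapes are assumptions.

```python
def propagate_ring_check(linksets: dict[int, list[tuple[int, int]]]) -> list[tuple[int, list[tuple[int, int]]]]:
    """Accumulate each node's linkset into a single "ring check links" packet.

    linksets maps router number -> list of (link number, neighbor router) pairs,
    i.e. what each node discovers by ringing its links lowest to highest.
    """
    packet: list[tuple[int, list[tuple[int, int]]]] = []
    for number in sorted(linksets):            # nodes are visited in numeric order
        packet.append((number, linksets[number]))
        # ...a copy is reflected back toward node 1 at every hop (steps 230-232),
        # so each node can begin calculating routes as soon as possible.
    # The final packet must carry a linkset for every router node in the network.
    return packet
```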
  • the most effective routes selected for transitory traffic 44 would travel the edges of a domain between nodes that connect directly to other outside domains. This would allow interior neuvork load to have little effect on transitory traffic and more importantly, the transitory traffic would have little effect on internal traffic, for example local NFS systems used only by internal hosts. Equally true, an intelligent network manager would connect public Web servers closer to the external domain edge routers, while internal web servers and disk servers would be connected closer to the interior with the greatest number of possible path combinations. This would minimize congestion on any link.
  • the effects of the inside to outside, progressive propagation of topological changes tends to have the least effect on the two greatest, i.e. highest volume, classes of data traffic, transitory and internal, when proper traffic based route selection algorithms are used. Since the interior nodes process the changes first, internal nodes will be the quickest to update their routes allowing communication of interior traffic 42 to begin earliest in time. The edges will detect the changes last but represent the data of least concern to a particular local domain, the transitory traffic 46, 48. Transit data should normally travel around the edges of the domain anyway. The most effected group would be those connections to and from the outside world and the local domain. By picking a time when people are less likely to be on the network, trying to communicate with the outside world, this impact will be negligible. This updating system guarantees a functional neuvork.
  • the "ring check links" packet is originally populated with the previous topology of the neUvork by node 1.
  • the linkset of the associated node is compared to the actual nodes topology.
  • the entry is replaced and the packet is reflected back to the last node, who reflects it back to the node 1, until node 1 gets the packet.
  • node 1 gets the packet.
  • each renumbered router will have all interfaces disabled until the end is reached. When the packet is reflected back by the last router towards node 1, each node, now properly numbered and with its topology tables updated, will be re-enabled.
  • INTEGRATING ORDERED NETWORK DOMAINS The ordered domain model presented here is in contrast to the accepted IP network "cloud” model.
  • the network "cloud," connectionless model creates more problems than it solves.
  • Fig. 17 depicts the standard connectionless model elements for inter-domain communication.
  • Fig. 17 depicts the loosely coupled network centered model which is the subject matter for the present invention.
  • the first function called by a program to begin communication is an acknowledgment of the connection oriented nature within the IP model. As known in the art, every program calls
  • GetHostByName. This becomes a query/response sequence, establishing a source location's interest in communicating with a destination location. It is actually the limit of loosely coupled communications. Knowing the person on the other end is the bare minimum needed for two party communications. For group communication, this is not even required, only that someone out there wants to hear the communicator.
  • the host Lael 74 would attempt to communicate with host Sue 76 by querying DNS 78 for Sue's IP address. Because standard DNS 78 is floating within a domain rather than physically attached to network elements within the domain, nothing about paths or connectivity can be gleaned from the initial communication between domain BOB 80 and domain NEW 82 from this inter-domain exchange. If the DNS functionality were coupled to edge routers at each exterior domain connection, the DNS query could physically travel down multiple paths as it does for standard DNS. For standard DNS, however, the path is absolute from an arbitrary root node rather than egocentric from the domain of interest. If the query were sent out each edge router of the domain to determine relative paths to the destination domain, the DNS query could provide information about relative paths as well as absolute address translation. Each path that detects a unique connection to the destination domain could be used for data traffic as well. If there were multiple paths through different edge routers to the remote domain, selection of paths could be based on Quality of Service, QOS, criteria or available bandwidth.
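  • As a rough sketch of the edge-coupled DNS idea just described: the query fans out through each edge router, and each reply can carry a per-path metric the source uses to pick a path. The PathReply structure, the metric fields, and the bandwidth-based selection rule below are illustrative assumptions, not part of the specification.

```c
/*
 * Sketch of an edge-router DNS fan-out, assuming each edge router that finds
 * a connection to the destination domain answers with the resolved address
 * plus a per-path metric.  PathReply and pick_path are illustrative names.
 */
#include <stdio.h>

typedef struct {
    int edge_router;     /* local edge router the query left through    */
    unsigned addr;       /* absolute address returned by standard DNS   */
    int hops;            /* relative path length to the remote domain   */
    int avail_bw_kbps;   /* advertised available bandwidth on the path  */
} PathReply;

/* Pick the reply with the most available bandwidth (one possible QOS rule). */
static const PathReply *pick_path(const PathReply *r, int n)
{
    const PathReply *best = NULL;
    for (int i = 0; i < n; i++)
        if (!best || r[i].avail_bw_kbps > best->avail_bw_kbps)
            best = &r[i];
    return best;
}

int main(void)
{
    /* Hypothetical replies from two edge routers of domain BOB. */
    PathReply replies[] = {
        { .edge_router = 5, .addr = 0x0A000102, .hops = 3, .avail_bw_kbps = 1500 },
        { .edge_router = 7, .addr = 0x0A000102, .hops = 5, .avail_bw_kbps = 9000 },
    };
    const PathReply *p = pick_path(replies, 2);
    printf("use edge router %d (%d kbps available)\n", p->edge_router, p->avail_bw_kbps);
    return 0;
}
```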
  • the initial DNS query represents a source domain/host's intention to communicate with a remote domain/host. This is the essence of loosely coupled communication.
  • the limiting problem with standard DNS is that no physical relationship between the DNS server and the domain it serves relating to path selection is available. This limits network capacity to pre-determined end to end domain paths across adjoining network entities.
  • Physically implementing DNS at edge routers makes domain resolution relative rather than absolute. Both instances of DNS, however, could peacefully co-exist without modification to end station IP software.
  • Relative DNS domain paths and absolute DNS domain paths are interchangeable. In the real world, local mail, regional mail, and international mail do not necessarily always travel the same postal paths to get to their destinations.
  • the relative path model represents the general case.
  • Fig. 18 an ordered, network centered, inter-domain network model, according to the invention, is presented.
  • interior routers provide both forwarding and INS name to relative IP address translation.
  • the initial DNS query from the source domain to the destination domain loosely establishes multiple paths between the destination domain and the source domain.
  • the DNS query would return sets of paths between the two destinations as well as the absolute reference of standard DNS. This allows the source network or even the source end station to select the optimum path for the characteristics required by its communication application.
  • Ordered networks using a relative version of DNS allow the source network entities control over network resource utilization by selecting a path.
  • a distributed data base of each connecting element within a path is maintained and may be queried to determine dynamic path condition along any path between the source and destination domains.
  • the large arrow 84 represents the inter-domain paths between the source and destination domains.
  • each router between source and destination has been determined.
  • the DNS query sequence aids in establishing inter-domain routes. This has the effect of recognizing that the DNS query sequence is actually a form of loose inter-domain coupling that is ignored in the standard connectionless model.
  • the processing of data traffic relies on network quantity ordering and switching to deliver traffic from edge routers to host connected routers.
  • the INS routers perform relative IP address manipulation within the interior namespace.
  • the edge routers translate INS relative paths into inter-domain relative addresses. All transit routers within the network interrogate the packet and perform direct indexed delivery. Because relative IP addressing modifies the apparent address of traffic data, the physical appearance of a domain is relative rather than permanent. In standard IP, each network looks like a cloud because the path choices are randomly distributed across all connecting routers between two destinations and all IP addresses are permanent. With relative addressing, the network appearance changes with the perspective of the observer. For example, on the source host, the connection from host to destination looks as depicted in Fig. 19. The destination host has the same perspective of the source host's domain. Each domain however, sees its own complexity. Fig. 20 depicts an interior perspective of domain NEW 82 coupled with the apparent perspective of domain BOB 80.
  • DNS Domain Name Server
  • INS Interior Name Space
  • IP's connectionless model must work with the loosely coupled connection oriented model of the ordered domain.
  • Standard Domain Name Servers must become physical rather than floating within a domain. Interior network addresses must be translated into standard IP (Internet) addresses.
  • Ordered domains without routing protocols must be integrated with exterior domains requiring them.
  • IP CONNECTIONLESS MODEL AN INSTANCE OF A LOOSELY COUPLED NETWORK MODEL
  • the standard IP connectionless, router centered, network model actually maps easily into this new "ordered network" model once the elements of IP that perform the connection establishment are identified.
  • before any two IP hosts may begin communication, the two hosts must determine each other's identity within the network, i.e., each other's IP address.
  • a program on either host calls an application program interface to the domain name server client, GetHostByName, with the hostname and domain of the other host. This translates to a query/response sequence to domain name servers that eventually translates into the IP address of the remote host. This sequence may be handled locally or it may be sent by the local DNS services to the remote domain of the remote host for resolution.
  • the reason for the delivery to the remote DNS system is that only the remote domain knows the physical characteristics, including IP address, of that domain's hosts. If a local host had previously sought the name to address translation, the local DNS may respond with previously stored data.
  • the local DNS server's query was previously delivered to the remote DNS system, to provide the local domain with the remote system's IP address.
  • This "end to end" exchange is the essence of establishing a loosely coupled communication mechanism.
  • the difference between this model and the standard IP model is that other connection requirements will be tentatively established during this connection setup in addition to IP address, for example, inter-domain path selection and interior to exterior router path selection within the remote domain.
  • This new ordered network model replaces the connectionless IP model with a network centered, loosely coupled connection oriented model, as described hereinbefore.
  • This model is based on map resolution of "end to end" paths from source to destination.
  • the map resolution has two components, interior domain destination map resolution, and inter-domain map resolution.
  • the details of the interior domain map resolution for local interior host communication have been addressed. Communication with outside hosts must be handled within the model to make this model's usage practical.
  • Three forms of communication that must be mapped into this model include: 1. Outside domain transiting data through the local domain to another outside domain, transitory traffic 44. 2. A local interior host sending data to a host in an outside domain, interior to exterior traffic. 3. An outside domain host sending data to a local interior host, exterior to interior traffic.
  • this model assumes that every aspect of the local network is well known. Only the information connecting the outside world to the inside domain may not be known. By looking at each of outside domain communication paths, it can be determined what local characteristics must be known to handle the exterior data traffic from within the local domain.
  • the local edge router closest to the source domain must be known and the edge router closest to the destination domain must be known. No local host numbers are required. Nothing but a path between the local domain edge routers is required to deliver data across the local domain. The following presents a summary of transitory traffic requirements for the local domain:
  • Source domain's closest local edge router must be known. This requires knowledge of a domain map at edge routers.
  • IP source and destination network addresses must be in each IP packet when the edge router forwards the packet out of the domain. Interior temporary transit addresses must be translated to valid IP addresses for standard IP.
  • the interior domain is mapped into a standard router at the edge routers to translate this interior model to the standard IP model. Because an inter-domain mapping function, or even the recognized need for one, does not yet exist, this type of traffic must map into existing methods. Once inter-domain mapping becomes a normal approach to transitional domain path resolution, a simplified method of handling transitory traffic is possible. Extensions for inter-domain communications are required if connecting domains use this ordered network model. For a local host sending traffic to a remote host in another domain, the local host needs to know the local edge router with the closest connection to the remote domain. A path between the local host's connected router to the edge router closest to the remote domain must be chosen. The local source host number must be translated into a valid IP address at the edge router. The destination IP host address must be correct in the packet forwarded by the edge router. A local interior representation of the destination IP address must be chosen to facilitate routing from the local host to the destination edge router. A summary of local source and remote destination traffic requirements for the local domain follows:
  • Destination domain's closest local edge router must be known. A local representation for the remote IP address must be used on interior addresses. • Path from local source host's router to destination edge router must be selected.
  • the IP source host must be translated into a valid IP address for the edge router to forward.
  • the remote host IP network address must be in the packet for the edge router to forward.
  • the edge router closest to the remote domain is easily identified by the point of entry of the remote host data to the local domain.
  • a path between the edge router and the local host must be selected.
  • An internal representation for the remote host based on host number must be used for interior path determination.
  • the local host's exterior IP address must be translated into an interior host number representation. The following provides a summary of remote source to local destination traffic requirements for the local domain: • Translation of the remote IP address to a local host number representation must take place.
  • the local destination host's exterior IP address must be translated into a local host number.
  • DNS Domain Name Server
  • Domain name servers primarily resolve domain and host names into network and host IP addresses. Using the model described here, fixed network IP addresses are not used internally. Therefore, the nature of domain name space for this model exists at the edges of the domain, the edge routers. Any domain name space resolution must occur at the edge routers and is not required within the domain. Within the domain, only a resolution between hostname and internal host number is required to resolve internal domain name server queries. Therefore, it is desirable to implement the domain name server functionality within the edge routers. This would make the domain name space a physical entity at the position within the domain requiring resolution. In addition, since the internal name space translates into the hostnames and associated host numbers, internal name space resolution could be performed at each internal router.
  • each router would maintain a list of hostnames and associated addresses, as well as host numbers in all ordered networks.
  • the edges need only deal with the domain in relationship to other domains, in order to represent the highest level of order within a domain.
  • Each instance of a router appears as an instance of DNS for an ordered domain. This will be referred to as "Interior Name Service” (INS).
  • INS Interior Name Service
  • the highest numbered router is designated as the primary INS router and any other routers are designated as secondary, INS routers.
  • the distinction between INS and DNS does not carry to end station hosts. Hosts will have primary DNS and secondary DNS designations to the closest INS routers. By designating the higher numbered routers as primary, the loading of more connected nodes is minimized, and INS functionality is distributed over the least connected nodes.
  • router 1 is always the most connected node, as established by the ordering mechanism described hereinbefore, no primary INS functionality is performed there.
  • the reason each router on a network is designated as primary or secondary is in the event that a router is temporarily disconnected from operation, in which case another router(s) on the network link can respond to queries.
  • Interior Name Space service provides an ordered domain with domain name space resolution for names and addresses within an ordered domain in the same way that DNS provides name service resolution for standard DNS.
  • Other network characteristics not currently provided by DNS, are provided to support functionality not currently available with standard DNS and IP networks.
  • INS Interior Name Service
  • for example, IP groups, pre-allocated bandwidth, pre-determined paths, etc.
  • additional features that are provided by INS that are not currently provided by DNS include:
  • INS service provides two main functions: the resolution of internal relative IP addresses between interior network entities, hosts or routers; and hierarchical distributed storage for network data entities across the domain, with access by both interior and exterior network entities through a common query/response facility. This will provide "end to end" network characteristics for multiple paths, giving both end stations and end station domains the ability to select a path among sets of paths based on independent criteria. Quality of Service control is returned to the parties attempting to communicate, rather than the routers providing communication.
  • Interior Name Space service is a reduced functional set of standard DNS service.
  • INS service provides host name to relative IP address resolution of interior domain host(s). All other DNS queries and requests are forwarded to edge routers for processing through standard DNS.
  • INS service routers could process additional queries to provide for "end to end" network characteristics determination.
  • INS must process host name to relative IP address resolution in order to realize the performance improvements of switched IP forwarding.
  • each INS router only processes queries for attached hosts. A query destined for a host on another router is directly forwarded to the other router for INS processing. Queries for exterior hosts are forwarded to the closest edge router for resolution into an interior relative IP address and an exterior IP address (either relative or standard depending on the connected domain).
  • a network address translation entry is created within the edge router for traffic bound for the outside destination.
  • Fig. 21 illustrates the INS query resolution processing for two hosts connected to the same router on the same links.
  • Host John 84 queries for host Ringo's 86 address.
  • host Paul 88 queries for host Ringo's 86 address.
  • Fig. 22 shows router R1's (90) INS response for Ringo to John and Paul.
  • router Rl 90 is in a position to correctly respond to relative address queries and will return the correct address regardless of the local interface of the host to which it is attached. Note that although the addresses appear different to each host, end station communication will correctly send data to the right host because of the relative address processing with each router.
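  • One way to picture this relative resolution is the small sketch below, which assumes (purely for illustration) that the answering router returns a (link, host number) pair that is local to itself; the table contents and the ins_resolve function are hypothetical and not drawn from the specification.

```c
/*
 * Sketch of an INS query handled by the connected router, assuming the
 * answer is expressed relative to the answering router as a (link, host
 * number) pair.  The table layout and ins_resolve are illustrative only.
 */
#include <stdio.h>
#include <string.h>

typedef struct { const char *name; int link; int hostnum; } InsEntry;

/* Hosts directly attached to router R1 in a Fig. 21 style example. */
static const InsEntry r1_hosts[] = {
    { "john",  1, 1 },
    { "paul",  1, 2 },
    { "ringo", 2, 1 },
};

/* Resolve a host name to a relative (link, hostnum) answer; return -1 if unknown. */
static int ins_resolve(const char *name, int *link, int *hostnum)
{
    for (size_t i = 0; i < sizeof r1_hosts / sizeof r1_hosts[0]; i++)
        if (strcmp(r1_hosts[i].name, name) == 0) {
            *link = r1_hosts[i].link;
            *hostnum = r1_hosts[i].hostnum;
            return 0;
        }
    return -1;
}

int main(void)
{
    int link, hostnum;
    /* John's and Paul's queries both land on R1, which answers for Ringo. */
    if (ins_resolve("ringo", &link, &hostnum) == 0)
        printf("ringo is reachable via R1 link %d, host %d\n", link, hostnum);
    return 0;
}
```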
  • INS Unlike normal DNS with an arbitrarily located server responding to absolute addressing queries, INS must be distributed to the connected router because of the relative address resolution requirement. INS database queries are minimally limited to host name resolution only because within a domain the domain and sub domain portions are constant.
  • the actual database structure for an INS isolated router according to the illustrative embodiment is shown in Fig. 23. Note that combining INS with router functionality eliminates the need for configuring multiple network support servers within the host end stations.
  • INS Unlike DNS, all network entities have entries within INS.
  • DNS and INS differ.
  • INS works to architecturally anchor objects within a network.
  • This distributed database structure facilitates abstract network object queries for characteristics along a path or multiple paths between other network objects.
  • This example has focused on using INS to determine relative IP addresses, INS is also intended to allow distributed data flow characteristics queries, capacity queries, quality of service queries, group capabilities, and bandwidth pre-allocation queries.
  • Each entry, at a distribution level is an object class abstraction. This allows integration of future components within the distributed architecture while assuring compatibility with older object instances. Note also that all entities are physical rather than abstract. Part of the confusion introduced by DNS centers around the abstraction selected in the naming conventions used by the roots of DNS.
  • a network is a physical distributed traffic system, yet the naming of root elements for DNS is based on a categorical rather than a physical traffic system.
  • the present invention alleviates this problem in that the hierarchy is based on network connectivity rather than on abstract layering.
  • Fig. 24 shows a domain to demonstrate the structure and processing of INS within a more complex ordered network, similar to the network shown in Fig. 10.
  • Each router in domain NEW would be at the same level and organized from lowest to highest numbered.
  • Each link on each router would be shown the same way followed by each host.
  • Another way to consider INS is as a physical database of interior network connective entities. This includes servers required to perform network connections like DNS and INS, but not servers that provide host system support functions utilizing the network, like network disk servers.
  • Ordered networking architecture is based on clearly separating networking connective functions from network support functions to minimize the complexity of communication elements.
  • the support functions may be integrated in parallel with the networking functions but are separable from the network functions.
  • Ordered network components will migrate within the network based on outages, capacity changes, and temporary configuration alignments for scheduled traffic.
  • Fig. 25 shows the INS database according to the illustrative embodiment for the domain shown in Fig. 24. Note that both the interior and exterior routers are listed. This database exhaustively contains records required by each network entity or network element required to characterize and control an entire interior domain. This database, when combined with either a similar database for exterior ordered domains or GRACE node routers for compatibility with standard IP, provides networking without protocols. No other information is typically required.
  • INS - Interior Name Service Resolves relative addresses and provides distributed physical network database.
  • DMS - Domain Map Service Provides inter-domain map determination and relative domain address resolution for ordered network edge routers. Responds to
  • IMS - Interior Map Service Provides topological map determination, path selection, and path switch matrix data distribution. Responds to Interior Path queries.
  • Fig. 34 shows typical components which are part of a node or switch implementing Ordered Networking.
  • the functions performed by the DMS and IMS map servers are essentially the same, and the software is the same within the two types of servers. Only the location within the network and the network scale differ between the two servers: domain topological scale, interior switch topological scale, area topology scale, etc.
  • the map server processes the link sets that describe the topological neighbor nodes within a bound network.
  • the map server attempts to determine the center node or center nodes within the network.
  • the topological analysis assesses the memory capacity and CPU capacity of each switch. If the static structures required by Ordered Networking data forwarding, the path switch matrix and the link header tables, exceed the capacity of any node within the network, topological analysis will suggest breaking the network into areas. Areas reduce the memory and CPU requirements of switches by breaking the network into smaller sub-networks. This effectively scales the forwarding complexity and reduces resource requirements at each switch. Center analysis occurs before resource checking so that, for multi-centered topologies, the areas will be organized around each independent center switch node.
  • a multi-centered network is a network with two or more equally likely center nodes that are not connected directly to each other
  • a dual centered network is a network with two equally likely centers that are in close or direct proximity to each other
  • a map server would then be designated in each area, and the boundaries or edges of each area would be identified. The analysis would begin all over again within each area.
  • once the topological analysis has determined a center or list of centers, and the switches have the capacity to store all topological data structures, the map server next begins the process of applying an Ordering Algorithm from the center of the network out to the edges.
  • Ordering Algorithm There are several viable ordering algorithms. Many ordering algorithms exist, but for an algorithm to work properly, it should allow for separation of traffic along either physical direction (as with the base classes detailed) or quality of service requirements. Without directional separation, recovery from temporary link disruption will not result in re-establishment of an original path but will result in a new path choice, possibly further disrupting data flow within the network.
  • the ordering algorithm ranks switches within the network.
  • the map server distributes this ranking to each switch within the network through the named entity addressing method. Ordering allows identification of proximity and connective direction within a bound network. Each switch, once given a rank designation within the network, has the ability to resolve addresses for networking entities directly connected to it.
  • Once the map server has disseminated the network rank, based on the ordering algorithm, each switch knows the addresses to assign to directly connected network objects. The ranking guarantees that each address within the overall network is unique and deducible by the host number. The map server next discerns paths to every switch. Each path must be unique and without loop paths. Details of this are provided in Appendix A in reference to the example network in Fig. 10.
  • Ordered networking attempts to design the network as an entity, rather than connecting independently designed elements together to form a network. This is done by clearly defining the physical elements and their inter-relationships, and by creating object abstractions that map over the physical layers to obtain an object structured network instead of a protocol structured network.
  • ordered networking uses this redundancy to handle dynamic conditions within the network. In fact, every support server function could run on any host or router on any arbitrarily small section of a network domain should that section become isolated. This happens automatically without user configuration or intervention. Equally true, independently controlled areas of a network can be combined by re-ordering the network entities and combining the INS database elements.
  • An authority assigns Standard IP addresses in a block for a particular company or organization. If a network grows beyond a previous range and is given a new IP address block, this space would not be contiguous to the previous range. In other words, mapping from Ordered IP addresses to Standard IP addresses may be non-contiguous in addressing. This is the only limiting factor in raw translation. Each block, however, will have a known consecutive range.
  • IP Base Address / Standard IP Address: if there are ranges of IP Base Addresses, store them in a table. The host numbers are adjusted to map into each base range:
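  • A small sketch of one possible base-range table follows. The block boundaries, the BlockMap structure, and the host_to_ip helper are hypothetical illustrations of the adjustment described above, not part of the specification.

```c
/*
 * Sketch of a base-range table, assuming each assigned standard IP block is
 * recorded with the interior host number where the block starts.  BlockMap
 * and host_to_ip are illustrative names and values.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    int      host_base;   /* first interior host number mapped by this block */
    uint32_t ip_base;     /* first standard IP address in the block          */
    int      count;       /* number of consecutive addresses in the block    */
} BlockMap;

/* Two non-contiguous blocks assigned at different times (hypothetical). */
static const BlockMap blocks[] = {
    { 1,   0xC0A80001u, 254 },   /* 192.168.0.1 and up */
    { 255, 0x0A140001u, 254 },   /* 10.20.0.1 and up   */
};

/* Adjust the host number into its block and return the standard address. */
static uint32_t host_to_ip(int host)
{
    for (size_t i = 0; i < sizeof blocks / sizeof blocks[0]; i++)
        if (host >= blocks[i].host_base && host < blocks[i].host_base + blocks[i].count)
            return blocks[i].ip_base + (uint32_t)(host - blocks[i].host_base);
    return 0;   /* out of range */
}

int main(void)
{
    printf("host 3   -> %08x\n", (unsigned)host_to_ip(3));
    printf("host 260 -> %08x\n", (unsigned)host_to_ip(260));
    return 0;
}
```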
  • Areas are abstractions created in the network hierarchical model to facilitate the handling of physical issues. For example, assignment and allocation of standard IP addresses are easily handled by an area.
  • An area represents a set of network elements that is ordered sequentially in the same manner as described hereinbefore, but in isolation from other network elements within the domain. Then the separate areas within the domain are ordered relative to each other. Area numbering fixes base ranges for individual entity numbering within the area. Within the domain, each area's base addresses are sequenced numerically by the area numbers. As relative addresses cross an area boundary, calculations are automatically made zero-base sequential, prior to table lookup, by subtracting the area base host number.
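  • The zero-basing step reduces to one line of arithmetic, as in the sketch below; the area_base table is an assumed example layout, not taken from the specification.

```c
/*
 * Sketch of the zero-basing step described above, assuming each area records
 * the first host number it owns.  area_base and local_index are illustrative.
 */
#include <stdio.h>

static const int area_base[] = { 0, 1, 101, 201 };   /* area 1 starts at host 1, etc. */

/* Convert a domain-wide host number into a zero-based index within its area. */
static int local_index(int area, int host)
{
    return host - area_base[area];   /* zero-base sequential before table lookup */
}

int main(void)
{
    printf("host 105 in area 2 -> local index %d\n", local_index(2, 105));
    return 0;
}
```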
  • Areas may be used for a variety of physical groupings.
  • the problem of packet congestion from exterior traffic on shortest internal routes is handled by declaring two or more geographically distinct areas. Each area is then restricted from exporting neighbor routes to the exterior view of the domain from the other area. Note that this restriction has no effect on the interior domain.
  • the interior domain can access neighbors from either domain portal with no fear of these neighbors plugging up the local domain.
  • the ordered network design criteria center on two primary distinctions from current router based networking.
  • Hosts and routers are treated the same both by hardware and by software, and topology and data flow are the primary design aspects instead of protocols and hardware connectivity. Both hosts and routers generate data. Both hosts and routers sink data. Routers interpret protocols and host applications transfer data between each other. Both require similar resources, CPU power, memory, and storage space for network or user programs. The previous reason for the distinction is that network protocol software required too many resources to perform both user functions and network support functions. Because ordered networks eliminate protocols in favor of distributed data structures, little software is required beyond handling of the universal distribution algorithm and the network objects that apply to host end stations. The amount of code and CPU power utilized by a standard IP host to handle IP far exceeds the capacity requirements of ordered network forwarding support only.
  • the switch matrix data is calculated by one of two map servers, an interior map server and an exterior map server for interior and edge switches respectively. This reduces the unnecessary software redundancy inherent in the current router design.
  • ON switches according to the present invention have redundancy at each switch on distributed data structures so that in the event any network component fails a neighboring component may replace that entity without user intervention or network configuration.
  • the redundancy in current routers poses no benefit to the network as a whole and squanders precious CPU capacity and memory.
  • Ordered networks reduce software complexity by separating the route determination function from the route selection function.
  • the route selection function or forwarding function once separated, will migrate into the hardware.
  • the route determination function will migrate into communication support servers.
  • the gap between functions is bridged by shared distributed data structures.
  • the CPU actually sees reductions in both bus utilization and interrupt generation on local host transfers.
  • the ON Switch board aggregates and processes all interface cards prior to generating notification of locally terminated data, thereby reducing CPU overhead.
  • the CPU will perceive no bus utilization from the forwarding with enhancements to interface boards and the introduction of ON switch network boards.
  • Network design With the requirement of special router boxes removed from the network, software network design becomes extremely flexible. If both hosts and routers were designed to be the same, the network software developed for either would be indistinguishable. This would include host end station software, ON Switch driver software, ON communication support server software, and all other network software provided. When a network function is required, the resources are available, and a host or switch should perform a support function, that network device will automatically perform that function. A network entity, when networking conditions require it, can perform any function required by the network automatically and fluidly without configuration or user intervention.
  • the conditions for execution are defined as part of the individual network objects. All network object entities, taken as a whole, represent the Ordered Network Object Entity and the capability of the network domain.
  • IP can be layered on ordered network objects; until this occurs and has been integrated with operating system layering to support applications, standard hosts running network support software will not act as both IP hosts and Ordered Network support entities. This restriction may easily be removed, but not by the design of an Ordered Network, only by the acceptance and integration of ordered networking by operating systems. Since every ON Switch has standard host capability, certain design criteria should be enforced. An Ordered Network switch should be configured so that it never drops local domain destined packets unless a data flow profile indicates that the delay introduced in storing them locally would render the packet meaningless at the terminating end system.
  • Network faults, topological changes, and capacity changes may cause network support functions to change location within the network.
  • Functions previously implemented in the network that were better implemented in the source and destination systems will migrate to the host application objects of ordered network entities.
  • Network congestion control focuses on monitoring network link capacity and limiting source traffic before exceeding a network profile.
  • the source switch nearest a particular source will remove/reflect packets from/to a source, if that source improperly generates more data than a network profile would allow. Reflected packets indicate to the source that network throttling needs adjustment, (it is also a way for the source to measure the amount of exceeded bandwidth). If a source continues to generate excess packets, the network will logically disconnect that source.
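  • The policing behavior just described can be sketched as a simple per-source profile check. The byte budget, the violation threshold, and the FORWARD/REFLECT/DISCONNECT action names below are assumptions chosen for illustration; the specification does not define these values.

```c
/*
 * Sketch of per-source profile enforcement at the first-hop switch, assuming
 * a simple byte-budget profile.  The thresholds and action names are
 * illustrative, not taken from the specification.
 */
#include <stdio.h>

typedef enum { FORWARD, REFLECT, DISCONNECT } Action;

typedef struct {
    long budget;        /* bytes the profile allows per interval            */
    long used;          /* bytes seen from this source in the interval      */
    int  violations;    /* packets over profile since the last reset        */
} SourceProfile;

/* Decide what to do with one packet of `len` bytes from this source. */
static Action police(SourceProfile *p, long len)
{
    p->used += len;
    if (p->used <= p->budget)
        return FORWARD;
    if (++p->violations > 3)        /* persistent offender: logical disconnect */
        return DISCONNECT;
    return REFLECT;                 /* reflect so the source can re-throttle   */
}

int main(void)
{
    SourceProfile src = { .budget = 3000, .used = 0, .violations = 0 };
    const char *name[] = { "forward", "reflect", "disconnect" };
    for (int i = 0; i < 8; i++)
        printf("packet %d: %s\n", i, name[police(&src, 1000)]);
    return 0;
}
```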
  • if an Ordered Network entity behaves in an unordered or suspicious way, the network will logically disconnect that entity.
  • the foregoing represent the primary design criteria behind the design of Ordered Network objects. As each network entity's specific design is examined, these criteria will be followed so that the network as an entity will operate in an ordered, distributed, and fault tolerant way. Significant improvements over current networking are achieved by the first two elements alone.
  • a further design consideration is that nothing in an Ordered Network should happen by accident or in an uncoordinated way.
  • the network ultimately controls the connectivity, configuration, and distributed data for each individual network entity. Information presented by one network entity will be distributed to any other entity that would be affected by that information. The rest of the network will assimilate no local change of a single network entity until the change has been coordinated within the network hierarchy by the network support servers described hereinbefore.
  • Ordered Networks maintain compatibility at a domain border and host end station only. Connectivity with interior routers running standard protocols is sub-optimal. A domain or sub domain should be all Ordered or all Standard IP.
  • An ordered network is constructed of ordered network (ON) components including ON switches which are the basic Ordered Network forwarding components, and which incorporate ON support servers.
  • Support servers in the architecture according to the invention include: an inter- Domain Map Server or service (DMS); a Domain Name Server or service (DNS, as known in the art); an Interior Map Server or service (IMS); and an Interior Name Server or service (INS), as described.
  • the support servers generally, provide a communication support function for proper Ordered Network operation.
  • the INS in each host, switch, and edge switch performs distributed database and relative addressing functions. That is, the Ordered Network Interior Name Service, is a server providing name to relative address resolution.
  • INS is configured as a distributed database component used by all network elements to coordinate communication capacity information.
  • the Ordered Network Interior Map Service is a server providing mapping for interior switches.
  • the IMS mapping service is provided by the switch that typically has the lowest number in an area or domain, determined as described hereinbefore.
  • the IMS determines the topology of the region and distributes that topology to individual switches to load their respective path switch matrix.
  • the ON DNS is Ordered Network server support of a standard Domain Name Space server known in the art.
  • the DNS as described hereinbefore, is known to be located in edge switches for performing exterior to interior name resolution.
  • the ON DMS or Ordered Network Domain Map Service, is a server providing inter-domain mapping for edge switches and IP compatibility/inter-operability.
  • the DMS in each edge node is designated to perform a mapping function for exterior domains to determine both relative domain name paths and interior to exterior network address translation for exterior IP or ON domains.
  • Ordered Networking architecture further involves network objects to provide inter-networking communication between network entities both local and remote.
  • Network objects, which are distributed at each node, include a SWITCH object and a LINK object.
  • Network objects (SWITCH and LINK) use the same control mechanism regardless of an object's function, position, or particular data structure.
  • objects support two access levels for inter-object control communications; named object access and directed object access. Named object access allows communication between network entities without knowledge of relative addresses, while directed object access allows network objects to communicate using relative network addresses.
  • the named object mechanism allows network entities to communicate before populating these data structures throughout the network. After these structures are populated by support servers, the directed mechanism utilizes the forwarding path.
  • the named mechanism requires thread processing at each forwarding network element or switch, while the directed mechanism requires no processing above the hardware-forwarding or driver-forwarding component.
  • Either mechanism processes Query, Check, Announce, Set and Response control requests. These messages allow co-ordination between all distributed data structures within an Ordered Network.
  • An ordered network requires networking functions and data.
  • an object e.g. LINK or SWITCH
  • Objects are data and functions operating on that data. For an object definition at the Ordered Network level to be viable, different physical networks should map into the network objects with the same level of control, configuration, and performance
  • the Ordered Networking Architecture replaces forwarding protocols with topologically static data structures. These data structures directly tie source and destination relative addresses to an end-to-end network path for data traffic between a source system and one or more destination systems. If the source and destination addresses are equal, then the network path is a group path. If the source and destination addresses differ, then the network path is a point-to-point path. This is the only distinction required to perform group multi-point transfers at the forwarding level within an Ordered Network.
  • ReturnData2, and Optional ReturnData3 or the value 0 if the access data is out of range.
  • Optional values are indicated by an *.
  • An optional field may contain a comment specifying the condition for the optional data as follows:
  • this nomenclature may specify any data structure consisting of substructures. If an address field is comprised of subfields, the address may be specified as follows:
  • an address consists of four main fields; Domain, Path, Link, and Host, as described hereinbefore. Two of those fields optionally consist of sub fields for Darea and Dnumber of Domain and Harea and Hnumber of Host. Since the area parts have the optional indicator, when the area part is zero then the Domain field consists of the Dnumber and the Host field consists of the Hnumber fields. Note that the comment field indicates a notation name alias for each sub field. For example, the Darea part of the Domain may be designated for simplicity as DA and the Dnumber part of the Domain may be designated as DN.
  • the Harea part of the Host may be designated as HA and the Hnumber part of the Host may be designated as HN. Note that when a structure is designated in table nomenclature, nothing appears after the access fields. This implies that accessing the structure yields the indicated access fields.
  • Transport Header (*SrcAddress, DstAddress, *Sequencing, *OtherFields, ... ), TransportData )
  • DstAddress Domain ( *Darea: DA, Dnumber: DN ), Path, Link, Host ( *Harea: HA, Hnumber: HN )
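  • For readers who prefer code, the address layout above might be sketched as a plain structure. The field widths and integer types below are placeholders, since the text does not fix them; a zero area part simply means the optional area subfield is unused.

```c
/*
 * Sketch of the address layout described above as a plain C structure.  The
 * field widths are not given in the text, so the integer types here are
 * placeholders; a zero area part means the optional area subfield is unused.
 */
#include <stdio.h>

typedef struct {
    unsigned darea;     /* DA: optional Domain area part        */
    unsigned dnumber;   /* DN: Domain number                    */
    unsigned path;      /* selected path within the domain      */
    unsigned link;      /* link number                          */
    unsigned harea;     /* HA: optional Host area part          */
    unsigned hnumber;   /* HN: Host number                      */
} OnAddress;

int main(void)
{
    /* Interior destination: both Domain parts zero, no area in use. */
    OnAddress dst = { .darea = 0, .dnumber = 0, .path = 2, .link = 3,
                      .harea = 0, .hnumber = 17 };
    printf("domain %u.%u path %u link %u host %u.%u\n",
           dst.darea, dst.dnumber, dst.path, dst.link, dst.harea, dst.hnumber);
    return 0;
}
```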
  • Ordered Networking architecture supports two views for network aggregates; at times, several of the addressing fields are associative with a specific view.
  • the Domain fields are always associated with the external view.
  • the Host fields are always associated with the internal view.
  • the Path and Link fields are associative and change association as the addresses are translated between network control authorities. In order to indicate the association of the optional fields during the following table discussions, an addition to the nomenclature indicates the field association.
  • Brackets around the Path and Link fields associate the fields when indicating a specific view.
  • the above address implies an external view because the associative fields are bracketed with the Domain and the Domain field is associated with the external view.
  • the source and destination address pair can be viewed as a co-ordinate system presented to the network for path association.
  • the network decodes the various fields into directional traffic classes and aggregate entity associations. Each aggregate entity then associates a specific path through its control area with the source and destination address pair. As the packet moves through the network, different parts of the address determine the forwarding path.
  • the core network lookup data structure is the path switch matrix, (PSM).
  • PSM path switch matrix
  • traversing between source and destination network entities through multiple connective entities requires accessing the path switch matrix to discern the next forwarding link (or links for groups) on the end- to-end path.
  • base address class designations as illustrated in and previously discussed with respect to Fig. 4, for Ordered Network addressing, the Domain and Host portions of an address represent two scales of addressing information that are directly associated with two mapping scales.
  • Inter-domain mapping is associated with the Domain address portion.
  • Interior Domain mapping is associated with the Host address portion.
  • the Area subfields of both the Host and Domain address portions represent additional scaling levels.
  • the basic address class designations considered here are for the Domain and Host scales. However, if the focus shifts to either Area scale, the class designations may be used relative to that Area scale.
  • the general format is as follows:
  • if both the source and destination Domain are zero, then the traffic class for the associated Transport Packet is interior.
  • these fields are optional. However, since the bit positions associated with the domain fields are unused within a local control area or domain, adding these optional fields to the address will expedite forwarding at each interior switch, by allowing direct index look up for local link delivery. This bypasses algorithmically determining them from the host number.
  • if the source domain is zero and the destination domain is non-zero, the traffic class for the Transport Packet is Interior to Exterior Class.
  • the destination link is exterior view associative and not optional. This means that this link number references an inter-domain scale and is required by the network to determine forwarding path.
  • the source host is remote (non-local). If the destination domain is zero, the destination host is local. Therefore, the traffic class for the Transport Packet is Exterior to Interior Class. Note that the source link is exterior view associative and not optional. This means that this link number references an inter-domain scale and is required by the network to determine forwarding path.
  • if both the source and destination domains are non-zero, the traffic class is transitory.
  • both Link fields are exterior view associative, which means the links are relative to inter- domain connections.
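  • The four cases above reduce to a small decoding routine; the sketch below assumes only that a zero Domain part means local, and the enum names are illustrative.

```c
/*
 * Sketch of traffic class decoding from the Domain portions of the source
 * and destination addresses, following the four cases listed above.  The
 * enum and function names are illustrative.
 */
#include <stdio.h>

typedef enum { INTERIOR, INTERIOR_TO_EXTERIOR, EXTERIOR_TO_INTERIOR, TRANSITORY } TrafficClass;

static TrafficClass classify(unsigned src_domain, unsigned dst_domain)
{
    if (src_domain == 0 && dst_domain == 0) return INTERIOR;
    if (src_domain == 0)                    return INTERIOR_TO_EXTERIOR;
    if (dst_domain == 0)                    return EXTERIOR_TO_INTERIOR;
    return TRANSITORY;
}

int main(void)
{
    const char *name[] = { "interior", "interior-to-exterior",
                           "exterior-to-interior", "transitory" };
    printf("%s\n", name[classify(0, 0)]);    /* local host to local host   */
    printf("%s\n", name[classify(0, 12)]);   /* local host to remote host  */
    printf("%s\n", name[classify(7, 0)]);    /* remote host to local host  */
    printf("%s\n", name[classify(7, 12)]);   /* transit through the domain */
    return 0;
}
```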
  • for the interior traffic class network data structures, interior complexity grows as network element inter-connections increase. The following list covers the step by step complexity increases that require additional network data structures for interior traffic handling.
  • the Local Link with Multiple Hosts Network Data Structures require ON Link Objects at each Host. No ON Switch Objects are required.
  • the Local Switch Network Data Structures require ON Link Objects for each Host, and one ON Switch Object with a Link Object for each Switch Local Link.
  • The Multiple Switch Network Data Structures require ON Link Objects for Source and Destination Hosts, Switch Objects at each Source and Destination Connective Entity, and Switch Objects at each Intermediate Connective Entity.
  • Transport Header (*SrcAddress, DstAddress, *Sequencing, *OtherFields, ... ), TransportData )
  • since there are no Path or Link fields, the source host Link Object processes the packet. Each Network Entity would look up the associated MAC frames maintained in its Link Object and add the local MAC framing such that the data is directly sent over the local link to the attached destination Host.
  • the ConnectiveEntityLookup core function implemented in software or hardware:
  • HostNumberBase "Numeric value for lowest numbered host on the link.", ... }
  • Link Object Data Structures Here are the data structures required by the source Host Network Link Object to process the Transport Packet:
  • the ConnectiveEntityLookup core function implemented in software or hardware:
  • HostNumberBase "Numeric value for lowest numbered host on the link.", ... }
  • because the destination has a Link number associated with the address, the destination is not link local; but there is no path number, therefore it is switch local.
  • the Host Link Object forwards the data to the local switch based on the source address.
  • DstAddress contained all of the necessary information for forwarding. Two direct index lookups result in proper data forwarding to the local destination host by the local switch.
  • the source and destination host ON Link Objects will use the same data structures as previously described in the simpler network case.
  • Source Switch Object Network Data Structures when both the SrcHost and the DstHost are from the inbound Transport Packet:
  • Sw SrcSw: "Switch associated with this Host number"
  • Link SrcLink: “Link associated with this Host number”
  • HostNumberBase "Numeric value for lowest numbered host on the link.”
  • PathSwitchMatrix ( SrcSw, DstSw, Path ) { Link: LinkN: "Link to forward the data to", *Sw: SwN: "Switch to receive forwarded data" }
  • the path switch matrix is reduced to a two dimensional structure.
  • the efficiency of failure path handling is dramatically reduced.
  • the switch designation stored data, SwN, is required only for networks that support more than two switches per link. Most networks restrict topologies to two switches per link. Multiple switches per link usually occur in high-level fault tolerance networks only. Ordered Networks will operate properly under both conditions. If the network topology supports more than two switches per link, Ordered Networking architecture allows for load balancing between switches under control of the Interior Map Server. Once the next link and next switch are known, the following steps are performed:
  • SwLinkMacAddress Constant for a specific Switch and link
  • SrcMacAddress SwLinkMacAddress
  • DstMacAddress MacSwTable (SwN)
  • N will be incremented until the next switch matches the DstSw. This indicates that the data is being forwarded to the switch connected to the destination Connective Entity.
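  • The hop-by-hop use of the path switch matrix can be sketched as below. The three-switch topology, the single implicit path number, and the table contents are made-up assumptions; only the direct-index lookup pattern follows the description above.

```c
/*
 * Sketch of direct-indexed forwarding through the path switch matrix,
 * assuming a tiny three-switch network and a single path number.  The table
 * contents are invented; only the lookup pattern follows the text above.
 */
#include <stdio.h>

#define NSW 4   /* switches 1..3, index 0 unused */

/* PathSwitchMatrix(SrcSw, DstSw) -> link to forward on and next switch.  */
typedef struct { int link; int next_sw; } PsmEntry;

static const PsmEntry psm[NSW][NSW] = {
    [1] = { [2] = {1, 2}, [3] = {1, 2} },   /* from switch 1 */
    [2] = { [1] = {1, 1}, [3] = {2, 3} },   /* from switch 2 */
    [3] = { [1] = {1, 2}, [2] = {1, 2} },   /* from switch 3 */
};

/* Walk hop by hop from the source switch until the destination switch. */
static void forward(int src_sw, int dst_sw)
{
    int sw = src_sw;
    while (sw != dst_sw) {
        PsmEntry e = psm[sw][dst_sw];          /* one direct index lookup */
        printf("switch %d: forward on link %d to switch %d\n", sw, e.link, e.next_sw);
        sw = e.next_sw;
    }
    printf("switch %d: deliver to local destination host\n", sw);
}

int main(void)
{
    forward(1, 3);
    return 0;
}
```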
  • Both the SrcHost and the DstHost are from the inbound Transport Packet
  • SrcMacAddress SwLinkMacAddress
  • DstMacAddress MacSwTable (SwN);
  • the local switch does not need to call the ConnectiveEntityLookup lookup because the DstAddress contained all of the necessary information for forwarding. Two direct index lookups results in proper data forwarding to the local destination host by the local switch.
  • Local Destination Switch Object steps are the same whether local or intermediate switches are involved.
  • illustrative ordered network Data Structures are as follows. Data originating inside of the local domain or area but terminating remotely requires one additional step for interior switch processing.
  • the edge switches may or may not have additional processing depending on the control agreement for the shared inter-domain link. Since the additional processing step is required regardless of whether the switch is an intermediate or a source switch, only the intermediate switch case will be detailed.
  • since the source switch is determined from the Connective Entity of the local Source host, the address contains this information. Equally true, the path information is locally relevant and obtained from the address.
  • the Destination Switch cannot be determined from the Destination Host number.
  • the Destination Address host number has relevance within the destination domain only.
  • the shared link to the destination is shown as Exterior View relevant and not optional. Each exterior link is numbered consecutively and attached to the switch that represents the last stop before exiting the local domain. Thus a table is used to translate the exterior link numbers to interior edge switches.
  • the path switch matrix as described in detail hereinbefore performs this function.
  • Both the SrcHost and the DstEVLink are from the inbound Transport Packet.
  • HostNumberBase "Numeric value for lowest numbered host on the link." }
  • DstMacAddress MacSwTable (SwN):
  • Edge switches may or may not have additional processing depending on the control agreement for the shared inter-domain link. Since the additional processing step is required regardless of whether the switch is an intermediate or a destination switch, only the intermediate switch case will be detailed.
  • Transport Header ( *SrcAddress, DstAddress, *Sequencing, *OtherFields, ... ), TransportData )
  • the path switch matrix is used for forwarding and the required input to access the stored data is the source switch, which is unknown because the Source Host is not locally relevant; the destination switch, which is obtained from the locally relevant Destination Host Address; and the path, which is obtained from the locally relevant Path portion of the Destination Address.
  • the address contains this information. Equally true the path information is locally relevant and obtained from the address.
  • the Source Switch cannot be determined from the Source Host number.
  • the Source Address host number has relevance within the Source domain only.
  • the shared link to the Source is shown as Exterior View relevant and not optional. Again, since each exterior link is numbered consecutively and attached to the switch that represents the first stop upon entering the local domain, a table, i.e. the path switch matrix, is used to translate the exterior link numbers to interior edge switches. In the following discussion, for each intermediate switch N will be incremented, until the next switch matches the DstSw. This indicates that the data is being forwarded to the switch connected to the destination Connective Entity.
  • Both the SrcEVLink and the DstHost are from the inbound Transport Packet.
  • SrcMacAddress SwLinkMacAddress
  • DstMacAddress MacSwTable (SwN);
  • Transit supplied data structures attained from the Transport Packet are described.
  • Transport Header ( SrcAddress, DstAddress, *Sequencing, *OtherFields, ... ), TransportData )
  • Source Switch - Unknown because the Destination Host is not locally relevant.
  • Destination Switch - Unknown because the Source Host is not locally relevant.
  • the Addressees' host numbers have relevance within the remote domains only.
  • the shared links to the addresses are shown as Exterior View relevant and not optional.
  • Each exterior link is numbered consecutively and attached to the switch that represents the first stop upon entering or exiting the local domain.
  • the path switch matrix includes a table that translates the exterior link numbers to interior edge switches.
  • N For each intermediate switch, N should be incremented until the next switch matches the DstSw. This indicates that the data is being forwarded to the switch connected to the destination Connective Entity.
  • Both the SrcEVLink and the DstEVLink are from the inbound Transport Packet.
  • SwitchTable ( LinkN ) { MacTable: MacSwTable, BaseHostNumber, SwLinkMacAddress: "Constant for a specific Switch and link", ... }
  • SrcMacAddress SwLinkMacAddress
  • DstMacAddress MacSwTable (SwN);
  • Absolute Authoritative Shared Links are links which map to standard IP, and are only implicated if ordered networking is implemented in the context of standard IP.
  • Mutually Controlled Shared Links are links in which all connected entities agree to a control authority that is responsible to provide inter-domain mapping information and proper forwarding, if inter-connected entities obtain Domain addresses for Transport packets from the Mutual Control Authority.
  • Independently Controlled Shared Links are links where each entity connected by the shared link independently determines inter-domain maps. Each shared link responds to DMS neighbor queries to create Network Address Translation (NAT) tables. These NAT entries translate locally relevant Domain addresses into neighbor relevant Domain addresses as packets pass the inter-domain shared link. The translation is from locally relevant local tables to neighbor relevant upon domain exit. This allows inbound inter-domain packets to already be relevant to the local domain upon entry.
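  • The exit-time rewrite might look like the sketch below, assuming a per-link table built from DMS neighbor queries; the domain numbers and the nat_on_exit helper are invented for illustration.

```c
/*
 * Sketch of the NAT step at an independently controlled shared link,
 * assuming a per-link table that maps locally relevant Domain numbers to the
 * neighbor's numbering on exit.  Table contents and names are illustrative.
 */
#include <stdio.h>

typedef struct { unsigned local_domain; unsigned neighbor_domain; } NatEntry;

/* NAT table for one inter-domain shared link (hypothetical values). */
static const NatEntry nat[] = { { 1, 14 }, { 2, 9 }, { 3, 27 } };

/* Rewrite the Domain number so it is already relevant to the neighbor. */
static unsigned nat_on_exit(unsigned local_domain)
{
    for (size_t i = 0; i < sizeof nat / sizeof nat[0]; i++)
        if (nat[i].local_domain == local_domain)
            return nat[i].neighbor_domain;
    return local_domain;   /* no entry: leave unchanged */
}

int main(void)
{
    printf("local domain 2 leaves as neighbor domain %u\n", nat_on_exit(2));
    return 0;
}
```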
  • NAT Network Address Translation
  • DMS Domain Map Server
  • Fabric Domains or backbone networks provide inter-connections not between hosts but between domains. With Ordered Networking, substantially every data structure and algorithm previously explained applies directly to backbone inter-connections with a simple change of scale.
  • the source and destination address pair represented a co-ordinate system for a local interior domain consisting of hosts. If the word host is replaced with domain, and each of the access fields were changed from host address fields to domain address fields, nothing else would be required. The exact same data structures will work for inter-domain. Only the ordering applied to the addresses must be applied to the domain numbering within the backbone fabric. The following duplicates the intermediate switch section and highlights the change required to properly work with a Fabric Domain Ordered Network.
  • Both the SrcDomain and the DstDomain are from the inbound Transport Packet.
  • PathSwitchMatrix ( SrcSw, DstSw, Path ) { Link: LinkN: "Link to forward the data to", + Sw: SwN: "Switch to receive forwarded data" }
  • SwitchTable ( LinkN ) { MacTable: MacSwTable, BaseDomainNumber, SwLinkMacAddress: "Constant for a specific Switch and link", ... }
  • SrcMacAddress = SwLinkMacAddress;
  • DstMacAddress = MacSwTable(SwN);
  • Both the SrcDarea and the DstDarea are from the inbound Transport Packet.
  • DareaNumberBase "Numeric value for lowest numbered Darea on the link.”
  • DareaNumberBase " Numeric value for lowest numbered Darea on the link.”
  • PathSwitchMatrix ( SrcSw, DstSw, Path ) { Link: LinkN: "Link to forward the data to", + Sw: SwN: "Switch to receive forwarded data" }
  • SwitchTable ( LinkN ) { MacTable: MacSwTable, BaseDareaNumber, SwLinkMacAddress: "Constant for a specific Switch and link", ... }
  • SrcMacAddress = SwLinkMacAddress;
  • DstMacAddress = MacSwTable(SwN);
  • Interior Domain Area scaling allows aggregation of smaller sized interior network elements to provide for efficient control and network resource usage. Again, nothing changes to provide for this efficiency except a change of scale. Only the ordering applied to the addresses must be applied to the Interior Area numbering within the local domain. The following section duplicates the intermediate switch section and highlights the minor changes required to properly work with Interior Areas according to the Ordered Network concept(s).
  • both the SrcHarea and the DstHarea are from the inbound Transport Packet.
  • Sw SrcSw: "Switch associated with this Harea number”
  • Link SrcLink: “Link associated with this Harea number”
  • HareaNumberBase "Numeric value for lowest numbered Harea on the link.”
  • PathSwitchMatrix ( SrcSw, DstSw, Path ) { Link: LinkN: "Link to forward the data to", + Sw: SwN: "Switch to receive forwarded data" }
  • networks are analyzed to further characterize the communication functions, communication characteristics, and end station functions of ordered networking.
  • the networks progress from simple to moderately complex. Particular physical networks are not considered.
  • Individual network specifics are implementation details that are generalized by the network object abstractions described hereinbefore. Each specific network, however, must be accurately represented by the object abstractions, as illustrated.
  • Fig. 30 shows the minimalist network, comprising a single link 96 with 2 Hosts 98, 100.
  • the communication functions involved in such a network, according to ordered networking of the invention, are: • ON Address Assignment: a host must identify itself to the other host and assign Ordered Network addresses.
  • the communication characteristics of such a simple network are as follows: since each host can talk to the other, there is one bi-directional connection on one path (Bob 98 to/from Jim 100), two unidirectional connections on one path (Bob 98 to Jim 100, Jim 100 to Bob 98), and no multi-point connections on one path.
  • Communication bandwidth on such a simple network: since each host can determine the amount of inbound data arriving, each host can determine the outbound network capacity available on the local network simply by knowing the total network link capacity and subtracting the inbound network capacity. This assumes inbound and outbound traffic share the network medium. Since each host communicates through its computer directly to the connecting link, latency is fixed and constant with respect to a given direction. The latency per direction, however, may differ based on the types of computers, communication cards, and software on each system.
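A worked illustration of that bandwidth reasoning, under the stated assumption that inbound and outbound traffic share the medium; the capacity figures are hypothetical examples, not values from the specification.

```python
# Hedged sketch: a host estimates its available outbound capacity on the
# shared two-host link by subtracting observed inbound traffic from the
# total link capacity. The 10 Mbit/s figure is a hypothetical example.

link_capacity_mbps = 10.0       # total capacity of the shared link
inbound_mbps = 3.5              # measured data arriving at this host

outbound_available_mbps = link_capacity_mbps - inbound_mbps
print(outbound_available_mbps)  # 6.5 Mbit/s available for sending
```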
  • end station functions should include data chopping. Chopping the data is required because the physical network link transfer size will be limited and most likely smaller than that of the application transfer data size. Data sequencing between the two end stations may be included if the two applications require sequential data and the physical network can drop or cause erroneous packets. If the two applications need all data to arrive sequentially, the applications may use a window acknowledgment method as known in the art. If the applications require all data exchanged but not necessarily sequenced, the applications may use a mosaic acknowledgment method as known in the art.
  • packet integrity is provided for header and payload at the physical interface layer.
  • sequencing functionality is listed as part of the end station functionality.
  • Topological considerations may require sequencing, although the host end station applications do not require it. Since sequencing, as a function, will use less software when performed at the source, it is listed as an end station function for both situations. Sequencing data at the network introduces tremendous overhead, while adding it to the source causes virtually no additional overhead. Also note that chopping functionality is not referred to as framing. Ordered networks have the source chop data into the smallest possible frame size required by links along the selected path. This data, however, will be aggregated along the path when transiting links of larger frame size. When the data traverses a smaller link, the data will automatically be framed for the smaller link without software intervention. This occurs because the chopped pieces have proper addressing already in place as they leave the source. Remember that nothing in an ordered network is layered except the end station software. In fact, switch forwarding requires substantially no software at all.
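A minimal sketch of source-side chopping as described: the source splits application data into the smallest frame size required by any link on the selected path, so downstream links never have to re-fragment. The frame sizes used are assumptions, not values from the specification.

```python
# Hedged sketch: chop application data at the source into the smallest
# frame size required along the selected path. Larger links may carry the
# chopped pieces aggregated; smaller links never need to re-fragment.

def chop(data: bytes, path_frame_sizes):
    """Split data into chunks no larger than the smallest frame on the path."""
    chunk = min(path_frame_sizes)
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

# Example: a path whose links carry 1500-, 576-, and 9000-byte frames.
pieces = chop(b"x" * 4000, [1500, 576, 9000])
print(len(pieces), [len(p) for p in pieces])   # 7 pieces of at most 576 bytes
```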
  • ON address to physical address association must be done. Ordered Network addresses must be mapped onto the physical network link addresses. Since the ON assignment entity will receive ON address requests framed by the MAC address of the physical layer, this entity could also provide the inter-network to physical network mapping function during request handling. This would make the address assignment entity responsible for ON address assignment, and for ON address to MAC (Medium Access Control) address mapping.
  • N Hosts per single link. More specifically, there are:
  • bandwidth capacity will be random and uncoordinated.
  • ON local bandwidth checking can be effected, including querying/controlling the number of simultaneously active hosts, multi-point groups, etc.
  • querying/controlling of locally active hosts and simultaneous connection levels, and querying/controlling active host data source throttling can be effected. Since each host communicates through its computer directly to the connecting link, the latency will be a function of the number of simultaneously active parallel connections and the transfer characteristics of the physical network, plus the fixed overhead of the local host. To control latency capacity, the same criteria as for bandwidth would apply.
  • End station functions in the more complex configuration include chopping the data into packets, which is required because the physical network link transfer size will be limited and most likely smaller than that of the application transfer data size.
  • data may require sequencing between two end stations.
  • Each network application may open communications with multiple hosts simultaneously. Equally, differing applications may be communicating simultaneously on the same host. Consequently, once the data arrives at the local host end station a mechanism for delivering the data to specific application code threads must exist.
  • transfer characteristics for networks like Ethernet, token ring, etc., for a known packet size start out linear until a certain number of connections is exceeded. After hitting this "knee", a sudden and exponential drop in capacity usually occurs.
  • the network may be kept from exceeding this knee, thereby maintaining a predictable performance for both capacity and throughput.
  • an excessive number of simultaneous connections will collapse network devices that use back-off algorithms, as most LANs do.
  • the only way to have predictable communication would be to coordinate connectivity and data throttling by the network for each data source. If the mechanism for coordination can be applied generally, characterizing the physical network is the easiest part. In addition, no user suffers: if the duration of connectivity during peak periods is limited and applications are designed to accept these limits by operating in the background or rescheduling network activity, everyone is better off.
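The kind of network-side coordination argued for here can be sketched very simply: a coordinator admits a data source only when capacity is available and otherwise tells it to defer, so background applications reschedule instead of congesting the link. This is an assumed illustration with made-up rates and a made-up deferral policy, not the patented mechanism.

```python
# Hedged sketch: the network coordinates each data source instead of
# letting sources contend. Rates and policy are illustrative assumptions.

class LinkCoordinator:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.allocated = 0.0

    def request(self, rate_mbps):
        """Admit the source now if capacity exists, else tell it to defer."""
        if self.allocated + rate_mbps <= self.capacity:
            self.allocated += rate_mbps
            return "start now"
        return "reschedule / run in background"

coord = LinkCoordinator(capacity_mbps=10.0)
print(coord.request(6.0))   # start now
print(coord.request(6.0))   # reschedule / run in background
```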
  • each host has to have the address previously configured or the network needs to assign the addresses to each host upon request.
  • the latter implies a network entity designated to assign addresses in a coordinated way so that no address is assigned to more than one host.
  • each address and host name has to be unique within this network.
  • ON name resolution, via the ON Interior Name Service, is implicated in that each link's hosts need to be known across links. In addition, those names need translation into relative ON addresses to locate hosts. Since only the switch knows about both links, this functionality belongs on the switch.
  • ON Mapping i.e. ON address to network link association, is implicated in that hosts on one link must be differentiated from hosts on another link by assigned addressing space.
  • Each link's addresses are assigned by an independent method before co-ordination by a switch.
  • Each link initializes as an independent network. When the switch initializes, the single link addresses must be re-ordered. The re-ordered address identifies both a specific link and a specific host. This assignment is required when multiple links attach to the same switch. This implies that the switches should be responsible for that assignment.
  • This ON mapping function is performed by the ON Interior Map Service.
  • Ordered Network addresses must be mapped onto the physical network link addresses. Since the ON assignment entity will receive ON address requests framed by the MAC address of the physical layer, this entity could also provide the inter-network to physical network mapping function during request handling. Although this function has not changed, other address related requirements have created two network entities: ON Interior Name Service and ON Interior Map Service. This functionality could be incorporated into either, but since it is primarily database in nature, it belongs in the ON Interior Name Service. This would make the address assignment entity responsible for ON address assignment, and for ON address to MAC address mapping.
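A small sketch of the re-ordering and mapping duties just described: when a switch joins two independently initialized links, it re-numbers each link's hosts into one ordered space (so an address identifies both the link and the host) and keeps the ON-address-to-MAC association captured from the assignment requests. The base numbers, host numbers, and MAC values are assumptions for illustration.

```python
# Hedged sketch: a switch re-orders two independently numbered links into
# one ordered address space and records ON-address-to-MAC mappings.
# BaseHostNumber values and MAC addresses are illustrative assumptions.

links = {
    1: {"base": 0,  "hosts": {1: "aa:aa:aa:aa:aa:01", 2: "aa:aa:aa:aa:aa:02"}},
    2: {"base": 16, "hosts": {1: "bb:bb:bb:bb:bb:01", 3: "bb:bb:bb:bb:bb:03"}},
}

on_to_mac = {}
for link_n, info in links.items():
    for local_host, mac in info["hosts"].items():
        ordered_addr = info["base"] + local_host   # identifies both link and host
        on_to_mac[ordered_addr] = (link_n, mac)

print(on_to_mac[17])   # (2, 'bb:bb:bb:bb:bb:01'): host 1 on link 2
```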
  • Multi-point: Multiple communications between differing independent hosts or sets of hosts (multi-point) can occur at the same time. With the introduction of multiple links, each connection becomes interdependent on the activity of other links. Without the network coordinating the beginning and ending of communications, or querying all active hosts in an ordered, low bandwidth consuming way, bandwidth capacity will be random and uncoordinated.
  • ON Bandwidth Query, Check, & Set are used to control link capacity of local transfers, including: querying/controlling the number of simultaneously active hosts, multi-point groups, etc.; querying/controlling locally active hosts and simultaneous connection levels; and querying/controlling active hosts' data source throttle.
  • This information must then be broken down into local link traffic for each independent link and shared link traffic.
  • the shared link traffic is limited to the capacity available on the lowest-capacity link for the current data traffic flow.
  • a higher capacity link feeding a lower capacity link cannot generate more data than the low capacity link will handle without wasting bandwidth of the higher capacity link.
  • the higher capacity links waste bandwidth because if more capacity is generated than can be delivered, the network capacity from the source to the point that the network drops the data is wasted. This capacity could have been used by traffic terminating before the bottleneck occurs.
  • the only way to minimize bandwidth waste is to coordinate capacity for each new connection, i.e. ensure the capacity exists before data traffic begins.
  • the only way to control congestion is to prevent congestion.
  • Ordered networks prevent congestion by allowing connections only when capacity is available. Once a connection spans a switch, the latency of the switch's forwarding must be considered as well as the latency inherent in individual network links and host end stations. Characterizing the latency of the switch depends on whether the forwarding is done in software or hardware. If done in hardware, the switch latency should be constant within physical queue depth limits. If done in driver software, the switch latency will be dependent on memory and CPU capacity as well.
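In the multi-link case the admission idea extends to a whole path: a new connection is allowed only if every link along the path has the requested capacity left, since the lowest-capacity link bounds the flow and anything sent beyond it wastes upstream capacity. A hedged sketch follows; the link names and residual capacities are hypothetical.

```python
# Hedged sketch: admit a connection only when the requested rate fits on
# every link of the chosen path; the bottleneck link sets the usable rate.

available_mbps = {"L1": 4.0, "L2": 9.0, "L3": 2.5}   # hypothetical residual capacity

def admit(path_links, requested_mbps):
    usable = min(available_mbps[l] for l in path_links)
    if requested_mbps > usable:
        return False, usable          # refusing avoids wasting upstream capacity
    for l in path_links:
        available_mbps[l] -= requested_mbps
    return True, requested_mbps

print(admit(["L1", "L2"], 3.0))   # (True, 3.0)
print(admit(["L1", "L3"], 3.0))   # (False, 1.0): only 1.0 Mbit/s left on L1
```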
  • end station functions include chopping the data into packets because the physical network link transfer size will be limited and most likely smaller than that of the application transfer data size. As connections span multiple links, the smallest frame size of a link limits the source chop size.
  • each network application may open communications with multiple hosts simultaneously. Consequently, once the data arrives at the local host end station a mechanism for delivering the data to specific application code threads must exist.
  • a map server computer should be able to analyze the inter-dependencies of multiple links spanning diverse networks in order to control connections and have predictable communication behavior. This is the algorithmic goal of ordered networking. Equally true, by detailing the limits associated with basic quantities during design, a choice between calculating and pre-storing information in tables should be made. As the number of connections quickly increases, data associated with these connections would exhaust large amounts of memory. Nevertheless, the total number of hosts associated with a particular link is a relatively fixed quantity by comparison, and the data associated with each host is accessed often for mapping and resolution. Putting this into a table would save considerable processing.
  • Ordered Networks are composed primarily of distributed data structures, calculated performance data, and network forwarding elements. All support server functions either load forwarding data structures, calculate performance capacity, or resolve relative addresses. Consequently, each object responds to the same set of commands, i.e., query, set, and check.
  • Query allows dynamic determination of a request.
  • Check allows information presented to be compared to network data that may be either dynamic or static in nature. Set allows the user, the network manager, or the network to modify network data or conditions. If an object supports multiple commands, these may be combined or made conditional, i.e. If (Query > Check) then Set.
  • This simple control set allows the elimination of other network protocols and allows consistent, uniform development of distributed network applications. Aggregate network abstractions, like paths, domains, etc. may be queried for multiples, but a set may only operate on an individual instance. Conditionals allow for combinations of plurals as long as the conditional evaluates to one instance.
  • a command set ON Query, Check, and Set Best Path will query all paths between a source and destination, compare the paths' dynamic performance against the profile represented by the
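The uniform query/check/set control described above can be sketched as a single object interface; the conditional form "If (Query > Check) then Set" then falls out naturally. The class name, fields, and capacity values below are assumptions for illustration only.

```python
# Hedged sketch: every network object answers the same three commands.
# The conditional combination "If (Query > Check) then Set" is shown below.

class NetworkObject:
    def __init__(self, data):
        self._data = data                      # dynamic or static network data

    def query(self, key):
        """Dynamically determine the requested value."""
        return self._data[key]

    def check(self, key, threshold):
        """Compare presented information against network data."""
        return self._data[key] > threshold

    def set(self, key, value):
        """User, network manager, or the network modifies network data."""
        self._data[key] = value

link = NetworkObject({"free_capacity_mbps": 7.0})

# Combined / conditional form: reserve capacity only when enough exists.
needed = 5.0
if link.check("free_capacity_mbps", needed):
    link.set("free_capacity_mbps", link.query("free_capacity_mbps") - needed)
```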
  • Network addresses must be assigned. This function now requires coordination at three levels. On each link, a network entity must be designated to assign addresses in a coordinated way so that no two hosts on the same network link are assigned the same address. Again, once the choice is made to designate an assignment entity, the risk of failure must be addressed such that if the assignment entity disappears, the network address assignment function continues to work properly.
  • ON Address reordering for hosts and links must take place.
  • the switches will be reordered based on the ON IMS interior map, according to ON address reordering for switch identification.
  • the reordering enables the host addresses to identify specific switches, links and hosts. This function is implemented in each switch object but is controlled by the ON IMS.
  • the ON IMS switch is normally designated as the lowest ordered, highest connected switch, as described hereinbefore.
  • Each link's hosts also need to be known across links. In addition, those names need translation into relative ON addresses to locate hosts. Since only the switch knows about multiple links, this functionality belongs on the switch. When multiple switches exist, a specific switch aggregates all interior name resolution for the interior domain. Normally, this information is stored and duplicated in each domain edge switch. When a network has no edge switches, the control entity becomes the highest ordered network switch.
  • Link addresses are assigned by an independent method before co-ordination by a switch. Each link initializes as an independent network. When the switch initializes, the individual link addresses must be re-ordered. The re-ordered address identifies a specific switch, a specific link, as well as a specific host. This assignment is required when multiple links attach to the same switch. This implies that the switches should be responsible for that assignment. Re-ordering switches in multi-switch networks is ON Mapping, and is performed by the ON Interior Map Service.
  • Ordered Network addresses must be mapped onto the physical network link addresses. Since the ON assignment entity will receive ON address requests framed by the MAC address of the physical layer, this entity could also provide the inter-network to physical network mapping function during request handling. Although this function has not changed, other address related requirements have created two network entities: ON Interior Name Service and ON Interior Map Service.
  • Multiple paths are introduced when multiple switches are interconnected. Multiple switches with multiple inter-connecting links create multiple paths. The ON IMS determines these path relationships. Both the number of switches and the number of links affect the total number of path combinations.
  • Multi-point: Multiple communications between differing independent hosts or sets of hosts (multi-point) can occur at the same time. With the introduction of multiple links, each connection becomes interdependent on the activity of other links. Without the network coordinating the beginning and ending of communications, or querying all active hosts in an ordered, low bandwidth consuming way, bandwidth capacity will be random and uncoordinated. ON bandwidth commands Query, Check, & Set are used to control link capacity of local transfers as with other less complex cases.
  • the switches' queue depth must be designed in conjunction with the source data throttle and switch data flow profile mechanism. To compound the complexity, the number of connections from independent sources through the same link affects queuing as well. Queue depth at a switch for no-drop conditions may ultimately be the limiting factor on the number of concurrent connections per link, as opposed to link capacity.
  • the latency of the switch's forwarding must be considered as well as the latency inherent in individual network links and host end stations. Characterizing the latency of the switch depends on whether the forwarding is done in software or hardware. If done in hardware, the switch latency should be constant within physical queue depth limits. If done in driver software, the switch latency will be dependent on memory and CPU capacity as well.
  • End station functions in this complex case illustrated in Fig. 33 are substantially the same as described hereinbefore in less complex cases, and include data chopping, data sequencing and data separation, as previously discussed.
  • ON IMS and ON INS services are ordinarily at opposite ends of the network.
  • ON IMS functionality is calculation intensive and outbound data intensive. ON IMS functions distribute data to switches for handling topology, data flow, and quality of service issues (which are beyond the scope of this application). The more connected the switch, the shorter and better the outbound data distribution.
  • the ON INS functions primarily as a distributed database processor to resolve queries and store dynamic responses. This data is not directly required for forwarding by switches and therefore, is less time critical. The less connected a switch, the less forwarding traffic, therefore, the more capacity for incoming request handling.
  • the ON IMS server would identify the sets of topological links, while the ON INS server collects the link capacity information for those links. Finally, the ON IMS would aggregate this data and compare the data to the user's data flow profile. The ON IMS would return the set of paths ordered from closest profile matching to least profile matching to no-capacity paths. When the user responds with a Set Path, the ON IMS would distribute the path switch matrix information required, establishing the path. The ON INS server would distribute connectivity and profile data as required by the path elements. Each server effectively relies on the same distributed database for information.
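The division of labor between ON IMS and ON INS in a Best Path request can be sketched as below; the function bodies, link names, capacities, and the profile field are assumptions meant only to show the sequence of steps described here.

```python
# Hedged sketch of the Query/Set Best Path flow between ON IMS and ON INS.
# Data values and structures are illustrative assumptions.

def ims_topological_paths(src, dst):
    # IMS identifies the candidate sets of topological links.
    return [["L1", "L2"], ["L3", "L4", "L5"]]

def ins_link_capacity(link):
    # INS collects current capacity information per link.
    return {"L1": 8.0, "L2": 6.0, "L3": 2.0, "L4": 9.0, "L5": 7.0}[link]

def query_best_paths(src, dst, profile_mbps):
    paths = ims_topological_paths(src, dst)
    # IMS aggregates the INS data and orders paths from closest profile
    # match to least match to no-capacity paths.
    scored = [(min(ins_link_capacity(l) for l in p), p) for p in paths]
    return sorted(scored, key=lambda s: abs(s[0] - profile_mbps))

ordered = query_best_paths("hostA", "hostB", profile_mbps=5.0)
best_capacity, best_path = ordered[0]
# A subsequent "Set Path" would have the IMS distribute path switch matrix
# entries to the switches along best_path, while the INS distributes the
# connectivity and profile data the path elements need.
```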
  • the ON INS handles distributed database related services.
  • the ON IMS handles path calculation and switch matrix data distribution. Both services are required for request handling but by distributing the workload, the network as a whole becomes resilient.
  • network topology analysis showed that as network complexity increased the functions required by the network to control and coordinate communications increased (even if slightly) and shifted position within the network. For simple topologies, host end stations could perform all functions necessary to coordinate communications but as individual links were interconnected by switches additional server functions were required to coordinate and control communications paths, network topologies, and network addressing.
  • data traffic classes for physical link categorization might include: Local Traffic, where source and destination systems terminate on the same link; Transit Traffic, where source and destination systems are both on different links than the selected link, the selected link being an intermediate link on an end-to-end path; Source Traffic, where the local link has the source system attached but not the destination; and Destination Traffic, where the local link has the destination system attached but not the source.
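The four directional traffic classes can be expressed as a simple classification of a link relative to a connection's endpoints; the sketch below is an assumed illustration of the definitions just given, with made-up link names.

```python
# Hedged sketch: classify a link's role for one source/destination pair,
# following the four directional traffic classes defined above.

def classify(link, src_link, dst_link):
    if link == src_link and link == dst_link:
        return "Local"        # source and destination terminate on this link
    if link == src_link:
        return "Source"       # source attached here, destination elsewhere
    if link == dst_link:
        return "Destination"  # destination attached here, source elsewhere
    return "Transit"          # intermediate link on the end-to-end path

print(classify("L2", src_link="L2", dst_link="L2"))  # Local
print(classify("L4", src_link="L2", dst_link="L7"))  # Transit
```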
  • the following analysis shows the possible paths of the sample network shown in Fig. 10, starting at node 1 and traversing unique path combinations, starting from the shortest to the longest.
  • the map server determines path combinations for all source and destination pairs within a network segment. Then the map server sorts them according to the base class algorithms documented earlier.
  • the path switch matrixes at each switch point are loaded with directed paths as well as failure link rerouting alternatives.
  • Paths marked with a * are alternate but unique routes, which become orthogonal path choices in the PSM:
  • the number in the box tells the number of unique path alternatives determined for the source and destination pairs and the number of network hops (intermediate switches).
  • the map server will crunch paths until enough alternates to populate the path switch matrix have been determined. Some topologies, for example trees, will not provide alternate routes for all combinations of source and destination pairs.
  • the map server will analyze the topology to determine and identify isolation links, i.e. links that when broken cause a section of the network to become isolated. For the topology in Figure 10, with two hops, there are more than enough alternate paths determined, except for the isolated R6 node branch. This node would be flagged as an isolated node.
  • the topological analysis provides path lists for each combination of source and destination node pairs within the network.
  • the map server would now sort these combinations based on the shortest path first and traffic classes. Only paths with two hops or less are maintained, and longer paths should be removed from the sort to minimize the calculation time.
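A rough sketch of the map server's path crunching on a small graph: enumerate simple paths between a source/destination pair, keep those with two intermediate hops or fewer, and sort them shortest-first before applying traffic-class criteria. The adjacency data is a made-up fragment, not the Fig. 10 topology.

```python
# Hedged sketch: enumerate paths up to two intermediate switches and sort
# them shortest-first, as the map server does before applying traffic-class
# criteria. The adjacency list is illustrative, not the Fig. 10 network.

adjacency = {
    "R1": ["R2", "R4", "R5"],
    "R2": ["R1", "R3"],
    "R3": ["R2", "R4"],
    "R4": ["R1", "R3"],
    "R5": ["R1"],
}

def paths(src, dst, max_hops=2, path=None):
    path = path or [src]
    if src == dst:
        yield path
        return
    if len(path) - 2 >= max_hops:      # would exceed allowed intermediate switches
        return
    for nxt in adjacency[src]:
        if nxt not in path:
            yield from paths(nxt, dst, max_hops, path + [nxt])

candidates = sorted(paths("R2", "R4"), key=len)   # shortest path first
for p in candidates:
    print(" to ".join(p))
```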
  • the following tables contain the path information for each of the other nodes reduced to two hops or less.
  • Fig.10 Node 2 Paths
  • the base class routes are selected for each node pair of source and destination. This is done by sorting the paths from a given source node to differing destinations.
  • the sort criteria will vary with the specific network. Separation of traffic classes may outweigh hop distance for some networks. Nevertheless, the outcome of the sort will be paths based on directional traffic classes or quality of service issues. Alternates to route around failed links and nodes will additionally be determined.
  • the first failure alternate represents a link fault redirection while the second failure alternate attempts to go through different nodes as a node failure alternate.
  • the choice of failure alternates, as with the basic sort, will be controlled by the network administrator. The importance to the invention is that conditions normally handled by protocols are effectively handled with static topological data tables.
  • Fig. 10 Node 1 Paths Sorted to Destinations
  • EI R1 on L1 to R2 on L4 to GR7 on L3 to R4
  • Destination Node 5: R3 on L4 to GR9 on L4 to R5, Interior path; R3 on L1 to R2 on L1 to R1 on L3 to R5; R3 on L1 to R2 on L5 to GR9 on L4 to R5; R3 L4 Alternate 1, IE; R3 on L2 to R4 on L1 to R1 on L3 to R5; Failure Alternate 2, EI; R3 on L2 to R4 on L4 to GR8 on L3 to R5 *; R3 on L4 to GR9 on L1 to R1 on L3 to R5 *; GR9 L4 Failure Alternate, IE
  • Destination Node 7: R4 on L3 to GR7, Interior, Interior to Exterior
  • Destination Node 4: R6 on L1 to R2 on L1 to R1 on L2 to R4, Interior path; R6 on L1 to R2 on L2 to R3 on L2 to R4, Failure Alternate 1, IE; R6 on L1 to R2 on L4 to GR7 on L3 to R4, Failure Alternate 2, EI
  • GR7 on L3 to R4 on L4 to GR8, Transitory, Interior; GR7 on L1 to R2 on L1 to R1 on L4 to GR8, Failure Alternate 1, IE; GR7 on L1 to R2 on L2 to R3 on L4 to GR8; GR7 on L2 to R3 on L2 to R4 on L4 to GR8; GR7 on L3 to R4 on L1 to R1 on L4 to GR8
  • Figure 10 Node 9 Paths Sorted to Destinations. Destination Node 1: GR9 on L1 to R1, Interior, Exterior to Interior
  • Destination Node 8: GR9 on L1 to R1 on L4 to GR8, Interior path; GR9 on L4 to R5 on L3 to GR8 *, Transitory path; GR9 on L1 to R1 on L2 to R4 on L4 to GR8 *; GR9 on L1 to R1 on L3 to R5 on L3 to GR8 *; GR9 on L2 to R2 on L1 to R1 on L4 to GR8; GR9 on L3 to R3 on L2 to R4 on L4 to GR8; GR9 on L4 to R5 on L1 to R1 on L4 to GR8

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A network implements orthogonal directional traffic classes, comprising: interior traffic, interior-to-exterior traffic (source traffic), exterior-to-interior traffic (destination traffic), and transit traffic. The classified traffic transits networks forming an "ordered" (i.e. numbered) set of Network Entities ("NEs" or elements), commonly referring to and comprising links, switches, and stations. Each NE is "ordered" with respect to a network center that is determined functionally by the NE's connectivity (i.e. the quality and quantity of its connections) and by its centrality (i.e. how close to the center of the network it is). An assigned numeric address ("host number"), designated during ordering, specifies the relative location of each element and conveys information about both the centrality and the connectivity of the node (i.e. expressed as "relative" to the center of an Ordered Network). For evaluating data flow (traffic), topologically static mapping and switching are used. Each multi-domain network, subdivided into subnetworks or "control areas", uses a distributed map instead of a forwarding table to determine forwarding links. Only locally relevant mapping information is retained for data forwarding. Network objects and support servers provide for networked communication between local and remote network entities. The network objects, which are distributed at each node, include a SWITCH object and a LINK object. The network objects (SWITCH and LINK) use the same control mechanism regardless of the particular function, position, or data structure of the object. The support servers comprise: a Domain Map Server (DMS); a Domain Name Server (DNS); an Interior Map Server (IMS); and an Interior Name Server (INS). The support servers provide a communication support function for network operation. Ordered networking proceeds according to a methodology that initially determines link sets within a domain. From the link sets, a map establishing the topology of the Ordered Network is generated by the IMS. A path switch matrix for each node is then generated from the map and is distributed among the nodes of the domain. The path switch matrix is generated as a function of the four traffic classes. The path matrix located in each node takes the source address, the destination address, and the traffic class, and uses them to determine the link to be used for the traffic. In addition, the path switch matrix handles transient links without a protocol, in accordance with the orthogonality of the classes.
PCT/US1999/021684 1998-09-17 1999-09-17 Systeme et procede d'optimisation d'intensite de trafic sur un reseau, au moyen de classes de trafic WO2000019680A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU62551/99A AU6255199A (en) 1998-09-17 1999-09-17 System and method for network flow optimization using traffic classes

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10072398P 1998-09-17 1998-09-17
US60/100,723 1998-09-17

Publications (2)

Publication Number Publication Date
WO2000019680A2 true WO2000019680A2 (fr) 2000-04-06
WO2000019680A3 WO2000019680A3 (fr) 2000-12-21

Family

ID=22281202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/021684 WO2000019680A2 (fr) 1998-09-17 1999-09-17 Systeme et procede d'optimisation d'intensite de trafic sur un reseau, au moyen de classes de trafic

Country Status (3)

Country Link
US (1) US6262976B1 (fr)
AU (1) AU6255199A (fr)
WO (1) WO2000019680A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000074303A2 (fr) * 1999-05-27 2000-12-07 Telefonaktiebolaget Lm Ericsson (Publ) Optimisation de la topologie et de la technologie d'un reseau central pour traiter le trafic
CN101426031B (zh) * 2008-12-09 2011-09-21 中兴通讯股份有限公司 一种以太网环的地址刷新方法和装置
CN111435545A (zh) * 2019-04-16 2020-07-21 北京仁光科技有限公司 标绘处理方法、共享图像标绘方法及标绘再现方法

Families Citing this family (197)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6636800B1 (en) * 1997-10-27 2003-10-21 Siemens Aktiengesellschaft Method and device for computer assisted graph processing
US6591299B2 (en) * 1997-11-25 2003-07-08 Packeteer, Inc. Method for automatically classifying traffic with enhanced hierarchy in a packet communications network
US6412000B1 (en) * 1997-11-25 2002-06-25 Packeteer, Inc. Method for automatically classifying traffic in a packet communications network
US6353616B1 (en) * 1998-05-21 2002-03-05 Lucent Technologies Inc. Adaptive processor schedulor and method for reservation protocol message processing
US6963545B1 (en) 1998-10-07 2005-11-08 At&T Corp. Voice-data integrated multiaccess by self-reservation and stabilized aloha contention
US6747959B1 (en) 1998-10-07 2004-06-08 At&T Corp. Voice data integrated mulitaccess by self-reservation and blocked binary tree resolution
DE19905893A1 (de) * 1999-02-11 2000-08-17 Bosch Gmbh Robert Verfahren zur Übertragung von digital codierten Verkehrsnachrichten und Funkempfänger dazu
US6459788B1 (en) * 1999-04-27 2002-10-01 Sprint Communications Company L.P. Call center resource processor
US20060034275A1 (en) * 2000-05-03 2006-02-16 At&T Laboratories-Cambridge Ltd. Data transfer, synchronising applications, and low latency networks
TW571599B (en) * 1999-09-27 2004-01-11 Qualcomm Inc Method and system for querying attributes in a cellular communications system
US6978311B1 (en) * 2000-02-09 2005-12-20 Surf Communications Solutions, Ltd. Scheduling in a remote-access server
US6721335B1 (en) * 1999-11-12 2004-04-13 International Business Machines Corporation Segment-controlled process in a link switch connected between nodes in a multiple node network for maintaining burst characteristics of segments of messages
US6684253B1 (en) * 1999-11-18 2004-01-27 Wachovia Bank, N.A., As Administrative Agent Secure segregation of data of two or more domains or trust realms transmitted through a common data channel
US6728808B1 (en) * 2000-02-07 2004-04-27 3Com Corporation Mechanism for optimizing transaction retries within a system utilizing a PCI bus architecture
US7035934B1 (en) * 2000-03-23 2006-04-25 Verizon Corporate Services Group Inc. System and method for improving traffic analysis and network modeling
US7209959B1 (en) 2000-04-04 2007-04-24 Wk Networks, Inc. Apparatus, system, and method for communicating to a network through a virtual domain providing anonymity to a client communicating on the network
US7173912B2 (en) * 2000-05-05 2007-02-06 Fujitsu Limited Method and system for modeling and advertising asymmetric topology of a node in a transport network
TW480858B (en) * 2000-06-15 2002-03-21 Nat Science Council Expandability design of QoS route and transfer
US6914905B1 (en) 2000-06-16 2005-07-05 Extreme Networks, Inc. Method and system for VLAN aggregation
US7111163B1 (en) 2000-07-10 2006-09-19 Alterwan, Inc. Wide area network using internet with quality of service
US6804222B1 (en) 2000-07-14 2004-10-12 At&T Corp. In-band Qos signaling reference model for QoS-driven wireless LANs
US6970422B1 (en) 2000-07-14 2005-11-29 At&T Corp. Admission control for QoS-Driven Wireless LANs
US6850981B1 (en) 2000-07-14 2005-02-01 At&T Corp. System and method of frame scheduling for QoS-driven wireless local area network (WLAN)
US6862270B1 (en) 2000-07-14 2005-03-01 At&T Corp. Architectural reference model for QoS-driven wireless LANs
US7039032B1 (en) 2000-07-14 2006-05-02 At&T Corp. Multipoll for QoS-Driven wireless LANs
US7756092B1 (en) 2000-07-14 2010-07-13 At&T Intellectual Property Ii, L.P. In-band QoS signaling reference model for QoS-driven wireless LANs connected to one or more networks
US7068632B1 (en) 2000-07-14 2006-06-27 At&T Corp. RSVP/SBM based up-stream session setup, modification, and teardown for QOS-driven wireless LANs
US7151762B1 (en) 2000-07-14 2006-12-19 At&T Corp. Virtual streams for QoS-driven wireless LANs
US6950397B1 (en) 2000-07-14 2005-09-27 At&T Corp. RSVP/SBM based side-stream session setup, modification, and teardown for QoS-driven wireless lans
US7068633B1 (en) 2000-07-14 2006-06-27 At&T Corp. Enhanced channel access mechanisms for QoS-driven wireless lans
US6999442B1 (en) 2000-07-14 2006-02-14 At&T Corp. RSVP/SBM based down-stream session setup, modification, and teardown for QOS-driven wireless lans
US7031287B1 (en) 2000-07-14 2006-04-18 At&T Corp. Centralized contention and reservation request for QoS-driven wireless LANs
US6738825B1 (en) * 2000-07-26 2004-05-18 Cisco Technology, Inc Method and apparatus for automatically provisioning data circuits
US6963537B2 (en) * 2000-07-27 2005-11-08 Corrigent Systems Ltd. Resource reservation in a ring network
US6996631B1 (en) * 2000-08-17 2006-02-07 International Business Machines Corporation System having a single IP address associated with communication protocol stacks in a cluster of processing systems
US8087064B1 (en) 2000-08-31 2011-12-27 Verizon Communications Inc. Security extensions using at least a portion of layer 2 information or bits in the place of layer 2 information
US7315554B2 (en) 2000-08-31 2008-01-01 Verizon Communications Inc. Simple peering in a transport network employing novel edge devices
US6850495B1 (en) * 2000-08-31 2005-02-01 Verizon Communications Inc. Methods, apparatus and data structures for segmenting customers using at least a portion of a layer 2 address header or bits in the place of a layer 2 address header
US6771673B1 (en) * 2000-08-31 2004-08-03 Verizon Communications Inc. Methods and apparatus and data structures for providing access to an edge router of a network
US7149795B2 (en) * 2000-09-18 2006-12-12 Converged Access, Inc. Distributed quality-of-service system
US7454500B1 (en) * 2000-09-26 2008-11-18 Foundry Networks, Inc. Global server load balancing
US7657629B1 (en) 2000-09-26 2010-02-02 Foundry Networks, Inc. Global server load balancing
US9130954B2 (en) 2000-09-26 2015-09-08 Brocade Communications Systems, Inc. Distributed health check for global server load balancing
JP3632756B2 (ja) * 2000-11-22 2005-03-23 日本電気株式会社 通信システム、サーバ、その方法及び記録媒体
US6529481B2 (en) * 2000-11-30 2003-03-04 Pluris, Inc. Scalable and fault-tolerant link state routing protocol for packet-switched networks
US6954581B2 (en) * 2000-12-06 2005-10-11 Microsoft Corporation Methods and systems for managing multiple inputs and methods and systems for processing media content
US6834390B2 (en) * 2000-12-06 2004-12-21 Microsoft Corporation System and related interfaces supporting the processing of media content
FR2818850B1 (fr) * 2000-12-22 2003-01-31 Commissariat Energie Atomique Procede de routage adaptatif par reflexion avec apprentissage par renforcement
US6912592B2 (en) * 2001-01-05 2005-06-28 Extreme Networks, Inc. Method and system of aggregate multiple VLANs in a metropolitan area network
US7035279B2 (en) * 2001-01-09 2006-04-25 Corrigent Systems Ltd. Flow allocation in a ring topology
US7180855B1 (en) 2001-04-19 2007-02-20 At&T Corp. Service interface for QoS-driven HPNA networks
US7142563B1 (en) 2001-02-20 2006-11-28 At&T Corp. Service interface for QoS-driven HPNA networks
US7310326B1 (en) 2001-02-20 2007-12-18 At&T Corporation Enhanced channel access mechanisms for an HPNA network
US20020118642A1 (en) * 2001-02-27 2002-08-29 Lee Daniel Joseph Network topology for use with an open internet protocol services platform
US7269157B2 (en) * 2001-04-10 2007-09-11 Internap Network Services Corporation System and method to assure network service levels with intelligent routing
EP1253746A3 (fr) * 2001-04-24 2005-12-07 Siemens Aktiengesellschaft Procédé et dispositif de multidiffusion
US20020159468A1 (en) * 2001-04-27 2002-10-31 Foster Michael S. Method and system for administrative ports in a routing device
US6832248B1 (en) * 2001-05-10 2004-12-14 Agami Systems, Inc. System and method for managing usage quotas
US20030079005A1 (en) * 2001-05-29 2003-04-24 61C Networks, Inc. System and method for efficient wide area network routing
US6970432B1 (en) * 2001-06-18 2005-11-29 Packeteer, Inc. System and method for dynamically identifying internal hosts in a heterogeneous computing environment with multiple subnetworks
US7720980B1 (en) * 2001-06-19 2010-05-18 Packeteer, Inc. System and method for dynamically controlling a rogue application through incremental bandwidth restrictions
US20030014532A1 (en) * 2001-07-16 2003-01-16 Shean-Guang Chang Method and apparatus for multicast support
US7145878B2 (en) * 2001-07-27 2006-12-05 Corrigent Systems Ltd. Avoiding overlapping segments in transparent LAN services on ring-based networks
US7406424B2 (en) * 2001-08-29 2008-07-29 Hewlett-Packard Development Company, L.P. Migration of a workflow system to changed process definitions
WO2003023640A2 (fr) * 2001-09-07 2003-03-20 Sanrad Procede d'equilibrage de charge pour l'echange de donnees entre plusieurs hotes et entites de memorisation dans un reseau de stockage a base ip
DE60128155T2 (de) * 2001-09-07 2008-01-03 Telefonaktiebolaget Lm Ericsson (Publ) Verfahren und anordnungen zur erzielung einer dynamischen betriebsmittelverteilungsrichtlinie in paketgestützten kommunikationsnetzen
DE60237292D1 (de) * 2001-09-14 2010-09-23 Nokia Inc Vorrichtung und Verfahren zur Paketweiterleitung
US7076564B2 (en) * 2001-09-17 2006-07-11 Micromuse Ltd. Method and apparatus for determining and resolving missing topology features of a network for improved topology accuracy
US7406522B2 (en) * 2001-09-26 2008-07-29 Packeteer, Inc. Dynamic partitioning of network resources
FR2831743B1 (fr) * 2001-10-25 2004-01-30 Cit Alcatel Systeme de routage is-is tolerant aux fautes et procede correspondant
US7668966B2 (en) 2001-11-02 2010-02-23 Internap Network Services Corporation Data network controller
US7133365B2 (en) * 2001-11-02 2006-11-07 Internap Network Services Corporation System and method to provide routing control of information over networks
US7561517B2 (en) 2001-11-02 2009-07-14 Internap Network Services Corporation Passive route control of data networks
US7222190B2 (en) * 2001-11-02 2007-05-22 Internap Network Services Corporation System and method to provide routing control of information over data networks
US7283478B2 (en) * 2001-11-28 2007-10-16 Corrigent Systems Ltd. Traffic engineering in bi-directional ring networks
US7346056B2 (en) * 2002-02-01 2008-03-18 Fujitsu Limited Optimizing path selection for multiple service classes in a network
FR2844946B1 (fr) * 2002-03-15 2004-10-22 Thales Sa Procede de selection et de tri de paquets mis a disposition d'un equipement par un reseau de transmission de donnees par paquets
US8451711B1 (en) * 2002-03-19 2013-05-28 Cisco Technology, Inc. Methods and apparatus for redirecting traffic in the presence of network address translation
WO2003083703A1 (fr) * 2002-03-28 2003-10-09 Precache, Inc. Procede et appareil permettant l'acheminement base sur le contenu, fiable et efficace, et l'interrogation-reponse dans un reseau de publication-abonnement
US20040125745A9 (en) * 2002-04-09 2004-07-01 Ar Card Two-stage reconnect system and method
US6954435B2 (en) * 2002-04-29 2005-10-11 Harris Corporation Determining quality of service (QoS) routing for mobile ad hoc networks
US7383330B2 (en) * 2002-05-24 2008-06-03 Emc Corporation Method for mapping a network fabric
US8051213B2 (en) * 2002-06-06 2011-11-01 International Business Machines Corporation Method for server-directed packet forwarding by a network controller based on a packet buffer threshold
US7315896B2 (en) * 2002-06-06 2008-01-01 International Business Machines Corporation Server network controller including packet forwarding and method therefor
AU2002328749A1 (en) * 2002-06-11 2003-12-22 Bigbangwidth Inc. Method and apparatus for switched physical alternate links in a packet network
US7086061B1 (en) * 2002-08-01 2006-08-01 Foundry Networks, Inc. Statistical tracking of global server load balancing for selecting the best network address from ordered list of network addresses based on a set of performance metrics
US7574508B1 (en) 2002-08-07 2009-08-11 Foundry Networks, Inc. Canonical name (CNAME) handling for global server load balancing
US20040042393A1 (en) * 2002-08-30 2004-03-04 Muneyb Minhazuddin Apparatus and method for data acquisition from network elements having reserved resources for specialized traffic
US7305464B2 (en) * 2002-09-03 2007-12-04 End Ii End Communications, Inc. Systems and methods for broadband network optimization
US20040056862A1 (en) * 2002-09-25 2004-03-25 Swedberg Daniel I. Method and apparatus facilitating adaptation of an entity to an information-based economy
AU2003300900A1 (en) * 2002-12-13 2004-07-09 Internap Network Services Corporation Topology aware route control
US7983239B1 (en) 2003-01-07 2011-07-19 Raytheon Bbn Technologies Corp. Systems and methods for constructing a virtual model of a multi-hop, multi-access network
US7420922B2 (en) * 2003-03-12 2008-09-02 Corrigent Systems Ltd Ring network with variable rate
US20050021683A1 (en) * 2003-03-27 2005-01-27 Chris Newton Method and apparatus for correlating network activity through visualizing network data
US7251216B2 (en) * 2003-04-23 2007-07-31 At&T Corp. Methods and systems for configuring voice over internet protocol network quality of service
US8254267B2 (en) * 2003-07-15 2012-08-28 Agere Systems Inc. Extensible traffic generator for synthesis of network data traffic
US7881229B2 (en) * 2003-08-08 2011-02-01 Raytheon Bbn Technologies Corp. Systems and methods for forming an adjacency graph for exchanging network routing data
US7606927B2 (en) 2003-08-27 2009-10-20 Bbn Technologies Corp Systems and methods for forwarding data units in a communications network
US20080089347A1 (en) * 2003-08-29 2008-04-17 End Ii End Communications Inc. Systems and methods for broadband network optimization
US9584360B2 (en) 2003-09-29 2017-02-28 Foundry Networks, Llc Global server load balancing support for private VIP addresses
US20050086385A1 (en) * 2003-10-20 2005-04-21 Gordon Rouleau Passive connection backup
US7668083B1 (en) 2003-10-28 2010-02-23 Bbn Technologies Corp. Systems and methods for forwarding data in a communications network
US7516492B1 (en) * 2003-10-28 2009-04-07 Rsa Security Inc. Inferring document and content sensitivity from public account accessibility
US7369512B1 (en) 2003-11-06 2008-05-06 Bbn Technologies Corp. Systems and methods for efficient packet distribution in an ad hoc network
US7974191B2 (en) * 2004-03-10 2011-07-05 Alcatel-Lucent Usa Inc. Method, apparatus and system for the synchronized combining of packet data
JP4530707B2 (ja) * 2004-04-16 2010-08-25 株式会社クラウド・スコープ・テクノロジーズ ネットワーク情報提示装置及び方法
US7865617B1 (en) * 2004-06-10 2011-01-04 Infoblox Inc. Maintaining consistency in a database
US7584301B1 (en) 2004-05-06 2009-09-01 Foundry Networks, Inc. Host-level policies for global server load balancing
US7496651B1 (en) 2004-05-06 2009-02-24 Foundry Networks, Inc. Configurable geographic prefixes for global server load balancing
US7536693B1 (en) 2004-06-30 2009-05-19 Sun Microsystems, Inc. Method for load spreading of requests in a distributed data storage system
US7328303B1 (en) 2004-06-30 2008-02-05 Sun Microsystems, Inc. Method and system for remote execution of code on a distributed data storage system
US7734643B1 (en) 2004-06-30 2010-06-08 Oracle America, Inc. Method for distributed storage of data
US7552356B1 (en) 2004-06-30 2009-06-23 Sun Microsystems, Inc. Distributed data storage system for fixed content
CA2572948A1 (fr) * 2004-07-09 2006-02-16 Interdigital Technology Corporation Separation d'un reseau maille logique et physique
US7423977B1 (en) 2004-08-23 2008-09-09 Foundry Networks Inc. Smoothing algorithm for round trip time (RTT) measurements
US7330431B2 (en) * 2004-09-03 2008-02-12 Corrigent Systems Ltd. Multipoint to multipoint communication over ring topologies
US20060078126A1 (en) * 2004-10-08 2006-04-13 Philip Cacayorin Floating vector scrambling methods and apparatus
US7974223B2 (en) * 2004-11-19 2011-07-05 Corrigent Systems Ltd. Virtual private LAN service over ring networks
DE102004057496B4 (de) * 2004-11-29 2006-08-24 Siemens Ag Verfahren und Vorrichtung zur automatischen Neueinstellung von Grenzen für Zugangskontrollen zur Beschränkung des Verkehrs in einem Kommunikationsnetz
US7804787B2 (en) * 2005-07-08 2010-09-28 Fluke Corporation Methods and apparatus for analyzing and management of application traffic on networks
US7536187B2 (en) * 2005-08-23 2009-05-19 Cisco Technology, Inc. Supporting communication sessions at a mobile node
US8060534B1 (en) * 2005-09-21 2011-11-15 Infoblox Inc. Event management
US7870232B2 (en) 2005-11-04 2011-01-11 Intermatic Incorporated Messaging in a home automation data transfer system
US7698448B2 (en) 2005-11-04 2010-04-13 Intermatic Incorporated Proxy commands and devices for a home automation data transfer system
US7694005B2 (en) * 2005-11-04 2010-04-06 Intermatic Incorporated Remote device management in a home automation data transfer system
US7742432B2 (en) * 2006-01-05 2010-06-22 International Busniness Machines Corporation Topology comparison
US7983150B2 (en) * 2006-01-18 2011-07-19 Corrigent Systems Ltd. VPLS failure protection in ring networks
US7509434B1 (en) * 2006-01-26 2009-03-24 Rockwell Collins, Inc. Embedded MILS network
US7808931B2 (en) * 2006-03-02 2010-10-05 Corrigent Systems Ltd. High capacity ring communication network
US20070276915A1 (en) * 2006-04-04 2007-11-29 Wireless Services Corp. Managing messages between multiple wireless carriers to multiple enterprises using a relatively limited number of identifiers
US7782759B2 (en) * 2006-04-21 2010-08-24 Microsoft Corporation Enabling network devices to run multiple congestion control algorithms
US7593400B2 (en) * 2006-05-19 2009-09-22 Corrigent Systems Ltd. MAC address learning in a distributed bridge
CN100571185C (zh) * 2006-06-05 2009-12-16 华为技术有限公司 一种跨不同管理域网络的边缘连接选路方法
US7672238B2 (en) * 2006-08-08 2010-03-02 Opnet Technologies, Inc. Mapping off-network traffic to an administered network
US7660303B2 (en) 2006-08-22 2010-02-09 Corrigent Systems Ltd. Point-to-multipoint functionality in a bridged network
US7660234B2 (en) * 2006-09-22 2010-02-09 Corrigent Systems Ltd. Fault-tolerant medium access control (MAC) address assignment in network elements
US8584199B1 (en) 2006-10-17 2013-11-12 A10 Networks, Inc. System and method to apply a packet routing policy to an application session
US8312507B2 (en) 2006-10-17 2012-11-13 A10 Networks, Inc. System and method to apply network traffic policy to an application session
CN101141284B (zh) * 2007-01-31 2011-01-19 中兴通讯股份有限公司 业务带宽配置方法和网管系统
EP2057793A1 (fr) * 2007-03-14 2009-05-13 Hewlett-Packard Development Company, L.P. Création d'un pont synthétique
US8805982B1 (en) * 2007-06-29 2014-08-12 Ciena Corporation Progressively determining a network topology and using neighbor information to determine network topology
US8224942B1 (en) 2007-10-02 2012-07-17 Google Inc. Network failure detection
US8199671B2 (en) * 2008-06-09 2012-06-12 Hewlett-Packard Development Company, L.P. Throttling network traffic generated by a network discovery tool during a discovery scan
US8301583B2 (en) 2008-10-09 2012-10-30 International Business Machines Corporation Automated data conversion and route tracking in distributed databases
US9183260B2 (en) 2008-10-09 2015-11-10 International Business Machines Corporation Node-level sub-queries in distributed databases
US8145652B2 (en) 2008-10-09 2012-03-27 International Business Machines Corporation Automated propagation of non-conflicting queries in distributed databases
US8285710B2 (en) * 2008-10-09 2012-10-09 International Business Machines Corporation Automated query path reporting in distributed databases
US8005016B2 (en) * 2008-10-28 2011-08-23 Nortel Networks Limited Provider link state bridging (PLSB) computation method
US9264307B2 (en) 2008-11-12 2016-02-16 Teloip Inc. System, apparatus and method for providing improved performance of aggregated/bonded network connections between remote sites
US8155158B2 (en) * 2008-11-12 2012-04-10 Patricio Humberto Saavedra System, apparatus and method for providing aggregated network connections
US9426029B2 (en) 2008-11-12 2016-08-23 Teloip Inc. System, apparatus and method for providing improved performance of aggregated/bonded network connections with cloud provisioning
US9929964B2 (en) 2008-11-12 2018-03-27 Teloip Inc. System, apparatus and method for providing aggregation of connections with a secure and trusted virtual network overlay
US9264350B2 (en) 2008-11-12 2016-02-16 Teloip Inc. System, apparatus and method for providing improved performance of aggregated/bonded network connections with multiprotocol label switching
US9692713B2 (en) 2008-11-12 2017-06-27 Teloip Inc. System, apparatus and method for providing a virtual network edge and overlay
US7913024B2 (en) * 2008-12-09 2011-03-22 International Business Machines Corporation Differentiating traffic types in a multi-root PCI express environment
US7856024B1 (en) * 2008-12-12 2010-12-21 Tellabs San Jose, Inc. Method and apparatus for integrating routing and bridging functions
US8144582B2 (en) * 2008-12-30 2012-03-27 International Business Machines Corporation Differentiating blade destination and traffic types in a multi-root PCIe environment
US8300637B1 (en) * 2009-01-05 2012-10-30 Sprint Communications Company L.P. Attribute assignment for IP dual stack devices
US7929440B2 (en) * 2009-02-20 2011-04-19 At&T Intellectual Property I, Lp Systems and methods for capacity planning using classified traffic
US8139504B2 (en) * 2009-04-07 2012-03-20 Raytheon Bbn Technologies Corp. System, device, and method for unifying differently-routed networks using virtual topology representations
US8417938B1 (en) * 2009-10-16 2013-04-09 Verizon Patent And Licensing Inc. Environment preserving cloud migration and management
US8472313B2 (en) * 2009-10-26 2013-06-25 Telcordia Technologies, Inc. System and method for optical bypass routing and switching
US8825255B2 (en) * 2010-03-02 2014-09-02 International Business Machines Corporation Reconciling service class-based routing affecting user service within a controllable transit system
US20110218835A1 (en) * 2010-03-02 2011-09-08 International Business Machines Corporation Changing priority levels within a controllable transit system
US20110218833A1 (en) * 2010-03-02 2011-09-08 International Business Machines Corporation Service class prioritization within a controllable transit system
US10956999B2 (en) 2010-03-02 2021-03-23 International Business Machines Corporation Service class prioritization within a controllable transit system
US8549148B2 (en) 2010-10-15 2013-10-01 Brocade Communications Systems, Inc. Domain name system security extensions (DNSSEC) for global server load balancing
WO2012157017A1 (fr) * 2011-05-16 2012-11-22 Hitachi, Ltd. Système informatique pour affecter une adresse ip à un appareil de communications dans un sous-système informatique nouvellement ajouté, et procédé pour ajouter un nouveau sous-système informatique à un système informatique
US9083627B2 (en) * 2011-12-20 2015-07-14 Cisco Technology, Inc. Assisted traffic engineering for minimalistic connected object networks
US10158554B1 (en) * 2012-02-29 2018-12-18 The Boeing Company Heuristic topology management system for directional wireless networks
US9118618B2 (en) 2012-03-29 2015-08-25 A10 Networks, Inc. Hardware-based packet editor
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US20150237400A1 (en) * 2013-01-05 2015-08-20 Benedict Ow Secured file distribution system and method
WO2014144837A1 (fr) 2013-03-15 2014-09-18 A10 Networks, Inc. Traitement de paquets de données au moyen d'un chemin de réseau basé sur une politique
US10038693B2 (en) 2013-05-03 2018-07-31 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10003536B2 (en) 2013-07-25 2018-06-19 Grigore Raileanu System and method for managing bandwidth usage rates in a packet-switched network
US9307018B2 (en) * 2013-09-11 2016-04-05 International Business Machines Corporation Workload deployment with real-time consideration of global network congestion
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
WO2016003332A1 (fr) * 2014-07-01 2016-01-07 Telefonaktiebolaget L M Ericsson (Publ) Procédés et nœuds pour réguler l'encombrement
US10333832B2 (en) 2014-09-17 2019-06-25 Adaptiv Networks Inc. System, apparatus and method for providing improved performance of aggregated/bonded network connections with multiprotocol label switching
US10924408B2 (en) 2014-11-07 2021-02-16 Noction, Inc. System and method for optimizing traffic in packet-switched networks with internet exchanges
US10268467B2 (en) 2014-11-11 2019-04-23 A10 Networks, Inc. Policy-driven management of application traffic for providing services to cloud-based applications
US9769070B2 (en) 2015-01-28 2017-09-19 Maxim Basunov System and method of providing a platform for optimizing traffic through a computer network with distributed routing domains interconnected through data center interconnect links
TWI566544B (zh) * 2015-05-14 2017-01-11 鴻海精密工業股份有限公司 Network detection method and controller using the method
CN106301973B (zh) * 2015-05-14 2019-07-23 南宁富桂精密工业有限公司 Network detection method and controller using the method
US9954777B2 (en) * 2016-01-14 2018-04-24 International Business Machines Corporation Data processing
US11122063B2 (en) * 2017-11-17 2021-09-14 Accenture Global Solutions Limited Malicious domain scoping recommendation system
US10742553B1 (en) 2018-05-29 2020-08-11 Juniper Networks, Inc. Forwarding information base caching
EP3987830A1 (fr) 2019-06-21 2022-04-27 Lutron Technology Company LLC Improving attachments in a network
EP4070484A1 (fr) 2019-12-02 2022-10-12 Lutron Technology Company LLC Link qualification based on percentile background noise
US11770324B1 (en) 2019-12-02 2023-09-26 Lutron Technology Company Llc Processing advertisement messages in a mesh network
US10931552B1 (en) * 2020-01-23 2021-02-23 Vmware, Inc. Connectivity check with service insertion
CN115868126A (zh) * 2020-05-08 2023-03-28 路创技术有限责任公司 Assigning router devices in a mesh network
US11252018B2 (en) 2020-07-01 2022-02-15 Vmware, Inc. Service chaining with service path monitoring
US11533265B2 (en) 2020-07-23 2022-12-20 Vmware, Inc. Alleviating flow congestion at forwarding elements
US11165676B1 (en) * 2020-11-11 2021-11-02 Vmware, Inc. Generating network flow profiles for computing entities
EP4123971A1 (fr) * 2021-07-20 2023-01-25 Nokia Solutions and Networks Oy Data processing in an Ethernet protocol stack

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5682479A (en) * 1995-05-05 1997-10-28 Silicon Graphics, Inc. System and method for network exploration and access
US5732086A (en) * 1995-09-21 1998-03-24 International Business Machines Corporation System and method for determining the topology of a reconfigurable multi-nodal network
US5793765A (en) * 1993-09-07 1998-08-11 Koninklijke Ptt Nederland N.V. Method for selecting links in networks

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5455865A (en) * 1989-05-09 1995-10-03 Digital Equipment Corporation Robust packet routing over a distributed network containing malicious failures
CA2124974C (fr) * 1993-06-28 1998-08-25 Kajamalai Gopalaswamy Ramakrishnan Method and apparatus for assigning link metrics in shortest path networks
US5699347A (en) * 1995-11-17 1997-12-16 Bay Networks, Inc. Method and apparatus for routing packets in networks having connection-oriented subnetworks
US5734580A (en) 1996-03-13 1998-03-31 Rakov; Mikhail A. Method of interconnecting nodes and a hyperstar interconnection structure

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5793765A (en) * 1993-09-07 1998-08-11 Koninklijke Ptt Nederland N.V. Method for selecting links in networks
US5682479A (en) * 1995-05-05 1997-10-28 Silicon Graphics, Inc. System and method for network exploration and access
US5732086A (en) * 1995-09-21 1998-03-24 International Business Machines Corporation System and method for determining the topology of a reconfigurable multi-nodal network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RODRIGUEZ-MORAL A: "LIBRA - AN INTEGRATED FRAMEWORK FOR TYPE OF SERVICE-BASED ADAPTIVE ROUTING IN THE INTERNET AND INTRANETS", BELL LABS TECHNICAL JOURNAL, US, BELL LABORATORIES, vol. 2, no. 2, 21 March 1997 (1997-03-21), pages 42-67, XP000695169, ISSN: 1089-7089 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000074303A2 (fr) * 1999-05-27 2000-12-07 Telefonaktiebolaget Lm Ericsson (Publ) Core network optimization of topology and technology for traffic handling
WO2000074303A3 (fr) * 1999-05-27 2001-01-25 Ericsson Telefon Ab L M Core network optimization of topology and technology for traffic handling
US6631128B1 (en) 1999-05-27 2003-10-07 Telefonaktiebolaget L M Ericsson (Publ) Core network optimization of topology and technology for traffic handling
CN101426031B (zh) * 2008-12-09 2011-09-21 中兴通讯股份有限公司 Address refresh method and device for an Ethernet ring
CN111435545A (zh) * 2019-04-16 2020-07-21 北京仁光科技有限公司 Plotting processing method, shared-image plotting method, and plot reproduction method

Also Published As

Publication number Publication date
AU6255199A (en) 2000-04-17
WO2000019680A3 (fr) 2000-12-21
US6262976B1 (en) 2001-07-17

Similar Documents

Publication Publication Date Title
US6262976B1 (en) System and method for network flow optimization using traffic classes
JP4076586B2 (ja) System and method for a multilayer network element
US6449279B1 (en) Aggregation of data flows over a pre-established path to reduce connections
JP3842303B2 (ja) System and method for a multilayer network element
US6643292B2 (en) Efficient packet data transport mechanism and an interface therefor
US6876654B1 (en) Method and apparatus for multiprotocol switching and routing
US5444702A (en) Virtual network using asynchronous transfer mode
EP0937353B1 (fr) Acheminement dans un element de reseau reparti multicouches
US7697527B2 (en) Method and apparatus for direct frame switching using frame contained destination information
US6205146B1 (en) Method of dynamically routing to a well known address in a network
KR20030085016A (ko) Priority-based load balancing method and apparatus for use in an extended local area network
JP2002507366A (ja) System and method for quality of service in a multilayer network element
EP1002401A1 (fr) Element de reseau reparti multicouches
US6289017B1 (en) Method of providing redundancy and load sharing among multiple LECs in an asynchronous mode network
Cisco Internetworking Technology Overview
Cisco Bridging and IBM Networking Overview
Cisco Designing Switched LAN Internetworks

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: PCT application non-entry in European phase