EP0724795A1 - Apparatus and method for determining the topology of a network - Google Patents

Apparatus and method for determining the topology of a network

Info

Publication number
EP0724795A1
EP0724795A1
Authority
EP
European Patent Office
Prior art keywords
data
network
devices
relay devices
port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP94929738A
Other languages
German (de)
English (en)
Inventor
Timothy L. Orr
Eric W. Gray
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cabletron Systems Inc
Original Assignee
Cabletron Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cabletron Systems Inc filed Critical Cabletron Systems Inc
Publication of EP0724795A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H04L41/122 Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]

Definitions

  • This invention relates to a system for determining the topology of a computer network which includes data-relay devices and node devices, the topology being determined based on a comparison of source addresses "heard" by the various data-relay devices.
  • Computer networks are widely used to provide increased computing power, sharing of resources and communication between users.
  • Computer systems and computer system components are interconnected to form a network.
  • Networks may include a number of computer devices within a room, building or site that are interconnected by a high speed local data link such as local area network (LAN), token ring, Ethernet, or the like.
  • Local networks in different locations may be interconnected by techniques such as packet switching, microwave links and satellite links to form a world-wide network.
  • A network may include several hundred or more interconnected devices.
  • Network management systems have been utilized in the past in attempts to address such issues.
  • Prior art network management systems typically operated by remote access to and monitoring of information from network devices.
  • The network management system collected large volumes of information which required evaluation by a network administrator.
  • Prior art network management systems place a tremendous burden on the network administrator, who must be a networking expert in order to understand the implications of a change in a network device parameter.
  • The administrator must also understand the topology of each section of the network in order to understand what may have caused the change.
  • The administrator must sift through reams of information and false alarms in order to determine the cause of a problem.
  • An important aspect of any network management system is its ability to accurately represent interactions within the network and between network devices. Toward this end it is crucial that any network management system be provided with some means to determine the topology of the network.
  • In one aspect of the invention, a list of network addresses "heard" by each port of a data-relay device (e.g., a bridge) is compiled for each data-relay device in a computer network.
  • The network includes a plurality of data-relay devices and node devices interconnected by a data bus.
  • The node devices (e.g., workstations or disk units) transmit packets containing their own source addresses, and the data-relay devices also transmit their own source address with replies sent to the network management system.
  • Each of the data-relay devices acquires and maintains a source address table which lists the addresses heard by each port of the data-relay device. These lists are then compared to determine whether there is a direct or transitive connection between select ports on different data-relay devices and the resulting connections are used to define a topology showing the interconnections between the devices in the network.
  • A node device z that is not in the set of addresses heard by X_i must be in some set X_q where q does not equal i. If port Y_j is connected to port X_i, then z must be in set Y_j and in no other set Y_r such that r does not equal j. Thus, to determine which port of Y is connected to port X_i, we test each port of Y against the other ports of X, eliminating the ports of Y for which the intersection is the empty set. If enough of the sets X_q and Y_r are empty, there may be no ports remaining and the technique fails. Otherwise, one is left with a single port Y_j, which is the port that is connected to X_i.
  • The apparatus of the present invention is a system for use with a digital computer network and preferably with a network management system.
  • The network management system includes a virtual network having models representing network devices, each model containing network data relating to the corresponding network device and means for processing the network data to provide user information.
  • The system includes means for transferring network data from the network devices to the corresponding models, and means for supplying user information from the virtual network to a user.
  • A model of each data-relay device includes as network data a list of source addresses heard by each port of the data-relay device, and the processing means are inference handlers which compare the sets of addresses of the various data-relay devices to determine the topology of the computer network.
  • The determined network topology is used by the management system for controlling traffic on the network, optimizing network resources, security, isolation of network faults, and the like.
  • The determined topology may be provided to the user by a video display unit connected to the computer.
  • Fig. 1 is a schematic diagram showing an example of a network
  • Fig. 2 is a schematic diagram showing the partition imposed by data-relay device A in the network of Fig. 1;
  • Fig. 3 is a schematic diagram showing the partition imposed by data-relay device B in the network of Fig. 1;
  • Fig. 4 is a schematic diagram showing the partition imposed by data-relay device C in the network of Fig. 1;
  • Fig. 5 is a schematic diagram showing the combined partitions of the data-relay devices A, B and C in the network of Fig. 1;
  • Fig. 6 is a schematic diagram showing trial connections between the data-relay devices of the network of Fig. 1, determined according to the present invention;
  • Fig. 7a is a schematic diagram showing the final connections between the data-relay devices in the network of Fig. 1, determined according to the present invention, and Fig. 7b is the same as Fig. 7a but further includes the node devices between the data-relay devices;
  • Fig. 8 shows an example of the source address tables for each port of the data-relay devices in the network of Fig. 1;
  • Fig. 9 is a flow chart showing a method of determining a connection between data-relay devices according to the present invention.
  • Fig. 10 is a flow chart showing a further method of determining connections between data-relay devices according to the present invention.
  • Fig. 11a is a flow chart showing a method of making the trial connections and final connections (see Figs. 6-7) according to the present invention to determine the topology of the network of Fig. 1;
  • Figs. 11b and 11c are schematic diagrams showing the elimination of redundant connections in two sample networks;
  • Fig. 12 is a block diagram of a network management system which may utilize the present invention.
  • Fig. 13 is a block diagram showing an example of a network connected to the network management system of Fig. 12;
  • Fig. 14 is a schematic diagram showing the structure of models and the relations between models in the network management system of Fig. 12 which utilizes the present invention.
  • The present invention is an apparatus and method for determining the topology of a computer network consisting of data-relay devices and node devices, which topology may be computed automatically by obtaining and processing information from the network devices.
  • A data-relay device is a network element that provides electrical and/or logical isolation of local area network segments.
  • A data-relay device forwards network packets from one port of the device to other ports of the device, amplifying and retiming the electrical signals to (a) increase the maximum cable length, (b) increase the number of permissible nodes on the network, and/or (c) increase the number of network segments.
  • Examples of data-relay devices include bridges, repeaters, hubs and the like. Bridges are defined in the IEEE 802.1(d) specification, and repeaters are defined in the IEEE 802.1(e) specification. In addition to the above functions, bridges also provide a store-and-forward mechanism with packet filtering, to reduce the network load. Hubs are similar to bridges but forward all traffic without filtering. For use in this invention, the hub must be an "intelligent" hub with a source address table for network management as described below.
  • Each data-relay device on the network has a source address table, which is a list of network addresses "heard" by each port of the device.
  • Entries in the source address table are usually "aged," so that an entry exists in the table for a finite period of time unless refreshed by another packet being received from the same source address on the same port. This maintains the currency of the table, since a device which is physically removed from the network will eventually disappear from the source address table.
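The aging behaviour described above can be sketched as follows. This is an illustrative model only; the aging period, class name, and method names are assumptions, not taken from the patent.

```python
import time

AGING_PERIOD = 300.0  # seconds an entry survives without being refreshed (assumed value)

class AgedAddressTable:
    """Sketch of an "aged" source address table kept by a data-relay device."""

    def __init__(self, aging_period=AGING_PERIOD, clock=time.monotonic):
        self.aging_period = aging_period
        self.clock = clock
        self._last_heard = {}  # (port, source_address) -> timestamp of last packet

    def packet_received(self, port, source_address):
        # Refresh (or create) the entry for this source on this port.
        self._last_heard[(port, source_address)] = self.clock()

    def addresses_heard(self, port):
        # Drop entries older than the aging period, then report the rest,
        # so a device removed from the network eventually disappears.
        now = self.clock()
        expired = [k for k, t in self._last_heard.items()
                   if now - t > self.aging_period]
        for k in expired:
            del self._last_heard[k]
        return {addr for (p, addr) in self._last_heard if p == port}
```

Passing a fake clock makes the aging behaviour easy to exercise deterministically in tests.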
  • If a node device does not transmit any packets, the data-relay device will not receive a packet from that node for entry in the source address table. However, this can be corrected for by periodically polling all of the node devices on the network to confirm the existence of all devices on the network.
  • In order to prevent loops in the network, and thus duplication of packets, data-relay devices must not be connected redundantly. Some devices, such as IEEE 802.1(d) compliant bridges, use a spanning tree algorithm to automatically detect and disable redundant devices. In performing their primary function of packet forwarding, data-relay devices are essentially transparent to the network; they do not transmit packets with their own addresses. However, data-relay devices do transmit responses to network management requests with their own address in the source address field of the packet. This feature is important to the computation of the network topology according to the present invention since, otherwise, the data-relay devices would not appear in each other's source address tables.
  • Some data-relay devices perform directed transmissions of management packets. Thus, instead of the management response being transmitted on all ports of the device, it is transmitted only on the port where the management request was received. This is usually evident in bridges where the management packet itself is subject to a filtering process.
  • The address of the device may or may not appear in the source address tables of other data-relay devices, depending on the relative placement of the data-relay devices and the management station. This fact is taken into account in accordance with the present invention when computing the network topology from the source address tables as described below.
  • A sample network is shown in Fig. 1.
  • The network includes three data-relay devices, 10, 12 and 14, which have been identified as devices A, B and C, each with two numbered ports.
  • The data-relay devices separate the network into three sections or partitions, with end-node devices d-k and management station m at various locations on the network.
  • The partitions generated by devices A, B and C are shown in Figs. 2, 3 and 4, respectively.
  • The combination of partitions is shown in Fig. 5.
  • Partition 9 includes node devices d, e and m connected to port 1 of data-relay device A;
  • partition 11 includes node devices f and g between port 2 of data-relay device A and port 1 of data-relay device C;
  • partition 13 includes node devices h and i between port 2 of data-relay device C and port 1 of data-relay device B;
  • partition 15 includes node devices j and k connected to port 2 of data-relay device B.
  • The source address table for each data-relay device reflects the partitioning.
  • Each port "hears" all the devices contained in its partition.
  • The sets of devices heard by the various ports include, for example:
  • A2: (B, C, f, g, h, i, j, k)
  • The data-relay devices may not be heard by other data-relay devices due to directed transmission of management packets. For example, note that B1 does not hear device C, since in this example C is directing its management traffic toward the management station m. However, since in this example A does not direct its management traffic, B1 does hear A.
  • A transitive connection is defined as a connection between two nodes that crosses one or more data-relay devices. For example, in Fig. 1 port A2 is transitively connected to port B1 via device C.
  • A direct connection has no data-relay device intervening.
  • For example, port A2 is directly connected to port C1.
  • A first method of the present invention for determining connections between the data-relay devices is illustrated by the flow chart of Fig. 9, wherein the address tables for a pair of data-relay devices X and Y, such as devices A and C in Fig. 8, are compared.
  • C is an element of A2;
  • A is an element of C1;
  • therefore, A2 is connected to C1 (see boxes around relevant entries in the address tables of Fig. 8).
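The mutual-hearing test of Fig. 9 described above can be sketched as follows. The table contents are illustrative, loosely based on the network of Fig. 1, and the function and variable names are assumptions, not taken from the patent.

```python
def connected(tables, dev_x, port_i, dev_y, port_j):
    """Fig. 9 sketch: ports X_i and Y_j are taken as connected when each
    data-relay device's address appears in the other's port heard-set."""
    return (dev_y in tables[dev_x][port_i]
            and dev_x in tables[dev_y][port_j])

# Illustrative source address tables: device -> port -> set of addresses heard.
tables = {
    "A": {1: {"d", "e", "m"}, 2: {"B", "C", "f", "g", "h", "i", "j", "k"}},
    "C": {1: {"A", "f", "g", "m"}, 2: {"B", "h", "i", "j", "k"}},
}
```

With these sample tables, `connected(tables, "A", 2, "C", 1)` holds because C is heard on A2 and A is heard on C1.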
  • A node device z that is not in the set X_i must be in some set X_q such that q does not equal i. If port Y_j is connected to port X_i, then z must be in the set Y_j, and in no other set Y_r such that r does not equal j. In other words, for all q not equal to i, there is some intersection of the sets X_q and Y_r if port X_i is connected to port Y_r. If there is no intersection between the X_q and Y_r (i.e., the null set), then port X_i is not connected to port Y_r.
  • As shown in Fig. 10, we test each port of Y against the other ports of X, eliminating the ports of Y for which the intersection is the empty set. Thus, in step 300 of Fig. 10, we consider whether Y_r hears X_q. If the answer is "yes", then there is a connection between X_i and Y_r (step 301); if the answer is "no", we check whether this is the last port on Y (step 302) and, if not, we increment to the next port on Y (step 303). After checking all of the ports of Y, if the intersection is the null set, then no connection can be determined between X_i and any port on Y (the method fails) (step 304).
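The elimination procedure above can be sketched as follows. The port sets are illustrative (loosely based on Fig. 1) and the function name is an assumption; this is one reading of the Fig. 10 flow chart, not a literal transcription.

```python
def port_of_y_connected_to(tables, dev_x, port_i, dev_y):
    """Sketch of Fig. 10: find the port of Y facing X's port i.

    Ports of Y whose heard-set has an empty intersection with every
    other port of X are eliminated; if exactly one port survives, it is
    the connected port, otherwise the technique fails (returns None).
    """
    other_x_sets = [s for p, s in tables[dev_x].items() if p != port_i]
    candidates = [p for p, heard in tables[dev_y].items()
                  if any(heard & xq for xq in other_x_sets)]
    if len(candidates) == 1:
        return candidates[0]
    return None  # no port remaining, or ambiguous

# Illustrative tables for the network of Fig. 1 (m's management requests
# reach C through A, so C1 hears m even though C directs its replies).
tables = {
    "A": {1: {"d", "e", "m"}, 2: {"B", "C", "f", "g", "h", "i", "j", "k"}},
    "C": {1: {"A", "f", "g", "m"}, 2: {"B", "h", "i", "j", "k"}},
}
```

Here only C1 intersects A's other port set {d, e, m} (via m), so C1 is identified as the port facing A2.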
  • Connections, or edges, between two ports include both transitive connections as well as direct connections. If a transitive connection exists, then the direct connection is redundant and should be eliminated. For example, as shown in Fig. 7, because there is a transitive connection between A2 and B1, the direct connection between A2 and B1 has been eliminated.
  • In step 320 we first form trial connections between X_i and Y_j, as shown in Fig. 6.
  • In step 321 we check whether X_i is directly connected to Y_j. If there is no direct connection, we check whether this is the last port on X (step 322) and either increment to the next port on X (step 323) or, if this is the last port on X, increment to the next device (step 324). If we find that X_i is directly connected to Y_j, we then check whether X_i is transitively connected to Y_j (step 325). If the answer is "yes", we remove the direct connection between X_i and Y_j (step 326).
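The pruning of redundant direct connections (Figs. 11a-11c) might be sketched as follows. The port-aware search is one possible reading of the transitive-connection test (a transitive path must leave an intermediate data-relay device through a different port than it entered), and all names and data structures are assumptions.

```python
from collections import deque

def transitively_connected(edges, start, goal, ports):
    """Breadth-first, port-aware search between two (device, port) endpoints.

    `edges` are trial connections, each a frozenset of two (device, port)
    endpoints. Crossing an intermediate device requires exiting through a
    port different from the one entered.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        here = queue.popleft()
        for edge in edges:
            if here not in edge:
                continue
            (far,) = tuple(edge - {here})  # the endpoint on the far side
            if far == goal:
                return True
            far_dev, far_port = far
            for p in ports[far_dev]:       # pass through the far device
                nxt = (far_dev, p)
                if p != far_port and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

def prune_redundant(edges, ports):
    """Drop each direct connection whose endpoints are also joined
    transitively through another data-relay device (Fig. 11a)."""
    kept = set(edges)
    for edge in list(edges):
        a, b = tuple(edge)
        if transitively_connected(kept - {edge}, a, b, ports):
            kept.discard(edge)
    return kept

# Trial connections of Fig. 6: A2-C1, C2-B1, and the redundant A2-B1.
ports = {"A": (1, 2), "B": (1, 2), "C": (1, 2)}
edges = {
    frozenset({("A", 2), ("C", 1)}),
    frozenset({("C", 2), ("B", 1)}),
    frozenset({("A", 2), ("B", 1)}),  # transitive via C, hence redundant
}
```

On this sample, only the A2-B1 edge is removed, reproducing the final connections of Fig. 7a.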
  • The topology of Fig. 7 can be further defined to include the node devices in each network segment.
  • Each node device belongs to a segment if its address appears in the intersection of the port sets X_i of each data-relay port which is directly connected to the segment.
  • For example, node devices h and i appear in the address tables for ports C2 and B1 and, therefore, nodes h and i should be included as part of the connection 22 between devices C and B.
  • Fig. 7b shows a topology further including the node devices.
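The segment-membership rule above can be sketched as a set intersection. The sample heard-sets are illustrative, loosely based on Fig. 1, and the function name is an assumption.

```python
def nodes_on_segment(heard_x, heard_y, relay_addresses):
    """Node devices on the segment between two directly connected
    data-relay ports: the intersection of the two ports' heard-sets,
    with the data-relay devices themselves removed."""
    return (heard_x & heard_y) - relay_addresses

# Segment between C2 and B1 of Fig. 1 (illustrative heard-sets):
c2 = {"B", "h", "i", "j", "k"}
b1 = {"A", "C", "h", "i"}
segment_nodes = nodes_on_segment(c2, b1, {"A", "B", "C"})
```

For these sets the intersection yields exactly nodes h and i, matching the segment shown in Fig. 7b.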
  • The method and apparatus for determining network topology form a part of a network management system, or provide network topology information to a network management system, such as that described in copending and commonly assigned U.S. Application Serial No. 07/583,509, filed September 17, 1990, entitled NETWORK MANAGEMENT SYSTEM USING MODEL BASED INTELLIGENCE, by R. Dev et al.; that application was also filed as PCT/US91/06725 on September 17, 1991 and published as Int'l Publ. No. WO92/05485 on April 2, 1992, and is hereby incorporated by reference in its entirety.
  • The present invention may also be used in combination with other network management systems, or in any application where it is desired to obtain the network topology.
  • Fig. 12 shows a block diagram of the system.
  • the major components of the network management system are a user interface 410, a virtual network machine 412, and a device communication manager 414.
  • the user interface 410 which may include a video display screen, keyboard, mouse and printer, provides all interaction with the user.
  • the user interface controls the screen, keyboard, mouse and printer and provides the user with different views of the network that is being managed.
  • the user interface receives network information from the virtual network machine 412.
  • the virtual network machine 412 contains a software representation of the network being managed, including models that represent the devices and other entities associated with the network, and relations between the models.
  • the virtual network machine 412 is associated with a database manager 416 which manages the storage and retrieval of disk-based data. Such data includes configuration data, an event log, statistics, history and current state information.
  • the device communication manager 414 is connected to a network 418 and handles communication between the virtual network machine 412 and network devices.
  • the data received from the network devices is provided by the device communication manager to the virtual network machine 412.
  • the device communication manager 414 converts generic requests from the virtual network machine 412 to the required network management protocol for communicating with each network device.
  • Existing network management protocols include Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP) and many proprietary network management protocols. Certain types of network devices are designed to communicate with a network management system using one of these protocols.
  • a view personality module 420 connected to the user interface 410 contains a collection of data modules which permit the user interface to provide different views of the network.
  • a device personality module 422 connected to the virtual network machine 412 contains a collection of data modules which permit devices and other network entities to be configured and managed with the network management system.
  • a protocol personality module 424 connected to the device communication manager contains a collection of data modules which permit communication with all devices that communicate using the network management protocols specified by the module 424.
  • the personality modules 420, 422 and 424 provide a system that is highly flexible and user configurable. By altering the personality module 420, the user can specify customized views or displays. By changing the device personality module 422, the user can add new types of network devices to the system. Similarly, by changing the protocol personality module 424, the network management system can operate with new or different network management protocols.
  • the personality modules permit the system to be reconfigured and customized without changing the basic control code of the system.
  • the hardware for supporting the system of Fig. 12 is typically a workstation such as a Sun Model 3 or 4, or a 386 PC compatible computer running Unix. A minimum of 8 megabytes of memory is required with a display device which supports a minimum of 640 x 680 pixels x 256 color resolution.
  • the basic software includes a Unix release that supports sockets, X-windows and Open Software Foundation Motif 1.0.
  • the network management system is implemented using the C++ programming language, but could be implemented in other object-oriented languages such as Eiffel, Smalltalk, ADA, or the like.
  • the virtual network machine 412 and the device communication manager 414 may be run on a separate computer from the user interface 410 for increased operating speed.
  • the network includes workstations 430, 431, 432, 433 and disk units 434 and 435 interconnected by a data bus 436.
  • Workstations 430 and 431 and disk unit 434 are located in a room 438, and workstations 432 and 433 and disk unit 435 are located in a room 440.
  • the rooms 438 and 440 are located within a building 442.
  • Network devices 444, 445 and 446 are interconnected by a data bus 447 and are located in a building 448 at the same site as building 442.
  • The network portions in buildings 442 and 448 are interconnected by a bridge 450.
  • the network devices in building 452 are interconnected to the network in building 448 by interface devices 459 and 460, which may communicate by a packet switching system, a microwave link or a satellite link.
  • the network management system shown in Fig. 12 and described above is connected to the network of Fig. 13 at any convenient point, such as data bus 436.
  • the network management system shown in Fig. 12 performs two major operations during normal operation. It services user requests entered by the user at user interface 410 and provides network information such as alarms and events to user interface 410.
  • the virtual network machine 412 polls the network to obtain information for updating the network models as described hereinafter.
  • the network devices send status information to the network management system automatically without polling. In either case, the information received from the network is processed so that the topology, operational status, faults and other information pertaining to the network are presented to the user in a systematized and organized manner.
  • Each model includes a number of attributes and one or more inference handlers.
  • the attributes are data which define the characteristics and status of the network entity being modeled.
  • Basic attributes include a model name, a model type name, a model type handle, a polling interval, a next-time-to-poll, a retry count, a contact status, an activation status, a time-of-last-poll and statistics pertaining to the network entity which is being modeled. Polling of network devices will be described hereinafter.
  • attributes that are unique to a particular type of network device can be defined. For example, a network bridge will contain an address source table that defines the devices that are heard on each port of the bridge. A model of the network bridge can contain, as one of its attributes, a copy of the table.
  • the models used in the virtual network machine also include one or more inference handlers.
  • An inference handler is a C++ object which performs a specified computation, decision, action or inference.
  • the inference handlers collectively constitute the intelligence of the model.
  • An individual inference handler is defined by the type of processing performed, the source or sources of the stimulus and the destination of the result.
  • the result is an output of an inference handler and may include attribute changes, creation or destruction of models, alarms or any other valid output.
  • The operation of an inference handler is initiated by a trigger, which is an event occurring in the virtual network machine. Triggers include attribute changes in the same model, attribute changes in another model, relation changes, events, model creation or destruction, and the like.
  • Each model includes inference handlers which perform specified functions upon the occurrence of predetermined events which trigger the inference handlers.
  • The inference handlers may process the source address lists according to the methods of this invention (Figs. 9-11).
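The trigger mechanism described above might be sketched as follows. All names are illustrative assumptions; the actual system implements models and inference handlers as C++ objects, and a real handler would run the comparisons of Figs. 9-11 rather than merely record the trigger.

```python
class BridgeModel:
    """Sketch: a model whose attribute changes trigger inference handlers."""

    def __init__(self, name):
        self.name = name
        self.attributes = {"source_address_table": {}}
        self.inference_handlers = []  # callables invoked on attribute change

    def set_attribute(self, key, value):
        old = self.attributes.get(key)
        self.attributes[key] = value
        if old != value:  # the attribute change is the trigger
            for handler in self.inference_handlers:
                handler(self)

recomputed = []

def topology_handler(model):
    # Stand-in for the topology recomputation of Figs. 9-11.
    recomputed.append(model.name)

bridge = BridgeModel("bridge450")
bridge.inference_handlers.append(topology_handler)
bridge.set_attribute("source_address_table", {1: {"d", "e"}})
```

Note that setting the attribute to an identical value does not fire the handler, since only a change constitutes a trigger in this sketch.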
  • a schematic diagram of a simple model configuration is shown in Fig. 14 to illustrate the concepts of this management system.
  • a device model 480 includes attributes 1 to x and inference handlers 1 to y.
  • a device model 482 includes attributes 1 to u and inference handlers 1 to v.
  • a connect relation 484 indicates that models 480 and 482 are connected in the physical network.
  • a room model 486 includes attributes 1 to m and inference handlers 1 to n.
  • a relation 488 indicates that model 480 is contained within room model 486, and a relation 490 indicates that model 482 is contained within room model 486.
  • Each of the models and the model relations shown in Fig. 14 is implemented as a C++ object. It will be understood that a representation of an actual network would be much more complex than the configuration shown in Fig. 14.
  • the collection of models and model relations in the virtual network machine form a representation of the physical network being managed.
  • the models represent not only the configuration of the network, but also represent its status on a dynamic basis.
  • the status of the network and other information and data relating to the network is obtained by the models in a number of different ways.
  • a primary technique for obtaining information from the network involves polling.
  • A model in the virtual network machine 412 requests the device communication manager 414 to poll the network device which corresponds to the model.
  • the device communication manager 414 converts the request to the necessary protocol for communicating with the network device.
  • the network device returns the requested information to the device communication manager 414, which extracts the device information and forwards it to the virtual network machine 412 for updating one or more attributes in the model of the network device.
  • The polling interval is specified individually for each model and corresponding network device, depending on the importance of the attribute, the frequency with which it is likely to change, and the like.
  • The polling interval, in general, is a compromise between a desire that the models accurately reflect the present status of the network device and a desire to minimize network management traffic which could adversely impact normal network operation.
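Per-model polling intervals might be scheduled as sketched below, using a priority queue keyed on each model's next-time-to-poll. The approach and all names are assumptions for illustration, not taken from the patent.

```python
import heapq

class Poller:
    """Sketch of a scheduler driving per-model polling intervals."""

    def __init__(self):
        self._queue = []  # heap of (next_time_to_poll, model_name, interval)

    def register(self, model_name, interval, now=0.0):
        heapq.heappush(self._queue, (now + interval, model_name, interval))

    def due(self, now):
        """Pop every model whose poll time has arrived and reschedule it."""
        ready = []
        while self._queue and self._queue[0][0] <= now:
            when, name, interval = heapq.heappop(self._queue)
            ready.append(name)
            heapq.heappush(self._queue, (when + interval, name, interval))
        return ready
```

A frequently changing device (short interval) is polled more often than a stable one, which is the traffic/accuracy compromise described above.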
  • The network devices may also automatically transmit information to the network management system upon the occurrence of significant events, without polling. This requires that the network devices be preprogrammed for such operation.
  • In some cases, the network entity being modeled is not capable of communicating its status to the network management system.
  • For example, models of buildings or rooms containing network devices, and models of cables, cannot communicate with the corresponding network entities.
  • In these cases, the status of the network entity is inferred by the model from information contained in models of other network devices. Since successful polling of a network device connected to a cable may indicate that the cable is functioning properly, the status of the cable can be inferred from information contained in a model of the attached network device. Similarly, the operational status of a room can be inferred from the operational status contained in models of the network devices located within the room.
  • In order for a model to make such inferences, it is necessary for the model to obtain information from related models. This is accomplished by a function called a model watch, in which an attribute in one model is monitored or watched by one or more other models. A change in the watched attribute may trigger inference handlers in the watching models.
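A model watch might be sketched as follows; the class and method names are illustrative assumptions, and the real system implements models as C++ objects.

```python
class Model:
    """Sketch of a model whose attributes can be watched by other models."""

    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self._watchers = {}  # attribute -> list of watcher callbacks

    def watch(self, attribute, handler):
        # Register a watcher; `handler` stands in for an inference
        # handler in a watching model.
        self._watchers.setdefault(attribute, []).append(handler)

    def set_attribute(self, attribute, value):
        old = self.attributes.get(attribute)
        self.attributes[attribute] = value
        if old != value:  # only a change in the watched attribute triggers
            for handler in self._watchers.get(attribute, []):
                handler(self, attribute, value)
```

For instance, a cable model could watch the contact status of an attached device model and infer the cable's own status when that attribute changes.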
  • Several techniques may be used to discover the devices present on the network. One method is to make a manual data entry. Another is to conduct a "ping sweep" in which requests are sent blindly to a range of addresses, and the active devices respond with a reply. Another is to read system tables, such as the network name servers. Another is to read the address translation tables; and still another is to read the tables from known network devices.
  • The above techniques are most useful for determining the presence of some network node at a given address; identification of the node as a data-relay device is then accomplished via the management protocol. In many cases, it will be necessary to translate the "management layer address" to a network address; the source address tables will typically utilize management-level addresses, whereas the management protocol usually operates via network-layer addresses.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Apparatus and method for determining the topology of a computer network comprising data-relay devices (A, B, C) and node devices (d, ... k), the determination being made on the basis of a comparison of source addresses heard by the various data-relay devices. A source address table is compiled for each port of each data-relay device and, for each selected pair of ports, the addresses in those tables are compared to determine whether there is an intersection of the devices heard. To account for directed transmissions, which are not heard at every port, a further comparison is made against all the other ports of the device, eliminating the ports for which the intersection is the empty set. From the connections so determined, a network topology is drawn in graph form showing the direct and transitive connections. Where both a direct and a transitive connection exist, the redundant direct connection is eliminated.
EP94929738A 1993-09-01 1994-08-29 Apparatus and method for determining the topology of a network Withdrawn EP0724795A1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US115232 1987-10-30
US11523293A 1993-09-01 1993-09-01
PCT/US1994/009690 WO1995006989A1 (fr) 1994-08-29 Apparatus and method for determining the topology of a network

Publications (1)

Publication Number Publication Date
EP0724795A1 (fr) 1996-08-07

Family

ID=22360078

Family Applications (1)

Application Number Title Priority Date Filing Date
EP94929738A Withdrawn EP0724795A1 (fr) 1993-09-01 1994-08-29 Appareil et procede de determination de la topologie d'un reseau

Country Status (4)

Country Link
EP (1) EP0724795A1 (fr)
JP (1) JPH09504913A (fr)
AU (1) AU675362B2 (fr)
WO (1) WO1995006989A1 (fr)

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393486B1 (en) 1995-06-23 2002-05-21 Cisco Technology, Inc. System and method using level three protocol information for network centric problem analysis and topology construction of actual or planned routed network
US6883034B1 (en) 1995-06-23 2005-04-19 Cisco Technology, Inc. Method of resolving conflicts in access control lists in router by comparing elements in the lists based on subsumption relations
DE69632144T2 (de) * 1995-11-16 2004-11-25 Loran Network Systems, L.L.C., Wilmington Method for determining the topology of a network of objects
US5710885A (en) * 1995-11-28 1998-01-20 Ncr Corporation Network management system with improved node discovery and monitoring
AU6394096A (en) * 1996-06-24 1998-01-07 Netsys Technologies, Inc. Method and apparatus for network centric problem analysis and topology construction
US6115362A (en) * 1997-03-28 2000-09-05 Cabletron Systems, Inc. Method and apparatus for determining frame relay connections
US6128729A (en) * 1997-12-16 2000-10-03 Hewlett-Packard Company Method and system for automatic configuration of network links to attached devices
US6108702A (en) 1998-12-02 2000-08-22 Micromuse, Inc. Method and apparatus for determining accurate topology features of a network
CA2268495C (fr) 1998-12-16 2008-11-18 Loran Network Management Ltd. Method for determining the topology of computer networks
US7856599B2 (en) 2001-12-19 2010-12-21 Alcatel-Lucent Canada Inc. Method and system for IP link management
US8040869B2 (en) * 2001-12-19 2011-10-18 Alcatel Lucent Method and apparatus for automatic discovery of logical links between network devices
DE10163606A1 (de) * 2001-12-21 2003-07-10 Daimler Chrysler Ag Method for operating a network with multiple subscribers
EP2597816B1 (fr) 2007-09-26 2019-09-11 Nicira Inc. Network operating system for managing and securing networks
CA3081255C (fr) 2009-04-01 2023-08-22 Nicira, Inc. Method and apparatus for implementing and managing virtual switches
US8743888B2 (en) 2010-07-06 2014-06-03 Nicira, Inc. Network control apparatus and method
US9525647B2 (en) 2010-07-06 2016-12-20 Nicira, Inc. Network control apparatus and method for creating and modifying logical switching elements
US8964528B2 (en) 2010-07-06 2015-02-24 Nicira, Inc. Method and apparatus for robust packet distribution among hierarchical managed switching elements
US9680750B2 (en) 2010-07-06 2017-06-13 Nicira, Inc. Use of tunnels to hide network addresses
US10103939B2 (en) 2010-07-06 2018-10-16 Nicira, Inc. Network control apparatus and method for populating logical datapath sets
US9043452B2 (en) 2011-05-04 2015-05-26 Nicira, Inc. Network control apparatus and method for port isolation
JP5941703B2 (ja) * 2012-02-27 2016-06-29 Hitachi, Ltd. Management server and management method
US9407580B2 (en) 2013-07-12 2016-08-02 Nicira, Inc. Maintaining data stored with a packet
US9282019B2 (en) 2013-07-12 2016-03-08 Nicira, Inc. Tracing logical network packets through physical network
US9344349B2 (en) 2013-07-12 2016-05-17 Nicira, Inc. Tracing network packets by a cluster of network controllers
US9264330B2 (en) 2013-10-13 2016-02-16 Nicira, Inc. Tracing host-originated logical network packets
US9967199B2 (en) 2013-12-09 2018-05-08 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US10158538B2 (en) 2013-12-09 2018-12-18 Nicira, Inc. Reporting elephant flows to a network controller
US9419889B2 (en) 2014-03-07 2016-08-16 Nicira, Inc. Method and system for discovering a path of network traffic
US9419874B2 (en) 2014-03-27 2016-08-16 Nicira, Inc. Packet tracing in a software-defined networking environment
US9553803B2 (en) 2014-06-30 2017-01-24 Nicira, Inc. Periodical generation of network measurement data
US9379956B2 (en) 2014-06-30 2016-06-28 Nicira, Inc. Identifying a network topology between two endpoints
US10469342B2 (en) 2014-10-10 2019-11-05 Nicira, Inc. Logical network traffic analysis
US10805239B2 (en) 2017-03-07 2020-10-13 Nicira, Inc. Visualization of path between logical network endpoints
JPWO2019031258A1 (ja) * 2017-08-08 2020-08-13 Sony Corporation Transmitting terminal, transmitting method, information processing terminal, and information processing method
US10608887B2 (en) 2017-10-06 2020-03-31 Nicira, Inc. Using packet tracing tool to automatically execute packet capture operations
US11283699B2 (en) 2020-01-17 2022-03-22 Vmware, Inc. Practical overlay network latency measurement in datacenter
US11570090B2 (en) 2020-07-29 2023-01-31 Vmware, Inc. Flow tracing operation in container cluster
US11196628B1 (en) 2020-07-29 2021-12-07 Vmware, Inc. Monitoring container clusters
US11558426B2 (en) 2020-07-29 2023-01-17 Vmware, Inc. Connection tracking for container cluster
US11736436B2 (en) 2020-12-31 2023-08-22 Vmware, Inc. Identifying routes with indirect addressing in a datacenter
US11336533B1 (en) 2021-01-08 2022-05-17 Vmware, Inc. Network visualization of correlations between logical elements and associated physical elements
US11687210B2 (en) 2021-07-05 2023-06-27 Vmware, Inc. Criteria-based expansion of group nodes in a network topology visualization
US11711278B2 (en) 2021-07-24 2023-07-25 Vmware, Inc. Visualization of flow trace operation across multiple sites
US11706109B2 (en) 2021-09-17 2023-07-18 Vmware, Inc. Performance of traffic monitoring actions

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69126666T2 (de) * 1990-09-17 1998-02-12 Cabletron Systems Inc Network management system with model-based intelligence
US5179554A (en) * 1991-04-08 1993-01-12 Digital Equipment Corporation Automatic association of local area network station addresses with a repeater port
US5297138A (en) * 1991-04-30 1994-03-22 Hewlett-Packard Company Determining physical topology across repeaters and bridges in a computer network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9506989A1 *

Also Published As

Publication number Publication date
WO1995006989A1 (fr) 1995-03-09
AU7868994A (en) 1995-03-22
JPH09504913A (ja) 1997-05-13
AU675362B2 (en) 1997-01-30

Similar Documents

Publication Publication Date Title
US5727157A (en) Apparatus and method for determining a computer network topology
AU675362B2 (en) Determination of network topology
EP1234407B1 (fr) Systeme de correlation d'evenements de reseau utilisant des modeles de comportement protocolaire specifies systematiquement
US6205122B1 (en) Automatic network topology analysis
US6115743A (en) Interface system for integrated monitoring and management of network devices in a telecommunication network
AU682272B2 (en) A method for displaying information relating to a computer network
AU700018B2 (en) Method and apparatus for testing the responsiveness of a network device
US5751933A (en) System for determining the status of an entity in a computer network
US6697338B1 (en) Determination of physical topology of a communication network
US5964837A (en) Computer network management using dynamic switching between event-driven and polling type of monitoring from manager station
US5774669A (en) Scalable hierarchical network management system for displaying network information in three dimensions
US5978845A (en) Network management relay mechanism
US20020165934A1 (en) Displaying a subset of network nodes based on discovered attributes
CN110620693A (zh) IoT-based remote restart control system and method for station routers along a railway line
JP2005237018A (ja) Transmission of data to a network management system
US20190207805A1 (en) Node fault isolation
US7646729B2 (en) Method and apparatus for determination of network topology
GECKIL Apparatus and method for determining a computer network topology
JPH09247146A (ja) Network management system
Simmons et al. Knowledge sharing between distributed knowledge based systems
Markley et al. Management of space networks
Fried et al. Implementing Integrated Monitoring Systems for Heterogeneous Networks
Deodhar et al. Distributed analyzer architecture permits real-time monitoring of heterogeneous local area networks
Muller Managing the Enterprise Network with LightWatch/Open
El-Emary NEW SOFTWARE TOOL FOR MANAGING THE PERFORMANCE OF LANS

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19960320

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 19971110

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19980321