WO2013118690A1 - Computer system and virtual network visualization method - Google Patents


Info

Publication number
WO2013118690A1
Authority
WO
WIPO (PCT)
Prior art keywords: virtual, information, node, controller, virtual node
Application number: PCT/JP2013/052527
Other languages: English (en), Japanese (ja)
Inventor: 増田 剛久
Original Assignee: NEC Corporation (日本電気株式会社)
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2013557510A (patent JP5811196B2)
Priority to US14/376,831 (patent US9425987B2)
Priority to CN201380008944.7A (patent CN104137479B)
Priority to EP13746843.5A (patent EP2814204B1)
Publication of WO2013118690A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H04L 41/122 Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/38 Flow based routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/70 Virtual switches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/58 Association of routers
    • H04L 45/586 Association of routers of virtual routers

Definitions

  • The present invention relates to a computer system and a computer system visualization method, and more particularly to a virtual network visualization method for a computer system using OpenFlow (also referred to as programmable flow) technology.
  • A network switch compatible with this technology (hereinafter referred to as an OpenFlow Switch (OFS)) holds detailed information such as protocol type and port number in a flow table, and can control flows and collect statistical information.
  • A communication path is set by an OpenFlow controller (also referred to as a programmable flow controller; hereinafter, OFC), which configures the transfer operation (relay operation) of each OFS on the path.
  • the OFC sets a flow entry in which a rule for specifying a flow (packet data) and an action for defining an operation for the flow are associated with each other in a flow table held by the OFS.
  • the OFS on the communication path determines the transfer destination of the received packet data according to the flow entry set by the OFC, and performs transfer processing.
  • the client terminal can transmit and receive packet data to and from other client terminals using the communication path set by the OFC. That is, in a computer system using OpenFlow, OFC that sets a communication path and OFS that performs transfer processing are separated, and communication of the entire system can be controlled and managed centrally.
  • OFC can control transfer between client terminals in units of flows defined by L1 to L4 header information
  • the network can be arbitrarily virtualized.
  • restrictions on the physical configuration are relaxed, the construction of the virtual tenant environment is facilitated, and the initial investment cost due to scale-out can be reduced.
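The OFC/OFS division of labor described above can be summarized in a short sketch: the controller installs flow entries (rule + action) into switch flow tables, and each switch only matches received packets against its own table. This is an illustrative toy model, not the OpenFlow wire protocol; all class and field names here are invented.

```python
# Toy model of the OFC/OFS split (names invented for illustration):
# the controller installs flow entries (rule + action); a switch only
# matches received packets against its own flow table.

class FlowEntry:
    def __init__(self, rule, action):
        self.rule = rule      # header fields a packet must match
        self.action = action  # e.g. ("forward", port)

class OFS:
    """Switch: relays packets according to its flow table only."""
    def __init__(self):
        self.flow_table = []

    def receive(self, packet):
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.rule.items()):
                return entry.action
        # No matching entry: notify the controller (first packet, Packet-IN).
        return ("packet_in",)

class OFC:
    """Controller: computes the path and installs an entry on each switch."""
    def install_path(self, switches, match, out_ports):
        for sw, port in zip(switches, out_ports):
            sw.flow_table.append(FlowEntry(match, ("forward", port)))

sw1, sw2 = OFS(), OFS()
OFC().install_path([sw1, sw2], {"dst_ip": "10.0.0.2"}, [3, 1])
print(sw1.receive({"dst_ip": "10.0.0.2"}))  # ('forward', 3)
print(sw2.receive({"dst_ip": "10.9.9.9"}))  # ('packet_in',)
```

A packet that matches no entry triggers the first-packet notification to the controller, mirroring the Packet-IN behavior described later in this document.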
  • A plurality of OFCs may be installed in one system (network). For example, since one OFC is usually installed per data center, in a system spanning a plurality of data centers, a plurality of OFCs manage the network of the system as a whole.
  • Patent Document 1 describes a system that performs network flow control using OpenFlow with a plurality of controllers sharing topology information.
  • Patent Document 2 describes a system that includes a plurality of controllers which instruct switches on a communication path to set flow entries to which priorities are added, and switches which determine, according to the priority, whether setting of a flow entry is permitted, and which perform a relay operation on received packets that conform to the flow entries set in themselves.
  • Also described is a system including a plurality of controllers 1 that instruct switches on a communication path to set flow entries, one of which is designated as a route determiner, and a plurality of switches that relay received packets according to the flow entries set by the route determiner.
  • In such systems, the status of the virtual network managed by each controller can be grasped individually, but the virtual networks managed by multiple controllers cannot be grasped as one virtual network overall.
  • one virtual tenant network “VTN1” is formed by two virtual networks “VNW1” and “VNW2” managed by two OFCs
  • In this case, the status of each of the two virtual networks “VNW1” and “VNW2” can be grasped by the respective OFC.
  • However, since the two virtual networks “VNW1” and “VNW2” cannot be integrated, the status of the entire virtual tenant network “VTN1” cannot be grasped centrally.
  • An object of the present invention is therefore to centrally manage, as a whole, the virtual networks controlled by a plurality of controllers using OpenFlow technology.
  • a computer system includes a plurality of controllers, a switch, and a management device.
  • Each of the plurality of controllers calculates a communication path, sets a flow entry for a switch on the communication path, and manages a virtual network constructed based on the communication path.
  • the switch performs the relay operation of the received packet according to the flow entry set in its own flow table.
  • One controller acquires, from the switch, a reception notification for packet data transferred between the two virtual networks managed by itself and another controller, and thereby specifies the transmission virtual node and the reception virtual node of the packet data.
  • the management apparatus combines the two virtual networks with the transmission virtual node and the reception virtual node as a common virtual node and outputs the combined virtual network so that the virtual node can be visually recognized.
  • A virtual network visualization method according to the present invention is executed in a computer system including a plurality of controllers, each of which calculates a communication path and sets flow entries for switches on that path, and switches that perform a relay operation on received packets according to the flow entries set in their own flow tables.
  • In the method, the controller acquires, from a switch, a reception notification for packet data transferred between the two virtual networks managed by that controller and another of the plurality of controllers.
  • FIG. 1 is a diagram showing the configuration of an embodiment of a computer system according to the present invention.
  • FIG. 2 is a diagram showing a configuration in the embodiment of the OpenFlow controller according to the present invention.
  • FIG. 3 is a diagram showing an example of VN topology information held by the OpenFlow controller according to the present invention.
  • FIG. 4 is a conceptual diagram of VN topology information held by the OpenFlow controller according to the present invention.
  • FIG. 5 is a diagram showing a configuration in the embodiment of the management apparatus according to the present invention.
  • FIG. 6 is a sequence diagram illustrating an example of an operation in which the management device according to the present invention acquires VN topology information and corresponding virtual node information from the OpenFlow controller.
  • FIG. 7 is a diagram showing an example of the structure of packet data used for specifying a common virtual node according to the present invention.
  • FIG. 8 is a diagram showing an example of VN topology information held by each of the plurality of OpenFlow controllers shown in FIG. 1.
  • FIG. 9 is a diagram illustrating an example of corresponding virtual node information specified by the corresponding virtual node specifying process.
  • FIG. 10 is a diagram illustrating an example of the VTN topology information of the entire virtual network generated by integrating the VN topology information illustrated in FIG. 8.
  • FIG. 1 is a diagram showing the configuration of an embodiment of a computer system according to the present invention.
  • the computer system according to the present invention performs communication path construction and packet data transfer control using OpenFlow.
  • The computer system according to the present invention includes OpenFlow controllers 1-1 to 1-5 (hereinafter, OFC 1-1 to 1-5), a plurality of OpenFlow switches 2 (hereinafter, OFS 2), a plurality of L3 routers 3, a plurality of hosts 4 (for example, storage 4-1, server 4-2, client terminal 4-3), and a management apparatus 100. Note that the OFCs 1-1 to 1-5 are collectively referred to as OFC 1 without distinction.
  • The host 4 is a computer device including a CPU, a main storage device, and an external storage device (not shown), and communicates with other hosts 4 by executing a program stored in the external storage device. Communication between the hosts 4 is performed via the OFS 2 or the L3 router 3.
  • The host 4 realizes the functions exemplified by the storage 4-1, the server 4-2 (for example, a Web server, file server, or application server), the client terminal 4-3, and the like, according to the program being executed and its hardware configuration.
  • The OFC 1 includes a flow control unit 13 that controls communication paths and packet transfer processing in the system using OpenFlow technology.
  • OpenFlow technology refers to a technology in which a controller (here, OFC 1) performs route control and node control by setting multi-layer, per-flow route information (flow entry: flow + action) in the OFS 2 according to a routing policy (for details, see Non-Patent Document 1). As a result, the route control function is separated from routers and switches, and optimal routing and traffic management become possible through centralized control by the controller.
  • The OFS 2 to which OpenFlow technology is applied handles communication as an end-to-end flow, rather than in units of packets or frames as a conventional router or switch does.
  • the OFC 1 controls the operation of the OFS 2 (for example, packet data relay operation) by setting a flow entry (rule + action) in a flow table (not shown) held by the OFS 2.
  • The setting of flow entries in the OFS 2 by the OFC 1, and the notification of first packets from the OFS 2 to the OFC 1 (Packet-IN), are performed via the control network 200 (hereinafter, control NW 200) with the OFC 1 preconfigured in each OFS 2.
  • OFC 1-1 to 1-4 are installed as the OFC 1 controlling the network (OFS 2) in the data center DC1, and OFC 1-5 is installed as the OFC 1 controlling the network (OFS 2) in the data center DC2.
  • OFC1-1 to 1-4 are connected to OFS2 in the data center DC1 via the control NW200-1
  • OFC1-5 is connected to OFS2 in the datacenter DC2 via the control NW200-2.
  • the network (OFS2) of the data center DC1 and the network (OFS2) of the data center DC2 are networks (sub-networks) of different IP address ranges connected via the L3 router 3 that performs routing in layer 3.
  • FIG. 2 is a diagram showing the configuration of the OFC 1 according to the present invention.
  • the OFC 1 is preferably realized by a computer including a CPU and a storage device.
  • the functions of the corresponding virtual node identification unit 11, the VN topology management unit 12, and the flow control unit 13 illustrated in FIG. 2 are realized by a CPU (not shown) executing a program stored in a storage device.
  • the OFC 1 holds the VN topology information 14 stored in the storage device.
  • the flow control unit 13 sets or deletes a flow entry (rule + action) in the OFS 2 managed by itself.
  • the OFS 2 refers to the set flow entry and executes an action (for example, relay or discard of packet data) corresponding to the rule according to the header information of the received packet. Details of the rules and actions will be described later.
  • In a rule, a combination of layer 1 to layer 4 addresses and identifiers of the OSI (Open Systems Interconnection) reference model, included in the header information of TCP/IP packet data, is defined.
  • For example, a combination of a layer 1 physical port, a layer 2 MAC address, a VLAN tag (VLAN id), a layer 3 IP address, and a layer 4 port number is set as a rule.
  • the VLAN tag may be given a priority (VLAN priority).
  • identifiers such as port numbers and addresses set in the rules may be set within a predetermined range.
  • an identifier for specifying the data transfer protocol may be set as a rule.
  • In an action, a method for processing TCP/IP packet data is defined. For example, information indicating whether or not received packet data is to be relayed, and the destination in the case of relaying, are set. Information instructing that the packet data be copied or discarded may also be set in the action.
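As a hedged illustration of the rule/action structure just described — exact-match fields for port, VLAN id, and IP, a range for the L4 port number, and an action choosing relay, copy, or discard — a minimal matcher might look like this (all field names are invented; the patent does not prescribe a data format):

```python
# Minimal rule/action matcher (illustrative; field names invented).
# A rule combines L1-L4 identifiers; port numbers may be ranges.

def matches(rule, pkt):
    for field, cond in rule.items():
        value = pkt.get(field)
        if isinstance(cond, range):   # identifier set within a range
            if value not in cond:
                return False
        elif cond != value:           # exact match (MAC, VLAN id, IP, ...)
            return False
    return True

rule = {
    "in_port": 1,                     # layer 1 physical port
    "vlan_id": 100,                   # layer 2 VLAN tag
    "dst_ip": "192.168.1.10",         # layer 3 IP address
    "tcp_dst": range(1024, 65536),    # layer 4 port number range
}
action = {"type": "relay", "out_port": 5}   # or "copy" / "discard"

pkt = {"in_port": 1, "vlan_id": 100, "dst_ip": "192.168.1.10", "tcp_dst": 8080}
if matches(rule, pkt):
    print(action["type"], action["out_port"])  # relay 5
```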
  • a preset virtual network (VN) is constructed for each OFC 1 by flow control by the OFC 1.
  • one virtual tenant network (VTN) is constructed by at least one virtual network (VN) managed for each OFC 1.
  • VTN1 is constructed by virtual networks managed by the OFCs 1-1 to 1-5 that control different IP networks.
  • one virtual tenant network VTN2 may be constructed by virtual networks managed by the OFCs 1-1 to 1-4 that control the same IP network.
  • a virtual network managed by one OFC1 (for example, OFC1-5) may constitute one virtual tenant network VTN3.
  • a plurality of virtual tenant networks (VTN) may be constructed in the system.
  • the corresponding virtual node specifying unit 11 specifies a corresponding virtual node in accordance with an instruction from the management device 100.
  • The corresponding virtual node indicates a virtual node that is common (identical) among the virtual networks managed by each of the plurality of OFCs 1, and is represented by, for example, a combination of the virtual node names specified as common (identical) virtual nodes.
  • The corresponding virtual node specifying unit 11 specifies virtual nodes that are common (identical) between components of its own managed virtual network and components of the virtual networks managed by the other OFCs 1, and records the corresponding virtual node information 105 in a storage device (not shown).
  • Specifically, the corresponding virtual node specifying unit 11 transmits a test packet addressed to the managed network of another OFC 1, obtains via Packet-IN from the OFS 2 the virtual node names carried in the response packet, and records, as the corresponding virtual node information 105, the combination of virtual node names that refer to the same element on the transmitting and receiving sides.
  • the corresponding virtual node specifying unit 11 notifies the management device 100 of the corresponding virtual node information 105.
  • the notification of the corresponding virtual node information 105 may be executed in response to a request from the management apparatus 100 or may be executed at an arbitrary time. The detailed operation of the corresponding virtual node specifying unit 11 will be described later.
  • The VN topology management unit 12 manages the topology information (VN topology information 14) of the virtual network (VN) managed by the OFC 1 to which it belongs, and notifies the management device 100 of that VN topology information 14.
  • The VN topology information 14 includes information on the topology of the virtual network (VN) managed (controlled) by the OFC 1, as shown in FIGS. 3 and 4. As shown in FIG. 1, the computer system according to the present invention realizes a plurality of virtual tenant networks VTN1, VTN2, ... by being controlled by a plurality of OFC 1.
  • the virtual tenant network includes a virtual network (VN) managed (controlled) by each of the OFCs 1-1 to 1-5.
  • the OFC 1 holds, as VN topology information 14, information related to the topology of a virtual network managed by itself (hereinafter referred to as a management target virtual network).
  • FIG. 3 is a diagram illustrating an example of the VN topology information 14 held by the OFC 1.
  • FIG. 4 is a conceptual diagram of the VN topology information 14 held by the OFC 1.
  • the VN topology information 14 includes information regarding the connection status of virtual nodes in a virtual network realized by a physical switch such as OFS 2 or a router (not shown).
  • the VN topology information 14 includes information (virtual node information 142) for identifying a virtual node belonging to the management target virtual network, and connection information 143 indicating the connection status of the virtual node.
  • the virtual node information 142 and the connection information 143 are recorded in association with a VTN number 141 that is an identifier of a virtual network (for example, a virtual tenant network) to which the managed virtual network belongs.
  • the virtual node information 142 includes information (for example, a virtual bridge name, a virtual external name, and a virtual router name) that identifies each of a virtual bridge, a virtual external, and a virtual router as a virtual node.
  • the virtual external indicates a terminal (host) or router to which a virtual bridge is connected.
  • For a virtual router, an identifier (virtual router name) of the virtual router and information on the virtual bridges connected under that virtual router are associated with each other and set as the virtual node information 142.
  • The virtual node names exemplified by the virtual bridge name, virtual external name, virtual router name, and so on may be set uniquely for each OFC 1, or a common name may be set across all OFC 1 in the system.
  • the connection information 143 includes information for specifying the connection destination of the virtual node, and is associated with the virtual node information 142 of the virtual node.
  • virtual router (vRouter) “VR11” and virtual external (vExternal) “VE11” are set as connection information 143 as the connection destination of virtual bridge (vBridge) “VB11”.
  • the connection information 143 may include a connection type (bridge / external / router / external network (L3 router)) for specifying a connection partner and information (for example, port number, MAC address, VLAN name) for specifying a connection destination.
  • the VLAN name belonging to the virtual bridge is associated with the virtual bridge identifier (virtual bridge name) and set as the connection information 143.
  • For a virtual external, the virtual external identifier (virtual external name) is associated with a combination of a VLAN name and a MAC address (or port number) and set as the connection information 143. That is, a virtual external is defined by a combination of a VLAN name and a MAC address (or port number).
  • the virtual network shown in FIG. 4 belongs to the virtual tenant network VTN1, and includes a virtual router “VR11”, virtual bridges “VB11” and “VB12”, and virtual externals “VE11” and “VE12”.
  • Virtual bridges “VB11” and “VB12” are separate sub-networks connected via a virtual router “VR11”.
  • a virtual external “VE11” is connected to the virtual bridge “VB11”, and a MAC address of the virtual router “VR22” managed by the OFC1-2 “OFC2” is associated with the virtual external “VE11”.
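The VN topology information 14 described above (VTN number 141, virtual node information 142, connection information 143, and a virtual external defined by a VLAN name plus a MAC address) can be sketched as a small data model. This is an assumed representation for illustration only; all field names are invented:

```python
# Assumed data model for VN topology information 14 (cf. FIG. 3);
# all field names are invented for this sketch.

vn_topology = {
    "vtn": "VTN1",                                    # VTN number 141
    "nodes": {                                        # virtual node info 142
        "VR11": {"kind": "router"},
        "VB11": {"kind": "bridge", "vlan": "VLAN_A"},
        # A virtual external is defined by a VLAN name + MAC address.
        "VE11": {"kind": "external", "vlan": "VLAN_A",
                 "mac": "00:00:4c:00:12:34"},
    },
    "connections": [                                  # connection info 143
        {"from": "VB11", "to": "VR11", "type": "router"},
        {"from": "VB11", "to": "VE11", "type": "external"},
    ],
}

def neighbors(topo, node):
    """Connection destinations recorded for a virtual node."""
    return sorted(c["to"] for c in topo["connections"] if c["from"] == node)

print(neighbors(vn_topology, "VB11"))  # ['VE11', 'VR11']
```

The `neighbors` helper mirrors how connection information 143 identifies the connection destinations of a virtual node such as vBridge “VB11”.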
  • The corresponding virtual node specifying unit 11 and the VN topology management unit 12 notify the management device 100 of the corresponding virtual node information 105 and the VN topology information 14, respectively, via a secure management network 300 (hereinafter, management NW 300).
  • The management apparatus 100 combines the VN topology information 14 collected from the OFCs 1-1 to 1-5 on the basis of the corresponding virtual node information 105, and generates the topology of the virtual networks of the entire system (for example, virtual tenant networks VTN1, VTN2, ...).
  • FIG. 5 is a diagram showing a configuration in the embodiment of the management apparatus 100 according to the present invention.
  • the management device 100 is preferably realized by a computer including a CPU and a storage device.
  • the functions of the VN information collection unit 101, the VN topology combination unit 102, and the VTN topology output unit 103 shown in FIG. 5 are realized by executing a visualization program stored in a storage device by a CPU (not shown).
  • the management apparatus 100 holds the VTN topology information 104 and the corresponding virtual node information 105 stored in the storage device.
  • The VTN topology information 104 is not recorded in the initial state; it is first recorded when generated by the VN topology coupling unit 102. Likewise, the corresponding virtual node information 105 is not recorded in the initial state; it is recorded when notified from the OFC 1.
  • the VN information collection unit 101 issues a VN topology information collection instruction to the OFC 1 via the management NW 300, and acquires the VN topology information 14 and the corresponding virtual node information 105 from the OFC 1.
  • the acquired VN topology information 14 and corresponding virtual node information 105 are temporarily stored in a storage device (not shown).
  • Based on the corresponding virtual node information 105, the VN topology combining unit 102 combines (integrates) the VN topology information 14 in units of virtual networks (for example, in units of virtual tenants) across the entire system, and generates topology information corresponding to the virtual networks of the entire system.
  • the topology information generated by the VN topology coupling unit 102 is recorded as VTN topology information 104 and is output so as to be visible by the VTN topology output unit 103.
  • the VTN topology output unit 103 displays the VTN topology information 104 in a text format or graphically on an output device (not shown) such as a monitor.
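As a sketch of the text-format output mentioned above, the VTN topology output unit might render the virtual-node connection information as an adjacency listing. The rendering format below is invented for illustration:

```python
# Invented text rendering of VTN topology information: the VTN number
# followed by one line per virtual-node connection.

topology = {
    "vtn": "VTN1",
    "edges": [("VR11", "VB11"), ("VB11", "VE11"), ("VR11", "VB12")],
}

def render(topo):
    lines = [f"VTN: {topo['vtn']}"]
    lines += [f"  {a} -- {b}" for a, b in topo["edges"]]
    return "\n".join(lines)

print(render(topology))
```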
  • the VTN topology information 104 has the same configuration as the VN topology information 14 shown in FIG. 3, and includes virtual node information and connection information associated with the VTN number.
  • The VN topology coupling unit 102 identifies virtual nodes that are common (identical) among the virtual nodes on the virtual networks managed by each OFC 1.
  • the VN topology coupling unit 102 couples the virtual network to which the virtual node belongs via a common virtual node.
  • When the two networks share a common virtual bridge, the VN topology coupling unit 102 couples the virtual networks via that virtual bridge.
  • When virtual externals of the two networks are connected to each other, the VN topology combining unit 102 combines the virtual networks via those virtual externals.
  • First, the OFC 1 transmits a test packet from a host on a virtual bridge in its managed network to a host on a virtual bridge in the managed network of another OFC 1. Subsequently, the OFC 1 identifies the reception virtual node included in the response packet (test packet reception information) as the same virtual node (corresponding virtual node) as the transmission virtual node, and notifies the management device 100 of it together with the VN topology information 14 that it manages. In the same manner, the management apparatus 100 acquires the VN topology information 14 and the corresponding virtual node information 105 from all the OFCs 1 in the system, and joins the managed virtual networks based on them.
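The joining step described above can be sketched as a graph merge: given the VN topology information from two OFCs and a corresponding-virtual-node pair, the receiving-side node name is rewritten to the transmitting-side name so that the two graphs share one common node. A minimal sketch, with invented names and an assumed dictionary representation:

```python
# Graph-merge sketch of the joining step (invented representation):
# a corresponding-virtual-node pair maps a name in network A to the
# name of the same element in network B; B's name is rewritten to A's
# so that the merged graph shares one common node.

def combine(topo_a, topo_b, corresponding):
    rename = {b: a for a, b in corresponding.items()}
    nodes = set(topo_a["nodes"]) | {rename.get(n, n) for n in topo_b["nodes"]}
    edges = list(topo_a["edges"]) + [
        (rename.get(x, x), rename.get(y, y)) for x, y in topo_b["edges"]]
    return {"vtn": topo_a["vtn"], "nodes": nodes, "edges": edges}

# Two managed virtual networks of VTN1; VE11 and VE21 were identified
# as the same element by the test-packet exchange.
vnw1 = {"vtn": "VTN1", "nodes": {"VB11", "VE11"}, "edges": [("VB11", "VE11")]}
vnw2 = {"vtn": "VTN1", "nodes": {"VB21", "VE21"}, "edges": [("VB21", "VE21")]}
whole = combine(vnw1, vnw2, {"VE11": "VE21"})
print(sorted(whole["nodes"]))  # ['VB11', 'VB21', 'VE11']
print(whole["edges"])          # [('VB11', 'VE11'), ('VB21', 'VE11')]
```

After the merge, the two virtual bridges are reachable through the single common node, which is what lets the management apparatus present one virtual tenant network instead of two disjoint fragments.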
  • the management apparatus 100 issues a VN topology information collection instruction to the OFC 1-1 (step S101).
  • the VN topology information collection instruction includes information for specifying a virtual network to be visualized (here, the virtual tenant network “VTN1” as an example).
  • In response, the OFC 1-1 performs a process of identifying virtual nodes common between its own managed virtual network and the managed virtual networks of the other OFCs 1-2 to 1-5 (steps S102 to S107). In the following, the operation of specifying a corresponding virtual node between the managed virtual network of OFC 1-1 (controller name “OFC1”) and the managed virtual network of OFC 1-2 (controller name “OFC2”) is described.
  • the OFC 1-1 transmits a test packet information request to the OFC 1-2 in response to the VN topology information collection instruction (step S102).
  • the test packet information request is transmitted to the OFC 1-2 via the management NW 300.
  • the test packet information request includes information for specifying a virtual network to be visualized.
  • the test packet information request includes information specifying the virtual tenant network “VTN1”.
  • the management IP address indicates an IP address assigned to the OFC 1 connected to the management NW 300.
  • the identification number is an identifier for associating with a destination address notification described later.
  • the VTN name is information for specifying a virtual network to be visualized.
  • The OFC 1-2 notifies destination address information in response to the test packet information request (step S103). If the managed virtual network of the OFC 1-2 belongs to the virtual network with the VTN name included in the test packet information request, the OFC 1-2 responds to the request. On the other hand, if its own managed virtual network does not belong to the virtual network with the requested VTN name, the OFC 1-2 does not respond and discards the request.
  • As the destination address information, the OFC 1-2 notifies the requesting OFC 1-1 of the IP addresses of all hosts existing on its managed virtual network belonging to the virtual network with the VTN name included in the test packet information request.
  • The OFC 1-2 notifies, for example, destination address information as shown in FIG. 7 via the management NW 300.
  • To the identification number, an identifier (in this case, “X”) indicating a response to the test packet information request in step S102 is given.
  • the IP address of the destination host of the test packet is the IP address of the host on the virtual network belonging to VTN1 designated by OFC1-2 as the destination of the test packet.
  • A plurality of host IP addresses may be set as destination addresses in the destination address information.
  • When the OFC 1-1 receives the destination address information, it transmits a test packet whose destination is the destination address (host IP address in VTN1) included in that information (step S104). Specifically, the OFC 1-1 matches the destination address information to the request of step S102 by the identification number (here, “X”), and transmits a test packet addressed to a host IP address included in the identified destination address information, through the virtual network designated by the VTN name. As an example, the OFC 1-1 transmits a test packet as shown in FIG. 7 via the virtual tenant network VTN1 shown in FIG. 1.
  • The test packet includes, as the destination MAC address, the MAC address of the host managed by OFC 1-2 “OFC2” on the virtual tenant network “VTN1”, and, as the source MAC address, the MAC address of the host managed by OFC 1-1 “OFC1” on the virtual tenant network “VTN1”.
  • the IP address of the destination host is the IP address acquired by the OFC 1-1 by the destination address notification.
  • the identification number is an identifier associated with a test packet reception notification described later.
  • The OFC 1-1 transmits the test packet, via the control NW 200-1, to the OFS 2-1 under its control that constitutes a virtual bridge belonging to the virtual tenant network “VTN1”. At this time, the OFC 1-1 sets, in the OFS 2-1, a flow entry for transferring the test packet over the virtual tenant network “VTN1”. As a result, the test packet is transferred to the destination host via the virtual tenant network “VTN1”.
  • The test packet transferred via the virtual tenant network “VTN1” is received by the OFS 2-2 under the control of the OFC 1-2. Since there is no flow entry that matches the received test packet, the OFS 2-2 notifies the OFC 1-2 of the test packet as a first packet (Packet-IN, step S105). Here, the Packet-IN to the OFC 1-2 is performed via the control NW 200-1. The OFC 1-2 acquires the test packet received at the OFS 2-2 from the Packet-IN. At the time of the Packet-IN, the OFS 2-2 also notifies the OFC 1-2 of the VLAN name and port number assigned to the port that received the test packet.
  • the OFC 1-2 can identify the virtual bridge to which the OFS 2 that has received the test packet belongs (that is, the virtual bridge that has received the test packet) based on the notified VLAN name and the VN topology information 14.
  • the OFC 1-2 can identify the virtual external that has received the test packet based on the notified VLAN name, the source host MAC address of the test packet, and the VN topology information 14.
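The identification step just described — resolving the VLAN name reported with the Packet-IN, plus the test packet's source MAC address, against the VN topology information 14 to find the receiving virtual bridge and virtual external — can be sketched as a pair of lookups. The tables and names below are invented for illustration:

```python
# Invented lookup tables mirroring what OFC 1-2's VN topology
# information could contain for its managed part of VTN1.
vn_topology = {
    "bridges":   {"VB21": {"vlan": "VLAN_B"}},
    "externals": {"VE21": {"vlan": "VLAN_B", "mac": "00:00:4c:00:12:34"}},
}

def resolve_packet_in(topo, vlan_name, src_mac):
    """Map Packet-IN metadata to (receiving vBridge, receiving vExternal)."""
    # The virtual bridge is identified by the VLAN name alone.
    bridge = next((name for name, b in topo["bridges"].items()
                   if b["vlan"] == vlan_name), None)
    # The virtual external is identified by VLAN name + source MAC address.
    external = next((name for name, e in topo["externals"].items()
                     if e["vlan"] == vlan_name and e["mac"] == src_mac), None)
    return bridge, external

print(resolve_packet_in(vn_topology, "VLAN_B", "00:00:4c:00:12:34"))
# ('VB21', 'VE21')
```

The resolved names would then be carried back in the test packet reception information as the reception vBridge name and reception vExternal name.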
  • The OFC 1-2 transmits test packet reception information indicating that the test packet has been received to the transmission source host of the test packet (step S106). Specifically, the OFC 1-2 transmits the test packet reception information to the OFS 2-2 via the control NW 200-1, and sets in the OFS 2-2 a flow entry for transferring the test packet reception information over the virtual tenant network “VTN1”. As a result, the test packet reception information is transferred to the transmission source host via the virtual tenant network “VTN1”.
  • The OFC 1-2 identifies, from the VLAN name and port number notified together with the packet IN, the virtual bridge name and virtual external name that received the test packet, and transmits test packet reception information including these names from the OFS 2-2.
  • the OFC 1-2 sets the destination host of the test packet as the transmission source of the test packet reception information, and sets the transmission source host of the test packet as the destination of the test packet reception information.
  • the OFC 1-2 transmits the test packet reception information as illustrated in FIG. 7 via the virtual tenant network VTN1 illustrated in FIG.
  • The test packet reception information includes, as the destination MAC address, the MAC address of the host on the virtual tenant network “VTN1” managed by the OFC 1-1 “OFC1”, and, as the source MAC address, the MAC address of the host on the virtual tenant network “VTN1” managed by the OFC 1-2. The destination MAC address and IP address are the MAC address and IP address of the source host of the test packet.
  • As the identification number, an identifier (in this case, “Y”) indicating a response to the test packet is given.
  • The reception vBridge name and reception vExternal name are names identifying the virtual bridge and virtual external, specified by the OFC 1-2, that received the test packet.
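The fields of the test packet reception information named in the text can be sketched as a small record; the field names and example values here are illustrative assumptions, not the actual wire format of FIG. 7:

```python
# Record sketch of the test packet reception information; field names and
# values are illustrative assumptions, not the actual format of FIG. 7.
from dataclasses import dataclass

@dataclass
class TestPacketReceptionInfo:
    dst_mac: str          # MAC of the source host of the original test packet
    dst_ip: str           # IP of the source host of the original test packet
    src_mac: str          # MAC of a host managed by the replying OFC
    identification: str   # "Y" marks a response to a test packet
    rx_vbridge: str       # reception vBridge name identified by the OFC
    rx_vexternal: str     # reception vExternal name identified by the OFC

reply = TestPacketReceptionInfo(
    dst_mac="00:00:00:00:11:11", dst_ip="192.168.1.11",
    src_mac="00:00:00:00:21:21", identification="Y",
    rx_vbridge="VB21", rx_vexternal="VE21")
```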
  • The test packet reception information transferred via the virtual tenant network “VTN1” is received by the OFS 2-1, which is under the control of the OFC 1-1. Since there is no flow entry matching the received test packet reception information, the OFS 2-1 notifies the OFC 1-1 of it as a first packet (packet IN, step S107). Here, the packet IN to the OFC 1-1 is performed via the control NW 200-1. The OFC 1-1 acquires the test packet reception information received by the OFS 2-1 from the packet IN from the OFS 2-1. Further, at the time of the packet IN, the OFS 2-1 notifies the OFC 1-1 of the VLAN name and port number assigned to the port that received the test packet reception information.
  • The OFC 1-1 identifies the virtual bridge to which the OFS 2 that received the test packet reception information belongs (that is, the virtual bridge that received it) based on the notified VLAN name and the VN topology information 14. Further, the OFC 1-1 identifies the virtual external that received the test packet reception information based on the notified VLAN name, the MAC address of the source host of the test packet, and the VN topology information 14.
  • The OFC 1-1 records, as the corresponding virtual node information 105, the reception virtual bridge name and reception virtual external name included in the test packet reception information in association with the virtual bridge name and virtual external name that received the test packet reception information, specified by the packet IN from the OFS 2-1 (that is, the transmission virtual bridge name and transmission virtual external name of the test packet) (step S108).
  • When the destination address notified from another OFC 1 is in the same IP address range as the IP addresses assigned to the network managed by the OFC 1-1, the OFC 1-1 assumes that the managed virtual network of that OFC 1 and its own managed virtual network are L2-connected. In this case, the OFC 1-1 records the virtual bridge that received the test packet and the virtual bridge that transmitted it in association with each other as the corresponding virtual node information 105.
  • When the notified destination address is in a different IP address range, the OFC 1-1 assumes that the managed virtual network of that OFC 1 and its own managed virtual network are L3-connected. In this case, the OFC 1-1 records the virtual external that received the test packet and the virtual external that transmitted it in association with each other as the corresponding virtual node information 105.
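A minimal sketch of the L2/L3 decision above, assuming the "same IP address range" test is an ordinary subnet membership check (the subnets and node names are hypothetical):

```python
# L2/L3 decision sketch, assuming the "same IP address range" test is plain
# subnet membership (subnet and node names hypothetical).
import ipaddress

def record_corresponding_nodes(own_subnet, notified_addr,
                               rx_vbridge, tx_vbridge,
                               rx_vexternal, tx_vexternal):
    if ipaddress.ip_address(notified_addr) in ipaddress.ip_network(own_subnet):
        # Same range: L2-connected, so associate the virtual bridges.
        return ("L2", (rx_vbridge, tx_vbridge))
    # Different range: L3-connected, so associate the virtual externals.
    return ("L3", (rx_vexternal, tx_vexternal))

l2 = record_corresponding_nodes("192.168.1.0/24", "192.168.1.21",
                                "VB21", "VB11", "VE21", "VE11")
l3 = record_corresponding_nodes("192.168.1.0/24", "10.0.0.51",
                                "VB51", "VB22", "VE51", "VE22")
```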
  • Based on the corresponding virtual node information 105, the management apparatus 100 can identify the virtual nodes (virtual bridges or virtual externals) common to the managed virtual networks of the OFC 1-1 and OFC 1-2 in the virtual tenant network “VTN1”.
  • The OFC 1-1 transmits to the management apparatus 100 the VN topology information 14 of its managed virtual network belonging to the virtual network designated for visualization in step S101, together with the corresponding virtual node information 105 recorded in step S108. That is, the VN topology information 14 of the managed virtual network of the OFC 1-1 belonging to the virtual tenant network “VTN1” and the corresponding virtual node information 105 specifying the virtual nodes common to the managed virtual networks of the OFC 1-1 and OFC 1-2 are transmitted to the management apparatus 100.
  • In this way, the reception virtual bridge or reception virtual external that received a packet on the virtual network is specified by means of the packet IN from the OFS 2, which is one of the functions of OpenFlow. The OFC 1 identifies, as a common virtual bridge and virtual external, the virtual bridge and virtual external that received the test packet together with the virtual bridge and virtual external that received the test packet reception information, which is obtained by swapping the destination host and source host of the test packet.
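The pairing logic above can be sketched as follows: each exchange associates the transmitting virtual node with the receiving one, and the reply with swapped source and destination yields the same pair, confirming rather than duplicating the association (the data layout is an assumption):

```python
# Pairing sketch: store associations orderless, so the reply with swapped
# source/destination confirms the pair instead of adding a second one.

def associate(corresponding, tx_node, rx_node):
    corresponding.add(frozenset((tx_node, rx_node)))

corresponding = set()
associate(corresponding, "VB11", "VB21")   # test packet H11 -> H21
associate(corresponding, "VB21", "VB11")   # reply H21 -> H11 (swapped)
```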
  • The OFC 1-1 likewise transmits test packets to the other OFCs 1-3 to 1-5 and, based on the test packet reception information, identifies the virtual nodes (virtual bridges, virtual externals) in the virtual tenant network “VTN1” that are common to its own managed network, notifying the management apparatus 100 of them as corresponding virtual node information.
  • In response to the VTN topology information collection instruction from the management apparatus 100, the other OFCs 1-2 to 1-5 likewise notify the management apparatus 100 of the VN topology information 14 of their own managed virtual networks and of the corresponding virtual node information 105 generated by the same method as described above.
  • FIG. 8 is a diagram illustrating an example of the VN topology information 14 of the managed virtual network belonging to the virtual tenant network VTN1 held by each of the plurality of OFCs 1-1 to 1-5 illustrated in FIG.
  • OFC1-1 “OFC1” holds virtual bridge “VB11” and virtual external “VE11” connected to each other as VN topology information 14 of its own management target virtual network.
  • a host “H11” is connected to the virtual bridge “VB11”.
  • The OFC 1-2 “OFC2” holds the virtual router “VR21”, the virtual bridges “VB21” and “VB22”, and the virtual externals “VE21” and “VE22” as the VN topology information 14 of its managed virtual network.
  • Virtual bridges “VB21” and “VB22” indicate different sub-networks connected via the virtual router “VR21”.
  • a connection node between the virtual router “VR21” and the virtual bridge “VB21” indicates the host “H21”, and a connection node between the virtual router “VR21” and the virtual bridge “VB22” indicates the host “H22”.
  • a virtual external “VE21” is connected to the virtual bridge “VB21”.
  • a virtual external “VE22” is connected to the virtual bridge “VB22”, and an L3 router “SW1” is associated with the virtual external “VE22”.
  • The OFC 1-3 “OFC3” holds the virtual bridge “VB31” and the virtual externals “VE31” and “VE32” as the VN topology information 14 of its managed virtual network.
  • a host “H31” is connected to the virtual bridge “VB31”.
  • the OFC1-4 “OFC4” holds a virtual bridge “VB41” and a virtual external “VE41” as the VN topology information 14 of its own virtual network to be managed.
  • a host “H41” is connected to the virtual bridge “VB41”.
  • The OFC 1-5 “OFC5” holds the virtual router “VR51”, the virtual bridges “VB51” and “VB52”, and the virtual externals “VE51” and “VE52” as the VN topology information 14 of its managed virtual network.
  • Virtual bridges “VB51” and “VB52” indicate different sub-networks connected via the virtual router “VR51”.
  • A connection node between the virtual router “VR51” and the virtual bridge “VB51” indicates the host “H51”, and a connection node between the virtual router “VR51” and the virtual bridge “VB52” indicates the host “H52”.
  • a virtual external “VE51” is connected to the virtual bridge “VB51”, and an L3 router “SW2” is associated with the virtual external “VE51”.
  • a virtual external “VE52” is connected to the virtual bridge “VB52”.
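The per-controller VN topology information 14 of FIG. 8 can be sketched as a small in-memory structure; the layout (a dictionary of vBridges with attached hosts and vExternals, plus vRouter links) is an assumption made for illustration only:

```python
# In-memory sketch of the VN topology information 14 of FIG. 8; the layout
# (dict of vBridges with attached hosts/vExternals, plus vRouter links) is
# an assumption made for illustration.
vn_topology = {
    "OFC1": {"vbridges": {"VB11": {"hosts": ["H11"], "vexternals": ["VE11"]}},
             "vrouters": {}},
    "OFC2": {"vbridges": {"VB21": {"hosts": ["H21"], "vexternals": ["VE21"]},
                          "VB22": {"hosts": ["H22"], "vexternals": ["VE22"]}},
             "vrouters": {"VR21": ["VB21", "VB22"]}},
    "OFC3": {"vbridges": {"VB31": {"hosts": ["H31"],
                                   "vexternals": ["VE31", "VE32"]}},
             "vrouters": {}},
    "OFC4": {"vbridges": {"VB41": {"hosts": ["H41"], "vexternals": ["VE41"]}},
             "vrouters": {}},
    "OFC5": {"vbridges": {"VB51": {"hosts": ["H51"], "vexternals": ["VE51"]},
                          "VB52": {"hosts": ["H52"], "vexternals": ["VE52"]}},
             "vrouters": {"VR51": ["VB51", "VB52"]}},
}
```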
  • In response to the test packet information request from the OFC 1-1 “OFC1”, the OFCs 1-2 to 1-5 return the addresses of the hosts “H21”, “H22”, “H31”, “H41”, “H51”, and “H52” as destination addresses.
  • The OFC 1-1 transmits test packets whose source host is the host “H11” to the hosts “H21”, “H22”, “H31”, “H41”, “H51”, and “H52” managed by the OFCs 1-2 to 1-5, and identifies the virtual nodes common to the managed virtual networks (corresponding virtual nodes) by the same operation as described above.
  • Packets other than the test packet are passed to the TCP/IP protocol stack. The relay process for the test packet according to the present invention is performed immediately before the TCP/IP protocol stack in the virtual network; for this reason, the test packet is returned to the source as a response packet without being passed to the TCP/IP protocol stack.
  • Here, the test packets addressed to the hosts “H22”, “H51”, and “H52” are discarded by the virtual router “VR21” during transfer, and the test packet addressed to the host “H41” is discarded by the virtual external “VE32”. In this case, the test packet reception information is transmitted only from the hosts “H21” and “H31”.
  • In the following, a virtual bridge and a virtual external that have received a test packet are referred to as a reception virtual bridge and a reception virtual external, and a virtual bridge and a virtual external that have received test packet reception information are referred to as a transmission virtual bridge and a transmission virtual external.
  • Since the transmission virtual bridge is “VB11” and the reception virtual bridge is “VB21” for the test packet whose source host is “H11” and whose destination is the host “H21”, the virtual bridges “VB11” and “VB21” are identified as a common virtual bridge. Similarly, the virtual bridges “VB11” and “VB21” are identified as a common virtual bridge by the test packet in which the source and the destination are swapped.
  • Likewise, for the test packet whose source host is “H11” and whose destination is the host “H31”, the transmission virtual bridge is “VB11” and the reception virtual bridge is “VB31”, so the virtual bridges “VB11” and “VB31” are identified as a common virtual bridge. Similarly, the virtual bridges “VB11” and “VB31” are identified as a common virtual bridge by the test packet in which the source and the destination are swapped.
  • For the test packet whose source host is “H22” and whose destination is the host “H51”, the transmission virtual bridge is “VB22” and the reception virtual bridge is “VB51”. Since the hosts “H22” and “H51” are L3-connected, the OFC 1-2 identifies the transmission virtual external and the reception virtual external as corresponding virtual nodes. That is, the virtual externals “VE22” and “VE51” are identified as a common virtual external. Similarly, the virtual externals “VE22” and “VE51” are identified as a common virtual external by the test packet in which the source and the destination are swapped.
  • For the test packet whose source host is “H31” and whose destination is the host “H41”, the transmission virtual bridge is “VB31” and the reception virtual bridge is “VB41”, so the virtual bridges “VB31” and “VB41” are identified as a common virtual bridge. Similarly, the virtual bridges “VB31” and “VB41” are identified as a common virtual bridge by the test packet in which the source and the destination are swapped.
  • Based on the information on the corresponding virtual nodes identified as described above (the corresponding virtual node information 105), the management apparatus 100 combines the VN topology information 14 transmitted from each of the OFCs 1-1 to 1-5 and can thereby create the topology information of the virtual tenant network “VTN1” shown in FIG. 10.
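The combining step can be sketched with a union-find: each pair in the corresponding virtual node information 105 declares two per-OFC virtual nodes to be the same node, and the merged canonical nodes form the VTN1-wide topology. This is a sketch under those assumptions, not the patented implementation:

```python
# Union-find sketch of the combining step: every pair in the corresponding
# virtual node information 105 merges two per-OFC nodes into one node of the
# overall VTN1 topology.

def merge_common_nodes(nodes, corresponding_pairs):
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]    # path halving
            n = parent[n]
        return n

    for a, b in corresponding_pairs:
        parent[find(a)] = find(b)
    # Map every node to a canonical representative.
    return {n: find(n) for n in nodes}

nodes = ["VB11", "VB21", "VB31", "VB41", "VE22", "VE51"]
pairs = [("VB11", "VB21"), ("VB11", "VB31"), ("VB31", "VB41"), ("VE22", "VE51")]
canon = merge_common_nodes(nodes, pairs)
```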
  • For example, the virtual bridges “VB11”, “VB21”, and “VB31” managed by the OFCs 1-1 to 1-3, respectively, are recognized as a common virtual bridge “VB11” to which the hosts “H11”, “H21”, “H31”, and “H41” are connected.
  • Likewise, the virtual externals “VE22” and “VE51” managed by the OFC 1-2 and OFC 1-5, respectively, are recognized as a common virtual external “VE22” to which the virtual bridges “VB21” and “VB51” are connected.
  • As described above, the management apparatus 100 generates the topology information of the designated virtual tenant network “VTN1” by joining, through the common virtual nodes, the VN topology information 14 managed by each OFC 1, and can output it in visible form.
  • the network manager can centrally manage the topology of the virtual network in the entire system shown in FIG.
  • The collection of the VN topology information 14 and the corresponding virtual node information 105 by the management apparatus 100 may be performed at any time or periodically. When performed periodically, the topology information can automatically follow changes in the virtual network.
  • The management apparatus 100 shown in FIG. 1 is provided separately from the OFC 1, but the invention is not limited thereto; the management apparatus 100 may be provided in any of the OFCs 1-1 to 1-5.
  • the computer system is shown with five OFCs 1 provided, but the number of OFCs 1 connected to the network and the number of hosts 4 are not limited thereto.
  • the management apparatus 100 may collect and hold the VN topology information 14 managed for each OFC 1 in advance before acquiring the corresponding virtual node information 105.
  • The OFC 1 that manages the virtual network may notify, as destination addresses of the test packet, not only the host addresses on the operational virtual bridge but also the host addresses on the backup virtual bridge.
  • The OFC 1 includes information requesting the backup host addresses in the test packet information request, thereby acquiring the backup host addresses and making the backup virtual network communicable. In such a state, the topology of the backup system can be confirmed by the same method as described above.

Abstract

This invention relates to a controller that identifies a transmitting virtual node and a receiving virtual node for packet data by obtaining a packet-data reception notification from a switch, for packet data forwarded between virtual networks managed by different controllers. A management device sets the identified transmitting virtual node and receiving virtual node as shared virtual nodes, and links the virtual networks managed by the different controllers. It is thus possible to manage, in a unified manner, all of the virtual networks controlled by a plurality of controllers using OpenFlow technology.
PCT/JP2013/052527 2012-02-10 2013-02-05 Système informatique et procédé de visualisation de réseau virtuel WO2013118690A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2013557510A JP5811196B2 (ja) 2012-02-10 2013-02-05 コンピュータシステム、及び仮想ネットワークの可視化方法
US14/376,831 US9425987B2 (en) 2012-02-10 2013-02-05 Computer system and visualization method of virtual network
CN201380008944.7A CN104137479B (zh) 2012-02-10 2013-02-05 计算机系统和虚拟网络的可视化方法
EP13746843.5A EP2814204B1 (fr) 2012-02-10 2013-02-05 Système informatique et procédé de visualisation d'un réseau virtuel

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012027780 2012-02-10
JP2012-027780 2012-02-10

Publications (1)

Publication Number Publication Date
WO2013118690A1 true WO2013118690A1 (fr) 2013-08-15

Family

ID=48947454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/052527 WO2013118690A1 (fr) 2012-02-10 2013-02-05 Système informatique et procédé de visualisation de réseau virtuel

Country Status (5)

Country Link
US (1) US9425987B2 (fr)
EP (1) EP2814204B1 (fr)
JP (1) JP5811196B2 (fr)
CN (1) CN104137479B (fr)
WO (1) WO2013118690A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015162834A (ja) * 2014-02-28 2015-09-07 日本電気株式会社 パケット転送経路取得システムおよびパケット転送経路取得方法
JP2016192660A (ja) * 2015-03-31 2016-11-10 日本電気株式会社 ネットワークシステム、ネットワーク制御方法、制御装置および運用管理装置
JP2017507572A (ja) * 2014-01-28 2017-03-16 オラクル・インターナショナル・コーポレイション クラウドに基づく仮想オーケストレーターのための方法、システム、およびコンピュータ読取可能な媒体
JP2018088650A (ja) * 2016-11-29 2018-06-07 富士通株式会社 情報処理装置、通信制御方法及び通信制御プログラム
US11388082B2 (en) 2013-11-27 2022-07-12 Oracle International Corporation Methods, systems, and computer readable media for diameter routing using software defined network (SDN) functionality

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9515947B1 (en) * 2013-03-15 2016-12-06 EMC IP Holding Company LLC Method and system for providing a virtual network-aware storage array
US9800549B2 (en) * 2015-02-11 2017-10-24 Cisco Technology, Inc. Hierarchical clustering in a geographically dispersed network environment
US10812336B2 (en) * 2017-06-19 2020-10-20 Cisco Technology, Inc. Validation of bridge domain-L3out association for communication outside a network
US10567228B2 (en) 2017-06-19 2020-02-18 Cisco Technology, Inc. Validation of cross logical groups in a network
US10536563B2 (en) * 2018-02-06 2020-01-14 Nicira, Inc. Packet handling based on virtual network configuration information in software-defined networking (SDN) environments

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5948055A (en) * 1996-08-29 1999-09-07 Hewlett-Packard Company Distributed internet monitoring system and method
JP2006019866A (ja) * 2004-06-30 2006-01-19 Fujitsu Ltd 伝送装置
WO2011083780A1 (fr) * 2010-01-05 2011-07-14 日本電気株式会社 Système de communication, appareil de commande, procédé d'établissement de règle de traitement, procédé de transmission de paquet et programme
JP2011160363A (ja) 2010-02-03 2011-08-18 Nec Corp コンピュータシステム、コントローラ、スイッチ、及び通信方法
JP2011166384A (ja) 2010-02-08 2011-08-25 Nec Corp コンピュータシステム、及び通信方法
JP2011166692A (ja) 2010-02-15 2011-08-25 Nec Corp ネットワークシステム、ネットワーク機器、経路情報更新方法、及びプログラム
JP2012027780A (ja) 2010-07-26 2012-02-09 Toshiba Corp 画面転送装置、画面受信装置、画面転送方法及び画面転送プログラム

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09130421A (ja) * 1995-11-02 1997-05-16 Furukawa Electric Co Ltd:The 仮想ネットワーク管理方法
US7752024B2 (en) 2000-05-05 2010-07-06 Computer Associates Think, Inc. Systems and methods for constructing multi-layer topological models of computer networks
US20030115319A1 (en) 2001-12-17 2003-06-19 Dawson Jeffrey L. Network paths
US7219300B2 (en) 2002-09-30 2007-05-15 Sanavigator, Inc. Method and system for generating a network monitoring display with animated utilization information
WO2004056047A1 (fr) 2002-12-13 2004-07-01 Internap Network Services Corporation Commande d'acheminement tenant compte de la topologie
US7366182B2 (en) 2004-08-13 2008-04-29 Qualcomm Incorporated Methods and apparatus for efficient VPN server interface, address allocation, and signaling with a local addressing domain
US7681130B1 (en) 2006-03-31 2010-03-16 Emc Corporation Methods and apparatus for displaying network data
US7852861B2 (en) * 2006-12-14 2010-12-14 Array Networks, Inc. Dynamic system and method for virtual private network (VPN) application level content routing using dual-proxy method
US10313191B2 (en) 2007-08-31 2019-06-04 Level 3 Communications, Llc System and method for managing virtual local area networks
US8161393B2 (en) 2007-09-18 2012-04-17 International Business Machines Corporation Arrangements for managing processing components using a graphical user interface
WO2009042919A2 (fr) 2007-09-26 2009-04-02 Nicira Networks Système d'exploitation de réseau pour la gestion et la sécurisation des réseaux
US8447181B2 (en) 2008-08-15 2013-05-21 Tellabs Operations, Inc. Method and apparatus for displaying and identifying available wavelength paths across a network
JP5408243B2 (ja) * 2009-03-09 2014-02-05 日本電気株式会社 OpenFlow通信システムおよびOpenFlow通信方法
US7937438B1 (en) 2009-12-07 2011-05-03 Amazon Technologies, Inc. Using virtual networking devices to manage external connections
JP5190084B2 (ja) 2010-03-30 2013-04-24 株式会社日立製作所 仮想マシンのマイグレーション方法およびシステム
US8407366B2 (en) 2010-05-14 2013-03-26 Microsoft Corporation Interconnecting members of a virtual network
US8837493B2 (en) 2010-07-06 2014-09-16 Nicira, Inc. Distributed network control apparatus and method
CA2818375C (fr) 2010-12-15 2014-06-17 ZanttZ, Inc. Moteur de simulation reseau
US8625597B2 (en) 2011-01-07 2014-01-07 Jeda Networks, Inc. Methods, systems and apparatus for the interconnection of fibre channel over ethernet devices
US9043452B2 (en) 2011-05-04 2015-05-26 Nicira, Inc. Network control apparatus and method for port isolation
US8593958B2 (en) 2011-09-14 2013-11-26 Telefonaktiebologet L M Ericsson (Publ) Network-wide flow monitoring in split architecture networks
US9178833B2 (en) 2011-10-25 2015-11-03 Nicira, Inc. Chassis controller
WO2013074828A1 (fr) 2011-11-15 2013-05-23 Nicira, Inc. Pare-feu dans des réseaux logiques
US8824274B1 (en) 2011-12-29 2014-09-02 Juniper Networks, Inc. Scheduled network layer programming within a multi-topology computer network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"VLAN, Multi Tenant, ''Mieru-ka'' Kizon Gijutsu ga Kakaeru Kadai o Kaiketsu, Nikkei Communications", TOSHINDAN NO OPENFLOW PART 2, 1 February 2012 (2012-02-01), pages 20 - 23, XP008174778 *
OpenFlow Switch Specification Version 1.1.0 Implemented (Wire Protocol 0x02), 28 February 2011 (2011-02-28)
See also references of EP2814204A4

Also Published As

Publication number Publication date
EP2814204A1 (fr) 2014-12-17
CN104137479B (zh) 2017-06-20
JPWO2013118690A1 (ja) 2015-05-11
US9425987B2 (en) 2016-08-23
US20150036538A1 (en) 2015-02-05
EP2814204B1 (fr) 2016-05-25
EP2814204A4 (fr) 2015-06-24
JP5811196B2 (ja) 2015-11-11
CN104137479A (zh) 2014-11-05

Similar Documents

Publication Publication Date Title
JP5967109B2 (ja) コンピュータシステム、及び仮想ネットワークの可視化方法
JP5811196B2 (ja) コンピュータシステム、及び仮想ネットワークの可視化方法
JP5300076B2 (ja) コンピュータシステム、及びコンピュータシステムの監視方法
JP5884832B2 (ja) コンピュータシステム、コントローラ、スイッチ、通信方法、及びネットワーク管理プログラムが格納された記録媒体
JP5874726B2 (ja) 通信制御システム、制御サーバ、転送ノード、通信制御方法および通信制御プログラム
US9215175B2 (en) Computer system including controller and plurality of switches and communication method in computer system
JP5590262B2 (ja) 情報システム、制御装置、仮想ネットワークの提供方法およびプログラム
JP5837989B2 (ja) コントローラでネットワークハードウェアアドレス要求を管理するためのシステム及び方法
EP3958509A1 (fr) Procédé, appareil et système de communication entre des contrôleurs dans tsn
JP5488979B2 (ja) コンピュータシステム、コントローラ、スイッチ、及び通信方法
JP2010178089A (ja) 遠隔管理システム、遠隔管理装置及び接続装置
JP5861772B2 (ja) ネットワークアプライアンス冗長化システム、制御装置、ネットワークアプライアンス冗長化方法及びプログラム
JP2011170718A (ja) コンピュータシステム、コントローラ、サービス提供サーバ、及び負荷分散方法
CN104901825B (zh) 一种实现零配置启动的方法和装置
WO2014054691A1 (fr) Programme, procédé de commande, appareil de commande et système de communication
JP6206493B2 (ja) 制御装置、通信システム、中継装置の制御方法及びプログラム
WO2015106506A1 (fr) Procédés de configuration d'informations de commande et d'établissement de communication, organe de commande de gestion et organe de commande
JP2017158103A (ja) 通信管理装置、通信システム、通信管理方法およびプログラム
JP2018113564A (ja) 通信システム、スイッチ、制御装置、通信方法、および、プログラム
JP2015226235A (ja) ネットワーク輻輳回避システム及び方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13746843

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14376831

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2013557510

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013746843

Country of ref document: EP