US20100046532A1 - Routing control system for l3vpn service network - Google Patents

Routing control system for L3VPN service network

Info

Publication number
US20100046532A1
Authority
US
United States
Prior art keywords
routing
controller
migration
routing server
logical
Prior art date
Legal status
Abandoned
Application number
US12/542,878
Inventor
Hideki Okita
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Assigned to HITACHI, LTD. Assignors: Okita, Hideki
Publication of US20100046532A1
Related application: US14/450,368 (US9185031B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/50: Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L 45/58: Association of routers
    • H04L 45/586: Association of routers of virtual routers
    • H04L 45/64: Routing or path finding of packets in data switching networks using an overlay routing layer
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0805: Monitoring or testing based on specific metrics by checking availability
    • H04L 43/0817: Monitoring or testing based on specific metrics by checking functioning
    • H04L 43/16: Threshold monitoring

Definitions

  • the present invention relates to a routing control system and, more particularly, to a routing control system that is suitable for a Layer 3 Virtual Private Network (L3VPN) accommodating a plurality of customer or user networks.
  • L3VPN Layer 3 Virtual Private Network
  • An IP network system is composed of a plurality of communication nodes such as routers and switches.
  • Each communication node is provided with a packet transport function unit for switching packets among a plurality of line interfaces and a control function unit connected to the packet transport function unit.
  • Each communication node updates its route information table maintained thereon by exchanging route information with the other communication nodes, using a distributed routing protocol such as OSPF (Open Shortest Path First), BGP (Border Gateway Protocol), or the like.
  • OSPF Open Shortest Path First
  • BGP Border Gateway Protocol
  • The reliability of the control function unit of each communication node influences the stability of the entire network. For example, if a malfunction occurs in one of the communication nodes due to a coding error in a control program or a shortage of memory capacity, the malfunction has an impact on routing control across the entire network system and may, depending on the circumstances, disrupt communication on a particular route.
  • C/U Control plane/User plane
  • A network control scheme based on C/U (Control plane/User plane) separation separates the routing function from the IP packet transport function of routers in an IP network.
  • An example of the C/U separation scheme is the provision of a server, called a route server, that centrally handles routing control in the IP network.
  • The route server collectively calculates route information for every communication node in the network when a link status changes in the IP network and distributes optimum route information to each communication node. Under this control scheme, the time required for route optimization can be reduced because each node reports link status changes to the route server, which then centrally controls routes in the network.
  • VPNs Virtual Private Networks
  • communication carriers provide various types of private communication networks (VPNs: Virtual Private Networks) as wide area connection services instead of traditional dedicated line services.
  • In a VPN service, because a plurality of customers can share network resources provided by a carrier, the carrier can offer communication services to a larger number of customers at a lower price and with reduced infrastructure investment.
  • L3 Layer 3
  • The L3VPN service can be implemented in several ways; a representative one is a peer-to-peer communication system using MPLS/BGP (Multi-Protocol Label Switching/Border Gateway Protocol), e.g., as described in "BGP/MPLS VPNs", RFC 2547, Internet Engineering Task Force (IETF), March 1999 (Non-Patent Document 1).
  • MPLS/BGP Multi-Protocol Label Switching/Border Gateway Protocol
  • IETF Internet Engineering Task Force
  • Other known implementations include, for example, an overlay type using IPsec and a separation type employing virtual routers.
  • A technique to recover communication by path switching when a route failure occurs is also known, as disclosed, for example, in Japanese Unexamined Patent Publication No. 2006-135686 (Patent Document 1).
  • Patent Document 1 Japanese Unexamined Patent Publication No. 2006-135686
  • In the L3VPN service, a routing control interface at the network edge is prescribed so that the carrier network appears as a single router to each user network (customer network).
  • Each of the routers in the user networks can exchange route information with a routing system located in the carrier network in accordance with a routing protocol such as OSPF or RIP.
  • each user can reduce management cost because all route information for the VPN provided by the carrier and a plurality of access points connected to the VPN can be managed by a single routing protocol.
  • To offer such a service, the route server (routing system) has to be provided with per-VPN routing control functions, including a VPN route information management function and a VPN route information calculation function.
  • the carrier has to operate the routing system (route server) so that route setup requests issued from the user networks do not interfere with each other.
  • the load of the routing system increases on account of various factors.
  • For example, the load of the routing system increases as the number of networks to be controlled grows. Further, if a loop occurs in an Ethernet (registered trademark) network serving as a user network, due to incorrect cable connection for example, there is a risk of a storm of routing control packets (routing requests) transmitted from routers in the user network. In that case, a burst of routing control packets not foreseen by the routing protocol will be transmitted to the routing system (route server), causing a surge in the processing load on the route server.
  • When a failure occurs, each router recalculates a route according to the routing protocol and advertises updated route information to the other routers in the network. If a failed router then performs routing control in a sequence different from the other routers, route calculation in the network may fail to converge. Transmission of a burst of routing control packets from a user network by a malicious user can also cause a surge in the load on the route server.
  • An object of the present invention is to provide a routing control system that prevents an increased routing control load in a particular user network from negatively affecting routing control for the other user networks in an L3VPN service network in which routing control is performed by routing servers.
  • To attain this object, the present invention provides a routing control system to be located in an L3VPN service network connected to a plurality of user networks, comprising a system controller, a master routing server, and a slave routing server,
  • wherein the master routing server includes a plurality of logical controllers, each of which is associated with one of the plurality of user networks to perform routing control for that user network,
  • and the system controller monitors a load state of the master routing server and migrates at least one logical controller, selected out of the plurality of logical controllers operating on the master routing server, from the master routing server to the slave routing server when the load state satisfies a predetermined condition, so that the slave routing server inherits routing control for the particular user network associated with the migrated logical controller by activating that logical controller.
  • Each of the logical controllers is provided with the above-mentioned VPN route information management function and VPN route information calculation function.
  • In this way, a routing server is configured to perform routing control with a plurality of individual logical controllers, each of which is associated with a specific one of the user networks. Therefore, when an extraordinary number of routing requests occurs in a particular user network, the system controller can migrate a logical controller from the master routing server to the slave routing server so as to reduce the load of the master routing server and to avoid impact on the other user networks.
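  • The division of work described above can be illustrated with a short Python sketch. All names in the sketch (LogicalController, RoutingServer, SystemController, the threshold value, and the selection policy) are illustrative assumptions and are not taken from the embodiments; the sketch only shows the idea of hosting one logical controller per user network and moving one of them to the slave server when the master's load crosses a threshold.
```python
# Illustrative sketch only; class names and the threshold are assumptions.

class LogicalController:
    def __init__(self, vpn_id, cpu_utilization=0.0):
        self.vpn_id = vpn_id                  # user network (VPN) served by this controller
        self.cpu_utilization = cpu_utilization

class RoutingServer:
    def __init__(self, name):
        self.name = name
        self.controllers = {}                 # vpn_id -> LogicalController

    def cpu_load(self):
        return sum(c.cpu_utilization for c in self.controllers.values())

class SystemController:
    CPU_THRESHOLD = 80.0                      # assumed trigger condition

    def __init__(self, master, slave):
        self.master, self.slave = master, slave

    def check_and_migrate(self):
        """Move one logical controller to the slave when the master is overloaded."""
        if self.master.cpu_load() < self.CPU_THRESHOLD:
            return None
        if len(self.master.controllers) <= 1:
            return None                       # nothing useful to offload
        # One possible policy: move the controller with the highest utilization.
        victim = max(self.master.controllers.values(),
                     key=lambda c: c.cpu_utilization)
        del self.master.controllers[victim.vpn_id]
        self.slave.controllers[victim.vpn_id] = victim   # slave inherits routing control
        return victim.vpn_id
```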
  • The routing control system of the present invention is characterized in that the system controller includes a migration controller which issues migration commands for the selected logical controller to the master routing server and the slave routing server; in response to the migration commands from the migration controller, the master routing server transfers the selected logical controller to the slave routing server and the slave routing server activates the logical controller to inherit routing control for the particular user network.
  • the system controller includes a CPU load monitor which obtains CPU load information from the master routing server to determine whether the CPU load has reached a predefined threshold value, and when the CPU load has reached the predefined threshold value, the CPU load monitor selects at least one logical controller out of the plurality of logical controllers operating on the master routing server and issues a migration request for the selected logical controller to the migration controller.
  • The system controller may include, as an alternative to or in addition to the above CPU load monitor, a routing control packet monitor which obtains load information including the amount of routing control packets for each of said logical controllers from the master routing server, to determine whether there exists a logical controller for which the amount of routing control packets has reached a predefined threshold value.
  • In that case, the routing control packet monitor selects at least one logical controller out of the plurality of logical controllers operating on the master routing server and issues a migration request for the selected logical controller to the migration controller.
  • the system controller obtains load information indicating the amount of routing control packets from edge nodes to which the user networks are connected and migrates at least one logical controller selected out of the plurality of logical controllers operating on the master routing server from the master routing server to the slave routing server when the amount of routing control packets has satisfied a predetermined condition, and the slave routing server inherits routing control for a particular user network associated with the migrated logical controller by activating the logical controller.
  • the system controller may include, for example, a user network monitor which obtains load information indicating the amount of routing control packets from the edge nodes to determine whether the amount of routing control packets has reached a predefined threshold value, and the user network monitor selects at least one logical controller out of the plurality of logical controllers operating on the master routing server and issues a migration request for the selected logical controller to the migration controller when the amount of routing control packets has reached a predefined threshold value.
  • FIG. 1 shows a first embodiment of a communication network to which the present invention is applied
  • FIG. 2 shows an example of structure of a routing server 40 ;
  • FIG. 3 shows an example of structure of a logical controller 49 provided in the routing server 40 ;
  • FIG. 4 shows an example of structure of a system controller 50 ;
  • FIG. 5 shows an embodiment of a routing server management table 550 provided in the system controller 50 ;
  • FIG. 6 shows an embodiment of a server resource management table 540 provided in the system controller 50 ;
  • FIG. 7 shows an example of structure of an edge node 10 ;
  • FIG. 8 illustrates flows of routing control packets in the communication network shown in FIG. 1 ;
  • FIG. 9 illustrates a basic sequence for updating route information in the communication network of the present invention.
  • FIG. 10 shows an example of format of a routing control packet 100 to be transmitted from a user node 60 to an edge node 10 ;
  • FIG. 11 shows an example of format of a routing control information forwarding packet 110 to be forwarded from the edge node 10 to the routing server 40 - 1 ;
  • FIG. 12 schematically illustrates a method for migrating a logical controller 49 from a master routing server 40 - 1 to a slave routing server 40 - 2 ;
  • FIG. 13 is a sequence diagram illustrating migration of a logical controller to be executed in response to a migration request issued from a CPU load monitor 56 ;
  • FIG. 14 is a flowchart of migration check 560 to be executed by the CPU load monitor 56 ;
  • FIG. 15 is a sequence diagram illustrating migration of logical controller to be executed in response to a migration request issued from a routing control packet monitor 57 ;
  • FIG. 16 is a flowchart for migration check 570 to be executed by the routing control packet monitor 57 ;
  • FIG. 17 is a sequence diagram illustrating migration of logical controller to be executed in response to a migration request issued by a user network monitor 58 ;
  • FIG. 18 shows an example of format of a load information notification packet 120 to be transmitted from an edge node 10 to the system controller 50 ;
  • FIG. 19 shows an example of structure of a user network destination management table 530 to be referred to by the user network monitor 58 ;
  • FIG. 20 is a flowchart of migration check 580 to be executed by the user network monitor 58 ;
  • FIG. 21 shows a second embodiment of a communication network to which the present invention is applied.
  • FIG. 22 illustrates a state in which logical controllers 49 - 2 and 49 - 3 have migrated in the second embodiment
  • FIG. 23A and FIG. 23B illustrate the contents of a user management table 170 provided in the edge node 10 ;
  • FIG. 24 is a sequence diagram illustrating the migration of logical controller in the communication network of the second embodiment.
  • FIG. 1 shows a first embodiment of a communication network to which the present invention is applied.
  • The communication network of the first embodiment comprises a carrier network SNW, which provides the L3VPN service, and a plurality of user (or customer) networks NW (NW-a, NW-b, NW-c, and so forth).
  • the carrier network SNW includes a plurality of edge nodes 10 ( 10 a, 10 b, 10 c, and so forth), each of which accommodates one of the user networks, and a core node 20 for connecting the edge nodes.
  • Each user network NW comprises node equipment (hereinafter referred to as a user node) 60 (60a, 60b, 60c, and so forth) and one or more segments 61 connected to the user node 60.
  • The carrier network SNW is provided with a routing control system 30 for centrally handling routing control across the communication network.
  • the routing control system 30 comprises a plurality of routing servers 40 , each of which performs optimum route calculation, route information management and route information distribution, and a system controller 50 .
  • the routing control system 30 is provided with a master routing server 40 - 1 and a slave routing server 40 - 2 . Switching from the master to the slave is performed in units of logical controller by the system controller 50 .
  • Each logical controller is provided with VPN route information management function and VPN route information calculation function.
  • On the master routing server 40 - 1 a plurality of logical controllers corresponding to the user networks NWs are operating.
  • When the load of the master routing server increases, the system controller 50 migrates some of the logical controllers from the master routing server 40-1 to the slave routing server 40-2, so that the routing control processing load is distributed between the master routing server and the slave routing server.
  • the system controller 50 is connected to both the master and slave routing servers 40 - 1 , 40 - 2 via an internal switch 300 of the routing control system 30 . Communication between the system controller 50 and each routing server and migration of logical controllers from the master routing server 40 - 1 to the slave routing server 40 - 2 are carried out via the internal switch 300 , but these operations may be carried out via the core node 20 .
  • FIG. 2 shows an example of structure of a routing server 40 ( 40 - 1 , 40 - 2 ).
  • the routing server 40 comprises a processor (CPU) 41 , a network interface 42 for communicating with the core node 20 , an internal interface 43 for communicating with the system controller 50 and the other routing server via the internal switch 300 , and memories 44 A and 44 B.
  • the memory 44 A stores a main control unit 45 , a virtualization controller 46 , a load monitoring agent 47 , and a migration control agent 48 which are provided as programs relevant to the present invention to be executed by the processor 41 .
  • the memory 44 B stores a plurality of logical controllers 49 ( 49 - 1 , 49 - 2 , and so forth) for implementing control server functions independent for each user network.
  • the virtualization controller 46 makes each of the logical controllers 49 function as a logical control server according to the communication status of each user network, by controlling CPU resources, memory resources, and communication line resources to be allocated to each logical controller 49 .
  • the load monitoring agent 47 monitors the CPU load of the routing server 40 and the CPU load for each logical controller 49 and notifies the system controller 50 of the monitoring results periodically.
  • the migration control agent 48 controls the start and stop of a specific one of logical controllers 49 and its migration between the routing servers, in accordance with a command from the system controller 50 .
  • As shown in FIG. 3, each logical controller 49 comprises a route information manager 410, an OSPF controller 420 for calculating routes in accordance with the OSPF protocol, a routing control packet monitoring agent 430 for monitoring the input and output amounts of routing control packets and notifying the system controller 50 of the monitoring results, and a route information file (route table) 440.
  • Each logical controller 49 handles routing control packets received from a certain user network associated with it in advance to manage the route information for each user network.
  • the route information manager 410 corresponds to the above-mentioned VPN route information management function
  • the OSPF controller 420 corresponds to the VPN route information calculation function.
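  • As a hedged illustration of the structure of FIG. 3, the per-VPN logical controller can be sketched as follows in Python. The component names follow reference numerals 410 to 440, but the method bodies are simplified assumptions, not the actual route information manager or OSPF implementation.
```python
# Illustrative sketch of a logical controller 49; not the patented implementation.

class RoutingControlPacketMonitoringAgent:        # 430
    def __init__(self):
        self.rx_count = 0                         # routing control packets received
        self.tx_count = 0                         # routing control packets sent

class LogicalController:                          # 49
    def __init__(self, vpn_id):
        self.vpn_id = vpn_id
        self.route_table = {}                     # 440: route information file (prefix -> next hop)
        self.monitor = RoutingControlPacketMonitoringAgent()

    def handle_routing_control_packet(self, packet):
        """410: store the advertised link; 420 would recompute OSPF routes (omitted)."""
        self.monitor.rx_count += 1
        self.route_table[packet["link_info"]] = packet["node_id"]
```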
  • FIG. 4 shows an example of structure of the system controller 50 .
  • The system controller 50 comprises a processor (CPU) 51, a network interface 52 for communicating with the core node 20, an internal interface 53 for communicating with the routing servers 40 via the internal switch 300, and memories 54A and 54B.
  • the memory 54 A stores a main control unit 55 , a CPU load monitor 56 , a routing control packet monitor 57 , a user network monitor 58 , and a migration controller 59 which are provided as programs relevant to the present invention to be executed by the processor 51 .
  • the memory 54 B stores a CPU load management table 510 , a routing server load management table 520 , a user network destination management table 530 , a server resource management table 540 , and a routing server management table 550 .
  • The CPU load monitor 56 monitors the CPU load of each logical controller 49 running on the routing server 40 by using the CPU load management table 510, and detects a logical controller 49 to be moved from the master routing server to the slave routing server by migration.
  • When the CPU load monitor 56 detects a logical controller 49 whose CPU load has exceeded a predefined threshold value, for example, it selects one or more logical controllers to be moved to the slave routing server and issues a migration request to the migration controller 59.
  • the routing control packet monitor 57 monitors the amount of routing control packets transmitted and received between each of logical controllers 49 on the routing server 40 and the user network, by using the routing server load management table 520 , and detects a logical controller 49 to be moved from the master routing server to the slave routing server.
  • When the routing control packet monitor 57 detects a logical controller 49 for which the amount of transmitted and received routing control packets has exceeded a predefined threshold value, for example, it selects one or more logical controllers to be moved to the slave routing server and issues a migration request to the migration controller 59.
  • the user network monitor 58 monitors the amount of routing control packets transmitted and received between the edge nodes 10 and the user network NW, by using the user network destination management table 530 , and detects a logical controller 49 to be moved from the master routing server to the slave routing server.
  • When the user network monitor 58 detects a logical controller 49 for which the amount of transmitted and received routing control packets has exceeded a predefined threshold value, for example, it selects one or more logical controllers to be moved to the slave routing server and issues a migration request to the migration controller 59.
  • Upon receiving a migration request from the CPU load monitor 56, the routing control packet monitor 57, or the user network monitor 58, the migration controller 59 determines whether the logical controller specified by the migration request is allowed to migrate from the master routing server to the slave routing server, by referring to the server resource management table 540 and the routing server management table 550.
  • If the migration is allowed, the migration controller 59 issues migration commands for performing migration in units of logical controller to the master routing server 40-1 and the slave routing server 40-2.
  • In response, the migration control agent 48 on each routing server is instructed to stop or start the logical controller specified as the migration target; the logical controller (the software structure shown in FIG. 3) is migrated from the master routing server 40-1 to the slave routing server 40-2 and is activated on the slave routing server 40-2.
  • the server resource management table 540 stores the utilization status of CPU resources on the master routing server 40 - 1 and the slave routing server 40 - 2 .
  • the routing server management table 550 stores information about logical controllers 49 operating on the master routing server and the slave routing server.
  • FIG. 5 exemplifies an embodiment of the routing server management table 550 provided in the system controller 50 .
  • The routing server management table 550 comprises a plurality of table entries, each table entry indicating the relation among the identifier (VPN ID) 551 of a user network connected to the carrier network SNW, the identifier (routing server ID) 552 of the routing server 40 (40-1 or 40-2), and an IP address 553 assigned as a logical controller address to the logical controller 49 operating on that routing server.
  • the routing server management table 550 shown here indicates that three user networks (NW-a, NW-b and NW-c) having VPN IDs “a”, “b” and “c” are connected to the carrier network SNW.
  • When the logical controller 49-3 has migrated from the master routing server 40-1 to the slave routing server 40-2, for example, the value of the routing server ID 552 in the table entry EN-c is changed from "1" to "2".
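  • A minimal sketch of the routing server management table 550, assuming it is held as a dictionary keyed by VPN ID (the addresses below are invented examples), shows that a migration only rewrites the routing server ID of the affected entry:
```python
# Sketch of table 550 of FIG. 5; addresses are invented examples.
routing_server_management_table = {
    # VPN ID: {routing server ID 552, logical controller address 553}
    "a": {"routing_server_id": 1, "logical_controller_address": "10.0.0.1"},
    "b": {"routing_server_id": 1, "logical_controller_address": "10.0.0.2"},
    "c": {"routing_server_id": 1, "logical_controller_address": "10.0.0.3"},
}

def record_migration(table, vpn_id, new_server_id):
    """After the logical controller for vpn_id has migrated, only its
    routing server ID changes (e.g. from 1 to 2 for entry EN-c)."""
    table[vpn_id]["routing_server_id"] = new_server_id

record_migration(routing_server_management_table, "c", 2)
```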
  • FIG. 6 shows an embodiment of the server resource management table 540 provided in the system controller 50 .
  • the server resource management table 540 comprises a plurality of table entries each corresponding to a routing server ID 541 .
  • Each table entry indicates total resources 542 representing a total amount of CPU resources available in a routing server (the master or slave routing server in this embodiment) having the routing server ID 541 , allocated resources 543 representing the amount of CPU resources already allocated to logical controllers 49 in the routing server, and CPU utilization rate 544 for each logical controller.
  • the CPU utilization rate 544 for each logical controller is expressed by pairs of the ID of logical controller and its CPU utilization rate.
  • the logical controller ID is represented by a serial number for simplification purposes and CPU utilization rate is shown in parentheses.
  • the logical controller address as shown in FIG. 5 can be used as the logical controller ID.
  • The server resource management table 540 shown here indicates that: the amount of CPU resources is "100" for both the master routing server 40-1 and the slave routing server 40-2; "90" of the CPU resources has already been allocated to the three logical controllers 49 on the master routing server 40-1; and no logical controller is operating, and no CPU resources are allocated, on the slave routing server 40-2.
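  • The server resource management table 540 can similarly be sketched as a dictionary that reproduces the example values quoted above; how the allocated "90" is split among the three logical controllers is not stated in the text, so the per-controller rates below are an assumed even split:
```python
# Sketch of table 540 of FIG. 6; per-controller rates are assumptions.
server_resource_management_table = {
    1: {  # master routing server 40-1
        "total_resources": 100,
        "allocated_resources": 90,
        "cpu_utilization": {1: 30, 2: 30, 3: 30},   # logical controller ID -> rate
    },
    2: {  # slave routing server 40-2
        "total_resources": 100,
        "allocated_resources": 0,
        "cpu_utilization": {},                      # no logical controllers yet
    },
}

def surplus_resources(table, server_id):
    """CPU resources still available on a routing server, used by the
    migration controller 59 when deciding whether a migration is executable."""
    entry = table[server_id]
    return entry["total_resources"] - entry["allocated_resources"]
```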
  • FIG. 7 shows an example of structure of an edge node 10 .
  • the edge node 10 comprises a plurality of network interfaces 11 ( 11 - 1 to 11 - n ), a packet transport unit 12 connected to these network interfaces 11 , and a control unit 13 connected to the packet transport unit 12 .
  • the control unit 13 comprises a processor 14 and memories 15 A and 15 B.
  • the memory 15 A stores a main control unit 16 , a route information processing unit 17 , and a load monitoring agent 18 which are provided as programs relevant to the present invention to be executed by the processor 14 .
  • the memory 15 B stores a route information file 150 and a control packet counter 160 .
  • the route information processing unit 17 updates the route information file 150 in accordance with a command from the routing server 40 associated with the edge node 10 .
  • the load monitoring agent 18 counts the number of routing control packets communicated with the user node in the user network for a given period of time by means of the control packet counter 160 and periodically notifies the system controller 50 in the routing control system 30 of the amount of the routing control packets.
  • FIG. 8 illustrates flows of routing control packets in the communication network shown in FIG. 1 .
  • The user node 60 (60a to 60c) of a user network generates a routing control packet (routing request) indicating a configuration change of the user network, for example, when a new segment (link) is added to the user network.
  • This routing control packet is transmitted to the routing server 40 - 1 operating as the master, as denoted by a dashed-dotted line, in accordance with a routing protocol prescribed by service specifications of the carrier.
  • Upon receiving the routing control packet from the user node 60, the master routing server 40-1 (more precisely, the corresponding logical controller 49) calculates route information for each node (router) within the carrier network SNW, according to a route calculation algorithm defined by the routing protocol. The new route information is distributed from the master routing server 40-1 to each node in the carrier network SNW, as denoted by a dashed line.
  • After a logical controller has migrated to the slave routing server 40-2, routing control packets from the particular user network corresponding to that logical controller are forwarded to the slave routing server 40-2.
  • FIG. 9 shows a basic sequence for updating route information in the communication network of the present invention.
  • When a new segment is added to a user network, a routing control packet including control information on the new segment to be served is transmitted from the user node 60 of the user network to the corresponding edge node 10 (SQ 02).
  • the edge node 10 updates the count value of routing control packet counter (SQ 03 ) and forwards the received routing control packet to the master routing server 40 - 1 (SQ 04 ).
  • the master routing server 40 - 1 updates the route information file (SQ 05 ) based on the control information specified in the routing control packet, and calculates a new route between the core node and the edge node in the carrier network (SQ 06 ).
  • the master routing server 40 - 1 distributes the route information indicating the new route to the core node 20 and the edge node 10 by a route information forwarding packet (SQ 07 ).
  • the core node 20 and the edge node 10 update the route database (route table) according to the new route information (SQ 08 ) and start the forwarding service of packets to be communicated through the new segment added to the user network.
  • FIG. 10 shows an example of format of a routing control packet 100 to be transmitted from the user node 60 to the edge node 10 .
  • The routing control packet 100 transmitted from the user node 60 when a new segment is added includes a node ID 101 indicating the source node of the routing control packet, a link type 102 indicating the type of link accommodating the new segment, a link ID 103 uniquely assigned to the link, link information 104 indicating IP information of the new segment, and a metric 105 indicating weight information for the link.
  • FIG. 11 shows an example of format of a routing information forwarding packet 110 to be forwarded from the edge node 10 to the routing server 40 - 1 .
  • the routing information forwarding packet 110 includes the routing control packet 100 received from the user node 60 in its payload 114 .
  • The payload 114 is preceded by header information including the identifier (node ID 111) of the edge node that transmits the forwarding packet 110, a VPN ID 112 assigned to the user network to which the source user node 60 of the routing control packet 100 belongs, and a reception time 113 at which the edge node 10 received the routing control packet 100.
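  • The two packet formats of FIG. 10 and FIG. 11 can be sketched as follows; the field names track reference numerals 101 to 105 and 111 to 114, while the dictionary representation itself is only an assumption for illustration:
```python
# Sketch of the formats of FIG. 10 and FIG. 11; values and encoding are illustrative.
import time

def build_routing_control_packet(node_id, link_type, link_id, link_info, metric):
    """Routing control packet 100 sent by a user node when a new segment is added."""
    return {
        "node_id": node_id,      # 101: source user node
        "link_type": link_type,  # 102: type of link accommodating the new segment
        "link_id": link_id,      # 103: identifier uniquely assigned to the link
        "link_info": link_info,  # 104: IP information of the new segment
        "metric": metric,        # 105: weight information for the link
    }

def build_forwarding_packet(edge_node_id, vpn_id, routing_control_packet):
    """Forwarding packet 110: the edge node prepends its own ID, the VPN ID of
    the source user network, and the reception time, and carries packet 100
    in the payload."""
    return {
        "node_id": edge_node_id,            # 111
        "vpn_id": vpn_id,                   # 112
        "reception_time": time.time(),      # 113
        "payload": routing_control_packet,  # 114
    }
```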
  • the master routing server 40 - 1 and the slave routing server 40 - 2 notify the CPU load monitor 56 and the routing control packet monitor 57 in the system controller 50 of the CPU load information and the amount of input and output routing control packets counted at the servers.
  • Each edge node 10 notifies the user network monitor 58 in the system controller 50 of the amount of input and output routing control packets counted at the node.
  • Each of the CPU load monitor 56, the routing control packet monitor 57, and the user network monitor 58 in the system controller 50 checks whether a condition for switching from the master routing server to the slave routing server is satisfied for a particular logical controller. If the switching condition is satisfied at one of the monitors, that monitor selects at least one logical controller to be the migration target and issues a migration request to the migration controller 59. As the migration target, the particular logical controller for which the switching condition is satisfied may be selected; alternatively, at least one of the other logical controllers may be selected.
  • Upon receiving the migration request, the migration controller 59 determines whether migration of the logical controller specified by the migration request is allowed. If it is determined that the logical controller can migrate to the slave routing server, the migration controller 59 issues migration commands for the target logical controller to the master routing server 40-1 and the slave routing server 40-2. For example, when only one logical controller is operating on the master routing server, or when the memory space in the slave routing server is not sufficient to accept the migration of a new logical controller, the migration controller 59 determines that the target logical controller should not migrate to the slave routing server.
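  • The two refusal conditions mentioned above (only one logical controller left on the master, or insufficient space on the slave) suggest an admission check of the following form; this is a sketch under the assumption that the slave's remaining capacity can be expressed as a single surplus figure:
```python
# Sketch of the admission decision of the migration controller 59.
def migration_allowed(num_controllers_on_master, slave_surplus, required_resources):
    """Return True only if the selected logical controller may migrate."""
    if num_controllers_on_master <= 1:
        return False        # only one controller is operating on the master
    if slave_surplus < required_resources:
        return False        # the slave cannot accommodate the new controller
    return True
```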
  • FIG. 12 schematically illustrates a migration method of one of logical controllers 49 from the master routing server 40 - 1 to the slave routing server 40 - 2 .
  • the migration controller 59 in the system controller 50 issues migration commands for the logical controller 49 - 1 to the master routing server 40 - 1 and the slave routing server 40 - 2 .
  • On the master routing server 40-1, the migration control agent 48 instructs the virtualization controller 46 to stop the logical controller 49-1 and to move the logical controller 49-1 to the slave routing server 40-2.
  • the virtualization controller 46 of the master routing server stops the operation of the logical controller 49 - 1 and transfers configuration information (components 410 - 440 shown in FIG. 3 ) of the logical controller 49 - 1 stored in the memory 44 B to the slave routing server 40 - 2 via the internal switch 300 , as denoted by a solid-line arrow in FIG. 12 .
  • On the slave routing server 40-2, the migration control agent 48 instructs the virtualization controller 46 to accept and activate the migrated logical controller 49-1.
  • the virtualization controller 46 of the slave server stores the components of the logical controller 49 - 1 received from the master routing server via the system internal bus into the memory 44 B and starts the operation of the logical controller 49 - 1 upon the completion of storing all the components into the memory 44 B.
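  • The three-step migration of FIG. 12 (stop on the master, transfer the components 410 to 440, activate on the slave) can be sketched as below; the routing servers are modeled as plain dictionaries and the deep copy merely stands in for the transfer over the internal switch 300:
```python
# Hypothetical sketch of the migration steps of FIG. 12.
import copy

def migrate_logical_controller(master_controllers, slave_controllers, vpn_id):
    # 1. The master's virtualization controller 46 stops the logical controller.
    controller_state = master_controllers.pop(vpn_id)
    # 2. Its components (route manager, OSPF state, packet monitor, route table)
    #    are transferred to the slave; a deep copy models the transfer here.
    transferred = copy.deepcopy(controller_state)
    # 3. The slave stores all the components and only then activates the
    #    controller, so that routing control is inherited intact.
    slave_controllers[vpn_id] = transferred
    return transferred
```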
  • Migration of logical controller from the master routing server 40 - 1 to the slave routing server 40 - 2 may be performed such that, instead of migrating the target logical controller 49 - 1 in a high load state, the other logical controllers (logical controllers 49 - 2 , 49 - 3 in this example) in a relatively low load state are moved as denoted by dashed-line arrows in FIG. 12 .
  • Because the logical controller 49-1 in a high load state is frequently exchanging packets with the corresponding user node 60, if the logical controller 49-1 is selected as the migration target, routing control packets transmitted by the user node 60 may go unprocessed and be lost during the period from the stop of the logical controller 49-1 on the master routing server 40-1 until its start on the slave routing server 40-2.
  • If a migration request designating logical controllers other than the logical controller 49-1 as migration targets is issued in order to reduce the load of the master routing server 40-1, the probability of routing control packet loss can be reduced, because the logical controllers moved from the master routing server 40-1 to the slave routing server 40-2 are those under a relatively low load.
  • In this example, all logical controllers other than the logical controller 49-1 migrate simultaneously. However, these logical controllers may instead be moved to the slave routing server 40-2 one by one, each time a migration request is issued.
  • FIG. 13 is a sequence diagram illustrating logical controller migration from the master routing server 40 - 1 to the slave routing server 40 - 2 , which is executed in response to a migration request issued by the CPU load monitor 56 in the system controller 50 .
  • the load monitoring agent 47 of the master routing server 40 - 1 periodically calculates the load of the CPU (processor) 41 and the CPU utilization rate for each logical controller (SQ 10 ) and notifies the system controller 50 of them as CPU load information (SQ 11 ).
  • Upon receiving the CPU load information from the master routing server 40-1, the CPU load monitor 56 of the system controller 50 checks whether the condition for logical controller migration from the master routing server 40-1 to the slave routing server 40-2 is satisfied (SQ 12).
  • If the migration condition is not satisfied, the CPU load monitor 56 waits for the next notification of CPU load information from the master routing server 40-1. If the migration condition is satisfied, the CPU load monitor 56 selects a target logical controller to be moved to the slave routing server 40-2 (SQ 13) and issues a migration request to the migration controller 59 (SQ 14).
  • Upon receiving the migration request from the CPU load monitor 56, the migration controller 59 checks the surplus resources available on the slave routing server 40-2 by referring to the server resource management table 540, and determines whether the migration of the logical controller specified by the migration request is executable (SQ 15). Migration of the logical controller 49 from the master routing server 40-1 to the slave routing server 40-2 is executed only when sufficient surplus resources are available on the slave routing server 40-2.
  • When the migration is determined to be executable, the migration controller 59 issues migration commands for the particular logical controller to the master routing server 40-1 and the slave routing server 40-2 (SQ 16, SQ 17).
  • Upon receiving the migration commands, the master routing server 40-1 and the slave routing server 40-2 carry out migration of the particular logical controller through cooperation of the migration control agent 48 and the virtualization controller 46, as described with reference to FIG. 12 (SQ 18).
  • FIG. 14 shows a flowchart of migration check 560 to be executed by the CPU load monitor 56 of the system controller 50 upon receiving CPU load information from the routing server 40 - 1 .
  • The CPU load monitor 56 updates the CPU load management table 510 and the server resource management table 540 in accordance with the received CPU load information (step 561). After that, the CPU load monitor 56 compares the CPU load of the routing server (the master 40-1), which is the source of the CPU load information, with a predefined threshold value (562). When the CPU load is equal to or less than the threshold value, the CPU load monitor 56 terminates the migration check 560 and waits for the next notification of CPU load information.
  • When the CPU load exceeds the threshold value, the CPU load monitor 56 checks the number of logical controllers 49 operating on the routing server 40-1 by referring to the server resource management table 540 shown in FIG. 6 (563). If only one logical controller is operating, the CPU load monitor 56 terminates the migration check 560 and waits for the next notification of CPU load information.
  • If a plurality of logical controllers are operating, the CPU load monitor 56 compares the respective CPU utilization rates 544 stored in the server resource management table 540 with each other (564) and selects a logical controller to be the migration target (565). After that, the CPU load monitor issues a migration request for the logical controller selected in step 565 to the migration controller 59 (566) and terminates the current migration check 560.
  • If the logical controller having the largest CPU utilization rate was selected as the migration target in step 565, for example, the migration can be performed as denoted by the solid-line arrow in FIG. 12.
  • If the logical controller having the largest CPU utilization rate is instead left on the routing server 40-1 and the other logical controllers are selected as migration targets, the migration can be performed as denoted by the dashed-line arrows in FIG. 12.
  • Logical controller migration is performed for the purposes of reducing the CPU load of the master routing server 40 - 1 and lessening the influence on other logical controllers. Accordingly, in the case of leaving the logical controller having the largest CPU utilization rate on the master routing server 40 - 1 , at least one logical controller may be selected from among the other logical controllers.
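  • The migration check 560 of FIG. 14 can be summarized by the following sketch; the threshold value and the policy flag for keeping the busiest controller on the master are assumptions, and the step numbers from the flowchart appear as comments:
```python
# Sketch of migration check 560; threshold and policy flag are assumed.
CPU_THRESHOLD = 80.0

def migration_check_560(server_cpu_load, utilization_by_controller,
                        keep_busiest_on_master=False):
    """Return the logical controller IDs to request for migration (possibly none)."""
    if server_cpu_load <= CPU_THRESHOLD:          # 562
        return []
    if len(utilization_by_controller) <= 1:       # 563
        return []
    # 564/565: compare per-controller utilization rates and select the target(s).
    busiest = max(utilization_by_controller, key=utilization_by_controller.get)
    if keep_busiest_on_master:
        # dashed-line case of FIG. 12: move every controller except the busiest
        return [cid for cid in utilization_by_controller if cid != busiest]
    return [busiest]                              # solid-line case of FIG. 12
```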
  • FIG. 15 is a sequence diagram illustrating logical controller migration from the master routing server 40 - 1 to the slave routing server 40 - 2 , which is executed in response to a migration request issued from the routing control packet monitor 57 in the system controller 50 .
  • This migration request is issued when the amount of routing control packets transmitted and received by the master routing server has increased.
  • the routing control packet monitoring agent 430 in each logical controller 49 counts the number of routing control packets transmitted and received by the logical controller for a given period of time (SQ 20 ).
  • the number of routing control packets monitored by each logical controller is periodically notified as load information to the system controller 50 by the load monitoring agent 47 of the server 40 - 1 (SQ 21 ).
  • Upon receiving the number of routing control packets for each logical controller as load information from the routing server 40-1, the routing control packet monitor 57 of the system controller 50 checks whether the condition for logical controller migration from the master routing server 40-1 to the slave routing server 40-2 is satisfied (SQ 22).
  • If the migration condition is not satisfied, the routing control packet monitor 57 waits for the next notification of load information from the master routing server 40-1. If the migration condition is satisfied, the routing control packet monitor 57 selects a target logical controller to be moved to the slave routing server 40-2 (SQ 23) and issues a migration request to the migration controller 59 (SQ 24).
  • Upon receiving the migration request from the routing control packet monitor 57, the migration controller 59 determines whether the migration of the logical controller is executable (SQ 25), as described with reference to FIG. 13.
  • When the migration is determined to be executable, the migration controller 59 issues migration commands for the particular logical controller to the master routing server 40-1 and the slave routing server 40-2 (SQ 26, SQ 27).
  • Upon receiving the migration commands, the master routing server 40-1 and the slave routing server 40-2 carry out migration of the particular logical controller through cooperation of the migration control agent 48 and the virtualization controller 46, as described with reference to FIG. 12 (SQ 28).
  • FIG. 16 shows a flowchart of migration check 570 to be executed by the routing control packet monitor 57 in the system controller 50 upon receiving the load information indicating the number of routing control packets for each logical controller from the routing server 40 - 1 .
  • The routing control packet monitor 57 updates the routing server load management table 520 in accordance with the load information received from the routing server 40-1 (571). After that, the routing control packet monitor 57 compares the number of routing control packets for each logical controller with a predefined threshold value (572). If there is no logical controller for which the number of routing control packets exceeds the threshold value, the routing control packet monitor 57 terminates the migration check 570 and waits for the reception of the next load information.
  • If there is a logical controller for which the number of routing control packets exceeds the threshold value, the routing control packet monitor 57 selects a logical controller to be the migration target (574), issues a migration request for the logical controller selected in step 574 to the migration controller 59 (575), and terminates the current migration check 570.
  • If the logical controller for which the number of routing control packets exceeds the threshold value was selected as the migration target in step 574, the migration can be performed as denoted by the solid-line arrow in FIG. 12.
  • If the logical controller for which the number of routing control packets exceeds the threshold value is instead left on the routing server 40-1 and the other logical controllers are selected as migration targets, the migration can be performed as denoted by the dashed-line arrows in FIG. 12.
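  • The corresponding sketch for the migration check 570 of FIG. 16 differs only in the quantity compared against the threshold, here the per-controller routing control packet count (the threshold value is again an assumption):
```python
# Sketch of migration check 570; the packet threshold is an assumed value.
PACKET_THRESHOLD = 1000

def migration_check_570(packets_by_controller, keep_busy_on_master=False):
    """packets_by_controller: logical controller ID -> packets in the last period."""
    over = [cid for cid, count in packets_by_controller.items()
            if count > PACKET_THRESHOLD]          # 572
    if not over:
        return []                                 # no controller exceeds the threshold
    if keep_busy_on_master:
        # leave the overloaded controller(s) in place and move the others
        return [cid for cid in packets_by_controller if cid not in over]
    return over                                   # 574: move the overloaded controller(s)
```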
  • FIG. 17 is a sequence diagram illustrating logical controller migration from the master routing server 40 - 1 to the slave routing server 40 - 2 , which is executed in response to the migration request issued from the user network monitor 58 in the system controller 50 .
  • the migration request is issued when the amount of routing control packets transmitted and received by the edge node 10 has increased.
  • Each edge node 10 counts the number of routing control packets transmitted and received for a given period of time (SQ 30 ) and periodically transmits load information indicating the number of routing control packets to the system controller 50 (SQ 31 ).
  • Upon receiving the load information, the user network monitor 58 of the system controller 50 checks whether the condition for logical controller migration from the master routing server 40-1 to the slave routing server 40-2 is satisfied (SQ 32). When the migration condition is not satisfied and migration of a logical controller is determined to be unnecessary, the user network monitor 58 waits for the next notification of load information. If the migration condition is satisfied, the user network monitor 58 selects a target logical controller to be moved to the slave routing server 40-2 (SQ 33) and issues a migration request to the migration controller 59 (SQ 34).
  • Upon receiving the migration request from the user network monitor 58, the migration controller 59 determines whether the migration of the logical controller is executable (SQ 35), as described with reference to FIG. 13. When it is determined that the migration of the particular logical controller specified by the migration request from the master routing server 40-1 to the slave routing server 40-2 is executable, the migration controller 59 issues migration commands for the particular logical controller to the master routing server 40-1 and the slave routing server 40-2 (SQ 36, SQ 37).
  • Upon receiving the migration commands from the system controller 50, the master routing server 40-1 and the slave routing server 40-2 carry out migration of the particular logical controller through cooperation of the migration control agent 48 and the virtualization controller 46, as described with reference to FIG. 12 (SQ 38).
  • FIG. 18 shows an example of format of a load information notification packet 120 to be transmitted from the edge node 10 to the system controller 50 .
  • the load information notification packet 120 transmitted from the edge node 10 includes the identifier 121 of the source edge node 10 and load information for each logical interface.
  • The load information for each logical interface indicates, in association with an interface ID 122, a routing protocol type 123, an input count 124, and an output count 125 of routing control packets.
  • Although a packet format specific to the present embodiment is adopted here as the load information notification packet 120, a message format specified by a packet sampling protocol such as sFlow, or by a packet statistics reporting protocol, may be used instead.
  • Since each edge node 10 notifies the user network monitor 58 in the system controller 50 of only the amount of transmitted and received routing control packets, the size of the reporting packet can be reduced by using the counter information in the sFlow protocol.
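  • A possible in-memory representation of the load information notification packet 120 of FIG. 18 is sketched below; interface names and counts are invented example data, and the dictionary layout is not meant to mirror the wire format:
```python
# Sketch of packet 120 of FIG. 18; example values are invented.
def build_load_notification(edge_node_id, interface_loads):
    return {
        "edge_node_id": edge_node_id,   # 121: identifier of the source edge node
        "interfaces": interface_loads,  # one record per logical interface
    }

example_packet = build_load_notification(
    "edge-10a",
    [
        {"interface_id": "if-1",        # 122
         "routing_protocol": "OSPF",    # 123
         "input_count": 240,            # 124: routing control packets received
         "output_count": 180},          # 125: routing control packets sent
    ],
)
```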
  • FIG. 19 shows an example of structure of the user network destination management table 530 to be referred to by the user network monitor 58 .
  • the user network destination management table 530 is prepared for each edge node in the carrier network SNW.
  • the user network monitor 58 is able to correlate a load information notification packet 120 received from each edge node to a user network, by referring to the user network destination management table 530 .
  • the user network destination management table 530 comprises a plurality of table entries, each having an interface ID 531 to identify a logical network interface of each edge node.
  • Each table entry includes a physical port ID 532 indicating a physical network interface corresponding to the interface ID 531 , a logical controller ID 533 which is the identifier of a logical controller to perform routing for a user network connected to the physical network interface, and an identifier (VPN ID) 534 to uniquely identify the user network.
  • FIG. 20 shows a flowchart of migration check 580 to be executed by the user network monitor 58 of the system controller 50 when a load information notification packet 120 was received from the edge node 10 .
  • The user network monitor 58 extracts the input and output counts of routing control packets for each interface of the source edge node from the load information notification packet 120 received from the edge node 10 (581). By comparing the packet counts with a predefined threshold value, the user network monitor 58 determines whether there is an interface for which the input and output counts of routing control packets exceed the threshold value (582). If there is no such interface, the user network monitor 58 terminates the current migration check and waits for the reception of the next load information notification packet.
  • If there is such an interface, the user network monitor 58 specifies the identifier (VPN ID) of the user network connected to the interface and the ID of the logical controller serving as the destination of its routing control packets, by searching the user network destination management table 530 for a table entry whose interface ID 531 matches the ID of the interface (583).
  • the user network monitor 58 refers to the routing server management table 550 and determines the number of logical controllers operating on the routing server (the master routing server 40 - 1 in this example) associated with the identifier (VPN-ID) of the user network connected to the interface ( 584 ).
  • the number of logical controllers operating on the same routing server can be specified from the number of table entries having the same routing server ID registered in the routing server management table 550 .
  • The user network monitor 58 then checks the number of logical controllers operating on the same routing server (585). If the number of logical controllers is only one, the user network monitor 58 terminates the current migration check. If a plurality of logical controllers are operating on the same routing server, the user network monitor 58 selects a logical controller to be the migration target from among those logical controllers (586), issues a migration request for the selected logical controller to the migration controller 59 (587), and terminates the migration check.
  • In a case where the logical controller having the logical controller ID specified in step 583 was selected as the migration target in step 586, the migration can be performed as denoted by the solid-line arrow in FIG. 12. In a case where the other logical controllers were selected as migration targets, the migration can be performed as denoted by the dashed-line arrows in FIG. 12.
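  • Combining the tables of FIG. 19 and FIG. 5, the migration check 580 of FIG. 20 can be sketched as follows; the threshold is an assumed value and the table layouts are simplified (the routing server management table is reduced to a VPN-ID-to-server-ID mapping):
```python
# Sketch of migration check 580; threshold and table layouts are simplified assumptions.
EDGE_PACKET_THRESHOLD = 500

def migration_check_580(notification, destination_table, server_table):
    """notification: load information notification packet 120 from an edge node.
    destination_table: interface ID -> {"logical_controller_id", "vpn_id"}   (530)
    server_table: VPN ID -> routing server ID                                (550)"""
    for rec in notification["interfaces"]:                      # 581
        if rec["input_count"] + rec["output_count"] <= EDGE_PACKET_THRESHOLD:
            continue                                            # 582
        entry = destination_table[rec["interface_id"]]          # 583
        server_id = server_table[entry["vpn_id"]]
        # 584/585: count the logical controllers sharing that routing server.
        peers = [v for v, s in server_table.items() if s == server_id]
        if len(peers) <= 1:
            continue                                            # nothing to offload
        return entry["logical_controller_id"]                   # 586/587
    return None
```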
  • FIG. 21 shows a second embodiment of a communication network to which the present invention is applied.
  • In the first embodiment, the slave routing server 40-2 can inherit the IP address of a migrated logical controller without changing the route setting at the core node 20 or the internal switch 300.
  • In the second embodiment, the plurality of routing servers 40-1 and 40-2 forming the routing control system 30 are located separately at sites distant from each other.
  • the system controller 50 is connected to a core node 20 - 1 in a control network 70 - 1
  • the master routing server 40 - 1 is connected to a core node 20 - 2 in a control network 70 - 2
  • the slave routing server 40 - 2 is connected to a core node 20 - 3 in a control network 70 - 3 .
  • These control networks 70 - 1 to 70 - 3 are connected to an edge node 10 in the carrier network.
  • Because the connection point of the master routing server 40-1 and the connection point of the slave routing server 40-2 belong to different IP segments, when a logical controller has migrated from the master routing server 40-1 to the slave routing server 40-2, the slave routing server 40-2 cannot inherit the previous IP address of the migrated logical controller.
  • Accordingly, the slave routing server 40-2 assigns a new IP address to a logical controller that has migrated from the master routing server 40-1.
  • the edge node is provided with a user management table 170 for indicating the correspondence of the VPN IDs with any routing server IP address, so that the routing control packet transmitted from a user network can be forwarded correctly to a corresponding logical controller, even after the logical controller has migrated from the master routing server 40 - 1 to the slave routing server 40 - 2 .
  • In the second embodiment, the master routing server 40-1, the slave routing server 40-2, and the system controller 50 communicate migration commands, and migrate logical controllers, via the network interfaces 42 and 52.
  • The user networks NW-a, NW-b and NW-c are associated with logical controllers on the master routing server, respectively.
  • Routing control packets transmitted from user nodes 60a, 60b and 60c are forwarded to the master routing server via the edge node 10 and the control network 70-2, as denoted by dashed lines in FIG. 21.
  • FIG. 22 shows the state in which two logical controllers 49 - 2 and 49 - 3 have migrated from the master routing server 40 - 1 to the slave routing server 40 - 2 .
  • Routing control packets transmitted from the user nodes 60b and 60c are forwarded to the corresponding logical controllers on the slave routing server via the edge node 10 and the control network 70-3, as denoted by dashed lines in FIG. 22.
  • Switching of forwarding routes for these routing control packets is realized by changing the contents of the user management table on the edge node 10 in association with the migration.
  • FIG. 23A and FIG. 23B illustrate the contents of the user management table 170 provided in the edge node 10 .
  • The user management table 170 comprises a plurality of table entries, each entry indicating the correspondence among a VPN ID 171, which is the identifier of a user network, an IP address 172 of the routing server to be the destination of routing control packets, and a user port ID indicating the port of the edge node 10 through which the user network (user node 60) is connected to the carrier network.
  • Upon receiving a routing control packet from a user node 60 (60a, 60b, 60c), the edge node 10 searches the user management table 170 for the table entry corresponding to the port ID of the port having received the routing control packet, specifies the IP address of the routing server to be the destination of the routing control packet, and forwards the routing control packet to the routing server corresponding to the user network to which the user node belongs.
  • FIG. 23A shows the user management table 170 before executing the migration.
  • The edge node 10 forwards all routing control packets received from the user networks NW-a, NW-b and NW-c to the master routing server 40-1, as shown in FIG. 21.
  • FIG. 23B shows the user management table 170 after migrating the logical controllers 49-2 and 49-3 corresponding to the user networks NW-b and NW-c to the slave routing server 40-2, as described in regard to FIG. 22.
  • The routing server IP address 172 in the table entries EN-02 and EN-03 has been rewritten to the IP address "192.168.100.1" of the slave routing server 40-2.
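  • The lookup performed by the edge node 10 and the rewrite of FIG. 23A into FIG. 23B can be sketched as follows. The slave's address "192.168.100.1" is taken from the figure; the port IDs, the master's address "192.168.99.1", and the dictionary layout are assumptions for illustration.

```python
# Simplified stand-in for user management table 170, keyed by user port ID.
TABLE_170 = {
    "port-1": {"vpn_id": "a", "routing_server_ip": "192.168.99.1"},   # EN-01
    "port-2": {"vpn_id": "b", "routing_server_ip": "192.168.99.1"},   # EN-02
    "port-3": {"vpn_id": "c", "routing_server_ip": "192.168.99.1"},   # EN-03
}

def lookup_destination(rx_port):
    """Edge node 10: pick the routing server for a routing control packet by receiving port."""
    return TABLE_170[rx_port]["routing_server_ip"]

def apply_address_change_request(vpn_id, new_server_ip):
    """Edge node 10: rewrite the routing server IP address for a VPN after migration."""
    for entry in TABLE_170.values():
        if entry["vpn_id"] == vpn_id:
            entry["routing_server_ip"] = new_server_ip

# Migrating the controllers for NW-b and NW-c (FIG. 23A -> FIG. 23B):
apply_address_change_request("b", "192.168.100.1")
apply_address_change_request("c", "192.168.100.1")
```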
  • FIG. 24 is a sequence diagram illustrating the migration of logical controller from the master routing server 40 - 1 to the slave routing server 40 - 2 , which is performed in response to a migration request issued from the user network monitor 58 of the system controller 50 , in the communication network of the second embodiment.
  • Steps SQ30 through SQ38 are the same as those described in regard to FIG. 17.
  • In the second embodiment, the migration commands (SQ36, SQ37) are transmitted from the migration controller 59 of the system controller 50 to the master and slave routing servers 40-1 and 40-2 via the network interface 52.
  • By means of the migration command (SQ37), the migration controller 59 instructs the slave routing server 40-2 to execute the migration of a particular logical controller and to change the IP address of that logical controller.
  • The migration controller 59 searches the routing server management table 550 for the identifier (VPN ID) of the user network corresponding to the migration-target logical controller, to specify the IP address of the edge node 10 accommodating the user network (SQ39).
  • The migration controller 59 then transmits to the edge node 10 an address change request command to change the routing server IP address corresponding to the VPN ID to the IP address of the slave routing server 40-2 (SQ40).
  • Upon receiving the request, the edge node 10 updates the user management table (SQ41) and thereafter forwards routing control packets received from the user network in accordance with the updated user management table 170.
  • Upon completing the migration of the logical controller specified by the migration command (SQ38), the slave routing server 40-2 assigns the migrated logical controller an IP address of the IP segment to which the slave routing server 40-2 belongs (SQ42) and notifies the migration controller 59 in the system controller 50 of the new IP address assigned to the logical controller (SQ43).
  • Finally, the migration controller 59 updates the logical controller address 553 in the routing server management table 550 in accordance with the IP address notified from the slave routing server 40-2 (SQ44).
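  • Steps SQ39 through SQ44 form a small coordination routine on the migration controller 59. The sketch below shows one way it could be written; the edge_ip field, the helper names send_address_change and wait_new_address, and the table layout are assumptions, since the description only states what information is exchanged.

```python
def finish_migration_second_embodiment(vpn_id, slave_ip, table_550,
                                       send_address_change, wait_new_address):
    """Post-migration coordination of FIG. 24 (SQ39-SQ44), simplified."""
    entry = table_550[vpn_id]
    edge_ip = entry["edge_ip"]            # SQ39: edge node accommodating the user network

    # SQ40: ask the edge node to redirect routing control packets for this VPN to
    # the slave routing server (the edge node updates table 170 in SQ41).
    send_address_change(edge_ip, vpn_id, slave_ip)

    # SQ42-SQ43: the slave routing server assigns the migrated controller an IP
    # address of its own segment and notifies the migration controller.
    new_controller_addr = wait_new_address(vpn_id)

    # SQ44: record the new logical controller address 553 in table 550.
    entry["routing_server_id"] = 2
    entry["logical_controller_addr"] = new_controller_addr
    return new_controller_addr
```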

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A routing control system comprising a system controller and master and slave routing servers, wherein the master routing server includes a plurality of logical controllers, each of which performs routing control for each of the user networks, the system controller monitors a load state of the master routing server and migrates at least one of the plurality of logical controllers from the master routing server to the slave routing server when the load state has satisfied a predetermined condition, so that the slave routing server inherits routing control for a particular user network associated with the migrated logical controller.

Description

    CLAIM OF PRIORITY
  • The present patent application claims priority from Japanese patent application JP 2008-213250, filed on Aug. 21, 2008, the content of which is hereby incorporated by reference into this application.
  • BACKGROUND OF THE INVENTION
  • (1) Field of the Invention
  • The present invention relates to a routing control system and, more particularly, to a routing control system that is suitable for a Layer 3 Virtual Private Network (L3VPN) accommodating a plurality of customer or user networks.
  • (2) Description of Related Art
  • An IP network system is composed of a plurality of communication nodes such as routers and switches. Each communication node is provided with a packet transport function unit for switching packets among a plurality of line interfaces and a control function unit connected to the packet transport function unit. Each communication node updates the route information table maintained thereon by communicating route information with the other communication nodes, using a distributive routing protocol such as OSPF (Open Shortest Path First), BGP (Border Gateway Protocol), or the like.
  • For a network system in which a distributive routing protocol is applied, the reliability of the control function unit of each communication node influences the stability of the entire network. For example, if a malfunction occurs in one of communication nodes due to a coding error in a control program or shortage of memory capacity, this malfunction has an impact on routing control across the entire network system and may give rise to disrupted communication on a particular route according to circumstances.
  • For such a distributive routing based network, a network control scheme called C/U (Control plane/User plane) separation is under study. This network control scheme separates the routing function from the IP packet transport function of routers in an IP network. An example of the C/U separation scheme is provision of a server called a route server for intensively handling route control in the IP network. The route server collectively calculates route information for every communication node in the network when a link status changes in the IP network and distributes optimum route information to each communication node. According to this control scheme, the time required for route optimization can be reduced, because each node notifies the route server of link status changes and the route server intensively controls routes in the network.
  • Meanwhile, communication carriers provide various types of private communication networks (VPNs: Virtual Private Networks) as wide area connection services instead of traditional dedicated line services. In such a VPN service, because a plurality of customers can share network resources provided by a carrier, each carrier can offer communication services to a larger number of customers at a lower price with reduced infrastructure investment cost.
  • One of the VPN services provided by a carrier is an L3 (Layer 3) VPN service that provides virtual IP networks to multiple customers. The L3VPN service can be implemented in several ways, and a representative one is a peer-to-peer communication system using MPLS/BGP (Multi-Protocol Label Switching/Border Gateway Protocol), e.g., as described in "BGP/MPLS VPNs", RFC 2547, Internet Engineering Task Force (IETF), March 1999 (Non-Patent Document 1). As other implementations, an overlay type using IPsec, a separation type employing virtual routers, and the like are known.
  • In order to improve communication reliability in the VPN services, a technique to recover communication by path switching when a route failure occurs is known, as disclosed, for example, in Japanese Unexamined Patent Publication No. 2006-135686 (Patent Document 1). By adopting the path switching technique, in the case where a communication line is disconnected or a fault occurs in a communication node, communication over the VPN via the faulty line or faulty node can be recovered.
  • In one of L3VPN services, a routing control interface at a network edge is prescribed so that the carrier network can be seen as a single router from each user network (customer network). In this case, each of routers in the user networks can communicate route information with a routing system located in the carrier network in accordance with a routing protocol such as OSPF or RIP. According to this architecture, each user can reduce management cost because all route information for the VPN provided by the carrier and a plurality of access points connected to the VPN can be managed by a single routing protocol.
  • In a case where a carrier builds a routing system for L3VPN service by employing the above-mentioned route server, in order to improve the reliability of the communication service and the manageability of the system, and opens the routing protocol interface of the route server to the routers in the user networks, the route server (routing system) has to be provided with the following functions:
    • (1) collecting routing control packets from each router in the user networks;
    • (2) VPN route information management for managing route information for each user network;
    • (3) VPN routing control for calculating route information for each user network; and
    • (4) route information distribution for converting route information for each user network into route information in the carrier network and reflecting the route information to each router in the user networks.
  • In a case where a plurality of user networks are connected to a single network, e.g., an L3VPN service network provided by the carrier, and routing control for the user networks is performed by a route server, the carrier has to operate the routing system (route server) so that route setup requests issued from the user networks do not interfere with each other. However, when a plurality of user networks are connected to the L3VPN service network, the load of the routing system increases on account of various factors.
  • If a new customer joins the L3VPN service, for example, the load of the routing system increases because of an increase in the number of networks to be controlled. Further, if a loop has occurred in an Ethernet (registered trademark) serving as a user network due to incorrect cable connection, for example, there is a risk that a storm of routing control packets (routing requests) transmitted from routers in the user network may occur. In this case, a burst of routing control packets not foreseen by the routing protocol will be transmitted into the routing system (route server), causing a surge in the processing load on the route server.
  • A failure in one of the routers in the user networks, or a bug in the routing control software running on the router, may also result in a burst of routing control packets. In a communication network in which a routing protocol such as OSPF or RIP is applied, when reception of keep-alive packets from a neighboring router ceases, each router recalculates routes according to the routing protocol and advertises updated route information to other routers in the network. In this case, if a failed router performs routing control in a sequence different from the other routers, there is a possibility that route calculation in the network will not converge. Transmission of a burst of routing control packets from a user network by a malicious user also causes a surge in the load on the route server.
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to provide a routing control system that prevents an increased routing control load for a particular user network from negatively affecting routing control for the other user networks in an L3VPN service network in which routing control is performed by routing servers.
  • In order to accomplish the above object, the present invention provides a routing control system to be located in an L3VPN service network connected to a plurality of user networks, comprising a system controller, a master routing server, and a slave routing server,
  • wherein the master routing server includes a plurality of logical controllers, each of which is associated with one of the plurality of user networks to perform routing control for the user network,
  • wherein the system controller monitors a load state of the master routing server and migrates at least one logical controller selected out of the plurality of logical controllers operating on the master routing server from the master routing server to the slave routing server when the load state has satisfied a predetermined condition, so that the slave routing server inherits routing control for a particular user network associated with the migrated logical controller by activating the logical controller. Here, each of the logical controllers is provided with the above-mentioned VPN route information management function and VPN route information calculation function.
  • According to the present invention, a routing server is configured so as to perform routing control by a plurality of individual logical controllers, each of which is associated with a specific one of the user networks. Therefore, when an extraordinary number of routing requests occur in a particular user network, the system controller can migrate a logical controller from the master routing server to the slave routing server so as to reduce the load of the master routing server and to avoid impact on other user networks.
  • More specifically, the routing control system of the present invention is characterized in that the system controller includes a migration controller which issues migration commands for the selected logical controller to the master routing server and the slave routing server, and in response to the migration commands from the migration controller, the master routing server transfers the selected logical controller to the slave routing server and the slave routing server activates the logical controller to inherit routing control for the particular user network.
  • In one exemplary embodiment of the present invention, the system controller includes a CPU load monitor which obtains CPU load information from the master routing server to determine whether the CPU load has reached a predefined threshold value, and when the CPU load has reached the predefined threshold value, the CPU load monitor selects at least one logical controller out of the plurality of logical controllers operating on the master routing server and issues a migration request for the selected logical controller to the migration controller.
  • In the routing control system of the present invention, the system controller may include, as an alternative to or in addition to the above CPU load monitor, a routing control packet monitor which obtains load information including the amount of routing control packets for each of said logical controllers from the master routing server to determine whether there exists a logical controller for which the amount of routing control packets has reached a predefined threshold value. When the amount of routing control packets of any logical controller has reached the predefined threshold value, the routing control packet monitor selects at least one logical controller out of the plurality of logical controllers operating on the master routing server and issues a migration request for the selected logical controller to the migration controller.
  • In another exemplary embodiment of the present invention, the system controller obtains load information indicating the amount of routing control packets from edge nodes to which the user networks are connected and migrates at least one logical controller selected out of the plurality of logical controllers operating on the master routing server from the master routing server to the slave routing server when the amount of routing control packets has satisfied a predetermined condition, and the slave routing server inherits routing control for a particular user network associated with the migrated logical controller by activating the logical controller.
  • In this case, the system controller may include, for example, a user network monitor which obtains load information indicating the amount of routing control packets from the edge nodes to determine whether the amount of routing control packets has reached a predefined threshold value, and the user network monitor selects at least one logical controller out of the plurality of logical controllers operating on the master routing server and issues a migration request for the selected logical controller to the migration controller when the amount of routing control packets has reached a predefined threshold value.
  • According to the present invention, when the processing load of the routing server increases due to a larger number of routing control packets (routing requests) transmitted from a particular user network, it becomes possible to avoid impact on the other user networks by migrating at least one of logical controllers from the master routing server to the slave routing server.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a first embodiment of a communication network to which the present invention is applied;
  • FIG. 2 shows an example of structure of a routing server 40;
  • FIG. 3 shows an example of structure of a logical controller 49 provided in the routing server 40;
  • FIG. 4 shows an example of structure of a system controller 50;
  • FIG. 5 shows an embodiment of a routing server management table 550 provided in the system controller 50;
  • FIG. 6 shows an embodiment of a server resource management table 540 provided in the system controller 50;
  • FIG. 7 shows an example of structure of an edge node 10;
  • FIG. 8 illustrates flows of routing control packets in the communication network shown in FIG. 1;
  • FIG. 9 illustrates a basic sequence for updating route information in the communication network of the present invention;
  • FIG. 10 shows an example of format of a routing control packet 100 to be transmitted from a user node 60 to an edge node 10;
  • FIG. 11 shows an example of format of a routing control information forwarding packet 110 to be forwarded from the edge node 10 to the routing server 40-1;
  • FIG. 12 schematically illustrates a method for migrating a logical controller 49 from a master routing server 40-1 to a slave routing server 40-2;
  • FIG. 13 is a sequence diagram illustrating migration of a logical controller to be executed in response to a migration request issued from a CPU load monitor 56;
  • FIG. 14 is a flowchart of migration check 560 to be executed by the CPU load monitor 56;
  • FIG. 15 is a sequence diagram illustrating migration of logical controller to be executed in response to a migration request issued from a routing control packet monitor 57;
  • FIG. 16 is a flowchart for migration check 570 to be executed by the routing control packet monitor 57;
  • FIG. 17 is a sequence diagram illustrating migration of logical controller to be executed in response to a migration request issued by a user network monitor 58;
  • FIG. 18 shows an example of format of a load information notification packet 120 to be transmitted from an edge node 10 to the system controller 50;
  • FIG. 19 shows an example of structure of a user network destination management table 530 to be referred to by the user network monitor 58;
  • FIG. 20 is a flowchart of migration check 580 to be executed by the user network monitor 58;
  • FIG. 21 shows a second embodiment of a communication network to which the present invention is applied;
  • FIG. 22 illustrates a state in which logical controllers 49-2 and 49-3 have migrated in the second embodiment;
  • FIG. 23A and FIG. 23B illustrate the contents of a user management table 170 provided in the edge node 10; and
  • FIG. 24 is a sequence diagram illustrating the migration of logical controller in the communication network of the second embodiment.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Exemplary embodiments of the present invention will be described hereinafter with reference to the drawings.
  • First Embodiment FIG. 1 shows a first embodiment of a communication network to which the present invention is applied.
  • The communication network of the first embodiment comprises a carrier network SNW which provides L3VPN service and a plurality of user (or customer) networks NWs (NW-a, NW-b, NW-c, and so forth). The carrier network SNW includes a plurality of edge nodes 10 (10a, 10b, 10c, and so forth), each of which accommodates one of the user networks, and a core node 20 for connecting the edge nodes. Each user network NW comprises a node equipment (hereinafter referred to as a user node) 60 (60a, 60b, 60c, and so forth) and one or more segments 61 connected to the user node 60.
  • In the embodiment, the carrier network SNW is provided with a routing control system 30 for intensively handling routing control across the communication network. The routing control system 30 comprises a plurality of routing servers 40, each of which performs optimum route calculation, route information management and route information distribution, and a system controller 50. In the embodiment, the routing control system 30 is provided with a master routing server 40-1 and a slave routing server 40-2. Switching from the master to the slave is performed in units of logical controller by the system controller 50.
  • Each logical controller is provided with VPN route information management function and VPN route information calculation function. On the master routing server 40-1, a plurality of logical controllers corresponding to the user networks NWs are operating. In the first embodiment, when the load on the master routing server 40-1 has increased, the system controller 50 migrates a part of the logical controllers from the master routing server 40-1 to the slave routing server 40-2, so that the load of routing control processing is distributed to the master routing server and the slave routing server.
  • In the first embodiment shown in FIG. 1, the system controller 50 is connected to both the master and slave routing servers 40-1, 40-2 via an internal switch 300 of the routing control system 30. Communication between the system controller 50 and each routing server and migration of logical controllers from the master routing server 40-1 to the slave routing server 40-2 are carried out via the internal switch 300, but these operations may be carried out via the core node 20.
  • FIG. 2 shows an example of structure of a routing server 40 (40-1, 40-2).
  • The routing server 40 comprises a processor (CPU) 41, a network interface 42 for communicating with the core node 20, an internal interface 43 for communicating with the system controller 50 and the other routing server via the internal switch 300, and memories 44A and 44B.
  • The memory 44A stores a main control unit 45, a virtualization controller 46, a load monitoring agent 47, and a migration control agent 48 which are provided as programs relevant to the present invention to be executed by the processor 41. The memory 44B stores a plurality of logical controllers 49 (49-1, 49-2, and so forth) for implementing control server functions independent for each user network.
  • The virtualization controller 46 makes each of the logical controllers 49 function as a logical control server according to the communication status of each user network, by controlling CPU resources, memory resources, and communication line resources to be allocated to each logical controller 49. The load monitoring agent 47 monitors the CPU load of the routing server 40 and the CPU load for each logical controller 49 and notifies the system controller 50 of the monitoring results periodically. The migration control agent 48 controls the start and stop of a specific one of logical controllers 49 and its migration between the routing servers, in accordance with a command from the system controller 50.
  • As shown in FIG. 3, each logical controller 49 comprises a route information manager 410, an OSPF controller 420 for calculating routes in accordance with the OSPF protocol, a routing control packet monitoring agent 430 for monitoring the input and output amounts of routing control packets and notifying the system controller 50 of the monitoring results, and a route information file (route table) 440. Each logical controller 49 handles routing control packets received from the user network associated with it in advance, so as to manage the route information for each user network. Here, the route information manager 410 corresponds to the above-mentioned VPN route information management function and the OSPF controller 420 corresponds to the VPN route information calculation function.
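  • As a rough illustration of how the per-VPN components of FIG. 3 might be grouped, the sketch below models one logical controller 49 as a self-contained object holding its route table and monitoring counters. The class and its methods are illustrative assumptions; the OSPF calculation of the controller 420 is reduced to a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class LogicalController:
    """Illustrative model of a logical controller 49 (FIG. 3)."""
    vpn_id: str
    address: str                                       # logical controller address
    route_table: dict = field(default_factory=dict)    # route information file 440
    rx_packets: int = 0                                 # counters of monitoring agent 430
    tx_packets: int = 0

    def handle_routing_control_packet(self, link_id, link_info, metric):
        """Route information manager 410 + OSPF controller 420, reduced to a stub."""
        self.rx_packets += 1
        self.route_table[link_id] = (link_info, metric)  # real SPF calculation omitted

    def load_report(self):
        """Counts reported periodically toward the system controller 50."""
        return {"vpn_id": self.vpn_id, "rx": self.rx_packets, "tx": self.tx_packets}
```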
  • FIG. 4 shows an example of structure of the system controller 50.
  • The system controller 50 comprises a processor (CPU) 51, a network interface 52 for communicating with the core node 20, an internal interface 53 for communicating with the routing servers 40 via the internal switch 300, and memories 54A and 54B. The memory 54A stores a main control unit 55, a CPU load monitor 56, a routing control packet monitor 57, a user network monitor 58, and a migration controller 59 which are provided as programs relevant to the present invention to be executed by the processor 51. The memory 54B stores a CPU load management table 510, a routing server load management table 520, a user network destination management table 530, a server resource management table 540, and a routing server management table 550.
  • The CPU load monitor 56 monitors the CPU load of each logical controller 49 running on the routing server 40 by using the CPU load management table 510, and detects a logical controller 49 to be moved from the master routing server to the slave routing server by migration. When the CPU load monitor 56 detects a logical controller 49 whose CPU load has exceeded a predefined threshold value, for example, the CPU load monitor 56 selects one or more logical controllers to be moved to the slave routing server and issues a migration request to the migration controller 59.
  • The routing control packet monitor 57 monitors the amount of routing control packets transmitted and received between each of logical controllers 49 on the routing server 40 and the user network, by using the routing server load management table 520, and detects a logical controller 49 to be moved from the master routing server to the slave routing server. When the routing control packet monitor 57 detects a logical controller 49 for which the amount of transmitted and received routing control packets has exceeded a predefined threshold value, for example, the routing control packet monitor 57 selects one or more logical controllers to be moved to the slave routing server and issues a migration request to the migration controller 59.
  • The user network monitor 58 monitors the amount of routing control packets transmitted and received between the edge nodes 10 and the user network NW, by using the user network destination management table 530, and detects a logical controller 49 to be moved from the master routing server to the slave routing server. When the user network monitor 58 detects a logical controller 49 for which the amount of transmitted and received routing control packets has exceeded a predefined threshold value, for example, the user network monitor 58 selects one or more logical controllers to be moved to the slave routing server and issues a migration request to the migration controller 59.
  • Upon receiving a migration request from the CPU load monitor 56, the routing control packet monitor 57 or the user network monitor 58, the migration controller 59 determines whether the logical controller specified by the migration request is allowed to migrate from the master routing server to the slave routing server, by referring to the server resource management table 540 and the routing server management table 550.
  • When the logical controller was allowed to migrate to the slave routing server, the migration controller 59 issues migration commands for performing migration in units of logical controller to the master routing server 40-1 and the slave routing server 40-2. By the migration commands, the migration control agent 48 on each routing server is instructed to stop or start the logical controller specified as a migration target, and the logical controller (a software structure shown in FIG. 3) is migrated from the master routing server 40-1 to the slave routing server 40-2 and the logical controller is activated on the slave routing server 40-2.
  • The server resource management table 540 stores the utilization status of CPU resources on the master routing server 40-1 and the slave routing server 40-2. The routing server management table 550 stores information about logical controllers 49 operating on the master routing server and the slave routing server.
  • FIG. 5 exemplifies an embodiment of the routing server management table 550 provided in the system controller 50.
  • The routing server management table 550 comprises a plurality of table entries, each table entry indicating the relation among the identifier (VPN ID) of each user network 551 connected to the carrier network SNW, the identifier (routing server ID) of the routing server 40 (40-1 or 40-2) 552, and an IP address 553 as a logical controller address assigned to the logical controller 49 operating on the routing server 40.
  • The routing server management table 550 shown here indicates that three user networks (NW-a, NW-b and NW-c) having VPN IDs “a”, “b” and “c” are connected to the carrier network SNW.
  • A table entry EN-a indicates that a user network (NW-a) having VPN ID=“a” is controlled by a logical controller 49-1 having an IP address of “192.168.99.101” which is operating on the routing server (master routing server 40-1) having the routing server ID=1. A table entry EN-b indicates that a user network (NW-b) having VPN ID=“b” is controlled by a logical controller 49-2 having an IP address of “192.168.99.102” which is operating on the routing server (master routing server 40-1) having the routing server ID=1.
  • Moreover, a table entry EN-c indicates that a user network (NW-c) having VPN ID=“c” is controlled by a logical controller 49-3 having an IP address of “192.168.99.103” which is operating on the routing server (master routing server 40-1) having the routing server ID=1. Here, if the logical controller 49-3 has migrated from the master routing server 40-1 to the slave routing server 40-2, for example, the value of routing server ID 552 in the table entry EN-c is changed from “1” to “2”.
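  • The contents of FIG. 5 can be written out directly. The dictionary layout below is an assumption; the addresses and server IDs are those given for entries EN-a through EN-c, and the last line shows the single-field update performed when the logical controller 49-3 migrates.

```python
# Routing server management table 550 (FIG. 5): VPN ID 551 -> entry
TABLE_550 = {
    "a": {"routing_server_id": 1, "logical_controller_addr": "192.168.99.101"},  # EN-a
    "b": {"routing_server_id": 1, "logical_controller_addr": "192.168.99.102"},  # EN-b
    "c": {"routing_server_id": 1, "logical_controller_addr": "192.168.99.103"},  # EN-c
}

# When logical controller 49-3 (VPN "c") migrates to the slave routing server 40-2,
# the routing server ID 552 of entry EN-c changes from 1 to 2.
TABLE_550["c"]["routing_server_id"] = 2
```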
  • FIG. 6 shows an embodiment of the server resource management table 540 provided in the system controller 50.
  • The server resource management table 540 comprises a plurality of table entries each corresponding to a routing server ID 541. Each table entry indicates total resources 542 representing a total amount of CPU resources available in a routing server (the master or slave routing server in this embodiment) having the routing server ID 541, allocated resources 543 representing the amount of CPU resources already allocated to logical controllers 49 in the routing server, and CPU utilization rate 544 for each logical controller. The CPU utilization rate 544 for each logical controller is expressed by pairs of the ID of logical controller and its CPU utilization rate. In FIG. 6, the logical controller ID is represented by a serial number for simplification purposes and CPU utilization rate is shown in parentheses. Alternatively, the logical controller address as shown in FIG. 5 can be used as the logical controller ID.
  • The server resource management table 540 shown here indicates that: the amount of CPU resources is “100” for both the master routing server 40-1 and the slave routing server 40-2; “90” of the CPU resources has been already allocated to three logical controllers 49 in the master routing server 40-1; no logical controller operates and no CPU resources are allocated to the logical controllers in the slave routing server 40-2.
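  • The resource figures of FIG. 6, and the surplus-resource check the migration controller 59 later applies to them, can be sketched as follows. The per-controller utilization rates shown are illustrative (the description gives only the totals of 100 and 90), and the function name is an assumption.

```python
# Server resource management table 540 (FIG. 6): routing server ID 541 -> entry
TABLE_540 = {
    1: {"total": 100, "allocated": 90, "cpu_rate": {1: 50, 2: 20, 3: 20}},  # master 40-1
    2: {"total": 100, "allocated": 0,  "cpu_rate": {}},                     # slave 40-2
}

def has_surplus_resources(server_id, required):
    """Check applied by the migration controller 59 before allowing a migration."""
    entry = TABLE_540[server_id]
    return entry["total"] - entry["allocated"] >= required
```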
  • FIG. 7 shows an example of structure of an edge node 10.
  • The edge node 10 comprises a plurality of network interfaces 11 (11-1 to 11-n), a packet transport unit 12 connected to these network interfaces 11, and a control unit 13 connected to the packet transport unit 12. The control unit 13 comprises a processor 14 and memories 15A and 15B. The memory 15A stores a main control unit 16, a route information processing unit 17, and a load monitoring agent 18 which are provided as programs relevant to the present invention to be executed by the processor 14. The memory 15B stores a route information file 150 and a control packet counter 160.
  • The route information processing unit 17 updates the route information file 150 in accordance with a command from the routing server 40 associated with the edge node 10. The load monitoring agent 18 counts the number of routing control packets communicated with the user node in the user network for a given period of time by means of the control packet counter 160 and periodically notifies the system controller 50 in the routing control system 30 of the amount of the routing control packets.
  • FIG. 8 illustrates flows of routing control packets in the communication network shown in FIG. 1.
  • The user node 60 (60 a to 60 c) of user network generates a routing control packet (routing request) indicating configuration change of the user network, for example, when a new segment (link) was added to the user network. This routing control packet is transmitted to the routing server 40-1 operating as the master, as denoted by a dashed-dotted line, in accordance with a routing protocol prescribed by service specifications of the carrier.
  • Upon receiving the routing control packet from the user node 60, the master routing server 40-1 (the corresponding logical controller 49) calculates route information in each node (router) within the carrier network SNW, according to a given route calculation algorithm defined by the routing protocol. New route information is distributed from the master routing server 40-1 to each node in the carrier network SNW, as denoted by a dashed line.
  • In a state where a certain logical controller 49 has migrated to the slave routing server 40-2, routing control packets from a particular user network corresponding to the logical controller are forwarded to the slave routing server 40-2.
  • FIG. 9 shows a basic sequence for updating route information in the communication network of the present invention.
  • For example, when a new segment was added to a certain user network (SQ01), a routing control packet including control information on the new segment to be served is transmitted from the user node 60 of the user network to the corresponding edge node 10 (SQ02). Upon receiving the routing control packet, the edge node 10 updates the count value of routing control packet counter (SQ03) and forwards the received routing control packet to the master routing server 40-1 (SQ04).
  • The master routing server 40-1 updates the route information file (SQ05) based on the control information specified in the routing control packet, and calculates a new route between the core node and the edge node in the carrier network (SQ06). The master routing server 40-1 distributes the route information indicating the new route to the core node 20 and the edge node 10 by a route information forwarding packet (SQ07). The core node 20 and the edge node 10 update the route database (route table) according to the new route information (SQ08) and start the forwarding service of packets to be communicated through the new segment added to the user network.
  • FIG. 10 shows an example of format of a routing control packet 100 to be transmitted from the user node 60 to the edge node 10.
  • The routing control packet 100 to be transmitted from the user node 60 when a new segment was added includes a node ID 101 indicating the source node of the routing control packet, link type 102 indicating the type of link accommodating the new segment, a link ID 103 uniquely assigned to the link, link information 104 indicating IP information of the new segment, and metric 105 indicating weight information for the link.
  • FIG. 11 shows an example of format of a routing information forwarding packet 110 to be forwarded from the edge node 10 to the routing server 40-1.
  • The routing information forwarding packet 110 includes the routing control packet 100 received from the user node 60 in its payload 114. The payload 114 is preceded by header information including the identifier (node ID 111) of the edge node that is the source of the routing information forwarding packet 110, the VPN ID 112 assigned to the user network to which the source user node 60 of the routing control packet 100 belongs, and the reception time 113 at which the edge node 10 received the routing control packet 100.
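  • The field layouts of FIGS. 10 and 11 translate naturally into two small records. The types in the sketch below are illustrative, since the description does not specify field widths or encodings.

```python
from dataclasses import dataclass

@dataclass
class RoutingControlPacket:            # packet 100 (FIG. 10), sent by a user node 60
    node_id: str        # 101: source node of the routing control packet
    link_type: str      # 102: type of link accommodating the new segment
    link_id: str        # 103: identifier uniquely assigned to the link
    link_info: str      # 104: IP information of the new segment
    metric: int         # 105: weight information for the link

@dataclass
class RoutingInfoForwardingPacket:     # packet 110 (FIG. 11), edge node 10 -> routing server
    node_id: str        # 111: edge node that forwards the packet
    vpn_id: str         # 112: user network of the source user node
    reception_time: float              # 113: time the edge node received packet 100
    payload: RoutingControlPacket      # 114: the original routing control packet
```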
  • In the routing control system 30 shown in FIG. 1, the master routing server 40-1 and the slave routing server 40-2 notify the CPU load monitor 56 and the routing control packet monitor 57 in the system controller 50 of the CPU load information and the amount of input and output routing control packets counted at the servers. Each edge node 10 notifies the user network monitor 58 in the system controller 50 of the amount of input and output routing control packets counted at the node.
  • When the CPU load information or the amount of input and output routing control packets is notified, each of the CPU load monitor 56, routing control packet monitor 57, and user network monitor 58 in the system controller 50 checks whether a condition for switching from the master routing server to the slave routing server is satisfied for a particular logical controller. If the switching condition is satisfied at one of the monitors, the monitor selects at least one logical controller to be the migration target and issues a migration request to the migration controller 59. As the migration target, the particular logical controller for which the switching condition is satisfied is selected. Alternatively, at least one of the other logical controllers may be selected as the migration target.
  • Upon receiving the migration request, the migration controller 59 determines whether the migration of the logical controller specified by the migration request is allowed or not. If it is determined that the logical controller can migrate to the slave routing server, the migration controller 59 issues a migration command for the target logical controller to the master routing server 40-1 and the slave routing server 40-2. For example, in the case where only one logical controller is operating on the master routing server, or if the memory space in the slave routing server is not sufficient to accept the migration of a new logical controller, the migration controller 59 determines that the target logical controller should not migrate to the slave routing server.
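  • The allow/deny decision described above can be condensed into a short predicate. The sketch assumes the simplified table 550 layout used earlier and an explicit memory check for the slave routing server; the function name and parameters are illustrative.

```python
def migration_allowed(table_550, slave_free_memory, ctrl_memory_size, master_id=1):
    """Return True if a logical controller may migrate to the slave routing server."""
    # Deny if only one logical controller is operating on the master routing server.
    on_master = [e for e in table_550.values() if e["routing_server_id"] == master_id]
    if len(on_master) <= 1:
        return False
    # Deny if the slave routing server lacks memory space for the migrated controller.
    if slave_free_memory < ctrl_memory_size:
        return False
    return True
```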
  • FIG. 12 schematically illustrates a method for migrating one of the logical controllers 49 from the master routing server 40-1 to the slave routing server 40-2.
  • For example, it is assumed that, due to an increase in the load on the logical controller 49-1 operating on the master routing server 40-1, the migration controller 59 in the system controller 50 issues migration commands for the logical controller 49-1 to the master routing server 40-1 and the slave routing server 40-2.
  • When the master routing server 40-1 receives the migration command from the system controller 50, the migration control agent 48 instructs the virtualization controller 46 to stop the logical controller 49-1 and to move the logical controller 49-1 to the slave routing server 40-2. The virtualization controller 46 of the master routing server stops the operation of the logical controller 49-1 and transfers configuration information (components 410-440 shown in FIG. 3) of the logical controller 49-1 stored in the memory 44B to the slave routing server 40-2 via the internal switch 300, as denoted by a solid-line arrow in FIG. 12.
  • At the slave routing server 40-2, when the migration command from the system controller 50 was received, the migration control agent 48 instructs the virtualization controller 46 to accept and activate the migrated logical controller 49-1. The virtualization controller 46 of the slave server stores the components of the logical controller 49-1 received from the master routing server via the system internal bus into the memory 44B and starts the operation of the logical controller 49-1 upon the completion of storing all the components into the memory 44B.
  • Migration of logical controller from the master routing server 40-1 to the slave routing server 40-2 may be performed such that, instead of migrating the target logical controller 49-1 in a high load state, the other logical controllers (logical controllers 49-2, 49-3 in this example) in a relatively low load state are moved as denoted by dashed-line arrows in FIG. 12.
  • Because the logical controller 49-1 in a high load state is frequently communicating packets with the corresponding user node 60, if the logical controller 49-1 is selected as the migration target, it may happen that routing control packets transmitted by the user node 60 cannot be processed and would be lost during a period from the stop of the logical controller 49-1 on the master routing server 40-1 until the start of the logical controller 49-1 on the slave routing server 40-2.
  • In this case, if the migration request designates logical controllers other than the logical controller 49-1 as migration targets, it is possible to reduce the probability of routing control packet loss, because the logical controllers under a relatively low load are moved from the master routing server 40-1 to the slave routing server 40-2. In the example shown in FIG. 12, all logical controllers other than the logical controller 49-1 migrate simultaneously. However, these logical controllers may be moved to the slave routing server 40-2 one by one, each time a migration request is issued.
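  • The exchange of FIG. 12 reduces to stop-transfer on the master side and store-activate on the slave side. The following sketch expresses that handshake with illustrative helper names; the migration control agents 48 and virtualization controllers 46 of the embodiment are of course more involved.

```python
def migrate_on_master(ctrl_id, running_controllers, send_to_slave):
    """Master routing server 40-1: stop the target controller and transfer its components."""
    components = running_controllers.pop(ctrl_id)   # stop the logical controller
    send_to_slave(ctrl_id, components)              # ship components 410-440 via switch 300

def migrate_on_slave(ctrl_id, components, running_controllers, activate):
    """Slave routing server 40-2: store the received components, then start the controller."""
    running_controllers[ctrl_id] = components       # store into memory 44B
    activate(ctrl_id)                               # start only after storing has completed
```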
  • FIG. 13 is a sequence diagram illustrating logical controller migration from the master routing server 40-1 to the slave routing server 40-2, which is executed in response to a migration request issued by the CPU load monitor 56 in the system controller 50.
  • The load monitoring agent 47 of the master routing server 40-1 periodically calculates the load of the CPU (processor) 41 and the CPU utilization rate for each logical controller (SQ10) and notifies the system controller 50 of them as CPU load information (SQ11). Upon receiving the CPU load information from the master routing server 40-1, the CPU load monitor 56 of the system controller 50 checks whether the condition for logical controller migration from the master routing server 40-1 to the slave routing server 40-2 is satisfied (SQ12).
  • When the migration condition is not satisfied and the migration of logical controller is determined not necessary, the CPU load monitor 56 waits for the next notification of CPU load information from the master routing server 40-1. If the migration condition is satisfied, the CPU load monitor 56 selects a target logical controller to be moved to the slave routing server 40-2 (SQ13) and issues a migration request to the migration controller 59 (SQ14).
  • Upon receiving the migration request from the CPU load monitor 56, the migration controller 59 checks surplus resources available on the slave routing server 40-2 by referring to the server resource management table 540, and determines whether the migration of logical controller specified by the migration request is executable (SQ15). Migration of the logical controller 49 from the master routing server 40-1 to the slave routing server 40-2 is executed only when sufficient surplus resources are available on the slave routing server 40-2.
  • When it was decided that the migration of a particular logical controller specified by the migration request from the master routing server 40-1 to the slave routing server 40-2 is executable, the migration controller 59 issues migration commands for the particular logical controller to the master routing server 40-1 and the slave routing server 40-2 (SQ16, SQ17). Upon receiving the migration command from the system controller 50, the master routing server 40-1 and the slave routing server 40-2 carry out migration of the particular logical controller, in cooperation with the migration control agent 48 and the virtualization controller 46 as described in regard to FIG. 12 (SQ18).
  • FIG. 14 shows a flowchart of migration check 560 to be executed by the CPU load monitor 56 of the system controller 50 upon receiving CPU load information from the routing server 40-1.
  • The CPU load monitor 56 updates the CPU load management table 510 and the server resource management table 540 in accordance with the received CPU load information (step 561). After that, the CPU load monitor 56 compares the CPU load of the routing server (master 40-1), which is the source of the CPU load information, with a predefined threshold value (562). When the CPU load is equal to or less than the threshold value, the CPU load monitor 56 terminates the migration check 560 and waits for the next notification of CPU load information.
  • When the CPU load of the routing server 40-1 exceeds the threshold value, the CPU load monitor 56 checks the number of logical controllers 49 operating on the routing server 40-1, by referring to the server resource management table 540 shown in FIG. 6 (563). If the number of logical controllers is only one, the CPU load monitor 56 terminates the migration check 560 and waits for the next notification of CPU load information.
  • If two or more logical controllers are operating on the routing server 40-1, the CPU load monitor 56 compares the respective CPU utilization rates 544 stored in the server resource management table 540 with each other (564) and selects a logical controller to be the migration target (565). After that, the CPU load monitor issues a migration request for the logical controller selected in step 565 to the migration controller 59 (566) and terminates the current migration check 560.
  • In a case where the logical controller having the largest CPU utilization rate was selected as the migration target in step 565, for example, it is possible to perform the migration as denoted by the solid-line arrow in FIG. 12. In a case where the logical controller having the largest CPU utilization rate is left on the routing server 40-1 and the other logical controllers were selected as migration targets, it is possible to perform migration as denoted by the dashed-line arrows in FIG. 12.
  • Logical controller migration is performed for the purposes of reducing the CPU load of the master routing server 40-1 and lessening the influence on other logical controllers. Accordingly, in the case of leaving the logical controller having the largest CPU utilization rate on the master routing server 40-1, at least one logical controller may be selected from among the other logical controllers.
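  • The flowchart of FIG. 14 maps onto a short routine. The sketch below assumes the simplified table 540 layout used earlier and an assumed CPU threshold, and leaves the target-selection policy (the highest-load controller or its peers) as a parameter; the names are illustrative.

```python
CPU_THRESHOLD = 80   # assumed threshold for the routing server CPU load

def migration_check_560(server_id, cpu_load, table_540, issue_migration_request,
                        move_highest=True):
    """Steps 562-566 of FIG. 14, simplified (the table updates of step 561 are omitted)."""
    if cpu_load <= CPU_THRESHOLD:                      # step 562
        return
    rates = table_540[server_id]["cpu_rate"]           # step 563: controllers on this server
    if len(rates) <= 1:
        return
    ranked = sorted(rates, key=rates.get, reverse=True)   # steps 564-565: compare and select
    targets = [ranked[0]] if move_highest else ranked[1:]
    for ctrl_id in targets:                            # step 566: request the migration(s)
        issue_migration_request(ctrl_id)
```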
  • FIG. 15 is a sequence diagram illustrating logical controller migration from the master routing server 40-1 to the slave routing server 40-2, which is executed in response to a migration request issued from the routing control packet monitor 57 in the system controller 50. Here, the migration request is issued when the amount of routing control packets transmitted and received by the master routing server has increased.
  • On the master routing server 40-1, the routing control packet monitoring agent 430 in each logical controller 49 counts the number of routing control packets transmitted and received by the logical controller for a given period of time (SQ20). The number of routing control packets monitored by each logical controller is periodically notified as load information to the system controller 50 by the load monitoring agent 47 of the server 40-1 (SQ21).
  • Upon receiving the number of routing control packets for each logical controller as the load information from the routing server 40-1, the routing control packet monitor 57 of the system controller 50 checks whether a condition for logical controller migration from the master routing server 40-1 to the slave routing server 40-2 is satisfied (SQ22).
  • When the migration condition is not satisfied and the migration of logical controller is determined not necessary, the routing control packet monitor 57 waits for the next notification of load information from the master routing server 40-1. If the migration condition is satisfied, the routing control packet monitor 57 selects a target logical controller to be moved to the slave routing server 40-2 (SQ23) and issues a migration request to the migration controller 59 (SQ24).
  • Upon receiving the migration request from the routing control packet monitor 57, the migration controller 59 determines whether the migration of the logical controller is executable (SQ25), as described in regard to FIG. 13.
  • When it was decided that the migration of a particular logical controller specified by the migration request from the master routing server 40-1 to the slave routing server 40-2 is executable, the migration controller 59 issues migration commands for the particular logical controller to the master routing server 40-1 and the slave routing server 40-2 (SQ26, SQ27). Upon receiving the migration command from the system controller 50, the master routing server 40-1 and the slave routing server 40-2 carry out migration of the particular logical controller, in cooperation with the migration control agent 48 and the virtualization controller 46, as described in regard to FIG. 12 (SQ28).
  • FIG. 16 shows a flowchart of migration check 570 to be executed by the routing control packet monitor 57 in the system controller 50 upon receiving the load information indicating the number of routing control packets for each logical controller from the routing server 40-1.
  • The routing control packet monitor 57 updates the routing server load management table 520 in accordance with the load information received from the routing server 40-1 (571). After that, the routing control packet monitor 57 compares the number of routing control packets for each logical controller with a predefined threshold value (572). If there is no logical controller for which the number of routing control packets exceeds the threshold value, the routing control packet monitor 57 terminates the migration check 570 and waits for the reception of next load information.
  • When a logical controller for which the number of routing control packets exceeds the threshold value was found, the routing control packet monitor 57 selects a logical controller to be a migration target (574), issues a migration request for the logical controller having been selected in step 574 to the migration controller 59 (575), and terminates the current migration check 570.
  • In a case where the logical controller for which the number of routing control packets exceeds the threshold value was selected as the migration target in step 574, it is possible to perform the migration as denoted by the solid-line arrow in FIG. 12. In a case where the logical controller for which the number of routing control packets exceeds the threshold value is left on the routing server 40-1 and the other logical controllers were selected as migration targets, it is possible to perform migration as denoted by the dashed-line arrows in FIG. 12.
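  • Migration check 570 differs from check 560 only in the quantity compared against the threshold. A corresponding sketch, again with an assumed threshold and illustrative names, follows.

```python
PACKET_THRESHOLD = 500   # assumed threshold for routing control packets per period

def migration_check_570(packet_counts, issue_migration_request, move_overloaded=True):
    """Steps 571-575 of FIG. 16, simplified (the update of table 520 is omitted).

    packet_counts: dict logical controller ID -> routing control packets in the period.
    """
    overloaded = [c for c, n in packet_counts.items() if n > PACKET_THRESHOLD]
    if not overloaded:                         # step 572: nothing exceeds the threshold
        return
    if move_overloaded:
        targets = overloaded                   # migrate the overloaded controller(s)
    else:
        targets = [c for c in packet_counts if c not in overloaded]   # migrate the peers
    for ctrl_id in targets:                    # steps 574-575
        issue_migration_request(ctrl_id)
```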
  • FIG. 17 is a sequence diagram illustrating logical controller migration from the master routing server 40-1 to the slave routing server 40-2, which is executed in response to the migration request issued from the user network monitor 58 in the system controller 50. Here, the migration request is issued when the amount of routing control packets transmitted and received by the edge node 10 has increased.
  • Each edge node 10 counts the number of routing control packets transmitted and received for a given period of time (SQ30) and periodically transmits load information indicating the number of routing control packets to the system controller 50 (SQ31). Upon receiving the load information indicating the number of transmitted and received routing control packets from each edge node 10, the user network monitor 58 of the system controller 50 checks whether a condition for logical controller migration from the master routing server 40-1 to the slave routing server 40-2 is satisfied (SQ32). When the migration condition is not satisfied and the migration of logical controller is determined not necessary, the user network monitor 58 waits for the next notification of load information. If the migration condition is satisfied, the user network monitor 58 selects a target logical controller to be moved to the slave routing server 40-2 (SQ33) and issues a migration request to the migration controller 59 (SQ34).
  • Upon receiving the migration request from the user network monitor 58, the migration controller 59 determines whether the migration of the logical controller is executable (SQ35), as described in regard to FIG. 13. When it was decided that the migration of a particular logical controller specified by the migration request from the master routing server 40-1 to the slave routing server 40-2 is executable, the migration controller 59 issues migration commands for the particular logical controller to the master routing server 40-1 and the slave routing server 40-2 (SQ36, SQ37).
  • Upon receiving the migration command from the system controller 50, the master routing server 40-1 and the slave routing server 40-2 carry out migration of the particular logical controller, in cooperation with the migration control agent 48 and the virtualization controller 46, as described in regard to FIG. 12 (SQ38).
  • FIG. 18 shows an example of format of a load information notification packet 120 to be transmitted from the edge node 10 to the system controller 50.
  • The load information notification packet 120 transmitted from the edge node 10 includes the identifier 121 of the source edge node 10 and load information for each logical interface. The load information for each logical interface indicates, in association with an interface ID 122, a routing protocol type 123 and the input count 124 and output count 125 of routing control packets.
  • Although a packet format specific to the present embodiment is adopted here for the load information notification packet 120, a message format specified by a packet sampling or packet statistics reporting protocol, such as the sFlow protocol, may be used instead. In a case where each edge node 10 notifies the user network monitor 58 in the system controller 50 of only the amount of transmitted and received routing control packets, the size of the reporting packet can be reduced by using the counter information of the sFlow protocol.
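  • For illustration, the sketch below serializes a load information notification packet 120 as the identifier 121 followed by one record per logical interface (interface ID 122, protocol type 123, input count 124, output count 125). The field widths, ordering, and byte encoding are assumptions and not the format defined by the embodiment.

    # Illustrative encoding of a load information notification packet 120.
    # Field widths and byte ordering are assumptions, not the embodiment's format.
    import struct

    HEADER = struct.Struct("!16s")        # identifier 121 of the source edge node
    RECORD = struct.Struct("!16s B I I")  # interface ID 122, protocol type 123,
                                          # input count 124, output count 125


    def encode_packet_120(node_id, records):
        """records: iterable of (interface_id, protocol_type, in_count, out_count)."""
        body = HEADER.pack(node_id.encode())
        for if_id, proto, in_cnt, out_cnt in records:
            body += RECORD.pack(if_id.encode(), proto, in_cnt, out_cnt)
        return body


    def decode_packet_120(data):
        node_id = HEADER.unpack_from(data, 0)[0].rstrip(b"\x00").decode()
        records, offset = [], HEADER.size
        while offset < len(data):
            if_id, proto, in_cnt, out_cnt = RECORD.unpack_from(data, offset)
            records.append((if_id.rstrip(b"\x00").decode(), proto, in_cnt, out_cnt))
            offset += RECORD.size
        return node_id, records

  • Under these assumptions, decode_packet_120(encode_packet_120("edge-10", [("VLAN001", 1, 120, 98)])) round-trips to the original identifier and counts.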
  • FIG. 19 shows an example of the structure of the user network destination management table 530 to be referred to by the user network monitor 58.
  • The user network destination management table 530 is prepared for each edge node in the carrier network SNW. The user network monitor 58 is able to correlate a load information notification packet 120 received from each edge node with a user network by referring to the user network destination management table 530.
  • The user network destination management table 530 comprises a plurality of table entries, each having an interface ID 531 identifying a logical network interface of the edge node. Each table entry includes a physical port ID 532 indicating the physical network interface corresponding to the interface ID 531, a logical controller ID 533 identifying the logical controller that performs routing for the user network connected to the physical network interface, and an identifier (VPN ID) 534 uniquely identifying the user network.
  • The user network destination management table 530 shown here indicates that two logical interfaces having the interface IDs “VLAN001” and “VLAN002” are formed on the physical interface having the physical port ID “Ether100”, and that a logical interface having the interface ID “Ether002” is formed on the physical interface having the physical port ID “Ether002”. It can also be seen that user networks having the VPN IDs “a”, “b” and “c” are connected to these three logical interfaces, respectively, and that routing for these user networks is controlled by logical controllers having the logical controller IDs “1”, “2” and “3”, respectively.
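  • The rows of FIG. 19 described above can be modelled by the following sketch; the class name and container type are assumptions, and the lookup simply reproduces the correlation performed by the user network monitor 58.

    # Illustrative model of the user network destination management table 530.
    # The three rows mirror the example of FIG. 19; the class name is an assumption.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class DestinationEntry:
        interface_id: str            # 531: logical network interface of the edge node
        physical_port_id: str        # 532: physical network interface
        logical_controller_id: str   # 533: logical controller performing the routing
        vpn_id: str                  # 534: identifier of the user network


    TABLE_530 = {
        "VLAN001": DestinationEntry("VLAN001", "Ether100", "1", "a"),
        "VLAN002": DestinationEntry("VLAN002", "Ether100", "2", "b"),
        "Ether002": DestinationEntry("Ether002", "Ether002", "3", "c"),
    }


    def lookup_user_network(interface_id: str) -> Optional[DestinationEntry]:
        """Correlate an interface reported in packet 120 with its user network."""
        return TABLE_530.get(interface_id)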
  • FIG. 20 shows a flowchart of the migration check 580 executed by the user network monitor 58 of the system controller 50 when a load information notification packet 120 is received from the edge node 10.
  • The user network monitor 58 extracts the input and output counts of routing control packets for each interface of the source edge node from the load information notification packet 120 received from the edge node 10 (581). By comparing the packet counts with a predefined threshold value, the user network monitor 58 determines whether there is an interface for which the input and output counts of routing control packets exceed the threshold value (582). If there is no such interface, the user network monitor 58 terminates the current migration check and waits for the reception of the next load information notification packet.
  • If an interface for which the input and output counts of routing control packets exceed the threshold value is found, the user network monitor 58 specifies the identifier (VPN ID) of the user network connected to the interface and the ID of the logical controller to which its routing control packets are destined, by searching the user network destination management table 530 for a table entry having an interface ID 531 that matches the ID of the interface (583). Next, the user network monitor 58 refers to the routing server management table 550 and determines the number of logical controllers operating on the routing server (the master routing server 40-1 in this example) associated with the identifier (VPN ID) of the user network connected to the interface (584). The number of logical controllers operating on the same routing server can be determined from the number of table entries having the same routing server ID registered in the routing server management table 550.
  • The user network monitor 58 checks the number of logical controllers operating on the same routing server (585). If only one logical controller is operating, the user network monitor 58 terminates the current migration check. If a plurality of logical controllers are operating on the same routing server, the user network monitor 58 selects a logical controller to be the migration target from among them (586), issues a migration request for the selected logical controller to the migration controller 59 (587), and terminates the migration check.
  • In a case where the logical controller having the logical controller ID specified in step 583 is selected as the migration target in step 586, the migration can be performed as denoted by the solid-line arrow in FIG. 12. In a case where other logical controllers are selected as migration targets, the migration can be performed as denoted by the dashed-line arrows in FIG. 12.
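  • The following sketch condenses migration check 580 (steps 581 through 587) into a single function. The simplified dictionary representations of tables 530 and 550, the threshold value, and the selection policy (migrating the overloaded controller itself, i.e. the solid-line case of FIG. 12) are assumptions.

    # Illustrative sketch of migration check 580 (steps 581-587); the threshold,
    # the simplified table layouts, and the selection policy are assumptions.

    THRESHOLD = 1000  # assumed limit on routing-control-packet counts per interval


    def migration_check_580(report, dest_table_530, server_table_550, issue_request):
        """
        report: {interface_id: (in_count, out_count)} extracted from packet 120 (581)
        dest_table_530: {interface_id: (vpn_id, logical_controller_id)}
        server_table_550: {logical_controller_id: routing_server_id}
        issue_request: callable that forwards a migration request to controller 59
        """
        for if_id, (in_cnt, out_cnt) in report.items():
            # Step 582: look for an interface whose counts exceed the threshold.
            if in_cnt <= THRESHOLD and out_cnt <= THRESHOLD:
                continue
            # Step 583: identify the user network (VPN ID) and its logical controller.
            vpn_id, controller_id = dest_table_530[if_id]
            # Step 584: count the logical controllers on the same routing server.
            server_id = server_table_550[controller_id]
            co_located = [c for c, s in server_table_550.items() if s == server_id]
            # Step 585: a lone logical controller cannot be relieved by migration.
            if len(co_located) == 1:
                return None
            # Step 586: select a migration target; moving the overloaded controller
            # itself corresponds to the solid-line arrow of FIG. 12.
            target = controller_id
            # Step 587: issue the migration request to the migration controller 59.
            issue_request(target)
            return target
        return None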
  • Second Embodiment
  • FIG. 21 shows a second embodiment of a communication network to which the present invention is applied.
  • In the communication network of the first embodiment, since a plurality of routing servers 40-1 and 40-2 forming the routing control system 30 are connected by the internal switch 300 and located in the same IP segment, the slave routing server 40-2 can inherit the IP address of the migrated logical controller, without changing the route setting at the core node 20 or the internal switch 300.
  • In the communication network of the second embodiment, the plurality of routing servers 40-1 and 40-2 forming the routing control system 30 are located at different sites distant from each other. In the communication network shown here, the system controller 50 is connected to a core node 20-1 in a control network 70-1, the master routing server 40-1 is connected to a core node 20-2 in a control network 70-2, and the slave routing server 40-2 is connected to a core node 20-3 in a control network 70-3. These control networks 70-1 to 70-3 are connected to an edge node 10 in the carrier network.
  • In this network configuration, because the connection points of the master routing server 40-1 and the slave routing server 40-2 belong to different IP segments, the slave routing server 40-2 cannot inherit the previous IP address of a logical controller that has migrated from the master routing server 40-1.
  • In the second embodiment, therefore, the slave routing server 40-2 assigns a new IP address to a logical controller migrated from the master routing server 40-1. Further, the edge node is provided with a user management table 170 indicating the correspondence between VPN IDs and routing server IP addresses, so that a routing control packet transmitted from a user network can be forwarded correctly to the corresponding logical controller even after the logical controller has migrated from the master routing server 40-1 to the slave routing server 40-2.
  • In the second embodiment, the master routing server 40-1, the slave routing server 40-2, and the system controller 50 exchange migration commands and transfer logical controllers via the network interfaces 42 and 52.
  • Before the migration of a logical controller is executed, the user networks NW-a, NW-b and NW-c are associated with logical controllers on the master routing server, respectively. Thus, routing control packets transmitted from the user nodes 60a, 60b and 60c are forwarded to the master routing server via the edge node 10 and the control network 70-2, as denoted by dashed lines in FIG. 21.
  • FIG. 22 shows the state in which two logical controllers 49-2 and 49-3 have migrated from the master routing server 40-1 to the slave routing server 40-2.
  • When the logical controllers 49-2 and 49-3 have migrated from the master routing server 40-1 to the slave routing server 40-2 in a state where the user networks NW-a, NW-b and NW-c are associated with the logical controllers 49-1, 49-2 and 49-3, respectively, routing control packets transmitted from the user nodes 60b and 60c are forwarded to the corresponding logical controllers on the slave routing server via the edge node 10 and the control network 70-3, as denoted by dashed lines in FIG. 22. Switching of the forwarding routes for these routing control packets is realized by changing the contents of the user management table 170 on the edge node 10 in association with the migration.
  • FIG. 23A and FIG. 23B illustrate the contents of the user management table 170 provided in the edge node 10.
  • The user management table 170 comprises a plurality of table entries, each indicating the correspondence among a VPN ID 171, which is the identifier of a user network, an IP address 172 of the routing server to be the destination of routing control packets, and a user port ID indicating the port of the edge node 10 through which the user network (user node 60) is connected to the carrier network.
  • Upon receiving a routing control packet from a user node 60 (60a, 60b, 60c), the edge node 10 searches the user management table 170 for the table entry corresponding to the ID of the port on which the routing control packet was received, specifies the IP address of the routing server to which the packet is destined, and forwards the routing control packet to the routing server corresponding to the user network to which the user node belongs.
  • FIG. 23A shows the user management table 170 before executing the migration. At this time, as indicated by the entries EN-01 to EN-03, three user networks having VPN IDs “a”, “b” and “c” are associated with the same IP address “192.168.99.1” indicating the master routing server 40-1. Therefore, the edge node 10 forwards all routing control packets received from the user networks NW-a, NW-b and NW-c to the master routing server 40-1, as shown in FIG. 21.
  • FIG. 23B shows the user management table 170 after the logical controllers 49-2 and 49-3 corresponding to the user networks NW-b and NW-c have migrated to the slave routing server 40-2, as described in regard to FIG. 22. In consequence of the migration, the routing server IP address 172 in the table entries EN-02 and EN-03 has been rewritten to the IP address “192.168.100.1” of the slave routing server 40-2.
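  • A hedged sketch of how the edge node 10 might consult and update the user management table 170 is given below. The port IDs are hypothetical placeholders (the description does not give concrete values), while the VPN IDs and routing server addresses follow FIG. 23A and FIG. 23B; the function names are likewise assumptions.

    # Illustrative model of the user management table 170 on the edge node 10.
    # The port IDs are hypothetical; addresses and VPN IDs follow FIG. 23A/23B.

    # user port ID -> (VPN ID 171, routing server IP address 172)
    USER_TABLE_170 = {
        "port-a": ("a", "192.168.99.1"),   # EN-01
        "port-b": ("b", "192.168.99.1"),   # EN-02, before migration
        "port-c": ("c", "192.168.99.1"),   # EN-03, before migration
    }


    def next_hop_for(ingress_port):
        """Return the routing server address to which a routing control packet goes."""
        _vpn_id, server_ip = USER_TABLE_170[ingress_port]
        return server_ip


    def apply_address_change(vpn_id, new_server_ip):
        """Rewrite the routing server address 172 for a user network after migration."""
        for port, (vpn, _old_ip) in USER_TABLE_170.items():
            if vpn == vpn_id:
                USER_TABLE_170[port] = (vpn, new_server_ip)


    # After the logical controllers for NW-b and NW-c have migrated (FIG. 23B):
    apply_address_change("b", "192.168.100.1")
    apply_address_change("c", "192.168.100.1")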
  • FIG. 24 is a sequence diagram illustrating the migration of a logical controller from the master routing server 40-1 to the slave routing server 40-2, performed in response to a migration request issued from the user network monitor 58 of the system controller 50, in the communication network of the second embodiment.
  • Steps SQ30 through SQ38 are the same as those described in regard to FIG. 17. In the second embodiment, the migration commands (SQ36, SQ37) are transmitted from the migration controller 59 of the system controller 50 to the master and slave routing servers 40-1 and 40-2 via the network interface 52.
  • In the second embodiment, the migration controller 59 instructs the slave routing server 40-2, by means of the migration command (SQ37), to execute the migration of the particular logical controller and to change the IP address of the logical controller. After that, the migration controller 59 searches the routing server management table 550 for the identifier (VPN ID) of the user network corresponding to the migration-target logical controller, in order to specify the IP address of the edge node 10 accommodating the user network (SQ39). Then, the migration controller 59 transmits to the edge node 10 an address change request command for changing the routing server IP address corresponding to the VPN ID to the IP address of the slave routing server 40-2 (SQ40). In response to the address change request command, the edge node 10 updates the user management table 170 (SQ41) and thereafter forwards routing control packets received from the user network in accordance with the updated user management table 170.
  • Upon completing the migration of the logical controller (SQ38) specified by the migration command, the slave routing server 40-2 assigns the migrated logical controller an IP address of the IP segment to which the slave routing server 40-2 belongs (SQ42) and notifies the migration controller 59 in the system controller 50 of the new IP address assigned to the logical controller (SQ43). The migration controller 59 updates the logical controller address 553 in the routing server management table 550, in accordance with the IP address notified from the slave routing server 40-2 (SQ44).
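  • The system-controller side of steps SQ39 through SQ44 can be sketched as follows; the class, the simplified entries of the routing server management table 550, and the synchronous modelling of the address notification (SQ42, SQ43) are all assumptions made for illustration.

    # Illustrative sketch of SQ39-SQ44 in the second embodiment; every name and the
    # simplified routing server management table 550 are assumptions.


    class SecondEmbodimentMigration:
        def __init__(self, table_550, edge_nodes, slave_server):
            self.table_550 = table_550        # {controller_id: entry dict}, simplified
            self.edge_nodes = edge_nodes      # {edge node IP: edge node control stub}
            self.slave_server = slave_server  # control stub for routing server 40-2

        def migrate_with_readdressing(self, controller_id, slave_ip):
            entry = self.table_550[controller_id]
            # SQ39: find the VPN ID of the user network and the edge node serving it.
            vpn_id, edge_ip = entry["vpn_id"], entry["edge_node_ip"]
            # SQ40: request the edge node to redirect the VPN to the slave server,
            # which updates the user management table 170 (SQ41).
            self.edge_nodes[edge_ip].change_routing_server(vpn_id, slave_ip)
            # SQ42/SQ43: the slave server assigns a new address in its own IP segment
            # and reports it back (modelled here as a synchronous call).
            new_address = self.slave_server.activate_and_assign_address(controller_id)
            # SQ44: record the new logical controller address 553 in table 550.
            entry["logical_controller_address"] = new_address
            return new_address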
  • Although the sequence of migration executed in response to a migration request issued from the user network monitor 58 has been described here as the second embodiment, migration may also be performed, in the communication network shown in FIG. 21, in response to a migration request issued as a result of the migration check executed by the CPU load monitor 56 or the migration check executed by the routing control packet monitor 57, as described for the first embodiment.

Claims (16)

1. A routing control system to be located in an L3VPN service network connected to a plurality of user networks, comprising: a system controller; a master routing server; and a slave routing server,
wherein said master routing server includes a plurality of logical controllers, each of which is associated with one of said plurality of user networks to perform routing control for the user network,
wherein said system controller monitors a load state of said master routing server and migrates at least one logical controller selected out of said plurality of logical controllers operating on said master routing server from the master routing server to said slave routing server when the load state has satisfied a predetermined condition, so that the slave routing server inherits routing control for a particular user network associated with the migrated logical controller by activating the logical controller.
2. The routing control system according to claim 1,
wherein said system controller includes a migration controller which issues migration commands for said selected logical controller to said master routing server and said slave routing server, and
wherein, in response to said migration commands from said migration controller, said master routing server transfers said selected logical controller to said slave routing server and said slave routing server activates the logical controller to inherit routing control for said particular user network.
3. The routing control system according to claim 2,
wherein said system controller includes a CPU load monitor which obtains CPU load information from said master routing server to determine whether the CPU load has reached a predefined threshold value, and
wherein, when the CPU load has reached the predefined threshold value, said CPU load monitor selects at least one logical controller out of said plurality of logical controllers operating on said master routing server and issues a migration request for the selected logical controller to said migration controller.
4. The routing control system according to claim 2,
wherein said system controller includes a routing control packet monitor which obtains load information including the amount of routing control packets for each of said logical controllers from said master routing server to determine whether there exists a logical controller for which the amount of routing control packets has reached a predefined threshold value, and
wherein, when the amount of routing control packets of any logical controller has reached the predefined threshold value, said routing control packet monitor selects at least one logical controller out of said plurality of logical controllers operating on said master routing server and issues a migration request for the selected logical controller to said migration controller.
5. The routing control system according to claim 3,
wherein said migration controller determines whether the migration of said selected logical controller is allowed or not when said migration request is received, and issues migration commands for the logical controller to said master routing server and said slave routing server when the migration of the logical controller is allowed.
6. The routing control system according to claim 4,
wherein said migration controller determines whether the migration of said selected logical controller is allowed or not when said migration request is received, and issues migration commands for the logical controller to said master routing server and said slave routing server when the migration of the logical controller is allowed.
7. The routing control system according to claim 1,
wherein said master routing server, said slave routing server and said system controller are connected to the same core node in said L3VPN service network, and said migration of logical controller is carried out via the core node or an internal switch for interconnecting the master and slave routing servers.
8. The routing control system according to claim 1,
wherein said master routing server and said slave routing server are connected to different core nodes in said L3VPN service network, and said migration of logical controller is carried out via a communication line in the L3VPN service network.
9. The routing control system according to claim 8,
wherein said plurality of user networks are connected to the same edge node in the L3VPN service network,
wherein the edge node forwards routing control packets received from each of said user networks, in accordance with a management table indicating the correspondence of identifiers of said user networks and addresses of said master and slave routing servers, and
wherein said system controller issues a command for updating the management table to said edge node when said logical controller has migrated from said master routing server to said slave routing server.
10. A routing control system to be located in an L3VPN service network connected to a plurality of user networks, comprising: a system controller; a master routing server; and a slave routing server,
wherein said master routing server includes a plurality of logical controllers, each of which is associated with one of said plurality of user networks to perform routing control for the user network,
wherein said system controller obtains load information indicating the amount of routing control packets from edge nodes to which said user networks are connected and migrates at least one logical controller selected out of said plurality of logical controllers operating on said master routing server from the master routing server to said slave routing server when the amount of routing control packets has satisfied a predetermined condition, and
wherein said slave routing server inherits routing control for a particular user network associated with the migrated logical controller by activating the logical controller.
11. The routing control system according to claim 10,
wherein said system controller includes a migration controller which issues migration commands for said selected logical controller to said master routing server and said slave routing server, and
wherein, in response to the migration commands from the migration controller, said master routing server transfers the selected logical controller to said slave routing server and said slave routing server activates the logical controller to inherit routing control for said particular user network.
12. The routing control system according to claim 11,
wherein said system controller includes a user network monitor which obtains load information indicating the amount of routing control packets from said edge nodes to determine whether the amount of routing control packets has reached a predefined threshold value, and
wherein said user network monitor selects at least one logical controller out of said plurality of logical controllers operating on said master routing server and issues a migration request for the selected logical controller to said migration controller when the amount of routing control packets has reached the predefined threshold value.
13. The routing control system according to claim 12,
wherein said migration controller determines whether the migration of said selected logical controller is allowed or not when said migration request is received, and issues migration commands for the logical controller to said master routing server and said slave routing server when the migration of the logical controller is allowed.
14. The routing control system according to claim 10,
wherein said master routing server, said slave routing server and said system controller are connected to the same core node in said L3VPN service network, and said migration of logical controller is carried out via the core node or an internal switch for interconnecting the master and slave routing servers.
15. The routing control system according to claim 10,
wherein said master routing server and said slave routing server are connected to different core nodes in said L3VPN service network, and said migration of logical controller is carried out via a communication line in the L3VPN service network.
16. The routing control system according to claim 15,
wherein said plurality of user networks are connected to the same edge node in the L3VPN service network,
wherein the edge node forwards routing control packets received from each of said user networks, in accordance with a management table indicating the correspondence of identifiers of said user networks and addresses of said master and slave routing servers, and
wherein said system controller issues a command for updating the management table to said edge node when said logical controller has migrated from said master routing server to said slave routing server.
US12/542,878 2008-08-21 2009-08-18 Routing control system for l3vpn service network Abandoned US20100046532A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/450,368 US9185031B2 (en) 2008-08-21 2014-08-04 Routing control system for L3VPN service network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008213250A JP5074327B2 (en) 2008-08-21 2008-08-21 Routing system
JP2008-213250 2008-08-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/450,368 Continuation US9185031B2 (en) 2008-08-21 2014-08-04 Routing control system for L3VPN service network

Publications (1)

Publication Number Publication Date
US20100046532A1 true US20100046532A1 (en) 2010-02-25

Family

ID=41056862

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/542,878 Abandoned US20100046532A1 (en) 2008-08-21 2009-08-18 Routing control system for l3vpn service network
US14/450,368 Expired - Fee Related US9185031B2 (en) 2008-08-21 2014-08-04 Routing control system for L3VPN service network

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/450,368 Expired - Fee Related US9185031B2 (en) 2008-08-21 2014-08-04 Routing control system for L3VPN service network

Country Status (4)

Country Link
US (2) US20100046532A1 (en)
EP (1) EP2157746B1 (en)
JP (1) JP5074327B2 (en)
CN (1) CN101656732B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5052642B2 (en) * 2010-04-21 2012-10-17 株式会社エヌ・ティ・ティ・ドコモ Mobile communication system, network device, and mobile communication method
CN102547756B (en) * 2010-12-29 2014-09-10 中国移动通信集团公司 Data processing system, nodes and method
JP5304813B2 (en) * 2011-02-22 2013-10-02 沖電気工業株式会社 Communication node device
WO2012106919A1 (en) * 2011-07-22 2012-08-16 华为技术有限公司 Routing control method, apparatus and system of layer 3 virtual private network
US8830820B2 (en) * 2011-10-14 2014-09-09 Google Inc. Semi-centralized routing
JP5889813B2 (en) * 2013-02-19 2016-03-22 日本電信電話株式会社 Communication system and program
CN104253733B (en) * 2013-06-26 2017-12-19 北京思普崚技术有限公司 A kind of VPN multi connection methods based on IPSec
US9338094B2 (en) 2014-03-31 2016-05-10 Dell Products, L.P. System and method for context aware network
CN105183431B (en) * 2015-08-05 2018-09-28 瑞斯康达科技发展股份有限公司 A kind of cpu busy percentage control method and device
US10411990B2 (en) * 2017-12-18 2019-09-10 At&T Intellectual Property I, L.P. Routing stability in hybrid software-defined networking networks
WO2022066568A1 (en) * 2020-09-24 2022-03-31 Arris Enterprises Llc Personalized data throttling in a residential wireless network
TWI813316B (en) * 2022-05-31 2023-08-21 瑞昱半導體股份有限公司 Method for data access control among multiple nodes and data access system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3776814B2 (en) * 2002-02-14 2006-05-17 日本電信電話株式会社 Method for minimizing communication interruption time in case of router and router partial failure
JP4789425B2 (en) * 2004-03-31 2011-10-12 富士通株式会社 Route table synchronization method, network device, and route table synchronization program
US7400585B2 (en) * 2004-09-23 2008-07-15 International Business Machines Corporation Optimal interconnect utilization in a data processing network
JP4255080B2 (en) 2004-11-05 2009-04-15 日本電信電話株式会社 Network failure recovery management method and network failure recovery management device
JP4466434B2 (en) * 2005-03-30 2010-05-26 パナソニック株式会社 Routing method and home agent

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751191B1 (en) * 1999-06-29 2004-06-15 Cisco Technology, Inc. Load sharing and redundancy scheme
US20030037165A1 (en) * 2001-07-06 2003-02-20 Daisuke Shinomiya Dynamic load sharing system using a virtual router
US20060077922A1 (en) * 2003-03-21 2006-04-13 Siemens Aktiengesellschaft System method & apparatus for routing traffic in a telecommunications network
US20040221065A1 (en) * 2003-04-30 2004-11-04 International Business Machines Corporation Apparatus and method for dynamic sharing of server network interface resources
US7263555B2 (en) * 2003-04-30 2007-08-28 International Business Machines Corporation Apparatus and method for dynamic sharing of server network interface resources
US20080065783A1 (en) * 2003-07-03 2008-03-13 Iloglu Ali M Externally controlled reachability in virtual private networks
US7684417B2 (en) * 2004-02-26 2010-03-23 Nec Corporation Method of migrating processes between networks and network system thereof
US7406037B2 (en) * 2004-04-08 2008-07-29 Hitachi, Ltd. Packet forwarding apparatus with redundant routing module
US20060092971A1 (en) * 2004-10-29 2006-05-04 Hitachi, Ltd. Packet transfer device
US20080198849A1 (en) * 2007-02-20 2008-08-21 Jim Guichard Scaling virtual private networks using service insertion architecture

Cited By (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110158111A1 (en) * 2009-12-28 2011-06-30 Alcatel-Lucent Canada Inc. Bulk service provisioning on live network
EP2582099B1 (en) * 2010-06-09 2021-03-17 Nec Corporation Communication system, logic channel control device, communication method and program
US20110307571A1 (en) * 2010-06-15 2011-12-15 Steve Bakke Hierarchical display-server system and method
US8700723B2 (en) * 2010-06-15 2014-04-15 Netzyn, Inc. Hierarchical display-server system and method
US8897134B2 (en) * 2010-06-25 2014-11-25 Telefonaktiebolaget L M Ericsson (Publ) Notifying a controller of a change to a packet forwarding configuration of a network element over a communication channel
US9906448B2 (en) 2010-12-10 2018-02-27 Nec Corporation Communication system, control device, node controlling method, and program
US10983493B2 (en) 2011-10-05 2021-04-20 Opteon Corporation Methods, apparatus, and systems for monitoring and/or controlling dynamic environments
US10101720B2 (en) * 2011-10-05 2018-10-16 Opteon Corporation Methods, apparatus, and systems for monitoring and/or controlling dynamic environments
US20130294453A1 (en) * 2012-05-03 2013-11-07 Futurewei Technologies, Inc. Layer-3 services for united router farm
US8891536B2 (en) * 2012-05-03 2014-11-18 Futurewei Technologies, Inc. Layer-3 services for united router farm
US20130329734A1 (en) * 2012-06-11 2013-12-12 Radware, Ltd. Techniques for providing value-added services in sdn-based networks
US9647938B2 (en) * 2012-06-11 2017-05-09 Radware, Ltd. Techniques for providing value-added services in SDN-based networks
US10110485B2 (en) 2012-06-11 2018-10-23 Radware, Ltd. Techniques for traffic diversion in software defined networks for mitigating denial of service attacks
CN103152262A (en) * 2013-02-25 2013-06-12 华为技术有限公司 Method and equipment for connection establishment
US11804988B2 (en) 2013-07-10 2023-10-31 Nicira, Inc. Method and system of overlay flow control
US11212140B2 (en) 2013-07-10 2021-12-28 Nicira, Inc. Network-link method useful for a last-mile connectivity in an edge-gateway multipath system
US11050588B2 (en) 2013-07-10 2021-06-29 Nicira, Inc. Method and system of overlay flow control
US10389634B2 (en) 2013-09-04 2019-08-20 Nicira, Inc. Multiple active L3 gateways for logical networks
US10306512B2 (en) * 2013-10-15 2019-05-28 Ntt Docomo, Inc. Mobile station
US20160262046A1 (en) * 2013-10-15 2016-09-08 Ntt Docomo, Inc. Mobile station
CN103618562A (en) * 2013-12-16 2014-03-05 国家电网公司 Intelligent digital signal wire distribution device
US9450862B2 (en) * 2014-03-11 2016-09-20 Futurewei Technologies, Inc. Virtual private network migration and management in centrally controlled networks
US20150263867A1 (en) * 2014-03-11 2015-09-17 Futurewei Technologies, Inc. Virtual Private Network Migration and Management in Centrally Controlled Networks
US10164881B2 (en) 2014-03-14 2018-12-25 Nicira, Inc. Route advertisement by managed gateways
US11025543B2 (en) 2014-03-14 2021-06-01 Nicira, Inc. Route advertisement by managed gateways
US10567283B2 (en) 2014-03-14 2020-02-18 Nicira, Inc. Route advertisement by managed gateways
US10652143B2 (en) 2015-04-04 2020-05-12 Nicira, Inc Route server mode for dynamic routing between logical and physical networks
US11601362B2 (en) 2015-04-04 2023-03-07 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US11374904B2 (en) 2015-04-13 2022-06-28 Nicira, Inc. Method and system of a cloud-based multipath routing protocol
US11677720B2 (en) 2015-04-13 2023-06-13 Nicira, Inc. Method and system of establishing a virtual private network in a cloud service for branch networking
US11444872B2 (en) 2015-04-13 2022-09-13 Nicira, Inc. Method and system of application-aware routing with crowdsourcing
US11496392B2 (en) 2015-06-27 2022-11-08 Nicira, Inc. Provisioning logical entities in a multidatacenter environment
US20170171304A1 (en) * 2015-12-09 2017-06-15 Le Holdings (Beijing) Co., Ltd. Service updating method and system for server cluster
US10805220B2 (en) 2016-04-28 2020-10-13 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10333849B2 (en) * 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US11502958B2 (en) 2016-04-28 2022-11-15 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US20170317954A1 (en) * 2016-04-28 2017-11-02 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10560320B2 (en) 2016-06-29 2020-02-11 Nicira, Inc. Ranking of gateways in cluster
US20190150034A1 (en) * 2016-07-11 2019-05-16 Huawei Technologies Co., Ltd. Service Traffic Control Method and System and Decision Network Element
US11019533B2 (en) * 2016-07-11 2021-05-25 Huawei Technologies Co., Ltd. Service traffic control method and system and decision network element
US10645204B2 (en) 2016-12-21 2020-05-05 Nicira, Inc Dynamic recovery from a split-brain failure in edge nodes
US10237123B2 (en) 2016-12-21 2019-03-19 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10616045B2 (en) 2016-12-22 2020-04-07 Nicira, Inc. Migration of centralized routing components of logical router
US11115262B2 (en) 2016-12-22 2021-09-07 Nicira, Inc. Migration of centralized routing components of logical router
US11700196B2 (en) 2017-01-31 2023-07-11 Vmware, Inc. High performance software-defined core network
US11706127B2 (en) 2017-01-31 2023-07-18 Vmware, Inc. High performance software-defined core network
US11606286B2 (en) 2017-01-31 2023-03-14 Vmware, Inc. High performance software-defined core network
US10992568B2 (en) 2017-01-31 2021-04-27 Vmware, Inc. High performance software-defined core network
US11706126B2 (en) 2017-01-31 2023-07-18 Vmware, Inc. Method and apparatus for distributed data network traffic optimization
US11252079B2 (en) 2017-01-31 2022-02-15 Vmware, Inc. High performance software-defined core network
US11121962B2 (en) 2017-01-31 2021-09-14 Vmware, Inc. High performance software-defined core network
US11349722B2 (en) 2017-02-11 2022-05-31 Nicira, Inc. Method and system of connecting to a multipath hub in a cluster
US11533248B2 (en) 2017-06-22 2022-12-20 Nicira, Inc. Method and system of resiliency in cloud-delivered SD-WAN
US10938693B2 (en) 2017-06-22 2021-03-02 Nicira, Inc. Method and system of resiliency in cloud-delivered SD-WAN
US10959098B2 (en) 2017-10-02 2021-03-23 Vmware, Inc. Dynamically specifying multiple public cloud edge nodes to connect to an external multi-computer node
US11102032B2 (en) 2017-10-02 2021-08-24 Vmware, Inc. Routing data message flow through multiple public clouds
US10958479B2 (en) 2017-10-02 2021-03-23 Vmware, Inc. Selecting one node from several candidate nodes in several public clouds to establish a virtual network that spans the public clouds
US11115480B2 (en) 2017-10-02 2021-09-07 Vmware, Inc. Layer four optimization for a virtual network defined over public cloud
US11005684B2 (en) 2017-10-02 2021-05-11 Vmware, Inc. Creating virtual networks spanning multiple public clouds
US11516049B2 (en) 2017-10-02 2022-11-29 Vmware, Inc. Overlay network encapsulation to forward data message flows through multiple public cloud datacenters
US11855805B2 (en) 2017-10-02 2023-12-26 Vmware, Inc. Deploying firewall for virtual network defined over public cloud infrastructure
US11089111B2 (en) 2017-10-02 2021-08-10 Vmware, Inc. Layer four optimization for a virtual network defined over public cloud
US11606225B2 (en) 2017-10-02 2023-03-14 Vmware, Inc. Identifying multiple nodes in a virtual network defined over a set of public clouds to connect to an external SAAS provider
US10999100B2 (en) 2017-10-02 2021-05-04 Vmware, Inc. Identifying multiple nodes in a virtual network defined over a set of public clouds to connect to an external SAAS provider
US11895194B2 (en) 2017-10-02 2024-02-06 VMware LLC Layer four optimization for a virtual network defined over public cloud
US11894949B2 (en) 2017-10-02 2024-02-06 VMware LLC Identifying multiple nodes in a virtual network defined over a set of public clouds to connect to an external SaaS provider
US10999165B2 (en) 2017-10-02 2021-05-04 Vmware, Inc. Three tiers of SaaS providers for deploying compute and network infrastructure in the public cloud
US10992558B1 (en) 2017-11-06 2021-04-27 Vmware, Inc. Method and apparatus for distributed data network traffic optimization
US11323307B2 (en) 2017-11-09 2022-05-03 Nicira, Inc. Method and system of a dynamic high-availability mode based on current wide area network connectivity
US11902086B2 (en) 2017-11-09 2024-02-13 Nicira, Inc. Method and system of a dynamic high-availability mode based on current wide area network connectivity
US11223514B2 (en) 2017-11-09 2022-01-11 Nicira, Inc. Method and system of a dynamic high-availability mode based on current wide area network connectivity
US20210194802A1 (en) * 2019-04-22 2021-06-24 Mingwei Xu Method and system for implementing l3vpn based on two-dimensional routing protocol
US11595301B2 (en) * 2019-04-22 2023-02-28 Tsinghua University Method and system for implementing L3VPN based on two-dimensional routing protocol
US11153230B2 (en) 2019-08-27 2021-10-19 Vmware, Inc. Having a remote device use a shared virtual network to access a dedicated virtual network defined over public clouds
US11252105B2 (en) 2019-08-27 2022-02-15 Vmware, Inc. Identifying different SaaS optimal egress nodes for virtual networks of different entities
US11252106B2 (en) 2019-08-27 2022-02-15 Vmware, Inc. Alleviating congestion in a virtual network deployed over public clouds for an entity
US11121985B2 (en) 2019-08-27 2021-09-14 Vmware, Inc. Defining different public cloud virtual networks for different entities based on different sets of measurements
US11831414B2 (en) 2019-08-27 2023-11-28 Vmware, Inc. Providing recommendations for implementing virtual networks
US11606314B2 (en) 2019-08-27 2023-03-14 Vmware, Inc. Providing recommendations for implementing virtual networks
US11171885B2 (en) 2019-08-27 2021-11-09 Vmware, Inc. Providing recommendations for implementing virtual networks
US11310170B2 (en) 2019-08-27 2022-04-19 Vmware, Inc. Configuring edge nodes outside of public clouds to use routes defined through the public clouds
US11018995B2 (en) 2019-08-27 2021-05-25 Vmware, Inc. Alleviating congestion in a virtual network deployed over public clouds for an entity
US11258728B2 (en) 2019-08-27 2022-02-22 Vmware, Inc. Providing measurements of public cloud connections
US11212238B2 (en) 2019-08-27 2021-12-28 Vmware, Inc. Providing recommendations for implementing virtual networks
US10999137B2 (en) 2019-08-27 2021-05-04 Vmware, Inc. Providing recommendations for implementing virtual networks
US11044190B2 (en) 2019-10-28 2021-06-22 Vmware, Inc. Managing forwarding elements at edge nodes connected to a virtual network
US11611507B2 (en) 2019-10-28 2023-03-21 Vmware, Inc. Managing forwarding elements at edge nodes connected to a virtual network
US11716286B2 (en) 2019-12-12 2023-08-01 Vmware, Inc. Collecting and analyzing data regarding flows associated with DPI parameters
US11394640B2 (en) 2019-12-12 2022-07-19 Vmware, Inc. Collecting and analyzing data regarding flows associated with DPI parameters
US11489783B2 (en) 2019-12-12 2022-11-01 Vmware, Inc. Performing deep packet inspection in a software defined wide area network
US11606712B2 (en) 2020-01-24 2023-03-14 Vmware, Inc. Dynamically assigning service classes for a QOS aware network link
US11689959B2 (en) 2020-01-24 2023-06-27 Vmware, Inc. Generating path usability state for different sub-paths offered by a network link
US11722925B2 (en) 2020-01-24 2023-08-08 Vmware, Inc. Performing service class aware load balancing to distribute packets of a flow among multiple network links
US11438789B2 (en) 2020-01-24 2022-09-06 Vmware, Inc. Computing and using different path quality metrics for different service classes
US11418997B2 (en) 2020-01-24 2022-08-16 Vmware, Inc. Using heart beats to monitor operational state of service classes of a QoS aware network link
US11336556B2 (en) 2020-04-06 2022-05-17 Vmware, Inc. Route exchange between logical routers in different datacenters
US11736383B2 (en) 2020-04-06 2023-08-22 Vmware, Inc. Logical forwarding element identifier translation between datacenters
US11528214B2 (en) 2020-04-06 2022-12-13 Vmware, Inc. Logical router implementation across multiple datacenters
US11870679B2 (en) 2020-04-06 2024-01-09 VMware LLC Primary datacenter for logical router
US11303557B2 (en) 2020-04-06 2022-04-12 Vmware, Inc. Tunnel endpoint group records for inter-datacenter traffic
US11316773B2 (en) 2020-04-06 2022-04-26 Vmware, Inc. Configuring edge device with multiple routing tables
US11743168B2 (en) 2020-04-06 2023-08-29 Vmware, Inc. Edge device implementing a logical network that spans across multiple routing tables
US11394634B2 (en) 2020-04-06 2022-07-19 Vmware, Inc. Architecture for stretching logical switches between multiple datacenters
US11374850B2 (en) 2020-04-06 2022-06-28 Vmware, Inc. Tunnel endpoint group records
US11477127B2 (en) 2020-07-02 2022-10-18 Vmware, Inc. Methods and apparatus for application aware hub clustering techniques for a hyper scale SD-WAN
US11245641B2 (en) 2020-07-02 2022-02-08 Vmware, Inc. Methods and apparatus for application aware hub clustering techniques for a hyper scale SD-WAN
US11363124B2 (en) 2020-07-30 2022-06-14 Vmware, Inc. Zero copy socket splicing
US11709710B2 (en) 2020-07-30 2023-07-25 Vmware, Inc. Memory allocator for I/O operations
US11575591B2 (en) 2020-11-17 2023-02-07 Vmware, Inc. Autonomous distributed forwarding plane traceability based anomaly detection in application traffic for hyper-scale SD-WAN
US11444865B2 (en) 2020-11-17 2022-09-13 Vmware, Inc. Autonomous distributed forwarding plane traceability based anomaly detection in application traffic for hyper-scale SD-WAN
US11575600B2 (en) 2020-11-24 2023-02-07 Vmware, Inc. Tunnel-less SD-WAN
US11929903B2 (en) 2020-12-29 2024-03-12 VMware LLC Emulating packet flows to assess network links for SD-WAN
US11601356B2 (en) 2020-12-29 2023-03-07 Vmware, Inc. Emulating packet flows to assess network links for SD-WAN
US11792127B2 (en) 2021-01-18 2023-10-17 Vmware, Inc. Network-aware load balancing
US11979325B2 (en) 2021-01-28 2024-05-07 VMware LLC Dynamic SD-WAN hub cluster scaling with machine learning
US11381499B1 (en) 2021-05-03 2022-07-05 Vmware, Inc. Routing meshes for facilitating routing through an SD-WAN
US12009987B2 (en) 2021-05-03 2024-06-11 VMware LLC Methods to support dynamic transit paths through hub clustering across branches in SD-WAN
US11509571B1 (en) 2021-05-03 2022-11-22 Vmware, Inc. Cost-based routing mesh for facilitating routing through an SD-WAN
US11388086B1 (en) 2021-05-03 2022-07-12 Vmware, Inc. On demand routing mesh for dynamically adjusting SD-WAN edge forwarding node roles to facilitate routing through an SD-WAN
US11637768B2 (en) 2021-05-03 2023-04-25 Vmware, Inc. On demand routing mesh for routing packets through SD-WAN edge forwarding nodes in an SD-WAN
US11582144B2 (en) 2021-05-03 2023-02-14 Vmware, Inc. Routing mesh to provide alternate routes through SD-WAN edge forwarding nodes based on degraded operational states of SD-WAN hubs
US11729065B2 (en) 2021-05-06 2023-08-15 Vmware, Inc. Methods for application defined virtual network service among multiple transport in SD-WAN
US11489720B1 (en) 2021-06-18 2022-11-01 Vmware, Inc. Method and apparatus to evaluate resource elements and public clouds for deploying tenant deployable elements based on harvested performance metrics
US12015536B2 (en) 2021-06-18 2024-06-18 VMware LLC Method and apparatus for deploying tenant deployable elements across public clouds based on harvested performance metrics of types of resource elements in the public clouds
US11375005B1 (en) 2021-07-24 2022-06-28 Vmware, Inc. High availability solutions for a secure access service edge application
US11943146B2 (en) 2021-10-01 2024-03-26 VMware LLC Traffic prioritization in SD-WAN
US11909815B2 (en) 2022-06-06 2024-02-20 VMware LLC Routing based on geolocation costs
US12034587B1 (en) 2023-03-27 2024-07-09 VMware LLC Identifying and remediating anomalies in a self-healing network
US12034630B2 (en) 2023-07-17 2024-07-09 VMware LLC Method and apparatus for distributed data network traffic optimization

Also Published As

Publication number Publication date
EP2157746B1 (en) 2015-08-05
US20140341226A1 (en) 2014-11-20
JP5074327B2 (en) 2012-11-14
EP2157746A1 (en) 2010-02-24
CN101656732A (en) 2010-02-24
US9185031B2 (en) 2015-11-10
CN101656732B (en) 2016-02-17
JP2010050749A (en) 2010-03-04

Similar Documents

Publication Publication Date Title
US9185031B2 (en) Routing control system for L3VPN service network
CN113765829B (en) Activity detection and route convergence in a software-defined networking distributed system
US11411770B2 (en) Virtual port channel bounce in overlay network
US10757006B1 (en) Enhanced traffic flow in software-defined networking controller-based architecture
EP3920483B1 (en) Local repair for underlay failure using prefix independent convergence
US10454806B2 (en) SDN controller, data center system, and routing connection method
US10534601B1 (en) In-service software upgrade of virtual router with reduced packet loss
RU2636689C2 (en) Automatic establishment of redundant paths with careful restoration in packet switching network
US9042234B1 (en) Systems and methods for efficient network traffic forwarding
US11201782B1 (en) Automation of maintenance mode operations for network devices
CN111756566B (en) Software upgrade deployment in a hybrid network with and without ISSU devices
JP5488979B2 (en) Computer system, controller, switch, and communication method
WO2015045466A1 (en) Communications control device, communications control system, communications control method, and communications control program
EP3038296B1 (en) Pool element status information synchronization method, pool register and pool element
WO2014175423A1 (en) Communication node, communication system, packet processing method and program
Benet et al. Minimizing live VM migration downtime using OpenFlow based resiliency mechanisms
US11171863B2 (en) System and method for lag performance improvements
US20170208020A1 (en) Communication control apparatus, communication system, communication control method, and medium
US20150372900A1 (en) Communication system, control apparatus, communication control method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD.,JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OKITA, HIDEKI;REEL/FRAME:023110/0767

Effective date: 20090722

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION