WO2016194089A1 - 通信ネットワーク、通信ネットワークの管理方法および管理システム - Google Patents
通信ネットワーク、通信ネットワークの管理方法および管理システム Download PDFInfo
- Publication number
- WO2016194089A1 (PCT/JP2015/065681)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- service
- communication
- nms
- path
- communication path
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/302—Route determination based on requested QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/10—Active monitoring, e.g. heartbeat, ping or trace-route
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0677—Localisation of faults
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0686—Additional information in the notification, e.g. enhancement of specific meta-data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5003—Managing SLA; Interaction between SLA and QoS
- H04L41/5019—Ensuring fulfilment of SLA
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0882—Utilisation of link capacity
Definitions
- The present invention relates to a packet communication system, and more particularly to a communication system that accommodates a plurality of different services, and to a packet communication system and communication device capable of guaranteeing an SLA.
- Conventional communication networks have been built independently for each communication service provided, because the quality required differs from service to service, as do the methods of network construction and maintenance. For example, a communication service for business users, typified by a dedicated line used for mission-critical operations such as national defense and finance, requires a 100% communication bandwidth guarantee and a yearly operation rate of, for example, 99.99% or more.
- A communication service provider provides a communication service by concluding with the user a Service Level Agreement (SLA) that defines such communication quality guarantees (bandwidth, delay, etc.) and an operation rate guarantee. If the SLA cannot be satisfied, the telecommunications carrier is required to reduce fees or pay compensation, so the SLA guarantee is very important.
- SLA Service Level Agreement
- A conventional communication system employs a route calculation method, such as the Dijkstra algorithm, that sums the costs of the links on a route and selects the route whose total is minimum or maximum.
- The communication bandwidth and the delay are converted into a cost for each link on the route before calculation.
- For example, the physical bandwidth of a link is expressed as the cost of the link, and the route that maximizes or minimizes the total cost of the links on the route is calculated.
- In this way, a route that can accommodate more traffic is calculated.
- However, this route calculation method considers only the total cost of the links on the route, so if the cost of a single link is extremely small or large, that link becomes a bottleneck and problems such as traffic delay occur.
- To address this, there is an improved Dijkstra method that solves the problem by considering not only the total cost of the links on the route but also the cost of each individual link on the route (Patent Literature 1). By using this method, it is possible to avoid bottlenecks and search for a path capable of guaranteeing an SLA.
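As a rough illustration (ours, not taken from the patent or Patent Literature 1), the idea of screening individual link costs during a Dijkstra-style search can be sketched as follows; the graph encoding and the `max_link_cost` threshold are assumptions of the example:

```python
import heapq

def constrained_dijkstra(links, src, dst, max_link_cost):
    """Shortest path by total cost, skipping any link whose individual
    cost exceeds max_link_cost (the per-link bottleneck check)."""
    adj = {}
    for (a, b), cost in links.items():
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))
    best = {src: 0}
    queue = [(0, src, [src])]
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == dst:
            return total, path
        if total > best.get(node, float("inf")):
            continue
        for nxt, cost in adj.get(node, []):
            if cost > max_link_cost:  # per-link screening avoids bottlenecks
                continue
            cand = total + cost
            if cand < best.get(nxt, float("inf")):
                best[nxt] = cand
                heapq.heappush(queue, (cand, nxt, path + [nxt]))
    return None  # no route satisfies the per-link constraint
```

With the constraint, a route containing one very expensive link is rejected even if its total cost would be acceptable.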
- A leased line service that includes the operation rate in its SLA uses an Operation Administration and Maintenance (OAM) tool with which every communication device detects communication path failures and automatically switches to a prepared alternative route.
- OAM Operation Administration and Maintenance
- The operator then runs a connection verification OAM tool, such as a loopback test, on the route where the failure occurred, identifies the physical failure location, and performs maintenance work such as component replacement.
- VPN Virtual Private Network
- MPLS Multiprotocol Label Switching
- each service and its users are accommodated in the network as logical paths.
- Ethernet registered trademark
- MPLS MPLS network path
- An MPLS path is a route in the MPLS network specified by a path ID: a packet arriving at an MPLS device from the Ethernet is encapsulated with an MPLS label containing this path ID and is then transferred along this route in the MPLS network. The route through the MPLS network is therefore uniquely determined by which path ID is assigned to each user or service, and multiple services can be multiplexed by accommodating multiple logical paths on one physical line. This logical network built per service is called a virtual network.
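The edge encapsulation and path-ID-driven forwarding just described can be sketched as follows (data structures deliberately simplified to dictionaries; field names are illustrative assumptions, not the patent's format):

```python
def encapsulate(eth_frame, path_id, user_label):
    """At the edge device, push an LSP label carrying the path ID plus a
    PW label identifying the user in front of the Ethernet frame."""
    return {"lsp_label": path_id, "pw_label": user_label, "payload": eth_frame}

def forward(packet, label_to_next_hop):
    """Relay devices forward on the LSP label alone, so the route through
    the network is fixed by the path ID assigned at the edge."""
    return label_to_next_hop[packet["lsp_label"]]
```

Because forwarding consults only the LSP label, every user assigned the same path ID follows the same route, which is what lets multiple logical paths share one physical line.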
- In Non-Patent Document 1, an OAM tool that improves maintainability is defined: by periodically transmitting and receiving OAM packets at the start and end points of a logical path, a failure occurrence is detected at high speed for each logical path.
- the failure detected at the start point and end point of the logical path is notified from the communication device to the operator via the network management system.
- The operator uses a loopback test OAM tool (Non-Patent Document 2) that transmits a loopback OAM packet to a relay point on the logical path in order to identify the fault location on the logical path where the fault occurred.
- The physical failure location is identified from the failure location on the logical path, so that maintenance work such as component replacement can be performed.
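A minimal sketch of this localization procedure, with a hypothetical `loopback_ok` callable standing in for the loopback OAM exchange of Non-Patent Document 2:

```python
def localize_fault(relay_points, loopback_ok):
    """Walk the relay points of a logical path in order, issuing a
    loopback test to each; the fault lies on the segment just before the
    first point that fails to answer. Returns that segment as a pair,
    or None if every relay point answered."""
    prev = "ingress"
    for point in relay_points:
        if not loopback_ok(point):
            return (prev, point)  # faulty segment bracketed by these two
        prev = point
    return None
```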
- Although the conventional communication system can guarantee the operation rate with these OAM tools, only communication quality such as bandwidth and delay has been considered when calculating routes.
- IETF RFC 6428, Proactive Connectivity Verification, Continuity Check, and Remote Defect Indication for the MPLS Transport Profile
- IETF RFC 6426, MPLS On-Demand Connectivity Verification and Route Tracing
- Logical paths are set in a distributed manner throughout the virtual network.
- A packet communication system according to the present invention is a communication system that includes a plurality of communication devices and their management system, and transfers packets between the plurality of communication devices via communication paths set from the management system.
- When the management system sets communication paths, it switches among a plurality of path setting policies according to the service: to improve maintainability, paths that share the same route even in part of the network are aggregated; to accommodate traffic efficiently, the routes used are distributed throughout the network.
- A service whose paths are aggregated is a service that secures a fixed bandwidth for each user or service. If the sum of the bandwidths of the services aggregated on the same route exceeds any of the line bandwidths on the path, a search is made for another route on which the sum of the aggregated service bandwidths does not exceed any line bandwidth, and the path is set there.
- A service whose paths are distributed spreads them according to the remaining bandwidth obtained by subtracting the bandwidth reserved for the path-aggregating service from each line bandwidth on the route.
- When a path is changed in response to a request from a connected external system, such as a user, the Internet, or a data center, the management system of the packet communication system of the present invention automatically applies the path setting policy.
- When failures are detected on a plurality of paths, a communication apparatus of the packet communication system of the present invention preferentially handles a path failure related to a service that requires an operation rate guarantee.
- The management system preferentially processes failure notifications of services that require an operation rate guarantee, and automatically executes a loopback test or prompts the operator to execute one.
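One way to realize such preferential processing, sketched here with an assumed two-level priority (the patent does not specify this structure), is a priority queue over failure notifications:

```python
import heapq

class FailureQueue:
    """Failure notifications for operation-rate-guaranteed services are
    popped before best-effort (fair) ones; within a level, FIFO order is
    preserved by a sequence counter."""
    PRIORITY = {"guaranteed": 0, "fair": 1}  # lower number pops first

    def __init__(self):
        self._q = []
        self._seq = 0

    def push(self, sla_type, path_id):
        heapq.heappush(self._q, (self.PRIORITY[sla_type], self._seq, path_id))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._q)[2]
```

A guaranteed-service path failure pushed after a fair-service one is still processed first.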
- Another aspect of the present invention is a management method for a communication network that includes a plurality of communication devices and a management system, in which packets are transferred between the plurality of communication devices via communication paths set from the management system.
- When the management system sets communication paths, it has a first setting policy that, for a first service requiring a guarantee of the operation rate, aggregates communication paths sharing the same route even in part of the communication network, and a second setting policy that, for a second service without an operation rate guarantee, sets communication paths so that the routes used are distributed over the entire communication network; the setting policy is switched between them.
- Still another aspect of the present invention is a communication network management system that sets, for a plurality of communication devices constituting a communication network, communication paths for a first service that guarantees bandwidth to users and communication paths for a second service that does not, with the communication paths of the first and second services coexisting in the communication network.
- For the first service, this communication network management system applies a first setting policy that sets a new communication path on a route selected from routes having free bandwidth corresponding to the guaranteed bandwidth.
- For the second service, it applies a second setting policy that sets a new communication path on a route selected based on the free bandwidth per user of the second service.
- The first setting policy selects, from the routes having free bandwidth corresponding to the guaranteed bandwidth, the route with the smallest free bandwidth, and sets the new communication path there.
- The second setting policy selects the route having the largest free bandwidth per user of the second service, and sets the new communication path there. Under these policies, communication paths for the first service are set to share routes as much as possible, while communication paths for the second service are set so that the free bandwidth is as uniform as possible among users.
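Under the two stated policies, route selection might be sketched as follows; the route/link encoding, the function names, and the per-user weighting are assumptions of the example, not the patent's implementation:

```python
def pick_route_guaranteed(routes, free_bw, demand):
    """First policy: among routes whose bottleneck (minimum) free
    bandwidth can hold the demanded guaranteed bandwidth, pick the one
    with the LEAST free bandwidth, so guaranteed paths aggregate onto
    already-used routes. Returns None if no route fits."""
    feasible = [r for r in routes if min(free_bw[l] for l in r) >= demand]
    if not feasible:
        return None
    return min(feasible, key=lambda r: min(free_bw[l] for l in r))

def pick_route_fair(routes, free_bw, users_on):
    """Second policy: pick the route with the largest free bandwidth per
    accommodated best-effort user (counting the new user), spreading
    fair-service paths over the network."""
    def per_user(route):
        return min(free_bw[l] / (users_on[l] + 1) for l in route)
    return max(routes, key=per_user)
```

`free_bw` and `users_on` correspond to per-link quantities like the free bandwidth and non-priority user count kept by the management system.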
- Still another aspect of the present invention is a communication network having a plurality of communication devices constituting a route and a management system for setting communication paths to be used by a user for the plurality of communication devices.
- The management system sets a communication path for a first service and a communication path for a second service with different SLAs for use by users. When setting a communication path used for the first service, it sets the path so that the paths used for the first service are aggregated onto a specific route in the network; when setting a communication path used for the second service, it sets the path so that the paths used for the second service are distributed over the routes in the network.
- The first service is a service whose operation rate and bandwidth are guaranteed, and the plurality of communication paths used by the plurality of users provided with the first service are set on the same route.
- The second service is a best-effort service, and communication paths used for the second service are set so that the allocation per user of the second service is even within the free bandwidth excluding the communication bandwidth used by the paths of the first service.
- FIG. 3 is a table showing an example of a path construction policy table provided in the network management system of FIG. 2.
- FIG. 4 is a table showing an example of a user management table provided in the network management system of FIG. 2.
- FIG. 5 is a table showing an example of a connection destination management table provided in the network management system of FIG. 2.
- FIG. 6 is a table showing an example of a path configuration table provided in the network management system of FIG. 2.
- FIG. 7 is a table showing an example of a link management table provided in the network management system of FIG. 2.
- FIG. 13 is a table showing an example of a connection ID determination table provided in the network interface board (10-n) of FIG. 11.
- FIG. 14 is a table showing an example of an input header processing table provided in the network interface board (10-n) of FIG. 11.
- FIG. 15 is a table showing an example of a label setting table provided in the network interface board (10-n) of FIG. 11.
- FIG. 16 is a table showing an example of a bandwidth monitoring table provided in the network interface board (10-n) of FIG. 11.
- A table showing an example of a packet transfer table provided in the switch unit 11 of FIG. 11.
- A flowchart showing an example of the input packet processing S100 executed by the input packet processing unit 103 of FIG. 11.
- A table showing an example of a failure management table provided in the network interface board (10-n) of FIG. 11.
- A sequence diagram showing an example of the network setting sequence SQ100, initiated by the operator, that the communication system of the embodiment executes.
- A flowchart of an example of the service-dependent path search process S200 executed by the network management system of FIG. 2.
- A continuation of the flowchart of the path search process S200 executed by the network management system of FIG. 2.
- A flowchart of an example of the failure management polling process S300 executed by the network interface board (10-n) of FIG. 11.
- A flowchart of the failure notification queue read processing S400 executed by the device management unit 12 of FIG. 11.
- notations such as “first”, “second”, and “third” are attached to identify the constituent elements, and do not necessarily limit the number or order.
- a number for identifying a component is used for each context, and a number used in one context does not necessarily indicate the same configuration in another context. Further, it does not preclude that a component identified by a certain number also functions as a component identified by another number.
- FIG. 1 shows an example of a communication system of the present invention.
- This system is a communication system that includes a plurality of communication devices and their management systems, and transfers packets between the plurality of communication devices via a communication path set from the management system.
- When the management system sets communication paths, it can switch the setting policy according to the service: for a service that requires a guarantee of the operation rate, paths sharing the same route even in part of the network are aggregated so that a failure location can be identified quickly; for a service aimed at accommodating traffic fairly for a large number of users, paths are set so that the routes used are distributed over the entire network.
- The communication devices ND#1 to ND#n of the present embodiment constitute a carrier network NW that connects the access devices AE1 to AEn, which accommodate the user terminals TE1 to TEn, with the data center DC or the Internet IN.
- The device configurations of the edge devices and the relay devices may be identical; whether a device operates as an edge device or as a relay device is determined by presetting or by the input packet.
- For convenience, the communication devices ND#1 and ND#n operate as edge devices, and the communication devices ND#2, ND#3, ND#4, and ND#5 operate as relay devices, according to their positions in the network NW.
- the communication devices ND # 1 to ND # n are connected to the network management system NMS via the management network MNW.
- The Internet IN, where servers that process user requests are installed, and the data center DC owned by the application provider are also connected to the management network MNW.
- Each logical path is set by the network management system NMS (described later in SQ100 of FIG. 20).
- the paths PTH # 1 and PTH # 2 are set to pass through the relay apparatuses ND # 2 and ND # 3
- The path PTH#n is set to pass through the relay apparatuses ND#4 and ND#5. All of these are paths between the edge device ND#1 and the edge device ND#n.
- Since the path PTH#1 is a path whose bandwidth is guaranteed for the business-user communication service, the network management system NMS assigns 500 Mbps to the path PTH#1.
- The business users using the user terminals TE1 and TE2 each have a contract for a 250 Mbps communication service, so the aggregate value of 500 Mbps is secured for the path PTH#1 that accommodates them.
- The paths PTH#2 and PTH#n used by the general-user terminals TE3, TE4, and TEn are intended for consumer communication services and are operated on a best-effort basis; only connectivity between the edge devices ND#1 and ND#n is ensured.
- The communication path for business users and the communication path for individual users, which have different SLA guarantee levels, are allowed to share the same communication devices on their routes.
- Such path setting and change are performed by an operator OP, who is usually a network administrator, giving instructions to the network management system NMS via the monitoring terminal MT.
- NMS network management system
- Current telecommunications carriers are trying to obtain new revenue by providing an optimal network in response to requests from users and application providers, so instructions for setting and changing paths arrive not only from the operator but also from the Internet IN and the data center DC.
- FIG. 2 shows an example of the configuration of the network management system NMS.
- Since the network management system NMS is usually realized on a general-purpose server, its configuration comprises a Micro Processing Unit (MPU) NMS-mpu that executes programs, a Hard Disk Drive (HDD) NMS-hdd that stores the programs and the information needed to process them, a memory for temporarily storing this information for the MPU NMS-mpu, an input unit NMS-in and an output unit NMS-out that exchange signals with the monitoring terminal MT operated by the operator OP, and a Network Interface Card (NIC) NMS-nic connected to the management network MNW.
- MPU Micro Processing Unit
- HDD Hard Disk Drive
- NIC Network Interface Card
- In this embodiment, the HDD NMS-hdd stores, as information necessary for managing the network NW, a path construction policy table NMS-t1, a user management table NMS-t2, a connection destination management table NMS-t3, a path configuration table NMS-t4, and a link management table NMS-t5. These pieces of information are entered and changed by the operator OP, and are also changed in response to changes in the state of the network NW and to requests from users and application providers.
- FIG. 3 shows an example of the path construction policy NMS-t1.
- The path construction policy table NMS-t1 is used to retrieve, using the SLA type NMS-t11 as a search key, a table entry indicating the communication quality NMS-t12, the operation rate guarantee NMS-t13, and the path construction policy NMS-t14.
- The SLA type NMS-t11 distinguishes the communication service for business users from the communication service for general consumers, and allows retrieval of the method of guaranteeing the communication quality NMS-t12 (bandwidth guarantee or fair use), the presence or absence of an operation rate guarantee NMS-t13 together with its reference value, and the path construction policy NMS-t14, such as aggregation or distribution.
- Hereinafter, the communication service for business users is referred to as the guaranteed service, and the communication service for general consumers as the fair service. Details of how to use this table will be described later.
- FIG. 4 shows an example of the user management table NMS-t2.
- The user management table NMS-t2 is used to retrieve, using the user ID NMS-t21 as a search key, a table entry indicating the SLA type NMS-t22, the accommodation path ID NMS-t23, the contracted bandwidth NMS-t24, and the connection destination NMS-t25.
- The user ID NMS-t21 identifies each user terminal TEn connected via the access device AEn, and allows retrieval of the SLA type NMS-t22, the accommodation path ID NMS-t23 of the user terminal TEn, the contracted bandwidth NMS-t24 assigned to each user terminal TEn, and the connection destination NMS-t25 of the user terminal TEn.
- In the accommodation path ID NMS-t23, one of the path IDs NMS-t41, the search key of the path configuration table NMS-t4 described later, is set as the path that accommodates the user. Details of how to use this table will be described later.
- FIG. 5 shows an example of the connection destination management table NMS-t3.
- The connection destination management table NMS-t3 is used to retrieve, using the combination of the connection destination NMS-t31 and the connection port ID NMS-t32 as a search key, a table entry indicating the accommodation device ID NMS-t33 and the accommodation port ID NMS-t34.
- The connection destination NMS-t31 and the connection port ID NMS-t32 identify a point that is a traffic source or sink for the network NW, and allow retrieval of the accommodation device ID NMS-t33 and the accommodation port ID NMS-t34 of the point in the network NW where it is accommodated. Details of how to use this table will be described later.
- FIG. 6 shows the path configuration table NMS-t4.
- The path configuration table NMS-t4 is used to retrieve, using the path ID NMS-t41 as a search key, a table entry indicating the SLA type NMS-t42, the terminating device ID NMS-t43, the transit device ID NMS-t44, the transit link ID NMS-t45, the LSP label NMS-t46, the allocated bandwidth NMS-t47, and the accommodated users NMS-t48.
- The path ID NMS-t41 uniquely identifies a path in the network NW; unlike the LSP label actually attached to packets, the path ID NMS-t41 takes the same value in both directions of communication.
- For each path ID NMS-t41, the SLA type NMS-t42, the terminating device ID NMS-t43, the transit device ID NMS-t44, the transit link ID NMS-t45, and the LSP label NMS-t46 of the path are set. In the allocated bandwidth NMS-t47, the sum of the contracted bandwidths of all users listed in the accommodated users NMS-t48 is set.
- The LSP label NMS-t46 is the LSP label actually attached to packets, and a different value is set for each direction of communication. In general, a different LSP label can be set each time a packet is relayed by a communication device ND#n, but in this embodiment, for simplicity, it is assumed that the LSP label is not changed at each relay and that the same LSP label is used between the network edge devices. Details of how to use this table will be described later.
- FIG. 7 shows the link management table NMS-t5.
- The link management table NMS-t5 is used to retrieve, using the link ID NMS-t51 as a search key, a table entry indicating the free bandwidth NMS-t52 and the number of transparent non-priority users NMS-t53.
- the link ID: NMS-t51 indicates a connection relationship between the ports of each communication device, and is a combination of the ID of the communication device ND # n at both ends of each link and its port ID.
- For example, the link connecting port 2 of the communication device ND#1 and port 4 of the communication device ND#3 has the link ID NMS-t51 "LNK#N1-2-N3-4".
- Paths composed of the same link IDs, that is, paths having the same combination of transmission ports and destination ports, are paths on the same route.
- For each link ID NMS-t51, the value obtained by subtracting the sum of the contracted bandwidths of the paths passing through the link from the physical bandwidth of the link is held as the free bandwidth NMS-t52, and the number of users on fair-service paths passing through the link is held as the number of transparent non-priority users NMS-t53; both can be retrieved. Details of how to use this table will be described later.
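The maintenance of these two per-link quantities can be sketched as follows; the dictionary encoding and field names are assumptions of the example:

```python
def build_link_table(physical_bw, paths):
    """Rebuild the link management entries: free bandwidth (NMS-t52) is
    the physical bandwidth minus the contracted bandwidths of guaranteed
    paths crossing the link; the non-priority user count (NMS-t53)
    tallies the users of fair-service paths crossing it."""
    free = dict(physical_bw)
    fair_users = {link: 0 for link in physical_bw}
    for p in paths:
        for link in p["links"]:
            if p["sla"] == "guaranteed":
                free[link] -= p["bandwidth"]
            else:
                fair_users[link] += p["users"]
    return free, fair_users
```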
- FIG. 8 shows the format of the communication packet 40 received by the edge devices ND#1 and ND#n from the access devices AE#1 to AE#n, the data center DC, and the Internet IN in this embodiment.
- The communication packet 40 includes a MAC header consisting of a destination MAC address 401, a source MAC address 402, a VLAN tag 403, and a type value 404 indicating the type of the subsequent header, followed by a payload portion 405 and a frame check sequence (FCS) 406.
- the VLAN tag 403 stores a VLAN ID value (VID #) serving as a flow identifier and a CoS value indicating priority.
- FIG. 9 shows a format of a communication packet 41 transmitted and received by each communication device ND # n within the network NW.
- PW Pseudo Wire
- The communication packet 41 includes a MAC header consisting of a destination MAC address 411, a source MAC address 412, and a type value 413 indicating the type of the subsequent header, followed by an MPLS label (LSP label) 414-1, an MPLS label (PW label) 414-2, a payload portion 415, and an FCS 416.
- the MPLS labels 414-1 and 414-2 store a label value serving as a path identifier and a TC value indicating priority.
- This format covers both the case where the Ethernet packet of the communication packet 40 shown in FIG. 8 is encapsulated and the case where OAM information generated by each communication device ND#n is stored.
- This format has two levels of MPLS labels: the first-level MPLS label (LSP label) 414-1 is an identifier specifying a path in the network NW, and the second-level MPLS label (PW label) 414-2 is used to identify a user or an OAM packet.
- LSP label first level MPLS label
- PW label second level MPLS label
- If the label value of the second-level MPLS label 414-2 is a reserved value such as "13", the packet is an OAM packet; otherwise it is a user packet (the Ethernet packet of the communication packet 40 is encapsulated in the payload 415).
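Assuming the standard 32-bit MPLS label-stack entry layout (20-bit label, 3-bit TC, S bit, 8-bit TTL), this classification rule can be sketched as:

```python
OAM_RESERVED_LABEL = 13  # reserved label value used as the OAM label here

def parse_mpls_entry(word):
    """Split a 32-bit MPLS label-stack entry into (label, TC, S, TTL)."""
    return ((word >> 12) & 0xFFFFF, (word >> 9) & 0x7,
            (word >> 8) & 0x1, word & 0xFF)

def classify_packet(second_level_word):
    """Apply the rule above: a reserved second-level label marks an OAM
    packet; any other value is a PW label identifying a user."""
    label, _tc, _s, _ttl = parse_mpls_entry(second_level_word)
    return "oam" if label == OAM_RESERVED_LABEL else "user"
```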
- FIG. 10 shows a format of the OAM packet 42 transmitted / received in the network NW by the communication device ND # n.
- The OAM packet 42 includes a MAC header consisting of a destination MAC address 421, a source MAC address 422, and a type value 423 indicating the type of the subsequent header, a first-level MPLS label (LSP label) 414-1 identical to that of the communication packet 41, a second-level MPLS label (OAM label) 414-3, an OAM type 424, a payload 425, and an FCS 426.
- LSP label first-stage MPLS label
- The second-level MPLS label (OAM label) 414-3 carries a reserved value such as "13" in place of the label value of the second-level MPLS label (PW label) 414-2 in FIG. 9.
- the OAM type 424 is an identifier indicating the type of the OAM packet, and in this embodiment, the identifier of the failure monitoring packet or the loopback test packet (loopback request packet or loopback response packet) is stored.
- the payload 425 stores information dedicated to OAM.
- In this embodiment, the terminating device ID is stored in a failure monitoring packet, the loopback device ID in a loopback request packet, and the terminating device ID in a loopback response packet.
- FIG. 11 shows the configuration of the communication device ND # n.
- the communication device ND # n includes a plurality of network interface boards (NIF) 10 (10-1 to 10-n), a switch unit 11 connected to these NIFs, and a device management unit 12 that manages the entire device.
- NIF network interface boards
- Each NIF 10 includes a plurality of input / output line interfaces 101 (101-1 to 101-n) serving as communication ports, and is connected to other devices via these communication ports.
- the input / output line interface 101 is a line interface for Ethernet.
- the input / output line interface 101 is not limited to an Ethernet line interface.
- Each NIF 10-n includes an input packet processing unit 103 connected to these input/output line interfaces 101, a plurality of SW interfaces 102 (102-1 to 102-n) connected to the switch unit 11, an output packet processing unit 104 connected to these SW interfaces, a failure management unit 107 that performs OAM-related processing, an NIF management unit 105 that manages the NIF, and a setting register 106 that holds various settings.
- The SW interface 102-i corresponds to the input/output line interface 101-i, and an input packet received by the input/output line interface 101-i is transferred to the switch unit 11 via the SW interface 102-i.
- the output packet distributed from the switch unit 11 to the SW interface 102-i is sent to the output line via the input / output line interface 101-i.
- the input packet processing unit 103 and the output packet processing unit 104 have an independent structure for each line, and packets of each line do not mix.
- When the input/output line interface 101-i receives the communication packet 40 or 41 from the input line, it adds the in-device header 45 shown in FIG. 12 to the received packet.
- FIG. 12 shows an example of the in-device header 45.
- the in-device header 45 includes a plurality of fields indicating a connection ID: 451, a reception port ID: 452, a priority 453, and a packet length 454.
- At this point, the ID of the receiving port, acquired from the setting register 106, is stored in the reception port ID 452, and the length of the packet is stored as the packet length 454. The connection ID 451 and the priority 453 are left blank; effective values are set in these fields by the input packet processing unit 103.
- the input packet processing unit 103 performs the input packet processing S100 described later: referring to the following tables 21 to 24, it adds the connection ID: 451 and the priority 453 to the in-device header 45 of each input packet, and performs other header processing, bandwidth monitoring processing, and the like. As a result of the input packet processing S100, input packets are distributed and transferred to the SW IF: 102 of each line.
- FIG. 13 shows the connection ID determination table 21.
- the connection ID determination table 21 uses the combination of the input port ID: 212 and the VLAN ID: 213 as a search key to acquire the connection ID: 211 that is the registered address.
- the table is implemented with a CAM (content-addressable memory).
- the connection ID: 211 is an ID that identifies each connection in the communication apparatus ND # n, and uses the same ID in both directions. Details of how to use this table will be described later.
- FIG. 14 shows the input header processing table 22.
- the input header processing table 22 is used to search for a table entry indicating the VLAN tag processing 222 and the VLAN tag 223 using the connection ID: 221 as a search key.
- the VLAN tag processing 222 designates VLAN tag processing for an input packet, and tag information necessary for the VLAN tag processing is set in the VLAN tag 223. Details of how to use this table will be described later.
- FIG. 15 shows the label setting table 23.
- the label setting table 23 is used to search a table entry indicating the LSP label 232 and the PW label 233 using the connection ID: 231 as a search key. Details of how to use this table will be described later.
- FIG. 16 shows the bandwidth monitoring table 24.
- the bandwidth monitoring table 24 is used for retrieving a table entry indicating the contract bandwidth 242, the bucket depth 243, the previous token value 244, and the previous time 245 using the connection ID: 241 as a search key. It is.
- for a guaranteed service, the contract bandwidth 242 is set to the same value as the contract bandwidth of each user; using a general token bucket algorithm, a high priority is overwritten on the priority 453 of the in-device header 45 for packets within the contract bandwidth, and packets determined to exceed the contract bandwidth are discarded. For a fair service, on the other hand, an invalid value is set in the contract bandwidth 242 and a low priority is overwritten on the priority 453 of the in-device header 45 of all packets.
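The token bucket policing described above can be sketched as follows. This is an illustrative model only, not the embodiment's hardware implementation; the class name, parameter values, and the use of Python are assumptions, with `rate_bps` and `depth_bytes` standing in for the contract bandwidth 242 and bucket depth 243, and the stored token count and timestamp playing the role of the previous token value 244 and previous time 245.

```python
import time

class TokenBucketPolicer:
    """Illustrative token-bucket policer for the bandwidth monitoring table 24."""

    def __init__(self, rate_bps, depth_bytes):
        self.rate_bytes_per_s = rate_bps / 8.0   # contract bandwidth 242
        self.depth = depth_bytes                 # bucket depth 243
        self.tokens = depth_bytes                # previous token value 244
        self.last_time = time.monotonic()        # previous time 245

    def police(self, packet_len, now=None):
        """Return 'high' if the packet is within contract, 'discard' otherwise."""
        now = time.monotonic() if now is None else now
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last_time) * self.rate_bytes_per_s)
        self.last_time = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return "high"      # within contract: high priority written to priority 453
        return "discard"       # exceeds contract bandwidth: packet is discarded
```

A fair-service connection would bypass this check entirely and mark every packet low priority, as described above.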
- the switch unit 11 receives input packets from the SW interfaces 102-1 to 102-n of each NIF, specifies the output port ID and output label from the packet transfer table 26, and forwards them to the corresponding SW interface 102-i as output packets. At this time, according to the TC indicating the priority in the MPLS label 414-1, packets having a high priority are transferred preferentially during congestion. Further, the output LSP label 264 is overwritten on the assigned MPLS label (LSP label) 414-1.
- FIG. 17 shows the packet transfer table 26.
- the packet forwarding table 26 is used to search for a table entry indicating the output port ID: 263 and the output LSP label 264 using the combination of the input port ID: 261 and the input LSP label 262 as a search key.
- the switch unit 11 searches the packet forwarding table 26 using the reception port ID: 451 of the in-device header 45 and the LSP ID of the MPLS label (LSP label) 414-1 of the input packet, and determines the output destination.
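The forwarding lookup and label swap described above can be sketched as follows; this is a hedged model, not the switch unit's implementation, and the table entries shown are hypothetical values for illustration.

```python
# Illustrative model of the packet transfer table 26: the switch unit resolves
# (reception port ID 451, input LSP label) to an output port and swaps the label.
FORWARDING_TABLE = {
    ("PT#1", 1001): ("PT#3", 2001),   # (input port ID 261, input LSP label 262)
    ("PT#2", 1002): ("PT#4", 2002),   #   -> (output port ID 263, output LSP label 264)
}

def forward(rx_port_id, lsp_label):
    """Return (output port ID 263, output LSP label 264) for a received packet.
    The returned output label overwrites the packet's LSP label 414-1."""
    entry = FORWARDING_TABLE.get((rx_port_id, lsp_label))
    if entry is None:
        raise KeyError("no forwarding entry for this port/label pair")
    return entry
```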
- the output packets received by each SW interface 102 are supplied to the output packet processing unit 104 one after another.
- when the processing mode of the NIF: 10-n is set to the Ethernet processing mode in the setting register 106, the output packet processing unit 104 deletes the destination MAC address 411, the source MAC address 412, the type value 413, the MPLS label (LSP label) 414-1, and the MPLS label (PW label) 414-2 of the output packet, and outputs the packet to the corresponding input / output line interface 101-i.
- when the processing mode of the NIF: 10-n is set to the MPLS processing mode in the setting register 106, the packet is output to the corresponding input / output line interface 101-i without such packet processing.
- FIG. 18 shows a flowchart of the input packet processing S100 executed by the input packet processing unit 103 of the communication device ND # n. Such processing can be executed by the communication device ND # n having hardware resources such as a microcomputer and using the hardware resources by information processing by software.
- the input packet processing unit 103 determines the processing mode of the NIF: 10-n set in the setting register 106 (S101).
- in the Ethernet processing mode, each piece of information is extracted from the in-device header 45 and the VLAN tag 403, the connection ID determination table 21 is searched using the extracted reception port ID: 452 and VID, and the connection ID: 211 of the packet is specified (S102).
- the connection ID: 211 is written in the in-device header 45, and the input header processing table 22 and the label setting table 23 are searched to acquire the entry contents (S103).
- the VLAN tag 403 is edited according to the contents of the input header processing table 22 (S104).
- bandwidth monitoring processing is performed for each connection ID: 211 (here, each user), and the priority 453 of the in-device header 45 (FIG. 12) is set (S105).
- the destination MAC address 411 and the source MAC address 412 set in the setting register 106 are assigned, the type value 413 is set to "8847" (hexadecimal) indicating MPLS, the LSP label 232 of the label setting table 23 is assigned to the MPLS label (LSP label) 414-1, the PW label 233 of the label setting table 23 is assigned to the MPLS label (PW label) 414-2, and the priority 453 of the in-device header 45 is assigned to the TC.
- when the MPLS processing mode is set in S101, it is determined whether or not the second-stage MPLS label 414-2 of the communication packet 41 is the reserved value ("13") (S107). If it is not the reserved value, the packet is transferred as it is as a user packet (S108), and the process ends (S111).
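The Ethernet-mode encapsulation steps above can be sketched as follows; this is an illustrative model under stated assumptions, with hypothetical table contents and field names, not the embodiment's hardware.

```python
# Hypothetical contents of the label setting table 23:
# connection ID 231 -> (LSP label 232, PW label 233)
LABEL_SETTING_TABLE = {
    "CN#1": (1001, 5001),
}

def encapsulate(connection_id, dst_mac, src_mac, priority):
    """Build the MPLS encapsulation header pushed onto a user packet."""
    lsp, pw = LABEL_SETTING_TABLE[connection_id]
    return {
        "dst_mac": dst_mac,      # destination MAC address 411 (from setting register 106)
        "src_mac": src_mac,      # source MAC address 412 (from setting register 106)
        "ethertype": 0x8847,     # type value 413 "8847" (hex) indicating MPLS
        "lsp_label": lsp,        # MPLS label (LSP label) 414-1
        "pw_label": pw,          # MPLS label (PW label) 414-2
        "tc": priority,          # TC carries the in-device priority 453
    }
```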
- FIG. 19 shows the failure management table 25.
- the failure management table 25 is for searching, with the path ID: 251 as a search key, a table entry indicating the SLA type 252, the terminating device ID: 253, the transit device ID: 254, the transit link ID: 255, the LSP label value 256, and the failure occurrence presence / absence 257.
- the path ID: 251, the SLA type 252, the terminating device ID: 253, the transit device ID: 254, the transit link ID: 255, and the LSP label value 256 are the same as the path ID: NMS-t41, the SLA type NMS-t42, the terminating device ID: NMS-t43, the transit device ID: NMS-t44, the transit link ID: NMS-t45, and the LSP label value NMS-t46 of the path configuration table NMS-t4.
- Failure occurrence presence / absence 257 is information indicating whether or not a failure has occurred in the path.
- the NIF management unit 105 reads out failures by the failure management table polling process described later, and notifies the device management unit 12 with a priority according to the SLA type 252.
- the device management unit 12 determines the priority according to the SLA type 252 in the entire device through the failure notification queue read processing S400, and finally notifies the network management system NMS. Details of how to use this table will be described later.
- the failure management unit 107 periodically transmits a failure monitoring packet on each path 251 registered in the failure management table 25. In this packet, the LSP label value 256 is set in the LSP label 414-1, an identifier indicating a failure monitoring packet is set in the OAM type 424, the terminating device ID: ND # n is set in the payload 425, and the other areas are filled with the set values of the setting register 106 (see FIG. 10).
- the failure management unit 107 overwrites "present" as a failure occurrence in the failure occurrence presence / absence 257 of the failure management table 25 when failure monitoring packets do not arrive on a path for a certain period.
- when the failure management unit 107 receives an OAM packet addressed to itself from the input packet processing unit 103, it checks the OAM type 424 of the payload 425 and determines whether the packet is a failure monitoring packet or a loopback test packet (a loopback request packet or a loopback response packet). If the packet is a failure monitoring packet, "none" is overwritten as failure recovery in the failure occurrence presence / absence 257 of the failure management table 25.
- in the loopback test described later, the failure management unit 107 performs a loopback test on a path specified by the network management system. It generates and transmits a loopback request packet in which the LSP label value 256 of the test target path ID: NMS-t41 specified by the network management system is set in the LSP label 414-1, an identifier indicating that this packet is a loopback request packet is set in the OAM type 424, the transit device ID to be looped back: NMS-t44 is set in the payload 425, and the other areas are filled with the set values of the setting register 106.
- when the failure management unit 107 receives an OAM packet addressed to itself from the input packet processing unit 103, it checks the OAM type 424 of the payload 425; if the packet is determined to be a loopback request packet, it returns a loopback response packet in which the LSP label value 256 of the direction opposite to the received direction is set in the LSP label 414-1, an identifier indicating a loopback response packet is set in the OAM type 424, the terminating device ID: 253 to be returned to is set in the payload 425, and the other areas are filled with the set values of the setting register 106.
- when the received packet is a loopback response packet, the loopback test has succeeded, and the network management system NMS is notified accordingly via the NIF management unit 105 and the device management unit 12.
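The loopback request / response handling above can be sketched as follows. This is a hedged illustration: the OAM type strings, dictionary field names, and function signature are assumptions, not the packet formats of the embodiment.

```python
# Illustrative OAM type identifiers (field 424); actual encodings are not specified here.
LOOPBACK_REQ, LOOPBACK_RSP = "loopback_request", "loopback_response"

def handle_oam(packet, my_device_id, reverse_lsp_label):
    """If a loopback request targets this device, answer with a loopback response
    sent back on the reverse-direction LSP label; otherwise return None."""
    if (packet["oam_type"] == LOOPBACK_REQ
            and packet["payload"]["target_id"] == my_device_id):
        return {
            "lsp_label": reverse_lsp_label,   # LSP label value 256, opposite direction
            "oam_type": LOOPBACK_RSP,         # identifier for a loopback response packet
            # payload 425: return to the device that originated the request
            "payload": {"target_id": packet["payload"]["origin_id"]},
        }
    return None
```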
- FIG. 20 shows a sequence SQ100 when setting the network NW from the operator OP.
- the operator OP transmits to the network management system NMS the change request type (new addition or deletion of a user; a change is made as a deletion followed by a new addition), the user ID, the connection destination (for example, the access device # 1 and the data center DC), the service type (SLA type), and the changed contract bandwidth (SQ101).
- the network management system NMS that has received the setting change selects the path construction policy according to the SLA of the service by referring to the path construction policy table NMS-t1 and the like, in the service-dependent path search process S2000 described later.
- the path is searched using the connection destination management table NMS-t3 and the link management table NMS-t5.
- the result is set in the corresponding communication device ND # 1-n (SQ102-1 to n).
- this setting information includes the connection relationship and bandwidth settings between each user and each path, namely the connection ID determination table 21, the input header processing table 22, the label setting table 23, the bandwidth monitoring table 24, the failure management table 25, and the packet transfer table 26 described above.
- when these pieces of information are set, traffic from the user can be transmitted and received along the set route, and periodic transmission / reception of failure monitoring packets is started between the edge devices ND # 1 and ND # n that are the end points of the path (SQ103-1, SQ103-n).
- a setting completion notification is transmitted from the network management system NMS to the operator OP (SQ104), and this sequence is completed.
- FIG. 21 shows a sequence SQ200 when the network NW is set by a request from the user terminal TEn.
- as a means for the telecommunications carrier to accept from the user a service request that requires a change of the network NW, a server that provides a homepage operated by the telecommunications carrier is installed in the Internet IN. Other alternative means are possible; here it is assumed that the user has a means of accessing the Internet from a smartphone, home, or office.
- the server installed in the Internet IN that has received the request converts it into setting information of the network NW (SQ202) and transmits the setting change to the network management system NMS via the management network MNW (SQ203).
- the subsequent processing, namely the service-dependent path search process S2000, the setting of the communication devices ND # n (SQ102), and the start of constant monitoring by monitoring packets (SQ103), is the same as in SQ100 (FIG. 20).
- thereafter, a setting completion notification is transmitted from the network management system NMS to the server on the Internet IN via the management network MNW (SQ104), further notified to the user terminal TEn (SQ205), and this sequence ends.
- FIG. 22 shows a sequence SQ300 when the network NW is set according to a request from the data center DC.
- the subsequent processing, namely the service-dependent path search process S2000, the setting of the communication devices ND # n (SQ102), and the start of constant monitoring by monitoring packets (SQ103), is the same as in SQ100 (FIG. 20).
- the network management system NMS notifies the setting completion notification to the data center DC via the management network MNW (SQ302), and this sequence ends.
- FIG. 23 shows a failure location identification sequence SQ400 when a failure occurs in the relay device ND # 3.
- failure monitoring packets periodically transmitted / received between the edge devices ND # 1 and ND # n stop arriving (SQ401-1, SQ401-n).
- each of the edge devices ND # 1 and ND # n detects that a failure has occurred in the guaranteed service path PTH # 1 (SQ402-1, SQ402-n).
- each of the edge devices ND # 1 and ND # n performs a failure notification process S3000 described later, and preferentially notifies the network management system NMS of the failure of the guaranteed service path PTH # 1 (SQ403-1). , SQ403-n).
- upon receiving these notifications, the network management system NMS notifies the operator OP that a failure has occurred in the guaranteed service path PTH # 1 (SQ404), and automatically executes the following failure location determination processing (SQ405).
- to confirm the normality between the edge device ND # 1 and the adjacent relay device ND # 2, the network management system NMS notifies the edge device ND # 1 of a loopback test request and the information necessary for it (the test target path ID: NMS-t41 and the transit device ID to be looped back: NMS-t44) (SQ4051-1).
- the edge device ND # 1 transmits a return request packet as described above (SQ4051-1req).
- the relay device ND # 2 that has received this loopback request packet returns a loopback response packet as described above, because the loopback test is addressed to itself (SQ4051-1rpy).
- the edge device ND # 1 that has received the loopback response packet notifies the network management system NMS of a loopback test success notification (SQ4051-1suc).
- the network management system NMS that has received the success notification of the loopback test then notifies the edge device ND # 1 of a loopback test request and the information necessary for it, in order to narrow down the failure location by confirming the normality up to the next relay device ND # 3 (SQ4051-2).
- the edge device ND # 1 transmits a return request packet as described above (SQ4051-2req).
- This loopback test packet is not returned to the edge device ND # 1 because the relay device ND # 3 has failed (SQ4051-2def).
- because a loopback response packet is not returned within a predetermined time, the edge device ND # 1 sends a loopback test failure notification to the network management system NMS (SQ4051-2fail).
- the network management system NMS that has received the loopback test failure notification identifies the failure location as the relay device ND # 3 (SQ4052), notifies the operator OP of it as the failure location (SQ4053), and ends this sequence.
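The failure location determination above can be sketched as follows: the NMS tests each relay device on the failed path in order, and the first device that fails to answer a loopback test is reported as the failure location. This is an illustrative sketch; `run_loopback_test` is a hypothetical stand-in for the SQ4051-* exchanges, not an API of the embodiment.

```python
def locate_failure(transit_devices, run_loopback_test):
    """Walk the transit devices of a failed path in order from the edge device.
    run_loopback_test(device_id) -> True if a loopback response returned in time.
    Returns the first non-answering device (the failure location), or None."""
    for device_id in transit_devices:
        if not run_loopback_test(device_id):   # no response within the timeout
            return device_id                   # e.g. relay device ND#3
    return None                                # every hop answered
```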
- FIGS. 24 and 25 show the service-dependent path search process S2000 executed by the network management system NMS.
- this processing can be realized by the network management system NMS having the hardware resources shown in FIG. 2 and using those hardware resources for information processing by software.
- the network management system NMS that has received a setting change from the operator OP, the Internet IN, or the data center DC acquires the request type, the connection destination, the SLA type, and the contract bandwidth as the setting change (S201), and checks the request type (S202).
- if the request type is deletion, the corresponding entry is deleted from the user management table NMS-t2 (FIG. 4), and the information of the entry of the path configuration table NMS-t4 (FIG. 6) corresponding to the accommodation path NMS-t23 of the user is updated. If it is a guaranteed service, the contract bandwidth NMS-t24 of the user management table NMS-t2 (FIG. 4) is subtracted from the allocated bandwidth NMS-t47 of the path configuration table NMS-t4 (FIG. 6), and the user ID is deleted from the accommodated user NMS-t48. If it is a fair service, the user ID is deleted from the accommodated user NMS-t48.
- otherwise, the connection destination management table NMS-t3 (FIG. 5) is searched using the information of the connection destination, and candidates for combinations of accommodating device (node) ID: NMS-t33 and accommodating port ID: NMS-t34 that can be the connection points are extracted (S203). For example, when the access device AE # 1 is designated as the start point and the data center DC as the end point in FIG. 1, the candidates are as follows.
- Start port candidate: (1) accommodating device ID: ND # 1, accommodating port ID: PT # 1
- End port candidates: (A) accommodating device ID: ND # n, accommodating port ID: PT # 10; (B) accommodating device ID: ND # n, accommodating port ID: PT # 11
- This means that a path must be searched between the start port candidate and each end port candidate; that is, in this case, the paths between (1) and (A) and between (1) and (B) are the candidates.
- next, the SLA type acquired in S201 is checked (S204). If it is the guaranteed type, a route that has free bandwidth corresponding to the designated contract bandwidth and whose free bandwidth is the minimum is searched for, using the link management table NMS-t5 (FIG. 7) and a general route search algorithm (multi-route selection method, Dijkstra method, or the like) (S205). Specifically, when there are routes that reach the end port from the start port via links determined to be usable from the link management table NMS-t5, the route with the smallest total cost (in the present embodiment, the smallest free bandwidth) is selected, so that guaranteed service paths are aggregated onto existing paths.
- the threshold value may be defined as an absolute numerical value, or may be defined relatively (for example, the lower 10%).
- FIG. 25 shows the processing when the fair type is determined in S204. In that case, a route with the maximum "free bandwidth NMS-t52 / number of transparent non-priority users NMS-t53" is searched for, using the link management table NMS-t5 and a general route search algorithm (multi-route selection method, Dijkstra method, or the like) (S212).
- specifically, when there are routes that reach the end port from the start port via usable links, the route with the largest total cost (in the present embodiment, the largest "free bandwidth NMS-t52 / number of transparent non-priority users NMS-t53") is selected, so that fair service users are distributed among the existing paths.
- instead of the route having the maximum value, a certain degree of distribution effect can also be obtained by, for example, randomly selecting a route whose value is not less than a predetermined threshold value.
- the threshold value may be defined as an absolute numerical value, or may be defined relatively (for example, the upper 10%).
- the path configuration table NMS-t4 is used to determine whether the obtained path is an existing path (S213).
- if it is an existing path, a new entry is added to the user management table NMS-t2 with the existing path set as the accommodation path NMS-t23, the information of the entry of the corresponding path configuration table NMS-t4 is updated (the new user ID is added to the accommodated user NMS-t48), and all the entries corresponding to the transit link IDs: NMS-t45 in the link management table NMS-t5 are updated (1 is added to the number of transparent non-priority users NMS-t53).
- further, the various tables 21 to 26 of the corresponding communication devices # n are updated, the result of the processing is notified to the operator (S214), and the process ends (S216).
- guaranteed service paths can be aggregated on the same route, and fair service paths can be distributed according to the ratio of the number of accommodated users.
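The fair-type selection in S212 can be sketched as follows, complementing the guaranteed-type sketch. Assumptions as before: routes are lists of link IDs, and the cost of a route is taken as the sum over its links of "free bandwidth / number of transparent non-priority users"; the names and data shapes are illustrative only.

```python
def select_fair_route(candidate_routes, free_bw, users):
    """free_bw: link ID -> free bandwidth NMS-t52.
    users: link ID -> number of transparent non-priority users NMS-t53.
    Pick the route maximizing the per-user share of free bandwidth, which
    spreads fair-service users over lightly loaded routes."""
    def score(route):
        # guard against division by zero on links with no users yet
        return sum(free_bw[link] / max(1, users[link]) for link in route)
    return max(candidate_routes, key=score) if candidate_routes else None
```

As noted above, a variant could instead choose at random among routes whose score is not less than a threshold.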
- FIG. 26 shows details of the failure management table polling process S300 in the failure notification process (FIG. 23, S3000) executed by the NIF management unit 105 of the communication device ND # n.
- the NIF management unit 105 starts this polling process when the apparatus is powered on, initializes the variable i to "0" (S301), and adds 1 to the variable i (S302).
- the failure management table 25 (FIG. 19) is searched with the path ID: 251 set to PTH # i, and the entry is acquired (S303).
- if a failure has occurred, a failure occurrence notification containing PTH # i as the path ID and the SLA type (FIG. 19, 252) is sent to the device management unit 12 (S305), and the processing from S302 is continued.
- the device management unit 12 that has received the failure occurrence notification stores the received information in the failure notification queue (priority) 27-1 if the SLA type is a guaranteed service (for example, SLA # 1), and stores it in the failure notification queue (non-priority) 27-2 if the SLA type is a fair service (for example, SLA # 2) (see FIG. 11).
- FIG. 27 shows details of the failure notification queue read processing S400 in the failure notification processing S3000 executed by the device management unit 12 of the communication device ND # n.
- the device management unit 12 determines whether there is a notification in the failure notification queue (priority) 27-1 (S401).
- if there is, a failure notification is sent to the network management system NMS together with the path ID and SLA type stored in the failure notification queue (priority) 27-1 (S402).
- the network management system NMS processes earlier-notified failures first, so that failures of the guaranteed service are dealt with preferentially and the operation rate can be guaranteed.
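The two-queue notification scheme of S300 / S400 can be sketched as follows; this is an illustrative model in which the priority queue is always drained before the non-priority queue, with hypothetical function names.

```python
from collections import deque

# failure notification queue (priority) 27-1 and (non-priority) 27-2
priority_q, non_priority_q = deque(), deque()

def enqueue_failure(path_id, sla_type):
    """Store a failure occurrence notification in the queue matching its SLA type."""
    q = priority_q if sla_type == "guaranteed" else non_priority_q
    q.append((path_id, sla_type))

def read_next_notification():
    """Return the next (path ID, SLA type) to report to the NMS, or None.
    The priority queue is checked first (S401), so guaranteed-service
    failures are always notified ahead of fair-service failures."""
    if priority_q:
        return priority_q.popleft()
    if non_priority_q:
        return non_priority_q.popleft()
    return None
```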
- S2800 is obtained by adding the following steps S2001 to S2006 after the processing of S209, S210, and S211 in S2000 (FIG. 24); the other processing is the same as in S2000, and therefore only the changes are described below.
- the path configuration table NMS-t4 is searched to determine whether there is a fair service path on the same route as the path whose setting has been changed (S2001).
- if there is, the path ID: NMS-t41 of a fair service path whose transit link IDs: NMS-t45 follow the same route is obtained. Then, 1 is subtracted from the number of transparent non-priority users NMS-t53 corresponding to each transit link: NMS-t45 of that path in the link management table NMS-t5, and the result is stored as a temporary link management table NMS-t5 (S2002).
- specifically, the route with the largest total cost (in the present embodiment, the largest "free bandwidth NMS-t52 / number of transparent non-priority users NMS-t53") is selected, so that fair service users are distributed among the existing paths.
- the path configuration table NMS-t4 is used to determine whether the obtained path is an existing path (S2004).
- if it is an existing path, the entry is deleted from the corresponding user management table NMS-t2, and the entry information of the path configuration table NMS-t4 corresponding to the accommodation path NMS-t23 of the user is updated (the user ID is deleted from the accommodated user NMS-t48). All the entries corresponding to the transit link IDs: NMS-t45 in the link management table NMS-t5 are updated (1 is subtracted from the number of transparent non-priority users NMS-t53), and the various tables 21 to 26 of the corresponding communication devices # n are updated. The user deleted above is then added again to the user management table NMS-t2 with the existing path set as the accommodation path NMS-t23, the entry information of the corresponding path configuration table NMS-t4 is updated (the deleted user ID is added to the accommodated user NMS-t48), all the entries corresponding to the transit link IDs: NMS-t45 in the link management table NMS-t5 are updated (1 is added to the number of transparent non-priority users NMS-t53), the various tables 21 to 26 of the corresponding communication devices # n are updated, and the processing result is notified to the operator.
- if it is not an existing path, the corresponding entry is deleted from the corresponding user management table NMS-t2, and the information of the entry of the path configuration table NMS-t4 corresponding to the accommodation path NMS-t23 of the user is updated (the user ID is deleted from the accommodated user NMS-t48). All the entries corresponding to the transit link IDs: NMS-t45 in the link management table NMS-t5 are updated (1 is subtracted from the number of transparent non-priority users NMS-t53), and the various tables 21 to 26 of the corresponding communication devices # n are updated. The user deleted above is then added to the user management table NMS-t2 with the new path set as the accommodation path NMS-t23, and a new entry is added to the path configuration table NMS-t4 (the deleted user ID is added to the accommodated user NMS-t48). All the entries corresponding to the transit link IDs: NMS-t45 in the link management table NMS-t5 are updated (1 is added to the number of transparent non-priority users NMS-t53), the various tables 21 to 26 of the corresponding communication devices # n are updated, and the processing result is notified to the operator.
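The re-accommodation decision in steps S2001 to S2006 can be sketched as follows: the affected fair path's own users are subtracted in a temporary copy of the link user counts, the best fair route is recomputed, and the users are moved only if a different route wins. This is a hedged sketch; the data shapes, the per-link scoring, and the function name are assumptions for illustration.

```python
def rebalance_fair_path(current_route, candidate_routes, free_bw, users, moved_user_count):
    """current_route: the fair path sharing the changed route (list of link IDs).
    users: link ID -> number of transparent non-priority users NMS-t53.
    Returns the route to move the users to, or None to keep the current path."""
    # temporary link management table (S2002): remove this path's own users
    tmp = {link: users[link] - (moved_user_count if link in current_route else 0)
           for link in users}
    best = max(candidate_routes,
               key=lambda r: sum(free_bw[link] / max(1, tmp[link]) for link in r))
    return None if best == current_route else best
```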
- the configuration of the network management system of this embodiment is almost the same as that of the network management system NMS of the first embodiment shown in FIG.
- the difference is that paths are set in advance in the path configuration table; the description below focuses on the different parts.
- the path configuration table is referred to as NMS-t40.
- the configuration of each block is the same as that of the network management system NMS.
- FIG. 30 shows a network pre-setting sequence SQ1000 from the operator.
- the operator OP transmits a connection destination (for example, a combination of the access device # 1 and the data center DC) and a service type as advance setting information (SQ1001).
- the network management system NMS that has received it searches for a path using the connection destination management table NMS-t3 and the link management table NMS-t5 by a pre-path search process S500 described later.
- the result is set in the corresponding communication device ND # 1-n (SQ1002-1-n).
- as in the first embodiment, this setting information includes the connection relationship and bandwidth settings between each user and each path, namely the connection ID determination table 21, the input header processing table 22, the label setting table 23, the bandwidth monitoring table 24, the failure management table 25, and the packet transfer table 26.
- when these pieces of information are set in each communication device ND # n, periodic transmission / reception of failure monitoring packets is started between the edge devices ND # 1 and ND # n serving as the path termination points (SQ1003-1, SQ1003-n).
- a setting completion notification is transmitted from the network management system NMS2 to the operator OP (SQ1004), and this sequence is completed.
- FIG. 31 shows a pre-path search process S500 executed by the network management system NMS.
- the network management system NMS that has received the presetting from the operator OP acquires the connection destination and the SLA type as presetting (S501).
- the connection destination management table NMS-t3 is searched using the information of the connection destination, and candidates for combinations of accommodating node ID: NMS-t33 and accommodating port ID: NMS-t34 that can be the connection points are extracted (S502).
- the candidates are as follows.
- Start port candidate: (1) accommodating device ID: ND # 1, accommodating port ID: PT # 1
- End port candidates: (A) accommodating device ID: ND # n, accommodating port ID: PT # 10; (B) accommodating device ID: ND # n, accommodating port ID: PT # 11
- This means that a path must be searched between the start port candidate and each end port candidate; that is, in this case, the paths between (1) and (A) and between (1) and (B) are the candidates.
- next, a list of routes that can connect the start point and the end point is searched for, using the link management table NMS-t5 and a general route search algorithm (multi-route selection method, Dijkstra method, or the like) (S503).
- a new entry is added to the user management table NMS-t2 with a new path set as the accommodation path NMS-t23, and a new entry is added to the path configuration table NMS-t4 (0 Mbps (unused) is set in the allocated bandwidth NMS-t47, and an invalid value is set in the accommodated user NMS-t48). The various tables 21 to 26 of the corresponding communication devices # n are updated, and the processing result is notified to the operator.
- FIG. 32 shows a path configuration table NMS-t40 generated by the network preset sequence SQ1000 from the operator.
- the path configuration table NMS-t40 is for searching, with the path ID: NMS-t401 as a search key, a table entry indicating the SLA type NMS-t402, the terminating device ID: NMS-t403, the transit device ID: NMS-t404, the transit link ID: NMS-t405, the allocated bandwidth NMS-t406, and the accommodated user NMS-t407.
- since no user is accommodated yet, the allocated bandwidth NMS-t406 is 0 Mbps and there is no accommodated user; likewise, the fair service paths have no accommodated users yet.
- the configuration of the communication system, the block configuration of the communication device ND # n, and the processing are the same as those in the first embodiment.
- in this way, a plurality of candidate paths can be set in advance for all the connection destinations; therefore, in the service-dependent path search processes S2000 and S2800, the probability that a new user is accommodated in an existing path is increased, and the network can be changed more quickly.
- the present invention is not limited to the above-described embodiment, and includes various modifications.
- a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
- functions equivalent to those configured by software can also be realized by hardware such as FPGA (Field Programmable Gate Array) and ASIC (Application Specific Integrated Circuit).
- the functions configured by software may be realized on a single computer, or any part of the input device, output device, processing device, and storage device may reside on another computer connected via a network.
- a failure of a communication service for business users is preferentially notified from the communication device, and the network management system that receives this notification can preferentially execute a loopback test. The failure location on a business-user communication service path can therefore be identified quickly, and maintenance work such as component replacement can be performed promptly. In this way, both communication quality and availability can be satisfied.
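The prioritized notification handling can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the SLA labels, tuple layout, and function name are invented for the example. It shows the essential ordering property: guaranteed-service (business) path failures are dequeued before best-effort ones, regardless of arrival order.

```python
import heapq

# Assumed SLA labels for illustration; lower number = higher priority.
PRIORITY = {"guaranteed": 0, "best_effort": 1}

def order_notifications(notifications):
    """Return failure notifications in processing order:
    guaranteed-service path failures first, arrival order preserved
    within the same priority (the index i breaks ties stably)."""
    heap = [(PRIORITY[sla], i, path_id)
            for i, (path_id, sla) in enumerate(notifications)]
    heapq.heapify(heap)
    return [path_id
            for _, _, path_id in (heapq.heappop(heap)
                                  for _ in range(len(heap)))]

incoming = [("P7", "best_effort"), ("P3", "guaranteed"), ("P9", "best_effort")]
ordered = order_notifications(incoming)  # the guaranteed path P3 comes first
```

The NMS would then run its loopback test (or prompt the operator) on the head of this ordering, so business-user failures are localized first.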
- a path for a consumer communication service, which must accommodate a large volume of traffic efficiently and fairly among users, is set in consideration of the surplus bandwidth remaining after the bandwidth reserved for business-user communication paths. By distributing paths over the entire network so that the surplus-bandwidth ratio becomes equal for each user, a large volume of traffic can be accommodated efficiently and with fairness among users.
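A minimal sketch of this fairness rule, under assumptions of my own (the link representation and field names are not from the patent): each link's surplus is its capacity minus the bandwidth reserved for guaranteed paths, and a new best-effort user is placed on the link whose surplus per user is currently largest, which drives the per-user surplus ratio toward equality across the network.

```python
# Hedged sketch of surplus-bandwidth fair placement; field names and
# numbers are illustrative assumptions, not the patent's data model.
def surplus_per_user(link):
    """Surplus bandwidth of a link divided among its best-effort users."""
    surplus = link["capacity"] - link["reserved_guaranteed"]
    return surplus / max(link["best_effort_users"], 1)

def place_best_effort_user(links):
    """Accommodate a new best-effort user on the link currently
    offering the largest surplus bandwidth per user."""
    best = max(links, key=surplus_per_user)
    best["best_effort_users"] += 1
    return best

links = [
    {"id": "L1", "capacity": 1000, "reserved_guaranteed": 800, "best_effort_users": 1},
    {"id": "L2", "capacity": 1000, "reserved_guaranteed": 200, "best_effort_users": 2},
]
# L1 offers 200/1 = 200 per user, L2 offers 800/2 = 400 → new user goes to L2
chosen = place_best_effort_user(links)
```

Note that when the guaranteed reservations change (as in claim 5), rerunning this placement over the affected users would restore the equal per-user ratio.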
- It can be used for operation management of networks used for various services.
- TE1 to n: User terminals, AE1 to n: Access devices, ND#1 to n: Communication devices, DC: Data center, IN: Internet, MNW: Management network, NMS: Network management system, MT: Monitoring terminal, OP: Operator
Abstract
Description
(1) Accommodating device ID: ND#1, accommodating port ID: PT#1
End-point port candidates:
(A) Accommodating device ID: ND#n, accommodating port ID: PT#10
(B) Accommodating device ID: ND#n, accommodating port ID: PT#11
This means that paths between the start-point port candidate and the end-point port candidates must be searched. In this case, the paths between (1)-(A) and between (1)-(B) are the candidates.
AE1 to n: Access devices
ND#1 to n: Communication devices
DC: Data center
IN: Internet
MNW: Management network
NMS: Network management system
MT: Monitoring terminal
OP: Operator
Claims (15)
- A communication network management method for a system comprising a communication network composed of a plurality of communication devices and a management system, in which packets are transferred between the plurality of communication devices via communication paths set from the management system, wherein, when the management system sets the communication paths, it has a first setting policy that, for a first service requiring an availability guarantee, aggregates communication paths that share even part of the same route on the communication network, and a second setting policy that, for a second service without an availability guarantee, sets the communication paths so that the routes used are distributed over the entire communication network, and the setting policy is changed according to the type of service.
- The communication network management method according to claim 1, wherein the first setting policy aggregates the communication paths when their routes have exactly the same transmission port and destination port on the communication network.
- The communication network management method according to claim 1, wherein the first service is a service that reserves a certain bandwidth for each user or service; based on the first setting policy, when the sum of the bandwidths of the services aggregated on the same route exceeds any link bandwidth on the communication path, the management system searches for a new route on which the sum of the bandwidths of the aggregated services does not exceed any link bandwidth on the communication path, newly sets a communication path on that route, and controls it so as to accommodate the users or services; and, based on the second setting policy, the management system distributes the communication paths for the second service over the surplus bandwidth obtained by subtracting the bandwidth reserved for the first service from each link bandwidth on the route.
- The communication network management method according to claim 1, wherein, when a communication path is changed in response to a request from an external system connected to the communication network, the management system automatically applies the setting policy.
- The communication network management method according to claim 3, wherein, when the setting of the bandwidth reserved for the first service is changed, the management system redistributes and re-sets the communication paths so that the surplus bandwidth changed by that modification is shared at an equal ratio among users of the second service.
- The communication network management method according to claim 1, wherein the management system searches for and pre-sets the route of each service before accommodating users, and accommodates a new user on the communication path when a user accommodation setting request is made.
- The communication network management method according to claim 1, wherein, when failures of a plurality of the communication paths are detected, the communication device preferentially notifies the management system of failures of communication paths related to the first service.
- The communication network management method according to claim 7, wherein the management system that has received the failure notifications preferentially processes the failure notification of the first service and automatically executes a loopback test, or prompts an operator to execute a loopback test.
- A communication network management system that sets, in a plurality of communication devices constituting a communication network, a communication path for a first service that guarantees bandwidth to users and a communication path for a second service that does not guarantee bandwidth to users, making the communication paths for the first and second services coexist in the communication network, wherein the communication network management system applies, when there is a new communication path setting request for the first service, a first setting policy that sets the new communication path on a route selected from the routes having free bandwidth equal to the guaranteed bandwidth, and applies, when there is a new communication path setting request for the second service, a second setting policy that sets the new communication path on a route selected based on the free bandwidth per user of the second service.
- The communication network management system according to claim 9, wherein the first setting policy selects, from the routes having free bandwidth equal to the guaranteed bandwidth, a route whose free bandwidth is minimum or not more than a predetermined threshold and sets the new communication path on it, and the second setting policy selects a route whose free bandwidth per user of the second service is maximum or not less than a predetermined threshold and sets the new communication path on it.
- The communication network management system according to claim 9, holding data that stores, in association with one another, an identifier specifying the user, the SLA type of the service provided to the user, and the setting policy applied to the SLA type.
- A communication network having a plurality of communication devices constituting routes and a management system that sets, in the plurality of communication devices, communication paths for users' use, wherein the management system sets, for the users' use, a communication path for a first service and a communication path for a second service with different SLAs; when setting a communication path used for the first service, it sets the path so that communication paths used for the first service are aggregated onto a specific route on the network; and when setting a communication path used for the second service, it sets the path so that communication paths used for the second service are distributed over routes on the network.
- The communication network according to claim 12, wherein the first service is a service whose availability and bandwidth are guaranteed, and, when a plurality of communication paths used for a plurality of users provided with the first service have the same transmission port and destination port on the network, the plurality of communication paths are set on the same route.
- The communication network according to claim 12, wherein the second service is a best-effort service, and the communication paths used for the second service are set so that the free bandwidth, excluding the bandwidth used by the communication paths for the first service, is allocated equally per user of the second service.
- The communication network according to claim 12, wherein the communication device has a failure management unit that manages failures of the communication paths, and the failure management unit changes the priority of failure handling processing depending on whether the failed communication path is a communication path used for the first service or a communication path used for the second service.
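The two setting policies of claims 9 and 10 can be sketched as follows. This is a hedged illustration, not the claimed implementation: the route representation, field names, and threshold-free variant are assumptions. The first policy packs guaranteed paths tightly (smallest feasible free bandwidth), while the second spreads best-effort paths (largest free bandwidth per user).

```python
# Hedged sketch of the two route-selection policies in claims 9-10;
# the route dictionaries and figures are illustrative assumptions.
def select_route_guaranteed(routes, guaranteed_bw):
    """First policy: among routes with enough free bandwidth for the
    guarantee, pick the one with the least free bandwidth, which
    aggregates guaranteed paths onto already-used routes."""
    feasible = [r for r in routes if r["free_bw"] >= guaranteed_bw]
    return min(feasible, key=lambda r: r["free_bw"]) if feasible else None

def select_route_best_effort(routes):
    """Second policy: pick the route with the largest free bandwidth
    per best-effort user, spreading load across the network."""
    return max(routes, key=lambda r: r["free_bw"] / max(r["users"], 1))

routes = [
    {"id": "R1", "free_bw": 300, "users": 3},
    {"id": "R2", "free_bw": 900, "users": 2},
]
g = select_route_guaranteed(routes, guaranteed_bw=200)  # tightest fit: R1
b = select_route_best_effort(routes)  # 900/2 beats 300/3: R2
```

Using "min free bandwidth" and "max per-user free bandwidth" corresponds to the non-threshold variants of claim 10; the threshold variants would simply replace `min`/`max` with a comparison against the predetermined threshold.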
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017500088A JPWO2016194089A1 (ja) | 2015-05-29 | 2015-05-29 | 通信ネットワーク、通信ネットワークの管理方法および管理システム |
PCT/JP2015/065681 WO2016194089A1 (ja) | 2015-05-29 | 2015-05-29 | 通信ネットワーク、通信ネットワークの管理方法および管理システム |
US15/507,954 US20170310581A1 (en) | 2015-05-29 | 2015-05-29 | Communication Network, Communication Network Management Method, and Management System |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/065681 WO2016194089A1 (ja) | 2015-05-29 | 2015-05-29 | 通信ネットワーク、通信ネットワークの管理方法および管理システム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016194089A1 true WO2016194089A1 (ja) | 2016-12-08 |
Family
ID=57442240
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/065681 WO2016194089A1 (ja) | 2015-05-29 | 2015-05-29 | 通信ネットワーク、通信ネットワークの管理方法および管理システム |
Country Status (3)
Country | Link |
---|---|
US (1) | US20170310581A1 (ja) |
JP (1) | JPWO2016194089A1 (ja) |
WO (1) | WO2016194089A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018170595A (ja) * | 2017-03-29 | 2018-11-01 | Kddi株式会社 | 障害管理装置およびその障害監視用パス設定方法 |
JP2019102083A (ja) * | 2017-11-30 | 2019-06-24 | 三星電子株式会社Samsung Electronics Co.,Ltd. | 差別化されたストレージサービス提供方法及びイーサネットssd |
JP2021057632A (ja) * | 2019-09-26 | 2021-04-08 | 富士通株式会社 | 障害評価装置及び障害評価方法 |
WO2021124416A1 (ja) * | 2019-12-16 | 2021-06-24 | 三菱電機株式会社 | リソース管理装置、制御回路、記憶媒体およびリソース管理方法 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10531314B2 (en) * | 2017-11-17 | 2020-01-07 | Abl Ip Holding Llc | Heuristic optimization of performance of a radio frequency nodal network |
US20190253341A1 (en) * | 2018-02-15 | 2019-08-15 | 128 Technology, Inc. | Service Related Routing Method and Apparatus |
US11451435B2 (en) * | 2019-03-28 | 2022-09-20 | Intel Corporation | Technologies for providing multi-tenant support using one or more edge channels |
CN115428411A (zh) | 2020-04-23 | 2022-12-02 | 瞻博网络公司 | 使用会话建立度量的会话监测 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004274368A (ja) * | 2003-03-07 | 2004-09-30 | Fujitsu Ltd | 品質保証制御装置および負荷分散装置 |
WO2010052826A1 (ja) * | 2008-11-05 | 2010-05-14 | 日本電気株式会社 | 通信装置、ネットワーク及びそれらに用いる経路制御方法 |
WO2015029420A1 (ja) * | 2013-08-26 | 2015-03-05 | 日本電気株式会社 | 通信システムにおける通信装置、通信方法、制御装置および管理装置 |
- 2015
- 2015-05-29 WO PCT/JP2015/065681 patent/WO2016194089A1/ja active Application Filing
- 2015-05-29 US US15/507,954 patent/US20170310581A1/en not_active Abandoned
- 2015-05-29 JP JP2017500088A patent/JPWO2016194089A1/ja not_active Ceased
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018170595A (ja) * | 2017-03-29 | 2018-11-01 | Kddi株式会社 | 障害管理装置およびその障害監視用パス設定方法 |
JP2019102083A (ja) * | 2017-11-30 | 2019-06-24 | 三星電子株式会社Samsung Electronics Co.,Ltd. | 差別化されたストレージサービス提供方法及びイーサネットssd |
US11544212B2 (en) | 2017-11-30 | 2023-01-03 | Samsung Electronics Co., Ltd. | Differentiated storage services in ethernet SSD |
JP2021057632A (ja) * | 2019-09-26 | 2021-04-08 | 富士通株式会社 | 障害評価装置及び障害評価方法 |
JP7287219B2 (ja) | 2019-09-26 | 2023-06-06 | 富士通株式会社 | 障害評価装置及び障害評価方法 |
WO2021124416A1 (ja) * | 2019-12-16 | 2021-06-24 | 三菱電機株式会社 | リソース管理装置、制御回路、記憶媒体およびリソース管理方法 |
JPWO2021124416A1 (ja) * | 2019-12-16 | | | |
JP7053970B2 (ja) | 2019-12-16 | 2022-04-12 | 三菱電機株式会社 | リソース管理装置、制御回路、記憶媒体およびリソース管理方法 |
CN114788244A (zh) * | 2019-12-16 | 2022-07-22 | 三菱电机株式会社 | 资源管理装置、控制电路、存储介质和资源管理方法 |
Also Published As
Publication number | Publication date |
---|---|
US20170310581A1 (en) | 2017-10-26 |
JPWO2016194089A1 (ja) | 2017-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11412416B2 (en) | Data transmission via bonded tunnels of a virtual wide area network overlay | |
WO2016194089A1 (ja) | 通信ネットワーク、通信ネットワークの管理方法および管理システム | |
JP7417825B2 (ja) | スライスベースルーティング | |
US9717021B2 (en) | Virtual network overlay | |
EP2911348B1 (en) | Control device discovery in networks having separate control and forwarding devices | |
US6594268B1 (en) | Adaptive routing system and method for QOS packet networks | |
EP3042476B1 (en) | Buffer-less virtual routing | |
RU2358398C2 (ru) | Способ пересылки трафика, имеющего предварительно определенную категорию обслуживания передачи данных, в сети связи без установления соединений | |
US7209434B2 (en) | Path modifying method, label switching node and administrative node in label transfer network | |
EP3208977A1 (en) | Data forwarding method, device and system in software-defined networking | |
US7944834B2 (en) | Policing virtual connections | |
US20070140235A1 (en) | Network visible inter-logical router links | |
JPH11127195A (ja) | 通信資源管理方法及びノード装置 | |
Lee et al. | Path layout planning and software based fast failure detection in survivable OpenFlow networks | |
CN103581009A (zh) | 对丢弃敏感的前缀(bgp路径)属性修改 | |
EP2838231B1 (en) | Network system and access controller and method for operating the network system | |
US8121138B2 (en) | Communication apparatus in label switching network | |
US9118580B2 (en) | Communication device and method for controlling transmission priority related to shared backup communication channel | |
Lee et al. | Design and implementation of an sd-wan vpn system to support multipath and multi-wan-hop routing in the public internet | |
CN110300073A (zh) | 级联端口的目标选择方法、聚合装置及存储介质 | |
JP6344005B2 (ja) | 制御装置、通信システム、通信方法及びプログラム | |
US7990945B1 (en) | Method and apparatus for provisioning a label switched path across two or more networks | |
JP2005102012A (ja) | スパニングツリープロトコル適用時における網資源管理装置 | |
Budka et al. | An Overview of Smart Grid Network Design Process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2017500088 Country of ref document: JP Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15894124 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 15507954 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 15894124 Country of ref document: EP Kind code of ref document: A1 |