WO2019117509A1 - Response-time-based graph selection and management method for end-to-end network quality-of-service assurance - Google Patents


Info

Publication number
WO2019117509A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
delay value
vnf
network delay
inter
Prior art date
Application number
PCT/KR2018/014779
Other languages
English (en)
Korean (ko)
Inventor
이재용
김창우
오유정
Original Assignee
연세대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단
Publication of WO2019117509A1

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
                    • H04L41/40: Arrangements using virtualisation of network functions or resources, e.g. SDN or NFV entities
                    • H04L41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
                        • H04L41/5003: Managing SLA; Interaction between SLA and QoS
                            • H04L41/5019: Ensuring fulfilment of SLA
                • H04L43/00: Arrangements for monitoring or testing data switching networks
                    • H04L43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
                        • H04L43/0852: Delays
                            • H04L43/0864: Round trip delays
                • H04L47/00: Traffic control in data switching networks
                    • H04L47/10: Flow control; Congestion control
                        • H04L47/24: Traffic characterised by specific attributes, e.g. priority or QoS
                            • H04L47/2425: Traffic characterised by specific attributes for supporting services specification, e.g. SLA
                        • H04L47/28: Flow control; Congestion control in relation to timing considerations
                            • H04L47/283: Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]

Definitions

  • The present invention relates to network technology.
  • NFV (Network Functions Virtualization), SDN (Software Defined Networking), and MANO (Management and Orchestration) technologies that manage Service Function Chains (SFC) have received much attention from industry and academia.
  • Conventionally, the NFV MANO Orchestrator considers only the Virtualized Network Function (VNF) and network capacity.
  • Network slicing technology has been introduced to provide dedicated service-specific networks, so differing network characteristics must now be considered.
  • Network slicing is an approach in which a plurality of logically separated networks are created on top of a shared physical substrate network, with each assigned as a dedicated network for a specific service.
  • The current NFV Orchestrator, however, manages the SFC considering only VNF and network capacity. Once network slicing is introduced, end-to-end (E2E) network delay and quality of service (QoS) therefore need to be considered as well.
  • The present invention seeks to improve the end-to-end network service quality of a network composed of a plurality of network slices.
  • According to one aspect, a VNF delivery graph selection method is executed in an NFV MANO that creates, maintains, and deletes network services composed of VNFs, to ensure the quality of end-to-end network service.
  • The method includes: obtaining a list of available VNFs for configuring a VNF delivery graph when an SLA including a network QoS requirement is received; obtaining an inter-network delay value and an intra-network delay value from the Virtualized Infrastructure Manager (VIM) once the VNF list is obtained; and selecting a VNF delivery graph that satisfies the network QoS requirement using the inter-network delay value and the intra-network delay value.
  • The obtaining of the delay values may include selecting at least one pair of VIMs adjacent to each other based on the sliced SDN network, and requesting either or both of the inter-network delay value and the intra-network delay value from the selected VIMs.
  • A VIM having received the delay value request may calculate the inter-network delay value by creating a temporary VM host, setting a temporary SDN flow rule, and measuring the round-trip time to its paired counterpart VIM.
  • The selecting of a VNF delivery graph may comprise: generating VNF delivery graphs that meet the network QoS requirement using the inter-network delay value and the intra-network delay value, and selecting one of the generated VNF delivery graphs by round-robin scheduling.
  • According to another aspect, an NFV system ensures the quality of end-to-end network services.
  • The NFV system includes: an NFV MANO that, upon receiving an SLA including a network QoS requirement, obtains a list of available VNFs for setting up a VNF delivery graph, acquires an inter-network delay value and an intra-network delay value using the obtained VNF list, and selects a VNF delivery graph that satisfies the network QoS requirement using the acquired delay values; a VNFM that provides the VNF list to the NFV MANO; and a VIM that provides the inter-network delay value and the intra-network delay value.
  • The NFV MANO selects at least one pair of VIMs adjacent to each other based on the sliced SDN network, and can request either or both of the inter-network delay value and the intra-network delay value from the selected VIMs.
  • The VIM may create a temporary VM host and set a temporary SDN flow rule to calculate the inter-network delay value by measuring the round-trip time to the paired counterpart VIM.
  • The NFV MANO generates VNF delivery graphs that satisfy the network QoS requirement using the inter-network delay value and the intra-network delay value, and selects one of them by round-robin scheduling.
  • Accordingly, the quality of end-to-end network service can be improved by selecting a VNF delivery graph that considers the delay within each network slice as well as the delays between the plurality of network slices located along the end-to-end path.
  • FIG. 1 is an exemplary diagram illustrating an end-to-end connected VNF delivery graph through a plurality of network slices.
  • FIG. 2 is an exemplary diagram illustrating a sequence of NFV MANOs for VNF delivery graph setup.
  • FIG. 3 is an exemplary diagram illustrating a configuration for simulating the VNF delivery graph setup shown in FIG.
  • FIG. 4 is a graph showing simulation results in a network in which the VNF delivery graph setting is not applied.
  • FIG. 5 is a graph showing simulation results in a network to which the VNF delivery graph setting is applied.
  • FIG. 1 is an exemplary diagram illustrating an end-to-end connected VNF delivery graph through a plurality of network slices.
  • NFV separates network functions from dedicated hardware devices and implements them as Virtualized Network Functions (VNFs) that run in data-center virtual machines.
  • NFV requires properties such as programmability, flexibility, and modularity to facilitate dynamic network service provisioning and SFC construction.
  • NFV combined with Software Defined Networking (SDN) and cloud technologies is used to automate VNF deployment and to steer traffic across the VNFs.
  • NFV MANO: NFV Management and Orchestration
  • VNFM: VNF Manager
  • VIM: Virtualized Infrastructure Manager
  • The NFV Orchestrator should be designed to ensure dynamic configuration of the VNF delivery graph, to reduce the production, distribution, and activation time of network services, and to provide stable services.
  • Challenges for the NFV Orchestrator include common interfaces to handle heterogeneous and distributed SDN and cloud network technologies, flexible VNF deployment algorithms that meet service level agreements (SLAs) with the required network capacity, and the construction of multiple NFV Infrastructures (NFVI) for services.
  • The NFV MANO is connected to a plurality of VIMs capable of managing a plurality of distributed NFVIs and a plurality of VNFMs capable of managing the VNFs created by the respective VIMs, in an M:N:1 structure.
  • Each infrastructure located between the ends is logically divided into several virtual networks through network slicing.
  • GSM: Graph Selection Manager
  • The NFV MANO using GSM enables an end-to-end network service setup that meets SLAs requiring QoS, based on the logical network topology delivered from each VIM capable of network slicing and its associated VNFM.
  • Figure 1 schematically illustrates this.
  • GSM can provide an end-to-end network service through comprehensive consideration, by adding the inter-network delay to the list of available network devices delivered to a resource-efficiency algorithm that considers only capacity (110). The end-to-end delay can therefore be significantly reduced compared to the network service provided by a conventional NFV MANO, and a dedicated VNF delivery graph can be provided at a more consistent provisioning level.
  • For applications and network users, a logical network must be configured, which requires an SDN hypervisor. Packets belonging to a first network slice must not be delivered to a second network slice, because the available bandwidth of the second slice would then fall below the management plan of the NFVO. Therefore, the SDN controller must be paired with an SDN hypervisor, such as RadioVisor, FlowVisor, FlowN, NVP, OpenVirteX, or IBM SDN VE, which provides routing and queuing capabilities for per-slice bandwidth while guaranteeing QoS.
  • FlowVisor was chosen as the SDN hypervisor to implement GSM.
  • FlowVisor is an OpenFlow controller that acts as a hypervisor, or relay, between SDN switches and multiple SDN controllers, effectively isolating each network slice across multiple switches in parallel.
  • FIG. 2 is an exemplary diagram illustrating a sequence of NFV MANOs for VNF delivery graph setup.
  • NFV consists of the VNF, the NFVI, the VIM 200, the VNFM 210, and the NFVO 220.
  • The VNF, a virtualized network function, is the basic building block of NFV.
  • NFVI is an environment in which VNFs operate, including physical resources, virtual resources, and virtualization layers.
  • The VIM 200 is a system for managing NFVIs: it monitors the computing, network, and storage resources within the network, measures performance, and monitors occurring events.
  • The VNFM 210 manages the life cycle of the VNF, including creation, maintenance, and deletion.
  • NFVO 220 creates, maintains, and deletes network services comprised of VNFs.
  • The NFVO 220 manages the NFVI resources as a whole; that is, it comprehensively manages the resources of the NFVIs managed by the plurality of VIMs 200.
  • To apply GSM, additional interactions must be added to the existing process sequence, as follows.
  • The network user 230 forwards the SLA including the network QoS requirements to the NFVO 220 (S10).
  • Upon receiving the SLA that includes the network QoS requirements, the NFVO 220 requests from the VNFM 210 a list of available VNFs with which to establish a VNF delivery graph that meets the SLA (S11).
  • Upon receiving the request for an available VNF list, the VNFM 210 asks one or more of its managed VIMs to reserve the resources currently available in the NFVI each manages (S12).
  • Upon receipt of the resource reservation request, the VIM 200 confirms the currently available VNFs and responds with a list of VNFs that can be reserved. The VIM 200 transmits the VNF list to the VNFM 210 (S13), and the VNFM 210 forwards the list to the NFVO 220 (S14).
  • Upon receiving the VNF list, the NFVO 220 collects the delay values (S15, S16): it requests, from at least one pair of adjacent VIMs 200 selected based on the sliced SDN network, the inter-network delay value l and the intra-network delay value E of the connected NFVI.
  • The inter-network delay value l may be the delay value of the sliced SDN network, and the intra-network delay value E may be the delay value between VNFs (or SDN routers).
  • The first algorithm is performed by the NFVO 220 to determine adjacent VIM 200 pairs.
  • The NFVO 220 may direct the request for the inter-network delay value l to one of the two VIMs 200 constituting a determined pair, e.g., the first VIM 200.
  • The delay value request may include information about the one or more other VIMs 200 from which that VIM 200 is to collect inter-network delay values l.
  • Equation 1 represents the first algorithm.
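Since Equation 1 appears only as a figure in the published application, the following is a hedged Python sketch of what the first algorithm describes: pairing adjacent VIMs over the sliced SDN topology and assigning to one VIM of each pair the task of probing its counterpart. All function names and data shapes here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the first algorithm (adjacent-VIM-pair determination).
# `links` models adjacency of the NFVIs in the sliced SDN network.
from itertools import combinations

def find_adjacent_vim_pairs(vims, links):
    """Return VIM pairs whose managed NFVIs share a direct link."""
    pairs = []
    for a, b in combinations(vims, 2):
        if (a, b) in links or (b, a) in links:
            pairs.append((a, b))
    return pairs

def request_inter_network_delays(vims, links):
    """For each adjacent pair, the first VIM is asked to probe the second.

    The returned mapping mirrors the patent's delay value request, which may
    list the other VIMs from which inter-network delays l are to be collected.
    """
    requests = {}
    for a, b in find_adjacent_vim_pairs(vims, links):
        requests.setdefault(a, []).append(b)  # VIM a will probe VIM b
    return requests

vims = ["VIM-1", "VIM-2", "VIM-3"]
links = {("VIM-1", "VIM-2"), ("VIM-2", "VIM-3")}
print(request_inter_network_delays(vims, links))
# {'VIM-1': ['VIM-2'], 'VIM-2': ['VIM-3']}
```

The pairwise scan is quadratic in the number of VIMs, which is acceptable here since an NFV deployment typically has few VIMs relative to its VNFs.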
  • The VIM 200 collects the inter-network delay value l and/or the intra-network delay value E (S17). In addition, the VIM 200 collects the delay values between edge network devices of the NFVI.
  • The second algorithm is executed by the VIM 200 and calculates the inter-network delay value according to the delay value request received from the NFVO 220.
  • The VIM 200 can measure the round-trip time (RTT) to its paired counterpart VIM 200 and calculate the inter-network delay value l based on the measured RTT.
  • The VIM 200 transmits the calculated inter-network delay value l and intra-network delay value E to the NFVO 220 (S18). Equation 2 represents the second algorithm.
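The second algorithm can be sketched as follows. The real VIM creates a temporary VM host and a temporary SDN flow rule, measures ICMP round-trip times to the paired VIM, and then deletes the host; here the probe is stubbed out as an injected function. Deriving l as half the mean RTT (i.e., assuming symmetric links) is our assumption, since the patent only states that l is calculated from the RTT.

```python
# Illustrative sketch of the second algorithm (inter-network delay from RTT).
def measure_inter_network_delay(rtt_samples_ms):
    """Compute inter-network delay l (ms) from round-trip-time samples."""
    if not rtt_samples_ms:
        raise ValueError("no RTT samples collected")
    mean_rtt = sum(rtt_samples_ms) / len(rtt_samples_ms)
    return mean_rtt / 2.0  # assumption: symmetric links

def probe_counterpart_vim(send_icmp, count=4):
    """Collect `count` RTT samples via an injected ICMP probe and derive l."""
    samples = [send_icmp() for _ in range(count)]
    return measure_inter_network_delay(samples)

# Stubbed probe standing in for the temporary VM host's ping:
fake_rtts = iter([2.0, 2.4, 1.8, 2.2])
print(round(probe_counterpart_vim(lambda: next(fake_rtts)), 2))  # 1.05
```

In a live deployment the injected probe would issue real ICMP echoes through the temporary SDN flow rule, and the temporary VM host would be torn down once l has been reported to the NFVO.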
  • Upon receiving the inter-network delay value l and the intra-network delay value E, the NFVO 220 selects a VNF delivery graph satisfying the SLA (S19). From these values, the NFVO 220 can confirm the delay of the network with respect to the current bandwidth. The NFVO 220 selects the final VNF delivery graph using the VNF sequence, the required delay value, the required QoS variable values, and the delay values according to the currently available resource state. The selection can be performed by applying round-robin scheduling, a simple best-effort scheme, over the one or more VNF delivery graphs satisfying the SLA.
  • The NFVO 220 selects a VNF delivery graph according to the third algorithm and creates a final network slice path selection list S.
  • The NFVO 220 receives the inter-network delay value l, the intra-network delay value E, and the delay values between network devices of the NFVI.
  • From these, a total delay value L is computed as a function of G, R, MaxKbps, MaxBurstKbps, ClientSideGW, and ServiceSideGW, together with the delay value E in each network.
  • The NFVO 220 basically follows round-robin scheduling to select one VNF delivery graph, and may occasionally use a different VNF delivery graph depending on the network characteristics. Equation 3 represents the third algorithm.
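A hedged sketch of the third algorithm: candidate VNF delivery graphs are filtered against the SLA's delay budget, with the end-to-end delay taken as the sum of the intra-network delays E and inter-network delays l along the graph, and one of the survivors is then picked round-robin. The graph representation and field names are illustrative, not from the patent.

```python
# Illustrative sketch of the third algorithm (SLA filter + round-robin pick).
from itertools import cycle

def e2e_delay(graph):
    """End-to-end delay of a candidate graph: sum of E values plus l values."""
    return sum(graph["intra_delays"]) + sum(graph["inter_delays"])

def graphs_meeting_sla(candidates, max_delay_ms):
    return [g for g in candidates if e2e_delay(g) <= max_delay_ms]

def round_robin_selector(candidates, max_delay_ms):
    """Yield SLA-satisfying graph names in round-robin order."""
    feasible = graphs_meeting_sla(candidates, max_delay_ms)
    if not feasible:
        raise RuntimeError("no VNF delivery graph satisfies the SLA")
    return cycle(g["name"] for g in feasible)

candidates = [
    {"name": "G1", "intra_delays": [1, 2], "inter_delays": [3]},  # 6 ms
    {"name": "G2", "intra_delays": [4, 4], "inter_delays": [5]},  # 13 ms
    {"name": "G3", "intra_delays": [2, 2], "inter_delays": [2]},  # 6 ms
]
picker = round_robin_selector(candidates, max_delay_ms=10)
print([next(picker) for _ in range(4)])  # ['G1', 'G3', 'G1', 'G3']
```

Round-robin over the feasible set matches the "simple best effort" selection described above; the patent also allows deviating from strict rotation when network characteristics warrant it.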
  • The NFVO 220 requests the VNFM 210 to allocate the VNFs constituting the selected VNF delivery graph to the network service requested by the network user 230 (S20).
  • Upon receiving the VNF allocation request, the VNFM 210 requests network resource allocation from the VIM 200 managing the VNFs constituting the VNF delivery graph (S21).
  • Upon receiving the network resource allocation request, the VIM 200 allocates the requested VNFs to the network service and transmits the result to the VNFM 210 (S22).
  • The VNFM 210 then notifies the NFVO 220 that the VNF allocation is complete (S23). Thereafter, the NFVO 220 notifies the network user 230 that the network service is available (S24).
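The S10-S24 sequence above can be condensed into a toy end-to-end walk-through, with each MANO component reduced to a class. Everything here (class shapes, the stubbed VNF list, and the fixed delay values) is illustrative scaffolding under assumption, not an ETSI-conformant implementation.

```python
# Minimal sketch of the setup sequence: user SLA in, service notification out.
class VIM:
    def reserve(self):                       # S12/S13: reservable VNFs
        return ["fw", "nat", "dpi"]
    def delays(self):                        # S17/S18: measured l and E values
        return {"l": 3.0, "E": [1.0, 1.5]}
    def allocate(self, vnfs):                # S21/S22: bind VNFs to the service
        return {"allocated": vnfs}

class VNFM:
    def __init__(self, vim):
        self.vim = vim
    def vnf_list(self):                      # S11-S14: forward the VIM's list
        return self.vim.reserve()
    def allocate(self, vnfs):                # S20-S23
        return self.vim.allocate(vnfs)

class NFVO:
    def __init__(self, vnfm, vim):
        self.vnfm, self.vim = vnfm, vim
    def setup(self, sla_max_delay_ms):       # S10 entry point
        vnfs = self.vnfm.vnf_list()
        d = self.vim.delays()                # S15-S18: collect l and E
        total = d["l"] + sum(d["E"])         # S19: check against the SLA
        if total > sla_max_delay_ms:
            return "SLA not satisfiable"
        self.vnfm.allocate(vnfs)             # S20-S23
        return "network service available"   # S24

vim = VIM()
print(NFVO(VNFM(vim), vim).setup(sla_max_delay_ms=10))  # network service available
```

With the stubbed delays (l = 3.0 ms, E = 1.0 + 1.5 ms) the 10 ms budget is met; tightening the SLA below 5.5 ms makes the setup fail, mirroring the case where no graph satisfies the SLA.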
  • FIG. 3 is an exemplary diagram illustrating a configuration for simulating the VNF delivery graph setup shown in FIG.
  • The network topology includes three virtualization hosts T1-T3, six SDN switches S1-S6, two SDN hypervisors, two SDN controllers, and one virtualization server.
  • Three physical servers were used. In one physical server, there are three virtualization hosts, three SDN switches, one SDN hypervisor, and one SDN controller, with logical ports 2 and 3 of S2 actually connected to Ethernet. In the other server, there is one virtualization host, three SDN switches, one SDN hypervisor, and one SDN controller, with logical port 1 of S4 and logical port 1 of S5 actually connected to Ethernet.
  • The virtualization hosts T1-T3 act as network users, such as over-the-top (OTT) operators, and continuously request network services.
  • The SDN switches are Open vSwitch (OVS) switches supporting OpenFlow 1.0 and later, and are connected to the SDN hypervisors, which are modified FlowVisors.
  • Each sliced OVS switch is connected to ovs-ofctl.
  • OpenFlow flow rules can be issued through ovs-ofctl connected to a dedicated network. With ovs-ofctl, the bandwidth settings of a port can be modified, as well as the port's VLAN, the ToS or DSCP value of the IPv4 header, and/or the traffic class field of the IPv6 header. In the experimental network topology, the FlowVisor commands and the ovs-ofctl commands were combined so as to communicate with both.
  • The virtualization server is configured to operate as a video streaming server.
  • Each virtualization host T1-T3 is allocated the link connected to itself and all connecting links between switches. Each virtualization host T1-T3 communicates with the virtualization server based on flow rules using the VLAN tag information received from the SDN hypervisor.
  • Port 2 of S2, port 3 of S3, port 1 of S4, and port 1 of S5, which are physically connected to Ethernet, are linked into the virtual environment.
  • To measure the inter-network delay in the experimental network topology, four temporary VM hosts were created and flow rules were set; once the delay value obtained by an RTT check over ICMP had been transmitted, the temporary VM hosts were deleted. The network slice path selection application was then operated to satisfy the inter-network delay, the intra-network delay, the inter-network delay factor, and the specific SLA requirements divided into three zones.
  • The bandwidth of each link connected to a virtualization host is limited to 40% of the switch-link bandwidth, about 40 Mbps, and the delay of each logical link is set to 1 ms.
  • The bandwidth of the Ethernet physically connected to the SDN switches is set to 100 Mbps from the SDN hypervisor. Since the virtualization server is a video streaming server, UDP packets are transmitted from each virtualization host for a certain period, and the change in the delay value of each logical link is observed.
  • FIG. 4 is a graph of simulation results in a network to which the VNF delivery graph setting is not applied.
  • FIG. 5 is a graph of simulation results in the network to which the VNF delivery graph setting is applied.
  • FIG. 4 shows that network service traffic increases according to OpenFlow flow rules that do not consider the network state, and that packet delay rises sharply once traffic exceeds the bandwidth of the link.
  • As a result, the utilization rate of network resources decreases.
  • FIG. 5 shows performance effects of application of the network slice path selection application.
  • Without network state management, the end-to-end delay value of the VNF delivery graph rose to a maximum of 600 ms, resulting in packet delay and loss.
  • With the proposed setting applied, the VNF delivery graphs consistently show low delay values.
  • As described above, GSM confirms the network state (intra-network delay) and the inter-network connection state (inter-network delay) by using a plurality of VIMs and a plurality of VNFMs connected to the NFVI.
  • The delay values are obtained, and the VNF delivery graph is selected based on this information.
  • GSM can act as a middle-layer application of the NFV Orchestrator proposed in the existing ETSI-compliant NFV MANO, and can be added to each VIM.
  • Thus, a VNF delivery graph suited to the network characteristics can be selected and applied.


Abstract

The present invention relates to network technology. According to one aspect, the present invention relates to a VNF delivery graph selection method executed in an NFV MANO for creating, maintaining, and deleting a network service composed of VNFs, for quality assurance of an end-to-end network service. The method may comprise the steps of: obtaining a list of available VNFs for configuring a VNF delivery graph when an SLA comprising network QoS requirements is received; obtaining, from a Virtualized Infrastructure Manager (VIM), an inter-network delay value and an intra-network delay value once the VNF list is obtained; and selecting a VNF delivery graph satisfying the network QoS requirements using the inter-network delay value and the intra-network delay value.
PCT/KR2018/014779 2017-12-15 2018-11-28 Response-time-based graph selection and management method for end-to-end network quality-of-service assurance WO2019117509A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0172843 2017-12-15
KR1020170172843A KR101976958B1 (ko) 2017-12-15 2017-12-15 Response-time-based graph selection and management method for guaranteeing the quality of end-to-end network service

Publications (1)

Publication Number Publication Date
WO2019117509A1 (fr) 2019-06-20

Family

ID=66545874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/014779 WO2019117509A1 (fr) 2017-12-15 2018-11-28 Response-time-based graph selection and management method for end-to-end network quality-of-service assurance

Country Status (2)

Country Link
KR (1) KR101976958B1 (fr)
WO (1) WO2019117509A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10897423B2 (en) * 2019-05-14 2021-01-19 Vmware, Inc. Congestion avoidance in a slice-based network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030044134A * 2001-11-28 2003-06-09 한국전자통신연구원 Multipath assignment method satisfying constraints
KR20170105597A * 2015-01-20 2017-09-19 후아웨이 테크놀러지 컴퍼니 리미티드 Method and apparatus for NFV management and orchestration
KR20170111246A * 2016-03-25 2017-10-12 한국전자통신연구원 Resource control and resource scheduling methods for network function virtualization, and network function virtualization system
KR20170113807A * 2016-03-25 2017-10-13 고려대학교 산학협력단 Service function chaining system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ETSI: "Network Functions Virtualisation (NFV); Management and Orchestration", ETSI GS NFV-MAN 001 V1.1.1, December 2014, pages 1-184, XP055383931 *

Also Published As

Publication number Publication date
KR101976958B1 (ko) 2019-05-09


Legal Events

• 121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 18889597; Country of ref document: EP; Kind code of ref document: A1.
• NENP: Non-entry into the national phase. Ref country code: DE.
• 122 (EP): PCT application non-entry in European phase. Ref document number: 18889597; Country of ref document: EP; Kind code of ref document: A1.