EP3955522B1 - Method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery, a central office point of delivery, a program and a computer-readable medium - Google Patents
- Publication number: EP3955522B1 (application EP20190484.4A)
- Authority: EP (European Patent Office)
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0866—Checking the configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/0816—Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
Definitions
- the present invention relates to a method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- the present invention relates to a broadband access network or telecommunications network for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- the present invention relates to a central office point of delivery or a system comprising a central office point of delivery for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- the present invention relates to a program and a computer-readable medium for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery.
- Document EP 3 672 169 A1 discloses a data center connected to a plurality of customer networks and a service provider network for providing a scalable service platform including load balancing to handle traffic load.
- An object of the present invention is to provide a technically simple, effective and cost effective solution for an operation of a broadband access network of a telecommunications network, comprising a central office point of delivery, and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the handling of increased load situations and/or the performance of specific tasks relates to carrier control plane functions performed or to be performed by the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- a further object of the present invention is to provide a corresponding broadband access network or telecommunications network, a corresponding central office point of delivery and a corresponding system according to the present invention.
- the object of the present invention is achieved by a method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery as defined according to the appended independent claim 1.
- This extension of the locally available computing power can especially be provided temporarily, i.e. by means of extending the compute power inside the central office point of delivery by linking it to external resources for a limited amount of time.
- the bring-up (time) or the change of a single user session typically just takes seconds, and maintaining lots of sessions operational does not put a large burden on the control and management plane of the central office point of delivery.
- a problem may arise when many sessions need to be setup at the same time. This can happen, e.g., during partial or full system reboots. In that case, the available processing power and memory of the local compute servers within the central office point of delivery determine the time it takes to bring a large number of customers (i.e. user equipments) into service.
- an architecture of a central office point of delivery where (along the lines of the ONF SEBA architecture) dedicated network elements are replaced by modular software and hardware structures; hence, disaggregation (i.e. modularization) is a key aspect.
- the architecture is very similar to a spine-leaf switching architecture and consists of (or comprises) access nodes (terminating physical subscriber lines), switches (transporting, aggregating, shaping subscriber traffic) and servers (hosting the modularized software components that support local management plane and control plane functions for device discovery, subscriber session setup, and maintenance).
- Such a mini data center system is considered to be a central office point of delivery according to the present invention.
- in the control plane (i.e. the software part), each processing stage is designed to:
- receive a packet request;
- process the request, especially by looking up data in databases, possibly also by issuing requests to other modules, and pass the request on to the next processing stage (which can also be the sender of the request).
- this functional architecture is implemented in a microservice framework supported by a method for registering the service instances and exchanging messages among the instances (e.g. via a message bus system such as Apache Kafka).
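The registration-and-messaging pattern just described can be sketched as follows; this is a minimal in-memory stand-in for a real message bus such as Apache Kafka, and all class, topic and instance names are illustrative assumptions, not taken from the patent:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-memory stand-in for a message bus (e.g. Apache Kafka)."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> registered handlers

    def register(self, topic, handler):
        # a micro service instance registers itself for a topic
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # deliver the message to every registered instance
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
log = []
# two instances of a hypothetical "session-setup" micro service register
bus.register("session-setup", lambda msg: log.append(("instance-1", msg)))
bus.register("session-setup", lambda msg: log.append(("instance-2", msg)))
bus.publish("session-setup", {"subscriber": "line-42", "event": "attach"})
```

In a real deployment, the handlers would be remote micro service instances subscribed to bus topics; the point of the sketch is only the register/publish decoupling that lets instances be added or removed at runtime.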
- the first event at each stage triggers the communication between microservices in the central office point of delivery. Those also contact external backend systems. Each stage contains multiple processing steps / message exchanges among such microservice instances as well as between the control and management plane microservices and the physical network devices (incl. the customer premises equipment/optical fiber network terminal devices).
- the main task of the microservices is to support session maintenance (incl. monitoring and OAM) as well as to change their characteristics (e.g. via applying policies) on demand during runtime.
- the time it takes for completing the full setup procedure is determined by the complexity (processing time for each step as well as the number of steps) and the capacity of the compute servers within the considered central office point of delivery (i.e. the capacity of the respective micro services). While the complexity of the procedure is optimized, the end-to-end processing time to get a customer in-service is determined by the available compute power (as well as storage capacity) in the central office point of delivery.
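The dependency stated above (end-to-end bring-up time driven by the number of steps, per-step processing time, and available compute capacity) can be illustrated with a back-of-the-envelope calculation; all numbers are invented for illustration:

```python
def bringup_time_s(sessions, steps_per_session, step_time_s, workers):
    """Rough end-to-end time to bring all sessions into service, assuming
    every control-plane step takes step_time_s seconds and the steps are
    spread evenly over the available compute workers."""
    total_steps = sessions * steps_per_session
    return total_steps * step_time_s / workers

# invented numbers: 10,000 sessions, 20 steps each, 10 ms per step
local = bringup_time_s(10_000, 20, 0.010, workers=4)    # local servers only
burst = bringup_time_s(10_000, 20, 0.010, workers=16)   # with offloaded capacity
print(local, burst)  # roughly 500 s vs. 125 s
```

Under these assumed numbers, quadrupling the worker count during a reboot storm cuts the bring-up time by the same factor, which is the motivation for temporarily borrowing external capacity.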
- the telecommunications network according to the present invention might be a fixed-line telecommunications network or a mobile communication network but could also have both aspects, i.e. parts of a fixed-line telecommunications network (or being a fixed-line telecommunications network in such parts) and parts of a mobile communication network (or being a mobile communication network in such parts); such networks are also known under the term fixed-mobile-convergence networks (FMC networks).
- the second step comprises the central office point of delivery, especially the load management entity or functionality, triggering the use of the additional virtualized network functions and/or micro services.
- the emergency stop also involves shutting down the plurality of compute nodes or servers of the central office point of delivery.
- the central office point of delivery comprises a microservices management system to measure the load of the virtualized network functions and/or micro services in real-time.
- the central office point of delivery and/or the broadband access network comprises a switching fabric, the switching fabric comprising a plurality of spine network nodes and a plurality of leaf network nodes, and/or wherein the central office point of delivery and/or the broadband access network comprises a plurality of line termination nodes, wherein each one of the plurality of line termination nodes is connected to at least two leaf network nodes of the plurality of leaf network nodes.
- increased load situations and/or performing specific tasks within the telecommunications network and/or the central office point of delivery include one or a plurality of the following:
- the present invention relates to a central office point of delivery according to the appended independent claim 7.
- the present invention relates to a program as defined according to the appended independent claim 10.
- the present invention relates to a computer-readable medium as defined according to the appended independent claim 11.
- first, second, third and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order; this is especially the case for the terms “first step”, “second step”, etc. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
- a telecommunications network 100 is schematically shown, having - preferably - at least a fixed line part.
- a mobile (or cellular) part might be present as well, as part of the telecommunications network 100, but is not specifically illustrated in Figure 1 .
- User equipment or client devices 51, 52 are connected to the telecommunications network 100 by means of a (broadband) access network 120.
- the telecommunications network 100 comprises, especially as part of the broadband access network 120, at least one logical or physical central office point of delivery 110 that is preferably realized within a (mini) data center and that is especially handling different access requirements, especially different access possibilities, of the client devices 51, 52 to network functionalities provided by the telecommunications network 100 or via the telecommunications network 100.
- the client devices 51, 52 are typically connected to the logical or physical central office point of delivery 110 by means of a customer premises equipment device 50, 50' or by means of a customer premises equipment functionality that might be built into or realized by the client devices 51, 52.
- the central office point of delivery 110 comprises a switching fabric 115 comprising a plurality of spine network nodes and typically also a plurality of leaf network nodes which are not explicitly represented in Figure 1 .
- FIG. 2 schematically shows in greater detail the central office point of delivery 110 as part of the broadband access network 120 of a telecommunications network 100, the central office point of delivery 110 comprising a certain number of components, entities and/or network nodes, especially a plurality of compute nodes or servers 141, 142, 143, 144 being internally connected and providing an infrastructure to realize both a plurality of virtualized network functions and/or micro services, and a message router and load balancing entity connecting these virtualized network functions and/or micro services within the central office point of delivery 110.
- the represented part of the telecommunications network 100 comprises a switching fabric 115 comprising a plurality of spine network nodes 171, 172 and typically also a plurality of leaf network nodes 161, 162.
- Figure 2 shows a plurality of access nodes 151, 152, 153; examples of such access nodes 151, 152, 153 include line termination nodes, wherein, typically, each of the line termination nodes 151, 152 has one or a plurality of access node ports (not specifically illustrated in Figure 2 ).
- the line termination nodes 151, 152, 153 or access nodes 151, 152, 153 might be provided to support different access technologies (e.g. DSL, digital subscriber line technologies, or line termination nodes supporting a connection to an optical network, especially a passive optical network (PON), typically a so-called optical line terminal (OLT) or optical line terminal device) towards a home gateway or customer premises equipment 50 and a user equipment 51.
- the client device 51 is connected to the telecommunications network 100 (i.e. to the respective access node of the plurality of access nodes 151, 152, 153) via the customer premises equipment 50 (or home gateway device 50), and, if applicable, a network termination node (not specifically shown in Figure 2 ).
- the functionality of the customer premises equipment 50 (or home gateway device 50, cf. Figure 1 ) and the functionality of the network termination node might also be integrated in one device or "box". Even the functionality of the client device 51, the functionality of the customer premises equipment 50 (or home gateway device 50) and the functionality of the network termination node might be integrated in one device or "box".
- the central office point of delivery 110 has or realizes a plurality of access nodes 151, 152, 153 that terminate physical subscriber lines serving end users (such as, e.g., the user equipment 51) of the telecommunications network 100.
- the processing done within the central office point of delivery 110 mainly relates to carrier control plane functions, hence, also the handling of increased load situations and/or the performance of specific tasks performed or to be performed by the central office point of delivery 110 relate to carrier control plane functions, i.e. typically not (or not primarily) the user plane traffic of the subscribers, or their user equipments, connected to the central office point of delivery 110.
- In Figure 2, primarily the physical entities, components and/or network nodes of the central office point of delivery 110 are schematically shown.
- In FIG. 3, a block diagram showing primarily the virtualized layer and its parts or components of the central office point of delivery 110 is schematically shown.
- the central office point of delivery 110 comprises (i.e. the physical hardware within the central office point of delivery 110 is able to instantiate) a plurality of virtualized network functions and/or micro services 201, 202, 203, and a message router and load balancing entity 210, connecting these virtualized network functions and/or micro services 201, 202, 203, i.e. the compute nodes or servers 141, 142 are internally (i.e. within the central office point of delivery 110) connected and provide an infrastructure to realize the virtualized network functions and/or micro services 201, 202, 203, and the message router and load balancing entity 210.
- the central office point of delivery 110 also comprises or realizes (or instantiates) a microservices management system 209, and comprises a load management entity or functionality 180.
- by means of a microservices management system 209 it is advantageously possible to execute management decisions and/or to comply with policies or rules regarding the virtualized network functions and/or micro services 201, 202, 203.
- the load management entity or functionality 180 and its building blocks and/or components are schematically illustrated.
- the load management entity or functionality 180 comprises a load detection function or functionality 181, a rules and/or policies function or database or repository 182, an external management interface 183, a decision logic 184, and an offload execution engine 185.
- an operator 189 is able to generally trigger an offload or is able to trigger that either specified micro services or specified micro service instances or processing capacities are offloaded (i.e. from the central office point of delivery 110 to the external processing resources).
- the offload execution engine 185 is directly triggered which is schematically represented by an arrow from the external management interface 183 to the offload execution engine 185.
- Both signals produced or generated by the load detection function or functionality 181 and content of the rules and/or policies function or database or repository 182 are provided to the decision logic 184 (schematically represented, in Figure 5 , by respective arrows), and resulting decisions are provided to the offload execution engine 185 (also schematically represented, in Figure 5 , by an arrow).
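The signal flow described above (load detection 181 and the rules/policies repository 182 feeding the decision logic 184, which drives the offload execution engine 185; the external management interface 183 can also trigger the engine directly) might be sketched as follows; the class name, thresholds and service names are assumptions, not taken from the patent:

```python
class LoadManagement:
    """Sketch of entity 180: detection + policies -> decision -> offload."""
    def __init__(self, policies, offload_cb):
        self.policies = policies      # rules/policies repository (182)
        self.offload_cb = offload_cb  # stands in for the offload engine (185)

    def on_load_sample(self, service, load):
        # load detection (181) feeds a sample into the decision logic (184)
        threshold = self.policies.get(service, 1.0)
        if load > threshold:
            self.offload_cb(service)  # decision: trigger the offload

    def operator_trigger(self, service):
        # external management interface (183): operator triggers directly
        self.offload_cb(service)

offloaded = []
lm = LoadManagement({"session-setup": 0.8}, offloaded.append)
lm.on_load_sample("session-setup", 0.5)   # below threshold: no action
lm.on_load_sample("session-setup", 0.95)  # above threshold: offload
lm.operator_trigger("dhcp")               # operator-initiated offload
```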
- the processing done within the central office point of delivery 110 mainly relates to carrier control plane functions, i.e. typically not (or not primarily) the user plane traffic of the subscribers, or their user equipments, connected to the central office point of delivery 110.
- the virtualized network functions and/or micro services 201, 202, 203 exclusively or mainly relate to or process control plane tasks or signalling tasks related to the central office point of delivery 110 and/or to the user equipments connected thereto.
- FIG. 4 a further block diagram is schematically shown relating to the virtualized network functions and/or micro services 201, 202, 203 (as well as the message router and load balancing entity 210) realized (or instantiated) as part of the central office point of delivery 110, i.e. locally with respect to the central office point of delivery 110, as well as additional virtualized network functions and/or additional micro services 301, 302, 303 (together with an additional message router and load balancing entity 310), being realized (or instantiated) as part of a hardware infrastructure external to the considered central office point of delivery 110.
- the hardware infrastructure providing (realizing or instantiating) both the additional virtualized network functions and/or additional micro services 301, 302, 303, and the additional message router and load balancing entity 310 might either be another central office point of delivery (e.g. a more or less neighbouring central office point of delivery) and/or a central or centralized hardware component or data center, either as part of the telecommunications network 100 or external to the telecommunications network 100 (but, of course, accessible by the telecommunications network 100).
- the processing capacity of the hardware components of the central office point of delivery 110 (especially in terms of being able to provide virtualized network functions and/or micro services 201, 202, 203 of sufficient sorts and numbers and/or having sufficient processing capacity, and/or the message router and load balancing entity 210) is (especially temporarily) enhanced by means of the external hardware infrastructure providing the additional virtualized network functions and/or micro services 301, 302, 303 and the additional message router and load balancing entity 310.
- the additional virtualized network functions and/or micro services 301, 302, 303 and the additional message router and load balancing entity 310 are linked (or made available to the resources within the considered central office point of delivery 110) by means of using a tunnel connection 250, especially a virtual private network tunnel connection 250.
- the central office point of delivery 110 comprises a load management entity or functionality 180, especially for detecting the need to ask for or to request additional virtualized network functions and/or micro services 301, 302, 303 (at instances within the telecommunications network 100 external to the central office point of delivery 110).
- it is advantageously possible that such decisions to enhance the locally available processing capacity (of the virtualized network functions and/or micro services 201, 202, 203) are especially able to be taken prior to an acute situation of increased load and/or prior to performing specific tasks within the central office point of delivery 110 and/or within the telecommunications network 100.
- the central office point of delivery 110 and/or the load management entity or functionality 180 thereof either detects that an increased load situation is currently happening, or expects that an increased load situation is likely to happen or that a specific task is to be performed.
- the central office point of delivery 110 (especially the load management entity or functionality 180) triggers the use of the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310 (provided by external hardware) for handling the actual or expected increased load situation and/or for performing specific tasks within the telecommunications network 100 and/or within the central office point of delivery 110.
- the central office point of delivery 110 releases, in a third step, the use of the additional virtualized network functions and/or additional micro services 301, 302, 303 and the use of the additional message router and load balancing entity 310.
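The procedure described above (detect or expect an increased load situation, trigger the use of external resources, release them again afterwards) can be sketched as a small lifecycle object; the class name, method names and the threshold are illustrative assumptions:

```python
class OffloadLifecycle:
    """Sketch: detect/expect load, use external resources, release them."""
    def __init__(self):
        self.external_active = False
        self.events = []

    def step1_detect(self, current_load, expected_load, threshold=0.8):
        # increased load either currently happening or expected to happen
        return max(current_load, expected_load) > threshold

    def step2_trigger(self):
        # attach the additional VNFs/micro services (301, 302, 303)
        self.external_active = True
        self.events.append("offload-started")

    def step3_release(self):
        # detach the external resources again once the situation is over
        self.external_active = False
        self.events.append("offload-released")

lc = OffloadLifecycle()
if lc.step1_detect(current_load=0.3, expected_load=0.9):  # reboot expected
    lc.step2_trigger()
# ... the increased load is handled by local plus external micro services ...
lc.step3_release()
```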
- the processing capacities within a considered central office point of delivery 110 are able to be - especially temporarily - enhanced such that a specific situation of high load (such as, e.g., during a (partial) boot operation within the central office point of delivery 110) and/or specific tasks to be performed within the telecommunications network 100 and/or within the central office point of delivery 110 are able to be handled efficiently and especially without leading to a reduced service level or quality-of-service noticeable to the subscribers of the telecommunications network 100.
- the number of servers 141, 142, 143, 144 or network nodes (or, more generally, of processing capacities) inside a central office point of delivery 110 is limited - also due to the need and the desire to implement an energy-efficient system design.
- compute power in the different central office points of delivery within the telecommunications network 100 is typically not an issue; servers will not be under (heavy) load, as - especially in order to keep, e.g., the time limit for the system reboot reasonably low - the local hardware capacity setup (within each central office point of delivery 110) is typically over-dimensioned.
- the load management entity or functionality 180 within the central office point of delivery 110 detects that an increased load situation is currently happening, or expects that an increased load situation is likely to happen or that a specific task is to be performed.
- it is domain-related knowledge that typically triggers the additional virtualized network functions and/or micro services 301, 302, 303 and the additional message router and load balancing entity 310 to be able to be used by the considered central office point of delivery 110 (and, according to the present invention, it is not a service-agnostic system that is only able to react to current load conditions and to outsource compute power completely to a central site without any local hosting of the respective services; hence, according to the present invention and in contrast to such service-agnostic systems, a full dependency of the local system on the availability of the central resources is able to be avoided).
- a further system: either a neighboring central office point of delivery or a centralized cloud system, or a combination thereof.
- a "cloud burst" mechanism is added to the central office point of delivery 110 (i.e. the additional virtualized network functions and/or micro services 301, 302, 303 are temporarily made available to the central office point of delivery 110 and its resources (virtualized network functions and/or micro services 201, 202, 203)).
- the central office point of delivery 110 of a telecommunications network 100 is able to use the additional virtualized network functions and/or micro services 301, 302, 303; the load management entity or functionality 180 typically
- microservices or virtualized network functions and/or micro services 201, 202, 203 are running. Only a first type 201, a second type 202, and a third type 203 of micro service are schematically shown in Figure 3; however, each type of micro service 201, 202, 203 may have multiple instances (which are, however, not shown in Figure 2).
- the message router (or message router and load balancing entity 210), e.g. a load balancer for HTTP(S) requests (hypertext transfer protocol (secure)) or a message bus serves to route messages between the micro services 201, 202, 203; hence, messages always traverse via the message router and load balancing entity 210 of the central office point of delivery 110.
- the central office point of delivery 110 is connected to an external data center (or another central office point of delivery) via a tunnel 250, which connects the message router and load balancing entity 210 in the central office point of delivery 110 to the message router and load balancing entity 310 in the external data center (or other central office point of delivery).
- in case the third micro service 203 has been the determining factor for the overall performance of the system, it is advantageously possible according to the present invention, by means of offloading and cloning this third micro service 203 (and providing its functionality by means of the additional virtualized network functions and/or micro services 301, 302, 303), to achieve a much higher processing rate for the overall system of the central office point of delivery 110.
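Offloading only the bottleneck micro service, as described above, amounts to the message router splitting that service's traffic across the local instance and the cloned remote instances reached via the tunnel; a toy round-robin split might look like this (all endpoint and service names are invented for illustration):

```python
import itertools

class MessageRouter:
    """Toy router: one round-robin pool of endpoints per micro service."""
    def __init__(self):
        self._pools = {}

    def set_endpoints(self, service, endpoints):
        self._pools[service] = itertools.cycle(endpoints)

    def route(self, service):
        # pick the next instance for this service (local or via the tunnel)
        return next(self._pools[service])

router = MessageRouter()
# the bottleneck service gets cloned instances behind the remote router
router.set_endpoints("svc-203", ["local-203",
                                 "tunnel:remote-203a",
                                 "tunnel:remote-203b"])
targets = [router.route("svc-203") for _ in range(6)]
```

With three endpoints in the pool, every third request stays local and the other two are carried over the tunnel, which is the effect of cloning only the limiting service.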
- a full reboot operation of the central office point of delivery 110 and a (partial) reboot operation thereof (i.e. a reboot operation of at least some components). Both operations result in a storm of network attachment requests, a high load on the control plane servers, and hence a need for offloading.
- a full reboot process could be implemented as follows: In a first processing step of such a boot up of hardware and software components of the central office point of delivery 110, the gates towards customers are closed or limited to acceptable rates; in a second processing step, a virtual private network connection is established to external resources, e.g.
- a cloud resource and resources thereof are reserved for the micro services needed; if no external resources are available, the central office point of delivery is booted anyway and connected to customers without external resources, i.e. using only local micro services; however, in case external resources are available, in a third processing step service instances (i.e. virtualized network functions and/or micro services 201, 202, 203) are cloned, data (especially state data, if applicable) is moved, and both message router and load balancing entities 210, 310 are adapted. In a fourth processing step, the gates to the customer side are opened, and requests are processed using both local resources (i.e.
- a fifth processing step it is detected that the current state is stable and that all or almost all user sessions are up.
- the gates of the message routers and load balancing entities 210, 310 are closed; furthermore, the service instances are cloned into the central office point of delivery 110, the data (especially state data) are moved to the central office point of delivery 110, both message routers and load balancing entities 210, 310 are adapted, and the local message router and load balancing entity 210 is re-opened. Finally, this leads again to a stable operation of the central office point of delivery 110.
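The boot-up/offload/release sequence described in the preceding paragraphs can be summarized in a short sketch; the step names are illustrative labels, not claim language:

```python
# Sketch of the reboot/offload sequence: gates are closed, external
# resources are (optionally) attached, operation stabilizes, and
# services are finally moved back (step names are illustrative).
def reboot_with_offload(external_available):
    steps = ["close_gates"]
    if external_available:
        steps += ["establish_vpn", "reserve_cloud_resources",
                  "clone_service_instances", "move_state_data",
                  "adapt_load_balancers"]
    steps += ["open_gates", "process_requests", "detect_stable_state"]
    if external_available:
        # Move services back into the central office point of delivery.
        steps += ["close_gates", "clone_instances_back", "move_state_back",
                  "adapt_load_balancers", "reopen_local_router"]
    return steps

with_offload = reboot_with_offload(True)
local_only = reboot_with_offload(False)
```

Note that the local-only branch corresponds to the fallback described above, where the central office point of delivery is booted without external resources.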
- a full offload of a type of service might be differentiated from a burst operation where instances of the same microservice may reside inside and outside the central office point of delivery 110 (i.e. as part of the virtualized network functions and/or micro services 201, 202, 203 and as part of the additional virtualized network functions and/or micro services 301, 302, 303).
- the micro services 201, 202, 203 are all managed by Kubernetes.
- the k8s cluster may be either expanded from within the central office point of delivery 110 to the cloud offload data center or stitched to an existing one.
- an adapted load balancing and communication between the central office point of delivery 110 and the external resources is provided, wherein especially the central office point of delivery 110 detects the need to add external resources and triggers the offloading process (e.g. by shutting down interfaces to ingress events for a short time). Detecting this need via k8s alone is only possible by checking the server load or other metrics that are not application-specific. According to the present invention, this can also be done based on the following criteria:
- an architecture for distributed resource management and control for micro services running in a mini-data center-like central office point of delivery 110 and serving broadband network access control and management plane functions with a focus on dynamically scaling out and in the services to/from external data centers. It is especially preferred to identify, verify and reserve suitable remote (i.e. external to the central office point of delivery 110) data center resources. Furthermore, it is preferred to provide a mechanism to manage (e.g. by throttling/closing gates) the load on local services of the central office point of delivery 110, to ensure a stable operation, until additional remote cloud resources become available.
- the invention furthermore involves a method to temporarily bind and release remote data center resources for the execution of a selected set of micro services, running in a mini-data center-like central office point of delivery 110 and serving broadband network access control and management plane functions, to boost performance, either to burst or to fully offload all services or a selected subset of types.
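The need-detection described above, i.e. going beyond a generic server-load check by also evaluating application-specific metrics, could be sketched as follows; all metric names and thresholds are illustrative assumptions:

```python
# Sketch of an offload trigger combining a generic (k8s-style) server
# load check with application-specific criteria such as the control
# plane backlog and per-session setup latency (thresholds illustrative).
def needs_offload(metrics,
                  cpu_threshold=0.85,
                  pending_sessions_threshold=1000,
                  setup_time_threshold_s=5.0):
    # Generic check: raw server load, as k8s could do on its own.
    if metrics.get("cpu_load", 0.0) > cpu_threshold:
        return True
    # Application-specific checks, invisible to a generic orchestrator:
    if metrics.get("pending_session_requests", 0) > pending_sessions_threshold:
        return True
    if metrics.get("avg_session_setup_time_s", 0.0) > setup_time_threshold_s:
        return True
    return False

calm = needs_offload({"cpu_load": 0.3, "pending_session_requests": 10})
storm = needs_offload({"cpu_load": 0.4, "pending_session_requests": 5000})
```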
Description
- The present invention relates to a method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- Furthermore, the present invention relates to a broadband access network or telecommunications network for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- Additionally, the present invention relates to a central office point of delivery or a system comprising a central office point of delivery for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services.
- Furthermore, the present invention relates to a program and a computer-readable medium for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery.
- The exchange of packetized information in broadband communication systems or telecommunications networks, both in fixed-line and in wireless communication systems (or fixed-line communication networks and mobile communication networks), has already grown dramatically and probably will also grow in the future due to the rapid spread of different data services in such communication networks.
- In conventionally known or current central office point of delivery design architectures, especially of the kind having or comprising modular software and hardware elements or entities that replace dedicated (physical) network elements, situations might arise where typical operational parameters or indicators, such as, e.g., the time for setting up a subscriber line or a user session, deteriorate due to increased load situations and/or due to specific tasks being performed within the central office point of delivery, such as, e.g., partial or full system reboots.
- Document
EP 3 672 169 A1 discloses a data center connected to a plurality of customer networks and a service provider network for providing a scalable service platform including load balancing to handle traffic load. - An object of the present invention is to provide a technically simple, effective and cost-effective solution for an operation of a broadband access network of a telecommunications network, comprising a central office point of delivery, and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery, wherein the handling of increased load situations and/or the performance of specific tasks relates to carrier control plane functions performed or to be performed by the central office point of delivery, wherein the central office point of delivery and/or the broadband access network comprises a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services. A further object of the present invention is to provide a corresponding broadband access network or telecommunications network, a corresponding central office point of delivery and a corresponding system according to the present invention.
- The object of the present invention is achieved by a method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network and/or within the central office point of delivery as defined according to the appended independent claim 1.
- It is thereby advantageously possible according to the present invention to provide a solution to extend the locally available compute power, within the central office point of delivery, by means of the additional virtualized network functions and/or additional micro services and the additional message router and load balancing entity. This extension of the locally available computing power is especially possible to provide temporarily, i.e. by means of extending the compute power inside the central office point of delivery by linking it to external resources for a limited amount of time.
- Typically (and for a considered central office point of delivery being set up and operationally running), the bring-up (time) or the change of a single user session (i.e. to provide network connectivity to a user equipment or client device connected, via the central office point of delivery, to the telecommunications network) just takes seconds, and maintaining lots of sessions operational does not put a large burden on the control and management plane of the central office point of delivery. However, a problem may arise when many sessions need to be set up at the same time. This can happen, e.g., during partial or full system reboots. In that case, the available processing power and memory of the local compute servers within the central office point of delivery determine the time it takes to bring a large number of customers (i.e. user equipments) into service. Such a setup for a large number of customers might take a considerably longer time, sometimes up to or more than 30 minutes. According to the present invention, it is advantageously possible to reduce this downtime (or this time to service) for user equipments or subscribers served by the specific considered central office point of delivery.
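As a rough, purely illustrative calculation of this effect (the subscriber count and per-second setup rates are assumptions, not measured values):

```python
# Rough arithmetic for the mass re-attach scenario: local compute alone
# versus temporarily added external capacity (all numbers illustrative).
subscribers = 20000
local_setup_rate = 10.0     # session setups per second, local servers only
external_extra_rate = 30.0  # additional setups per second via offloading

time_local_only = subscribers / local_setup_rate                 # seconds
time_with_offload = subscribers / (local_setup_rate + external_extra_rate)

minutes_local = time_local_only / 60      # about 33 minutes
minutes_offload = time_with_offload / 60  # about 8 minutes
```

Under these assumed numbers the time to bring all subscribers back into service drops from roughly half an hour to under ten minutes, which is the order of improvement the temporary extension aims at.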
- According to the present invention, especially an architecture of a central office point of delivery is used where (along the lines of the ONF SEBA architecture) dedicated network elements are replaced by modular software and hardware structures; hence, disaggregation (i.e. modularization) is a key aspect. The architecture is very similar to a spine-leaf switching architecture and consists of (or comprises) access nodes (terminating physical subscriber lines), switches (transporting, aggregating, shaping subscriber traffic) and servers (hosting the modularized software components that support local management plane and control plane functions for device discovery, subscriber session setup, and maintenance). Such a mini data center system is considered to be a central office point of delivery according to the present invention.
- When customers attach their customer premises equipments to the telecommunications network over the physical line to the access node (AN) (via or triggering a device attach message), processes inside the control plane implemented by the software framework are triggered (such as access line autoconfig or the like). In subsequent steps, a data path through the switching fabric (of the central office point of delivery) is established (by means of a path provisioning step). Once that is done, the IP session (cf. Broadband Forum TR-146, TR-187) is being set up.
- The main principle at all these last three stages in the control plane (i.e. software part) is: (1) receive a packet (request), (2) process the request (especially by looking up data in databases, possibly also by issuing requests to other modules, processing the request, and passing on the request to the next processing stage (which can also be the sender of the request)). According to the present invention, this functional architecture is implemented in a microservice framework supported by a method for registering the service instances and exchanging messages among the instances (e.g. via a message bus system such as Apache Kafka).
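The receive/process/pass-on principle with registered service instances can be sketched as follows, using an in-memory topic-based bus as a stand-in for a system such as Apache Kafka; topic and service names are illustrative assumptions:

```python
# Sketch of the receive/process/pass-on principle: services register
# (subscribe) for topics, receive requests, process them, and pass the
# result on to the next stage via the bus (names are illustrative).
from collections import defaultdict, deque

class Bus:
    def __init__(self):
        self.topics = defaultdict(deque)
        self.subscribers = {}  # topic -> registered handler

    def subscribe(self, topic, handler):
        self.subscribers[topic] = handler

    def publish(self, topic, message):
        self.topics[topic].append(message)

    def deliver(self):
        # Drain all topics; handlers may publish follow-up messages,
        # which are delivered in subsequent passes.
        while any(self.topics.values()):
            for topic in list(self.topics):
                while self.topics[topic]:
                    msg = self.topics[topic].popleft()
                    self.subscribers[topic](msg)

bus = Bus()
done = []
bus.subscribe("attach", lambda m: bus.publish("provision", m + ":configured"))
bus.subscribe("provision", lambda m: done.append(m + ":path-provisioned"))
bus.publish("attach", "line-7")   # device attach triggers the chain
bus.deliver()
```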
- The first event at each stage triggers the communication between microservices in the central office point of delivery. Those also contact external backend systems. Each stage contains multiple processing steps / message exchanges among such microservice instances as well as between the control and management plane microservices and the physical network devices (incl. the customer premises equipment/optical fiber network terminal devices). Once the internet protocol session is up, the main task of the microservices is to support session maintenance (incl. monitoring and OAM) as well as to change their characteristics (e.g. via applying policies) on demand during runtime.
- The time it takes for completing the full setup procedure is determined by the complexity (processing time for each step as well as the number of steps) and the capacity of the compute servers within the considered central office point of delivery (i.e. the capacity of the respective micro services). While the complexity of the procedure is optimized, the end-to-end processing time to get a customer in-service is determined by the available compute power (as well as storage capacity) in the central office point of delivery.
- According to the present invention, in order for the central office point of delivery to be able to handle increased load situations and/or to perform specific tasks within the telecommunications network, additional virtualized network functions and/or additional micro services and an additional message router and load balancing entity are (especially temporarily) made available to the central office point of delivery over the tunnel connection.
- The telecommunications network according to the present invention might be a fixed-line telecommunications network or a mobile communication network but could also have both aspects, i.e. parts of a fixed-line telecommunications network (or being a fixed-line telecommunications network in such parts) and parts of a mobile communication network (or being a mobile communication network in such parts); such networks are also known under the term fixed-mobile-convergence networks (FMC networks).
- According to the present invention, it is advantageously possible and preferred that the second step comprises the central office point of delivery, especially the load management entity or functionality,
- -- to identify and to reserve and/or to request the additional virtualized network functions and/or additional micro services and the additional message router and load balancing entity,
- -- to use the additional virtualized network functions and/or additional micro services, and/or the additional message router and load balancing entity, especially by means of either
- -- extending the available plurality of virtualized network functions and/or micro services by the additional virtualized network functions and/or additional micro services and/or extending the message router and load balancing entity by the additional message router and load balancing entity, or by
- -- relocalizing the plurality of virtualized network functions and/or micro services to the additional virtualized network functions and/or additional micro services,
- -- to detect the increased load situation to be over and/or to detect the completion of the specific task, and to release the use of the additional virtualized network functions and/or the additional micro services and of the additional message router and load balancing entity for the purposes of the central office point of delivery.
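The sub-steps listed above (identify and reserve, use by extending or relocalizing, detect completion, release) can be sketched as a simple lifecycle; class, state and method names are illustrative, not claim language:

```python
# Sketch of the second-step lifecycle: reserve external resources, use
# them in one of two modes ("extend" or "relocalize"), and release them
# once the load situation is over (names are illustrative).
class OffloadLifecycle:
    def __init__(self):
        self.state = "idle"
        self.external_reserved = False

    def identify_and_reserve(self):
        self.external_reserved = True
        self.state = "reserved"

    def use(self, mode):
        assert self.external_reserved, "resources must be reserved first"
        assert mode in ("extend", "relocalize")
        self.state = f"using:{mode}"

    def release(self):
        # Increased load situation over / specific task completed.
        self.external_reserved = False
        self.state = "idle"

lc = OffloadLifecycle()
lc.identify_and_reserve()
lc.use("extend")
state_during = lc.state
lc.release()
```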
- It is thereby advantageously possible according to this preferred embodiment of the present invention to implement and execute, in a controlled manner, the inventive principle of using - at least, but typically also only, temporarily - the additional virtualized network functions and/or additional micro services, and/or the additional message router and load balancing entity, to either extend the available plurality of virtualized network functions and/or micro services by the additional virtualized network functions and/or additional micro services, or to relocalize them to the additional virtualized network functions and/or additional micro services, and/or to extend the message router and load balancing entity by the additional message router and load balancing entity, and
- -- to detect the increased load situation to be over and/or to detect the completion of the specific task, and to release the use of the additional virtualized network functions and/or the additional micro services and of the additional message router and load balancing entity for the purposes of the central office point of delivery.
- Thereby, it is - at least temporarily, but preferably only temporarily - possible to provide, to the central office point of delivery, an enhanced processing capacity in order to handle increased load situations and/or to perform specific tasks.
- According to a further preferred embodiment of the present invention, the emergency stop also involves shutting down the plurality of compute nodes or servers of the central office point of delivery.
- Thereby, it is advantageously possible to efficiently implement the method according to the present invention.
- According to a further embodiment of the present invention, the central office point of delivery comprises a microservices management system to measure the load of the virtualized network functions and/or micro services in real-time.
- Thereby, it is advantageously possible to easily and efficiently implement the use of virtualized network functions and/or micro services within the central office point of delivery according to the present invention.
- Furthermore, according to a preferred embodiment of the present invention, the central office point of delivery and/or the broadband access network comprises a switching fabric, the switching fabric comprising a plurality of spine network nodes and a plurality of leaf network nodes, and/or wherein the central office point of delivery and/or the broadband access network comprises a plurality of line termination nodes, wherein each one of the plurality of line termination nodes is connected to at least two leaf network nodes of the plurality of leaf network nodes.
- Thereby, it is advantageously possible to efficiently implement the method according to the present invention.
- According to a further embodiment of the present invention, increased load situations and/or performing specific tasks within the telecommunications network and/or the central office point of delivery include one or a plurality of the following:
- -- complete reboot of the central office point of delivery,
- -- partial reboot of the central office point of delivery, especially a reboot of a line termination node and/or a reboot of a leaf network node,
- -- scheduled maintenance of the compute nodes or servers of the central office point of delivery,
- -- an update, especially a scheduled update of all or at least a majority of user sessions currently running within the central office point of delivery,
- -- running a data processing task or special operative mode, especially a local data processing task or local special operative mode, in the central office point of delivery, especially in the control plane of the central office point of delivery, especially a debugging mode,
- -- relocalizing the control plane of the central office point of delivery to realize the functionality of the central office point of delivery by the additional virtualized network functions and/or additional micro services and/or by the additional message router and load balancing entity.
- By means of being able, according to the present invention, to efficiently handle increased load situations and/or to efficiently perform specific tasks regarding a multitude of different situations and scenarios, it is advantageously possible according to the present invention to realize the different central office points of delivery in a manner providing less hardware and/or software resources, such that the functionality of the central office point of delivery is able to be provided in a more cost-effective manner.
- Additionally, the present invention relates to a central office point of delivery according to the appended independent claim 7.
- Still additionally, the present invention relates to a program as defined according to the appended independent claim 10.
- Furthermore, the present invention relates to a computer-readable medium as defined according to the appended independent claim 11.
- These and other characteristics, features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention. The description is given for the sake of example only, without limiting the scope of the invention. The reference figures quoted below refer to the attached drawings.
-
Figure 1 schematically illustrates a telecommunications network according to the present invention, having a broadband access network with a central office point of delivery. -
Figure 2 schematically shows in greater detail the central office point of delivery comprising a certain number of components, entities and/or network nodes, especially a plurality of compute nodes or servers being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services, and, on the other hand, a message router and load balancing entity connecting these virtualized network functions and/or micro services. -
Figure 3 schematically illustrates a block diagram comprising a plurality of virtualized network functions and/or micro services, and a message router and load balancing entity connecting these virtualized network functions and/or micro services within a central office point of delivery according to the present invention. -
Figure 4 schematically illustrates a further block diagram comprising likewise a plurality of virtualized network functions and/or micro services, together with a message router and load balancing entity within a central office point of delivery according to the present invention, being connected to additional virtualized network functions and/or micro services, and additional virtualized network functions and/or additional micro services and an additional message router and load balancing entity over a tunnel connection. -
Figure 5 schematically illustrates a load management entity or functionality and its building blocks and/or components. - The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes.
- Where an indefinite or definite article is used when referring to a singular noun, e.g. "a", "an", "the", this includes a plural of that noun unless something else is specifically stated.
- Furthermore, the terms first, second, third and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order; this is especially the case for the terms "first step", "second step", etc. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
- In
Figure 1, a telecommunications network 100 according to the present invention is schematically shown, having - preferably - at least a fixed line part. A mobile (or cellular) part might be present as well, as part of the telecommunications network 100, but is not specifically illustrated in Figure 1. User equipment or client devices 51, 52 are connected to the telecommunications network 100 by means of a (broadband) access network 120. The telecommunications network 100 comprises, especially as part of the broadband access network 120, at least one logical or physical central office point of delivery 110 that is preferably realized within a (mini) data center and that is especially handling different access requirements, especially different access possibilities, of the client devices 51, 52 to network functionalities provided by the telecommunications network 100 or via the telecommunications network 100. The client devices 51, 52 are typically connected to the logical or physical central office point of delivery 110 by means of a customer premises equipment device 50, 50' or by means of a customer premises equipment functionality that might be built into or realized by the client devices 51, 52. Preferably but not necessarily, the central office point of delivery 110 comprises a switching fabric 115 comprising a plurality of spine network nodes and typically also a plurality of leaf network nodes which are not explicitly represented in Figure 1. -
Figure 2 schematically shows in greater detail the central office point of delivery 110 as part of the broadband access network 120 of a telecommunications network 100, the central office point of delivery 110 comprising a certain number of components, entities and/or network nodes, especially a plurality of compute nodes or servers of the central office point of delivery 110. The represented part of the telecommunications network 100 comprises a switching fabric 115 comprising a plurality of spine network nodes and a plurality of leaf network nodes. Figure 2 furthermore shows a plurality of access nodes, i.e. such access nodes or line termination nodes (cf. Figure 2). The line termination nodes or access nodes are connected to a customer premises equipment 50 and a user equipment 51. In such a situation, the client device 51 is connected to the telecommunications network 100 (i.e. to the respective access node of the plurality of access nodes; cf. Figure 2). The functionality of the customer premises equipment 50 (or home gateway device 50, cf. Figure 1) and the functionality of the network termination node might also be integrated in one device or "box". Even the functionality of the client device 51, the functionality of the customer premises equipment 50 (or home gateway device 50) and the functionality of the network termination node might be integrated in one device or "box". - According to the present invention, the central office point of
delivery 110 has or realizes a plurality of access nodes of the telecommunications network 100. The processing done within the central office point of delivery 110 mainly relates to carrier control plane functions; hence, also the handling of increased load situations and/or the performance of specific tasks performed or to be performed by the central office point of delivery 110 relates to carrier control plane functions, i.e. typically not (or not primarily) to the user plane traffic of the subscribers, or their user equipments, connected to the central office point of delivery 110. In Figure 2, primarily the physical entities, components and/or network nodes of the central office point of delivery 110 are schematically shown. - In
Figure 3, a block diagram showing primarily the virtualized layer and its parts or components of the central office point of delivery 110 is schematically shown. - The central office point of
delivery 110 comprises (i.e. the physical hardware within the central office point of delivery 110 is able to instantiate) a plurality of virtualized network functions and/or micro services 201, 202, 203 as well as a message router and load balancing entity 210 connecting these virtualized network functions and/or micro services 201, 202, 203; the compute nodes or servers realize both the micro services 201, 202, 203 and the message router and load balancing entity 210. - In addition to the virtualized network functions and/or
micro services 201, 202, 203 and the message router and load balancing entity 210, the central office point of delivery 110 also comprises or realizes (or instantiates) a microservices management system 209, and comprises a load management entity or functionality 180. By means of the microservices management system 209, it is advantageously possible to execute management decisions and/or to comply with policies or rules regarding the virtualized network functions and/or micro services 201, 202, 203. In Figure 5, the load management entity or functionality 180 and its building blocks and/or components are schematically illustrated. Preferably according to the present invention, the load management entity or functionality 180 comprises a load detection function or functionality 181, a rules and/or policies function or database or repository 182, an external management interface 183, a decision logic 184, and an offload execution engine 185. By means of or through the external management interface 183, it is advantageously possible according to the present invention that an operator 189 is able to generally trigger an offload or is able to trigger that either specified micro services or that specified micro service instances or processing capacities are offloaded (i.e. from the central office point of delivery 110 to the external processing resources). In such a case, the offload execution engine 185 is directly triggered, which is schematically represented by an arrow from the external management interface 183 to the offload execution engine 185. Both the signals produced or generated by the load detection function or functionality 181 and the content of the rules and/or policies function or database or repository 182 are provided to the decision logic 184 (schematically represented, in Figure 5, by respective arrows), and resulting decisions are provided to the offload execution engine 185 (also schematically represented, in Figure 5, by an arrow). - As already said, the processing done within the central office point of
delivery 110 mainly relates to carrier control plane functions, i.e. typically not (or not primarily) to the user plane traffic of the subscribers, or their user equipments, connected to the central office point of delivery 110. Hence, the virtualized network functions and/or micro services 201, 202, 203 mainly provide control plane functions with respect to the central office point of delivery 110 and/or to the user equipments connected thereto. - In
Figure 4, a further block diagram is schematically shown relating to the virtualized network functions and/or micro services 201, 202, 203 provided within the central office point of delivery 110, i.e. locally with respect to the central office point of delivery 110, as well as additional virtualized network functions and/or additional micro services 301, 302, 303 provided externally to the central office point of delivery 110. The hardware infrastructure providing (realizing or instantiating) both the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310 might either be another central office point of delivery (e.g. a more or less neighbouring central office point of delivery) and/or a central or centralized hardware component or data center, either as part of the telecommunications network 100 or external to the telecommunications network 100 (but, of course, accessible by the telecommunications network 100). According to the present invention, the processing capacity (especially in terms of being able to provide virtualized network functions and/or micro services 201, 202, 203) of the central office point of delivery 110 and its processing power is (especially temporarily) enhanced by means of the external hardware infrastructure providing the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310. The additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310 are linked (or made available to the resources within the considered central office point of delivery 110) by means of using a tunnel connection 250, especially a virtual private network tunnel connection 250. - According to the present invention, the central office point of
delivery 110 comprises a load management entity or functionality 180, especially for detecting the need to ask for or to request additional virtualized network functions and/or additional micro services 301, 302, 303 (within the telecommunications network 100 but external to the central office point of delivery 110). According to the present invention, it is advantageously possible that such decisions to enhance the locally available processing capacity (of the virtualized network functions and/or micro services 201, 202, 203) are taken within the central office point of delivery 110 and/or within the telecommunications network 100. Hence, in a first step according to the present invention, the central office point of delivery 110 and/or the load management entity or functionality 180 thereof either detects that an increased load situation is currently happening, or expects that an increased load situation is likely to happen or that a specific task is to be performed. In a second step, the central office point of delivery 110 (especially the load management entity or functionality 180) triggers the use of the additional virtualized network functions and/or additional micro services 301, 302, 303 and of the additional message router and load balancing entity 310 for handling the actual or expected increased load situation and/or for performing specific tasks within the telecommunications network 100 and/or within the central office point of delivery 110. After the increased load situation is over or after the specific tasks are executed within the central office point of delivery 110 and/or within the telecommunications network 100, i.e. upon a detection of a normal load situation, the central office point of delivery 110 releases, in a third step, the use of the additional virtualized network functions and/or additional micro services 301, 302, 303 and of the additional message router and load balancing entity 310. - It is thereby advantageously possible according to the present invention that the processing capacities within a considered central office point of
delivery 110 are able to be - especially temporarily - enhanced such that a specific situation of high load (such as, e.g., during a (partial) boot operation within the central office point of delivery 110) and/or specific tasks to be performed within the telecommunications network 100 and/or within the central office point of delivery 110 are able to be handled efficiently, and especially without leading to a reduced service level or quality-of-service noticeable to the subscribers of the telecommunications network 100. Due to economic reasons, the number of servers 141, 142 within the central office point of delivery 110 is limited - also due to the need and the desire to implement an energy-efficient system design. During normal operation of the central office point of delivery 110, compute power in the different central office points of delivery within the telecommunications network 100 is typically not an issue; servers will not be under (heavy) load, as - especially in order to keep, e.g., the time limit for the system reboot reasonably low - the local hardware capacity setup (within each central office point of delivery 110) is typically over-dimensioned. - According to the present invention, especially the load management entity or
functionality 180 within the central office point of delivery 110 detects that an increased load situation is currently happening, or expects that an increased load situation is likely to happen or that a specific task is to be performed. Hence, according to the present invention, it is domain-related knowledge that typically triggers the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310 to be able to be used by the considered central office point of delivery 110 (and, according to the present invention, it is not a service-agnostic system that is only able to react to current load conditions and to outsource compute power completely to a central site without any local hosting of the respective services; hence, according to the present invention and contrary to such service-agnostic systems, a full dependency of the local system on the availability of the central resources is able to be avoided). - Hence, according to the present invention, it is advantageously possible to offload compute and storage power from the central office point of
delivery 110 to a further system (either a neighboring central office point of delivery or a centralized cloud system, or a combination thereof) to drive down, e.g., the restart time in case that many user sessions need to be set up at the same time. According to the present invention, a "cloud burst" mechanism is added to the central office point of delivery 110 (i.e. the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310), thereby enhancing the central office point of delivery 110 and its resources (virtualized network functions and/or micro services 201, 202, 203). - According to the present invention, the central office point of
delivery 110 of a telecommunications network 100 is able to use the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310. In order to do so, the load management entity or functionality 180 typically - -- detects the need to extend (offload or boost) the local resource capacity (based on, e.g., the load of local services or the scheduling of a system maintenance),
- -- identifies and reserves external resources, and
- -- extends the cloud cluster and load balancing; advantageously, this could be done, e.g., by means of Kubernetes and its included components;
- -- throttles the control and management plane of the central office point of
delivery 110 to ensure stable operation until the additional resources (i.e. the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310) are available; - -- extends the capacity (offload or boost) for specific microservices;
- -- detects completion;
- -- moves full operation back into the central office point of
delivery 110 and releases the remote resources (i.e. the additional virtualized network functions and/or additional micro services 301, 302, 303 and the additional message router and load balancing entity 310); and - -- emergency stops (and reboots) locally in case of failure / problems.
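By way of a non-limiting illustration, the sequence of load management steps listed above may be sketched as a small state machine; the class name, the states and the load threshold below are assumptions chosen for illustration only and are not part of the present invention:

```python
# Illustrative sketch of the load management workflow described above.
# All names and threshold values are assumptions for illustration.

class LoadManager:
    def __init__(self, load_threshold=0.8):
        self.load_threshold = load_threshold
        self.state = "NORMAL"
        self.remote_reserved = False

    def needs_extension(self, local_load, maintenance_scheduled=False):
        # Detect (or expect) an increased load situation or a scheduled task.
        return local_load >= self.load_threshold or maintenance_scheduled

    def run_cycle(self, local_load, reserve_remote, maintenance_scheduled=False):
        if self.state == "NORMAL" and self.needs_extension(local_load, maintenance_scheduled):
            # Identify and reserve external resources (request/response).
            self.remote_reserved = reserve_remote()
            if self.remote_reserved:
                self.state = "BURST"      # capacity extended for specific services
            # else: keep operating locally (throttled), no external dependency
        elif self.state == "BURST" and local_load < self.load_threshold:
            # Completion detected: move operation back, release remote resources.
            self.remote_reserved = False
            self.state = "NORMAL"
        return self.state

    def emergency_stop(self):
        # Failure handling: release remote resources and reboot locally.
        self.remote_reserved = False
        self.state = "NORMAL"
```

Note that, as described above, a failed reservation leaves the system fully operational on local resources only, avoiding any hard dependency on the central site.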
- Inside the central office point of
delivery 110, a plurality (or a multitude) of microservices (or virtualized network functions and/or micro services 201, 202, 203) is running: a first type 201, a second type 202, and a third type 203 of micro service is schematically shown in Figure 3; however, each type of micro service 201, 202, 203 might be instantiated multiple times (cf. Figure 2). - The message router (or message router and load balancing entity 210), e.g. a load balancer for HTTP(S) requests (hypertext transfer protocol (secure)) or a message bus, serves to route messages between the
micro services 201, 202, 203 connected to the message router and load balancing entity 210 of the central office point of delivery 110. - According to the present invention, the central office point of
delivery 110 is connected to an external data center (or another central office point of delivery) via a tunnel 250, which connects the message router and load balancing entity 210 in the central office point of delivery 110 to the message router and load balancing entity 310 in the external data center (or other central office point of delivery). This means that services such as specific micro services (e.g. the third micro service 203) are able to "move" into (or are able to be processed by) the external resource or cloud, and, with the help of adapting the configuration of the message router and load balancing entity 210, it looks to other services (e.g. to the first micro service 201 and to the second micro service 202) as if the third micro service 203 (and, if applicable, its clones or further instances) was/were still available locally, i.e. within the central office point of delivery 110. In case the third micro service 203 has been the determining factor for the overall performance of the system, by means of offloading and cloning this third micro service 203 (and providing its functionality by means of the additional virtualized network functions and/or additional micro services 301, 302, 303), the overall performance is able to be enhanced beyond what is locally possible within the central office point of delivery 110. - Different use cases for such an offloading are able to be implemented, however, at least the two following cases are mentioned: a full reboot operation of the central office point of
delivery 110 and a (partial) reboot operation thereof (i.e. a reboot operation of at least some components). Both operations result in a storm of network attachment requests, a high load on the control plane servers, and hence the need for offloading. A full reboot process could be implemented as follows:
In a first processing step of such a boot up of hardware and software components of the central office point of delivery 110, the gates towards customers are closed or limited to acceptable rates; in a second processing step, a virtual private network connection is established to external resources, e.g. a cloud resource, and resources thereof are reserved for the micro services needed; if no external resources are available, the central office point of delivery is booted anyway and connected to customers without external resources, i.e. using only local micro services; however, in case external resources are available, in a third processing step, service instances (i.e. additional virtualized network functions and/or additional micro services 301, 302, 303) are started within the external resources and connected via the message router and load balancing entities 210, 310; in a further processing step, once the offloaded micro services are no longer needed within the central office point of delivery 110, the data (especially state data) are moved to the central office point of delivery 110, both message router and load balancing entities 210, 310 are disconnected, and the gate of the message router and load balancing entity 210 is re-opened. Finally, this leads again to a stable operation of the central office point of delivery 110. - According to the present invention, a full offload of a type of service (or all micro services) might be differentiated from a burst operation where instances of the same microservice may reside inside and outside the central office point of delivery 110 (i.e. as part of the virtualized network functions and/or
micro services 201, 202, 203 as well as of the additional virtualized network functions and/or additional micro services 301, 302, 303). - According to the present invention, other use cases that require offloading of computing power in the central office point of
delivery 110 include: - -- a scheduled update of all running sessions in the central office point of
delivery 110 that requires a lot of internal signaling load; - -- running a "local" data processing task in the control plane, such as, e.g., a debug mode; an additional module could be imagined here that is put into the signaling path and performs, e.g., anomaly detection. In case the resources required for running those modules go beyond what the central office point of
delivery 110 is able to provide, it is advantageously possible according to the present invention to offload this module or the whole control plane to external resources. Thereby, it is possible and preferred according to the present invention to run the control plane fully outside of the central office point of delivery 110, where there are, e.g., much better tools to debug. It is furthermore possible according to the present invention to have a data center with extended debugging and/or telemetry tools; such a data center would receive the full control plane micro services and execute them while being fully observed by tools that could not be run in the restricted environment of a central office point of delivery 110. Other use cases for offloading processing power to instances or hardware resources outside of the central office point of delivery 110 include the following: - -- scheduled maintenance of the local servers or the local Kubernetes cluster;
- -- maintenance takeover of local server resources.
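By way of a non-limiting illustration, several of the situations above rely on temporarily closing or rate-limiting the gates towards customers until sufficient resources are available; a minimal token-bucket sketch of such a gate for network attachment requests could look as follows (the class name and the rates are illustrative assumptions):

```python
# Sketch of gate throttling for network attachment requests (illustrative only).

class AttachmentGate:
    def __init__(self, rate_per_second, burst):
        self.rate = rate_per_second   # sustained acceptable request rate
        self.capacity = burst         # short-term burst allowance
        self.tokens = burst
        self.closed = False

    def tick(self, seconds=1.0):
        # Refill tokens as time passes, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def admit(self):
        # Admit a request only if the gate is open and a token remains.
        if self.closed or self.tokens < 1:
            return False
        self.tokens -= 1
        return True
```

During a reboot, the gate would first be closed entirely (`gate.closed = True`) and then re-opened with a limited rate once local or external resources are ready.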
- This leads, again, to the basic principle according to the present invention.
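By way of a non-limiting illustration, this basic principle - an offloaded micro service instance remaining reachable under its local service name via the message router and load balancing entity - can be sketched as a routing table mixing local and tunnel endpoints; all identifiers below are illustrative assumptions:

```python
# Sketch: a message router whose routing table hides whether a service
# instance runs locally or behind the tunnel to the external data center.

class MessageRouter:
    def __init__(self):
        self.routes = {}   # service name -> list of endpoints

    def register(self, service, endpoint):
        self.routes.setdefault(service, []).append(endpoint)

    def deregister_local(self, service):
        # Full offload: drop local instances, keep only tunnel endpoints.
        self.routes[service] = [e for e in self.routes.get(service, [])
                                if not e.startswith("local:")]

    def dispatch(self, service):
        # Round-robin over all endpoints; callers never see the location.
        endpoints = self.routes.get(service, [])
        if not endpoints:
            raise LookupError("no instance of " + service)
        endpoint = endpoints.pop(0)
        endpoints.append(endpoint)
        return endpoint
```

A burst operation corresponds to registering both a `local:` and a `tunnel:` endpoint for the same service name, while a full offload corresponds to removing the local endpoints afterwards.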
- According to a specific embodiment of the present invention, the
micro services 201, 202, 203 are moved (or cloned) from the central office point of delivery 110 to the cloud offload data center, or stitched to an existing one. Especially according to the present invention, an adapted load balancing and communication between the central office point of delivery 110 and the external resources is provided, with especially the central office point of delivery 110 detecting the need to add external resources and triggering the offloading process (e.g. by shutting down interfaces to ingress events for a short time). Detecting the need can be done by Kubernetes (k8s) alone only by checking the server load or other metrics that are not application-specific. According to the present invention, this can also be done based on the following criteria: - -- Scheduled reboot or turn-on of debugging mode;
- -- Early detection of a signaling storm at the entry to the control plane signaling system (e.g. by means of detecting a huge amount of attachment requests: close the gate, scale out, open the gate);
- -- Detection of a bogus component that causes many messages but shall be handled and investigated, by offloading the workload and thus being able to handle the requests;
- -- Detection by other criteria that can be learnt, e.g. via signature detection / artificial intelligence (an example could be misbehaving devices that cause a signaling storm, or OLTs (optical line terminals) going out of service comparatively soon);
- -- Identifying the target data center, which can include a reservation process by request and response.
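By way of a non-limiting illustration, the detection criteria listed above can be combined into a single decision function; the parameter names and threshold values below are illustrative assumptions, not values taken from the present invention:

```python
# Sketch of the offload-trigger decision based on the criteria listed above.
# All thresholds and parameter names are illustrative assumptions.

def should_offload(attach_rate_per_s, storm_threshold=1000,
                   reboot_scheduled=False, debug_mode_requested=False,
                   bogus_component_detected=False,
                   anomaly_score=0.0, anomaly_threshold=0.9):
    """Return True if external resources should be reserved and used."""
    if reboot_scheduled or debug_mode_requested:     # scheduled reboot / debug mode
        return True
    if attach_rate_per_s >= storm_threshold:         # early signaling-storm detection
        return True
    if bogus_component_detected:                     # misbehaving component
        return True
    if anomaly_score >= anomaly_threshold:           # learnt criteria (e.g. AI/signatures)
        return True
    return False
```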
- Hence, according to the present invention, an architecture is provided for distributed resource management and control for micro services running in a mini-data center-like central office point of
delivery 110 and serving broadband network access control and management plane functions, with a focus on dynamically scaling the services out to and in from external data centers. It is especially preferred to identify, verify and reserve suitable remote (i.e. external to the central office point of delivery 110) data center resources. Furthermore, it is preferred to provide a mechanism to manage (e.g. by throttling/closing gates) the load on local services of the central office point of delivery 110, to ensure a stable operation, until additional remote cloud resources become available. Furthermore, it is preferred to provide a method to set up/tear down the network connection from the central office point of delivery 110 to remote cloud helper data center sites, and a method to verify the performance of network connections from the central office point of delivery 110 to remote cloud helper data center sites. The invention furthermore involves a method to temporarily bind and release remote data center resources for the execution of a selected set of micro services, running in a mini-data center-like central office point of delivery 110 and serving broadband network access control and management plane functions, to boost performance, either to burst or to fully offload all services or a selected subset of types.
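By way of a non-limiting illustration, the preferred reservation of remote data center resources by request and response, including a verification of the tunnel performance before binding, could be sketched as follows (the callback names and the latency budget are illustrative assumptions):

```python
# Sketch: temporarily bind and release remote data center resources,
# verifying tunnel performance before use (all names are illustrative).

def reserve_remote(candidates, request, probe, max_latency_ms=20.0):
    """Ask each candidate site to reserve capacity; bind the first one
    whose tunnel also meets the latency budget."""
    for site in candidates:
        if not request(site):               # reservation request/response
            continue
        if probe(site) <= max_latency_ms:   # verify tunnel performance
            return site                     # bound: remote resources usable
    return None                             # fall back to local-only operation

def release_remote(site, teardown):
    """Move full operation back locally and tear down the tunnel."""
    if site is not None:
        teardown(site)
```

The `request` and `probe` callbacks stand in for whatever reservation protocol and connection test an operator would actually deploy; returning `None` preserves the fallback to purely local operation described above.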
Claims (11)
- Method for an operation of a broadband access network (120) of a telecommunications network (100) comprising a central office point of delivery (110) and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network (100) and/or within the central office point of delivery (110), wherein the central office point of delivery (110) has or realizes a plurality of access nodes (151, 152, 153) that terminate physical subscriber lines serving end users of the telecommunications network (100), wherein the handling of increased load situations and/or the performance of specific tasks relate to carrier control plane functions performed or to be performed by the central office point of delivery, wherein the central office point of delivery (110) and/or the broadband access network (120) comprises a plurality of compute nodes or servers (141, 142) being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services (201, 202, 203), and, on the other hand, a message router and load balancing entity (210) connecting these virtualized network functions and/or micro services (201, 202, 203),
wherein the central office point of delivery (110) is furthermore connected or connectable to additional compute nodes being able to provide an infrastructure to realize additional virtualized network functions and/or additional micro services (301, 302, 303) and an additional message router and load balancing entity (310) over a tunnel connection (250),
wherein the central office point of delivery (110) comprises a load management entity or functionality (180),
wherein in order for operating the central office point of delivery (110) and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network (100) and/or within the central office point of delivery (110), the method comprises the following
steps:
-- in a first step, the central office point of delivery (110) and/or the load management entity or functionality (180) thereof
-- detects that an increased load situation is currently happening, or
-- expects that an increased load situation is likely to happen or that a specific task is to be performed,
-- in a second step, the central office point of delivery (110) triggers the use of the additional virtualized network functions and/or additional micro services (301, 302, 303), being realized or instantiated as part of a hardware infrastructure external to the considered central office point of delivery (110), and the additional message router and load balancing entity (310) for handling the actual or expected increased load situation and/or for performing specific tasks within the telecommunications network (100) and/or within the central office point of delivery (110),
-- in a third step, upon a detection of a normal load situation, the central office point of delivery (110) releases the use of the additional virtualized network functions and/or additional micro services (301, 302, 303) and the use of the additional message router and load balancing entity (310),
wherein - in case of a failure or in case of problems, especially during the second step - the central office point of delivery (110)
-- triggers an emergency stop of at least the plurality of virtualized network functions and/or micro services (201, 202, 203) and/or the message router and load balancing entity (210), and
-- triggers a local reboot of at least the plurality of virtualized network functions and/or micro services (201, 202, 203) and/or of the message router and load balancing entity (210) and/or of the plurality of compute nodes or servers (141, 142) of the central office point of delivery (110).
- Method according to claim 1, wherein the second step comprises the central office point of delivery (110), especially the load management entity or functionality (180),
-- to identify and to reserve and/or to request the additional virtualized network functions and/or additional micro services (301, 302, 303) and the additional message router and load balancing entity (310),
-- to use the additional virtualized network functions and/or additional micro services (301, 302, 303), and/or the additional message router and load balancing entity (310), especially by means of either
-- extending the available plurality of virtualized network functions and/or micro services (201, 202, 203) by the additional virtualized network functions and/or additional micro services (301, 302, 303) and/or extending the message router and load balancing entity (210) by the additional message router and load balancing entity (310), or by
-- relocalizing the plurality of virtualized network functions and/or micro services (201, 202, 203) to the additional virtualized network functions and/or additional micro services (301, 302, 303),
-- to detect the increased load situation to be over and/or to detect the completion of the specific task, and to release the use of the additional virtualized network functions and/or the additional micro services (301, 302, 303) and of the additional message router and load balancing entity (310) for the purposes of the central office point of delivery (110).
- Method according to one of the preceding claims, the emergency stop also involving shutting down the plurality of compute nodes or servers (141, 142) of the central office point of delivery (110).
- Method according to one of the preceding claims, wherein the central office point of delivery (110) comprises a microservices management system (209) to measure the load of the virtualized network functions and/or micro services (201, 202, 203) in real-time.
- Method according to one of the preceding claims, wherein the central office point of delivery (110) and/or the broadband access network (120) comprises a switching fabric (115), the switching fabric (115) comprising a plurality of spine network nodes (171, 172) and a plurality of leaf network nodes (161, 162), and/or wherein the central office point of delivery (110) and/or the broadband access network (120) comprises a plurality of line termination nodes (151, 152, 153), wherein each one of the plurality of line termination nodes (151, 152, 153) is connected to at least two leaf network nodes of the plurality of leaf network nodes (161, 162).
- Method according to one of the preceding claims, wherein increased load situations and/or performing specific tasks within the telecommunications network (100) and/or the central office point of delivery (110) include one or a plurality of the following:
-- complete reboot of the central office point of delivery (110),
-- partial reboot of the central office point of delivery (110), especially a reboot of a line termination node (151, 152, 153) and/or a reboot of a leaf network node (161, 162),
-- scheduled maintenance of the compute nodes or servers (141, 142) of the central office point of delivery (110),
-- an update, especially a scheduled update of all or at least a majority of user sessions currently running within the central office point of delivery (110),
-- running a data processing task or special operative mode, especially a local data processing task or local special operative mode, in the central office point of delivery (110), especially in the control plane of the central office point of delivery (110), especially a debugging mode,
-- relocalizing the control plane of the central office point of delivery (110) to realize the functionality of the central office point of delivery (110) by the additional virtualized network functions and/or additional micro services (301, 302, 303) and/or by the additional message router and load balancing entity (310).
- Central office point of delivery (110) for an operation of a broadband access network (120) of a telecommunications network (100) comprising a central office point of delivery (110) and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network (100) and/or within the central office point of delivery (110), wherein the central office point of delivery (110) has or realizes a plurality of access nodes (151, 152, 153) that terminate physical subscriber lines serving end users of the telecommunications network (100), wherein the handling of increased load situations and/or the performance of specific tasks relate to carrier control plane functions performed or to be performed by the central office point of delivery, wherein the central office point of delivery (110) and/or the broadband access network (120) comprises a plurality of compute nodes or servers (141, 142) being internally connected and providing an infrastructure to realize, on the one hand, a plurality of virtualized network functions and/or micro services (201, 202, 203), and, on the other hand, a message router and load balancing entity (210) connecting these virtualized network functions and/or micro services (201, 202, 203),
wherein the central office point of delivery (110) is furthermore connected or connectable to additional compute nodes being able to provide an infrastructure to realize additional virtualized network functions and/or additional micro services (301, 302, 303) and an additional message router and load balancing entity (310) over a tunnel connection (250),
wherein the central office point of delivery (110) comprises a load management entity or functionality (180),
wherein in order for operating the central office point of delivery (110) and/or for handling increased load situations and/or for performing specific tasks within the telecommunications network (100) and/or within the central office point of delivery (110), the
central office point of delivery (110) is configured such that:
-- the central office point of delivery (110) and/or the load management entity or functionality (180) thereof
-- detects that an increased load situation is currently happening, or
-- expects that an increased load situation is likely to happen or that a specific task is to be performed,
-- the central office point of delivery (110) triggers the use of the additional virtualized network functions and/or additional micro services (301, 302, 303), being realized or instantiated as part of a hardware infrastructure external to the considered central office point of delivery (110), and the additional message router and load balancing entity (310) for handling the actual or expected increased load situation and/or for performing specific tasks within the telecommunications network (100) and/or within the central office point of delivery (110),
-- upon a detection of a normal load situation, the central office point of delivery (110) releases the use of the additional virtualized network functions and/or additional micro services (301, 302, 303) and the use of the additional message router and load balancing entity (310),
wherein the central office point of delivery (110) is configured such that - in case of a failure or in case of problems - the central office point of delivery (110)
-- triggers an emergency stop of at least the plurality of virtualized network functions and/or micro services (201, 202, 203) and/or the message router and load balancing entity (210), and
-- triggers a local reboot of at least the plurality of virtualized network functions and/or micro services (201, 202, 203) and/or of the message router and load balancing entity (210) and/or of the plurality of compute nodes or servers (141, 142) of the central office point of delivery (110).
- System for an operation of a broadband access network (120) of a telecommunications network (100) comprising a central office point of delivery (110) according to claim 7.
- Broadband access network (120) or telecommunications network (100) for an operation of a broadband access network (120) of a telecommunications network (100) comprising a central office point of delivery (110) according to claim 7.
- Program comprising a computer readable program code which, when executed on a computer and/or on a network node of a central office point of delivery (110) or on a load management entity or functionality (180), or in part on the network node of a central office point of delivery (110) and/or in part on the load management entity or functionality (180), causes the computer and/or the network node of the central office point of delivery (110) or the load management entity or functionality (180) to perform a method according to one of claims 1 to 5.
- Computer-readable medium comprising instructions which, when executed on a computer and/or on a network node of a central office point of delivery (110) or on a load management entity or functionality (180), or in part on the network node of a central office point of delivery (110) and/or in part on the load management entity or functionality (180), cause the computer and/or the network node of the central office point of delivery (110) or the load management entity or functionality (180) to perform a method according to one of claims 1 to 5.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
ES20190484T ES2936652T3 (en) | 2020-08-11 | 2020-08-11 | Procedure for operation of a broadband access network of a telecommunications network comprising a central office delivery point, a central office delivery point, a program and a computer-readable medium |
EP20190484.4A EP3955522B1 (en) | 2020-08-11 | 2020-08-11 | Method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery, a central office point of delivery, a program and a computer-readable medium |
US17/393,421 US20220052953A1 (en) | 2020-08-11 | 2021-08-04 | Operation of a broadband access network of a telecommunications network comprising a central office point of delivery |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20190484.4A EP3955522B1 (en) | 2020-08-11 | 2020-08-11 | Method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery, a central office point of delivery, a program and a computer-readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3955522A1 EP3955522A1 (en) | 2022-02-16 |
EP3955522B1 true EP3955522B1 (en) | 2022-12-21 |
Family
ID=72050659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20190484.4A Active EP3955522B1 (en) | 2020-08-11 | 2020-08-11 | Method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery, a central office point of delivery, a program and a computer-readable medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220052953A1 (en) |
EP (1) | EP3955522B1 (en) |
ES (1) | ES2936652T3 (en) |
Family Cites Families (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8179809B1 (en) * | 1999-08-23 | 2012-05-15 | Oracle America, Inc. | Approach for allocating resources to an apparatus based on suspendable resource requirements |
US9061207B2 (en) * | 2002-12-10 | 2015-06-23 | Sony Computer Entertainment America Llc | Temporary decoder apparatus and method |
WO2004081762A2 (en) * | 2003-03-12 | 2004-09-23 | Lammina Systems Corporation | Method and apparatus for executing applications on a distributed computer system |
US8000254B2 (en) * | 2008-12-23 | 2011-08-16 | General Instruments Corporation | Methods and system for determining a dominant impairment of an impaired communication channel |
WO2011059370A1 (en) * | 2009-11-16 | 2011-05-19 | Telefonaktiebolaget L M Ericsson (Publ) | Apparatuses and methods for reducing a load on a serving gateway in a communications network system |
US8533337B2 (en) * | 2010-05-06 | 2013-09-10 | Citrix Systems, Inc. | Continuous upgrading of computers in a load balanced environment |
US9065831B2 (en) * | 2011-03-01 | 2015-06-23 | Cisco Technology, Inc. | Active load distribution for control plane traffic using a messaging and presence protocol |
US8948596B2 (en) * | 2011-07-01 | 2015-02-03 | CetusView Technologies, LLC | Neighborhood node mapping methods and apparatus for ingress mitigation in cable communication systems |
US9311160B2 (en) * | 2011-11-10 | 2016-04-12 | Verizon Patent And Licensing Inc. | Elastic cloud networking |
US8935375B2 (en) * | 2011-12-12 | 2015-01-13 | Microsoft Corporation | Increasing availability of stateful applications |
US9323628B2 (en) * | 2012-10-09 | 2016-04-26 | Dh2I Company | Instance level server application monitoring, load balancing, and resource allocation |
US20150263894A1 (en) * | 2014-03-14 | 2015-09-17 | Avni Networks Inc. | Method and apparatus to migrate applications and network services onto any cloud |
US20170199770A1 (en) * | 2014-06-23 | 2017-07-13 | Getclouder Ltd. | Cloud hosting systems featuring scaling and load balancing with containers |
US20160112252A1 (en) * | 2014-10-15 | 2016-04-21 | Cisco Technology, Inc. | Deployment and upgrade of network devices in a network environment |
EP3060000A1 (en) * | 2015-02-20 | 2016-08-24 | Thomson Licensing | Adaptive load balancing in residential hybrid gateways |
US10425322B2 (en) * | 2015-04-13 | 2019-09-24 | Ciena Corporation | Scalable broadband metro network architecture |
US10230662B2 (en) * | 2016-05-20 | 2019-03-12 | Mitel Networks, Inc. | Hybrid cloud deployment for hybrid unified communications |
US10164858B2 (en) * | 2016-06-15 | 2018-12-25 | Time Warner Cable Enterprises Llc | Apparatus and methods for monitoring and diagnosing a wireless network |
US10348838B2 (en) * | 2016-07-22 | 2019-07-09 | Cisco Technology, Inc. | Scaling service discovery in a micro-service environment |
US10157071B2 (en) * | 2016-08-30 | 2018-12-18 | Vmware, Inc. | Method for migrating a virtual machine between a local virtualization infrastructure and a cloud-based virtualization infrastructure |
US10812366B1 (en) * | 2017-08-31 | 2020-10-20 | Google Llc | System and method for deploying, scaling and managing network endpoint groups in cloud computing environments |
US10623822B2 (en) * | 2017-10-20 | 2020-04-14 | Cisco Technology, Inc. | Virtual cable modem termination system redundancy |
WO2019086719A1 (en) * | 2017-11-06 | 2019-05-09 | Athonet S.R.L. | Policy-driven local offload of selected user data traffic at a mobile edge computing platform |
US20200022005A1 (en) * | 2018-01-18 | 2020-01-16 | Cable Television Laboratories, Inc. | Ad-hoc wireless mesh network system and methodology for failure reporting and emergency communications |
US10860367B2 (en) * | 2018-03-14 | 2020-12-08 | Microsoft Technology Licensing, Llc | Opportunistic virtual machine migration |
US10749971B2 (en) * | 2018-04-24 | 2020-08-18 | Microsoft Technology Licensing, Llc | Virtual private network gateway management |
US10938626B2 (en) * | 2018-07-25 | 2021-03-02 | Microsoft Technology Licensing, Llc | Fast failover for gateway instances |
US10691568B2 (en) * | 2018-07-26 | 2020-06-23 | International Business Machines Corporation | Container replication and failover orchestration in distributed computing environments |
US20200050694A1 (en) * | 2018-08-13 | 2020-02-13 | Amazon Technologies, Inc. | Burst Performance of Database Queries According to Query Size |
US10897497B2 (en) * | 2018-11-13 | 2021-01-19 | International Business Machines Corporation | Automated infrastructure updates in a cluster environment that includes containers |
EP3884201A1 (en) * | 2018-11-20 | 2021-09-29 | Transocean Sedco Forex Ventures Limited | Proximity-based personnel safety system and method |
US10785166B1 (en) * | 2018-11-29 | 2020-09-22 | Cox Communications, Inc. | Resource assignment protocol-implemented policy-based direction of a client to an edge-compute resource |
US10855588B2 (en) * | 2018-12-21 | 2020-12-01 | Juniper Networks, Inc. | Facilitating flow symmetry for service chains in a computer network |
US10693713B1 (en) * | 2019-02-22 | 2020-06-23 | At&T Intellectual Property I, L.P. | Method and apparatus for providing service coverage with a measurement-based dynamic threshold adjustment |
US11106516B2 (en) * | 2019-04-10 | 2021-08-31 | International Business Machines Corporation | Selective service-specific controls in a virtualized container environment |
US20200328977A1 (en) * | 2019-04-10 | 2020-10-15 | Cisco Technology, Inc. | Reactive approach to resource allocation for micro-services based infrastructure |
US11175939B2 (en) * | 2019-05-09 | 2021-11-16 | International Business Machines Corporation | Dynamically changing containerized workload isolation in response to detection of a triggering factor |
US10992546B2 (en) * | 2019-07-09 | 2021-04-27 | Charter Communications Operating, Llc | Multi-domain software defined network controller |
US11074111B2 (en) * | 2019-07-15 | 2021-07-27 | Vmware, Inc | Quality of service scheduling with workload profiles |
US11669349B2 (en) * | 2019-07-24 | 2023-06-06 | Workspot, Inc. | Method and system for cloud desktop fabric |
- 2020
  - 2020-08-11 EP EP20190484.4A patent/EP3955522B1/en active Active
  - 2020-08-11 ES ES20190484T patent/ES2936652T3/en active Active
- 2021
  - 2021-08-04 US US17/393,421 patent/US20220052953A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP3955522A1 (en) | 2022-02-16 |
ES2936652T3 (en) | 2023-03-21 |
US20220052953A1 (en) | 2022-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Velasco et al. | In-operation network planning | |
US9407557B2 (en) | Methods and systems to split equipment control between local and remote processing units | |
US20220052923A1 (en) | Data processing method and device, storage medium and electronic device | |
CN105706393A (en) | Method and system of supporting operator commands in link aggregation group | |
Fichera et al. | On experimenting 5G: Testbed set-up for SDN orchestration across network cloud and IoT domains | |
US10530669B2 (en) | Network service aware routers, and applications thereof | |
CN110266593B (en) | Self-adaptive routing switching cloud network system based on flow monitoring | |
CN112203172B (en) | Special line opening method and device | |
Dab et al. | An efficient traffic steering for cloud-native service function chaining | |
KR101975082B1 (en) | Methods for managing transaction in software defined networking network | |
US10142200B2 (en) | Methods and systems for a network appliance module enabling dynamic VDC aware span | |
CN107911243B (en) | Network optimization method, device and computer readable storage medium | |
EP3955522B1 (en) | Method for an operation of a broadband access network of a telecommunications network comprising a central office point of delivery, a central office point of delivery, a program and a computer-readable medium | |
EP4080850A1 (en) | Onboarding virtualized network devices to cloud-based network assurance system | |
US6768746B1 (en) | Telecommunications network with a transport layer controlled by an internet protocol layer | |
US20060129662A1 (en) | Method and apparatus for a service integration system | |
CN114826939B (en) | Network traffic analysis method of K8S cluster | |
US11456916B2 (en) | Operation and architecture of a central office point of delivery within a broadband access network of a telecommunications network | |
Bensalah et al. | Towards a new SDN NFV approach for the management of MPLS Infrastructures | |
Veichtlbauer et al. | Enabling Application Independent Redundancy by Using Software Defined Networking | |
Haga et al. | Building intelligent future Internet infrastructures: FELIX for federating software-defined networking experimental networks in Europe and Japan | |
Pavan | Offering Cloud Native Network Services to Residential Users | |
CA2451042A1 (en) | Method and apparatus for provisioning a communication path | |
CN117336175A (en) | Computer network system, computer networking method and computer readable storage medium | |
CN114826953A (en) | Business arrangement method based on process and CFS/RFS model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17P | Request for examination filed |
Effective date: 20210512 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602020007033 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04L0012240000 Ipc: H04L0041086600 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04L 41/0866 20220101AFI20220506BHEP |
|
INTG | Intention to grant announced |
Effective date: 20220524 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
INTC | Intention to grant announced (deleted) |
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20220928 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602020007033 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1539717 Country of ref document: AT Kind code of ref document: T Effective date: 20230115 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FG2A Ref document number: 2936652 Country of ref document: ES Kind code of ref document: T3 Effective date: 20230321 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20221221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230321 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1539717 Country of ref document: AT Kind code of ref document: T Effective date: 20221221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230322 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230421 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230421 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602020007033 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20230831 Year of fee payment: 4 Ref country code: ES Payment date: 20230918 Year of fee payment: 4 |
|
26N | No opposition filed |
Effective date: 20230922 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20230821 Year of fee payment: 4 Ref country code: DE Payment date: 20230822 Year of fee payment: 4 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20221221 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230811 |