WO2022177455A1 - Method and system for optimizing resource and traffic management of a computer execution environment in a vRAN - Google Patents

Method and system for optimizing resource and traffic management of a computer execution environment in a vRAN

Info

Publication number
WO2022177455A1
Authority
WO
WIPO (PCT)
Prior art keywords
physical
traffic
virtual
power
data
Prior art date
Application number
PCT/PT2022/050007
Other languages
English (en)
Inventor
Shahid MUMTAZ
Anwer AL-DULAIMI
Jonathan RODRÍGUEZ GONZÁLEZ
Original Assignee
Instituto De Telecomunicações
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Instituto De Telecomunicações filed Critical Instituto De Telecomunicações
Publication of WO2022177455A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0896Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L41/0897Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0895Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements

Definitions

  • the present disclosure relates to automatic resource and traffic management in cloud computing platforms related to O-RANs, Open Radio Access Networks, vRANs, virtualized radio access networks, or other virtualized networks that are impacted by dynamic traffic alterations.
  • Cloud service providers normally allow users to launch cloud native compliant components that are entirely decoupled from the underlying hardware layer.
  • Virtualization operating systems can overcommit virtual functions using the same physical computational resources without any clear vision of the impacts on hardware. It is therefore necessary to identify the relevant challenges to current thread assignment mechanisms, especially when processing heavy loads such as high efficiency video coding (HEVC). From a hardware perspective, overloading computing systems may overheat the central processing units (CPUs), leading to service interruptions when a dedicated core selection mechanism is employed.
  • CPUs central processing units
  • the CPU power consumption is impacted by the number of threads, and it scales up with the load processed.
  • the CPU has multiple power levels that define its power consumption, for example PL1 (Power Level 1), PL2 (Power Level 2), and turbo power limits (TPLs), or tau.
  • PL1 is the effective long-term expected steady-state power consumption of a processor. It is understood that when the CPU reaches the thermal design power (TDP) it starts warming up and exhausting the cooling system's capability.
  • TDP thermal design power
  • New incoming service requests are new threads that are allocated to vCPUs (virtualized CPUs) based on the flows handled by these processes. Therefore, the elasticity feature allows processes to be more resilient to failure when the incoming traffic exceeds the assigned processing resources. However, threads are scheduled in a priority order in view of thread class and resource availability. Nevertheless, continuing to increase threads in a built-in server will significantly impact CPU performance. If this occurs, the server may become unstable and the whole physical and software system will be at risk of failure and potentially overheating. Typically, current hypervisors will wait until VMs crash and then launch new backup VMs. However, this tactic does not consider the duration of interruptions or the risk of cascading effects in small cloud centres.
  • the present disclosure consolidates two mechanisms that contribute to network scalability and performance improvements through efficient allocation of computational resources (e.g., CPU) and traffic management, namely thread assignment schemes that consider the global availability of cloud resources before instantiating any service.
  • the disclosed solution includes two schemes, a centralized and a distributed service distribution mechanism, to allocate incoming load between different computational platforms. To coordinate between the disclosed schemes, a power controller utility function at the virtual layer is also disclosed to automate resource assignments considering CPU overheating indicators.
  • the present disclosure further proposes an I/O traffic controller to manage traffic forwarding between virtual and physical domains in cloud native deployments.
  • the I/O traffic controller requires sensors that capture traffic volume, generating a database for I/O plugin interfaces.
  • the arrival of a request or a piece of new information can be identified as a process that requires a certain amount of resources to execute.
  • threads can either handle one process or a component of a process. This is decided by the executable code included in that process or by the process responsible for that form of information. At the system level, the time required to process a thread and the number of threads mapped to the number of CPUs on any server are usually well known. Therefore, it is reasonable to use threads to determine CPU utilization rather than tracking processes for different operations.
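  • As an illustration of this thread-based estimate, the following minimal Python sketch approximates CPU utilization from thread counts and per-thread processing time; the linear model and all names are illustrative assumptions, not the patent's specification.

```python
# Hypothetical sketch: estimate CPU utilization from thread counts rather than
# tracking individual processes, as suggested above. The linear model is an
# assumption for illustration.

def cpu_utilization(threads_assigned: int, avg_thread_time_s: float,
                    num_cpus: int, window_s: float = 1.0) -> float:
    """Approximate utilization as total thread processing time over available CPU time."""
    busy_time = threads_assigned * avg_thread_time_s
    capacity = num_cpus * window_s
    return busy_time / capacity  # values above 1.0 indicate overloading

# Example: 48 threads at ~50 ms each on 2 vCPUs over a 1 s window
print(cpu_utilization(48, 0.05, 2))  # 1.2 -> overloaded
```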
  • the hypervisor generates a common shared virtual layer that allows central monitoring of resources and coordinated assignment of threads, preferably from a single point.
  • This single point can be the hypervisor dashboard that also provides the necessary resources control through the Virtualized Infrastructure Manager (VIM), or equivalently Cloud Infrastructure Manager (CIM).
  • VIM Virtualized Infrastructure Manager
  • CIM Cloud Infrastructure Manager
  • a new component, named the power controller, is disclosed to monitor the power consumed by server CPUs. The framework shows the controller's location within the hypervisor layer and its association with the virtual baseband units (vBBUs) in the virtual domain and the orchestrator in the cloud management domain.
  • the vBBUs provide the processing resources for various numbers of Virtual Machines (VMs) or containers that formulate the Virtual Network Functions (VNFs).
  • VMs Virtual Machines
  • VNFs Virtual Network Functions
  • a vBBU can be equated with a Cloud native resource measure (CNRM), defined as a 1 CPU unit which is equivalent to 1 physical CPU core or 1 virtual core, depending on whether the node in question is a physical host itself or a virtual machine running inside a physical host.
  • CNRM Cloud native resource measure
  • the key management components of the virtual domain are the resource allocation, service analytics, and VIM functions that are bundled together within the orchestrator fabric. Since the number of threads allocated for each process changes dynamically along with traffic fluctuations or the number of VNFs, the disclosed power controller provides management control throughout the hypervisor layer to choose between centralized or distributed thread assignment mechanisms.
  • the modelling of thread assignment and status evaluation is performed using graph colouring algorithms.
  • Those algorithms provide the necessary mechanisms for the disclosed power controller to switch data centres from centralized thread mode to distributed thread mode assignments and vice-versa for the purpose of preventing CPU overheating and any subsequent system failures.
  • the present disclosure presents those two modes of operation and the algorithm used to label threads for each of them. Additionally, it shows how the power controller manages the mechanism for assigning resources.
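  • The disclosure names graph colouring but does not fix a particular algorithm; the sketch below uses standard greedy colouring to label threads, where vertices are flows (or vBBUs) and an edge joins two items that should not share a CPU. The conflict graph and the colour-to-CPU mapping are illustrative assumptions.

```python
# Greedy graph colouring as one possible way to label threads for assignment.
# An edge joins two flows that must not share a CPU (for example, two heavy
# flows whose combined power requirement would exceed a threshold).

from typing import Dict, Hashable, List

def greedy_colouring(adjacency: Dict[Hashable, List[Hashable]]) -> Dict[Hashable, int]:
    colours: Dict[Hashable, int] = {}
    for node in adjacency:                        # deterministic insertion order
        used = {colours[n] for n in adjacency[node] if n in colours}
        colour = 0
        while colour in used:                     # smallest colour unused by neighbours
            colour += 1
        colours[node] = colour                    # colour ~ CPU (or power class) index
    return colours

# Two HEVC flows that conflict with each other but not with a light flow:
conflicts = {"hevc_1": ["hevc_2"], "hevc_2": ["hevc_1"], "light": []}
print(greedy_colouring(conflicts))  # {'hevc_1': 0, 'hevc_2': 1, 'light': 0}
```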
  • VMs can be triggered manually through command shell or automatically from orchestrator level.
  • the user is able to add or create a flavour of virtual resources (virtual CPUs, RAM, etc.) and assign this flavour to a VM.
  • Flavours can be default pre-set flavours that are offered by the hypervisor dashboard or customized flavours that are created and chosen according to user needs.
  • the automation features in the hypervisor typically allow assigning additional resources, i.e., virtual CPUs and RAM, when instantiating additional VMs or expanding the resources available to existing VMs.
  • New incoming service requests are new threads that are allocated to vCPUs based on flows handled by these processes. Therefore, the elasticity feature allows processes to be more resilient to failure when the incoming traffic exceeds the assigned processing resources.
  • threads are scheduled in a priority order in view of thread class and resource availability. Since the virtualization hypervisor allows generating even more threads compared with conventional hyperthreading, this priority-based scheduling can be used to increase the number of abstracted virtual CPUs in response to traffic growth. This is an interesting approach to scaling up limited resources, albeit one that accepts the risk of slower processing speed. However, continuing to increase threads in a built-in server will significantly impact CPU performance.
  • the server may become unstable and the whole physical and software system will be at risk of failure and potentially overheating.
  • the current hypervisors will wait until VMs crash and then launch new backup VMs.
  • this tactic does not consider the duration of interruptions or the risk of cascading effects in small cloud centres.
  • the power controller is an additional software component that bypasses the virtual layer to capture the physical resource status.
  • This functionality of the disclosed controller component is different from the VIM defined by the European Telecommunications Standards Institute (ETSI) NFV model. While the disclosed controller manages threads based on CPU power analysis, the VIM manages the hypervisor resource allocations between VMs. In a typical cloud domain, VMs are migrated to another server upon any failure in the hosting server. When the power controller is deployed, it balances CPU power consumption through thread assignments to stabilize hardware performance and potentially reduce the chances of failure. Therefore, VMs are maintained for a longer time with fewer migration or lifecycle events.
  • ETSI European Telecommunications Standards Institute
  • each resource (vBBU) is tagged with a certain 'colour' that reflects the specific power requirements resulting from processing its assigned flow.
  • the unique vBBU colour is drawn from the resource pool and remains referenced to that resource as long as the flow is still ongoing.
  • each vBBU is independent and the concept of a shared workspace no longer applies, allowing more flexibility at the orchestration layer in case a migration of a resource is scheduled to occur.
  • Resource allocation and power modelling are also based on the load handled by each vBBU individually. Therefore, the overall power consumption and tolerance for any server is determined through collective estimation of the resource power requirements on that server, or of those handled by a certain CPU in case a dedicated CPU was enabled. This means that the power controller may use conventional tools for CPU monitoring to abstract the various analytics required to trigger resource migration through orchestrated workflows.
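  • A minimal sketch of this collective estimation follows; the per-colour power figures and the server tolerance are invented for illustration, with real values coming from vendor specifications and monitoring.

```python
# Collective server power estimate from per-vBBU power-requirement colours.
# POWER_PER_COLOUR_W and the tolerance value are assumptions for illustration.

POWER_PER_COLOUR_W = {"green": 15.0, "amber": 35.0, "red": 65.0}

def server_power_estimate(vbbu_colours: list) -> float:
    """Sum per-vBBU power requirements to estimate the total server draw."""
    return sum(POWER_PER_COLOUR_W[c] for c in vbbu_colours)

def needs_migration(vbbu_colours: list, tolerance_w: float) -> bool:
    """Trigger an orchestrated migration workflow when the estimate exceeds tolerance."""
    return server_power_estimate(vbbu_colours) > tolerance_w

print(needs_migration(["green", "red", "red"], tolerance_w=125.0))  # True
```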
  • Given the above disclosure for the distributed model, two new power controller models are also disclosed that have similar functionality to the centralized power controller. Each controller model receives analytics regarding the types of flows being processed, along with information on where each vBBU resides. From there, the methodologies of the controller models diverge, as they attempt to achieve energy efficiency through different means.
  • the first model is an Overload Model, which relies on the fact that CPUs come with a built-in overload margin. This controller monitors the CPU overload alarm to terminate some flow processing as a first reaction step, or to reschedule some flows to a later time until more computational resources become available.
  • by committing to the architectural CPU design, the model typically performs better at maintaining hardware and preventing cascaded actions resulting from multiple flow terminations and re-connection requests.
  • This model avoids reallocating resources or issuing requests to the orchestrator for additional resources. This model is particularly useful for small and overloaded cloud centres operating with limited processing capacity.
  • the second model is an Optimum Control model, which introduces monitoring indicators that reflect real-time changes in CPU status.
  • This controller type initiates a request for resource adaptations upon a change in any colour (vBBU node power).
  • the request carries information about the impacted node and is sent to the orchestrator. Then, a decision is made to reschedule some flows to other CPUs within the server without committing any actions for resource migration.
  • This model avoids CPU heating through balanced assignment of incoming flows and CPU processing resources. This model also avoids requesting services from orchestrator and relies on utilizing the processing capabilities of server CPUs.
  • the Overload Model attempts to achieve better performance under high load by continuing to allow overloaded CPUs to process resource-intensive flows. However, this might be risky and could lead to server hardware failure if the load exceeds a certain threshold, though the present disclosure attempts to mitigate this downside and preserve normal functionality.
  • the controller monitors real-time heating indicators for the CPU when overloading starts. The controller correlates CPU heating indicators with vendor specifications for better prediction of future CPU status changes and/or failures.
  • the Optimum Control/Distribution Model trades worse performance due to CPU reallocation (e.g., low optimization of buffered CPU power, or the inefficiency of overloading a CPU with extra threads) for a better quality of experience (QoE), as there is no risk in the design of a request failing due to overloading.
  • QoE quality of experience
  • Upon seeing a high CPU usage warning, the disclosed controller waits, for example two CPU cycles, in an attempt to allow the high-usage flows to complete, letting the CPUs return to normal processing levels under 100% load. Should the overloaded CPUs not have returned to normal processing levels after two cycles, the power controller operating in overloading mode starts rescheduling flows or simply accepts terminating some of them.
  • the success of this resource distribution model is based on creating CPU health indicators, defined as overloading and overheating (see the sketch after these definitions).
  • overloading is defined as a measure of the percentage CPU usage for a given VM or guest operating system that exceeds 100%.
  • overheating is defined as a measure of the current CPU temperature, as reported by the driver tools, exceeding the safety threshold given by vendor specifications.
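  • The two indicators and the overload-mode reaction described above (wait a grace period of, for example, two cycles, then reschedule or terminate) could be sketched as follows; the status record, thresholds, and action labels are assumptions, not the disclosed implementation.

```python
# Health indicators (overloading, overheating) and the overload-mode decision
# step. Field names and the grace period default are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class CpuStatus:
    usage_pct: float      # per-VM / guest OS CPU usage; can exceed 100%
    temperature_c: float  # current temperature as reported by driver tools
    vendor_max_c: float   # safety threshold from vendor specifications

def overloaded(s: CpuStatus) -> bool:
    return s.usage_pct > 100.0

def overheated(s: CpuStatus) -> bool:
    return s.temperature_c > s.vendor_max_c

def overload_mode_step(history: List[CpuStatus], grace_cycles: int = 2) -> str:
    """Decide the overload-mode action from the most recent status samples."""
    recent = history[-grace_cycles:]
    if len(recent) < grace_cycles or not all(overloaded(s) for s in recent):
        return "wait"                    # let high-usage flows complete
    return "reschedule_or_terminate"     # still overloaded after the grace period

status = CpuStatus(usage_pct=140.0, temperature_c=78.0, vendor_max_c=95.0)
print(overload_mode_step([status, status]))  # 'reschedule_or_terminate'
```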
  • alarms are triggered on a server, but they are only related to a specific CPU. In this case, the server is considered still safe to operate while the power controller restores the affected CPU back to normal status.
  • the optimum control in distribution model is designed to proactively avoid CPU overloading while simultaneously rejecting flows that require large amounts of computational resources.
  • the decision to assign resources or instantiate VMs is based on confirming resource availability using the aforementioned health indicators of overloading and overheating.
  • the present disclosure thus allows automated and dynamic management of CPU power status to maximize processing power and prevent any meltdown.
  • Figure 1 Block diagram of an embodiment of the described system.
  • Figure 2 Flowchart representation of the algorithm used in an embodiment of the described system.
  • Figure 3 Schematic representation of orchestrator, hypervisor, VNF, virtual network functions, and physical network interface cards, NIC, with the disclosed power and I/O controllers.
  • Figure 4 Schematic representation of a hypervisor employing the disclosed power controller for thread assignment in a data centre comprising multiple virtualized servers.
  • Figure 5 Schematic representation of an embodiment of the disclosed power controller for configuring CPU power parameters comprising CPU power thresholds.
  • Figure 6 Schematic representation of an embodiment of the disclosed traffic controller for configuring routing between NICs/VNICs and/or network plugins.
  • a method for optimizing resource and traffic management of a computer execution environment for a vRAN, virtualized radio access network, comprising: a plurality of physical computer servers comprising physical NICs, network interface cards, to be used as host machines; a plurality of vBBUs, virtual baseband units, connected using digitized radio frequency data to RRUs, remote radio units, and said vBBUs being virtual machines comprising virtual NICs made available with corresponding virtual network plugins, emulated as physical machines by said physical computer servers; a hypervisor for running and managing said vBBUs; a VIM, virtualised infrastructure manager, for running and managing hypervisor resource allocation between said vBBUs; an I/O controller, input/output controller, comprised in said hypervisor for controlling data traffic forwarding between physical and virtual domains; an orchestrator for managing connections and workload in said computer execution environment; each of the vBBUs providing computing resources for a plurality of VNFs, virtual network function, devices; wherein
  • the vBBUs can also be described as a cloud native resource measure (CNRM).
  • CNRM cloud native resource measure
  • the computer execution environment further comprises a router and the method further comprises using said I/O controller to configure routing parameters of said router for load-balancing and routing traffic, in particular said routing parameters comprising routing table and subnet parameters.
  • An embodiment comprises said router sending physical and virtual NIC monitoring data to the I/O controller and the I/O controller sending aggregated traffic data to the orchestrator for capturing traffic status changes due to changes in processing power.
  • the router is a VNF, virtual network function, device.
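  • As one illustration of configuring routing parameters for load balancing, the sketch below assigns a subnet to the NIC with the most headroom and updates a routing-table entry; the data model and the overload reaction are assumptions, since a real deployment would drive the router VNF's own configuration interface.

```python
# Hypothetical I/O controller step: route a subnet's traffic through the NIC
# with the most spare capacity to avoid physical NIC overload.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Nic:
    name: str
    capacity_mbps: float
    load_mbps: float = 0.0

    @property
    def headroom(self) -> float:
        return self.capacity_mbps - self.load_mbps

def route_flow(subnet: str, flow_mbps: float, nics: List[Nic],
               routing_table: Dict[str, str]) -> None:
    """Assign the subnet to the NIC with the largest headroom."""
    best = max(nics, key=lambda n: n.headroom)
    if best.headroom < flow_mbps:
        raise RuntimeError("all physical NICs would overload")  # divert or reschedule
    best.load_mbps += flow_mbps
    routing_table[subnet] = best.name

nics = [Nic("eth0", 10_000, 9_500), Nic("eth1", 10_000, 2_000)]
table: Dict[str, str] = {}
route_flow("10.0.7.0/24", 800, nics, table)
print(table)  # {'10.0.7.0/24': 'eth1'}
```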
  • a method for optimizing resource and traffic management of a computer execution environment for a vRAN, virtualized radio access network, said computer execution environment comprising: a plurality of physical computer servers comprising physical CPUs, central processing units, to be used as host machines; a plurality of vBBUs, virtual baseband units, connected using digitized radio frequency data to RRUs, remote radio units, and said vBBUs being virtual machines comprising virtual CPUs, emulated as physical machines by said physical computer servers; a hypervisor for running and managing said vBBUs; a VIM, virtualised infrastructure manager, for running and managing hypervisor resource allocation between said vBBUs; a power controller comprised in said hypervisor for controlling CPU power parameters comprising CPU power thresholds; an orchestrator for managing connections and workload in said computer execution environment; each of the vBBUs providing computing resources for a plurality of VNFs, virtual network function, devices; wherein said VNFs are allocated by said VIM; said VIM being connected with the orchestrator
  • the method further comprises using said power controller to configure CPU power parameters comprising CPU power thresholds.
  • An embodiment comprises said power controller receiving CPU monitoring data and the power controller sending aggregated CPU power data to the orchestrator for capturing CPU power changes due to changes in CPU power thresholds.
  • a part of said virtual machines are virtual machine containers, in particular all of said virtual machines are virtual machine containers.
  • These virtual machine containers can also be referred to as CNFs or CVNFs, containerized Virtualized Network Functions.
  • An embodiment comprises using said orchestrator to publish updated machine blueprints comprising machine specifications.
  • An embodiment comprises migrating virtual machines between host machines for avoiding physical NIC overload.
  • the hypervisor and VIM are contained in a virtual machine or machines emulated as a physical machine or machines, respectively, by said physical computer servers.
  • An embodiment comprises using said power controller for allocating threads of execution between said physical computer servers, the method further comprising using said power controller to: acquire CPU power data from CPUs of said physical computer servers; allocate threads between said physical computer servers using the acquired CPU power data.
  • the power controller is arranged to bypass a virtual machine, within which the power controller is contained, to acquire CPU power data from CPUs of said physical computer servers.
  • An embodiment further comprises the steps of: logging, by the power controller, power consumption and acquired traffic volume data records; sending, by the power controller, the logged data records to be received by the orchestrator; training, by the orchestrator, a machine learning model on the received data records to forecast power consumption from traffic volume data; sending, by the orchestrator, the trained machine learning model to be received by the power controller; evaluating, by the power controller, the trained machine learning model using current traffic volume data records to forecast power consumption.
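  • A minimal sketch of this logging/training/forecasting loop, assuming a simple linear regressor from traffic volume to power consumption; scikit-learn, the feature layout, and the sample records are illustrative choices, not mandated by the disclosure.

```python
# Power controller logs (traffic, power) records; the orchestrator trains a
# model on them; the controller then evaluates the model on current traffic.
# The regressor choice and the sample data are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Power controller side: logged (traffic_mbps, power_w) records.
records = np.array([[100, 41.0], [400, 58.5], [800, 83.0], [1200, 109.5]])

# Orchestrator side: train a forecasting model on the received records.
X, y = records[:, :1], records[:, 1]
model = LinearRegression().fit(X, y)

# Power controller side: forecast power from the current traffic volume.
current_traffic = np.array([[950.0]])
print(f"forecast power: {model.predict(current_traffic)[0]:.1f} W")
```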
  • An embodiment comprises using said power controller to: assign each vBBU a power requirement colour according to the power requirement of each said vBBU; maintain each vBBU as an independent workspace with individually allocatable resources and power modelling.
  • An embodiment comprises using said power controller to terminate a flow or flows, or request flow rescheduling when CPU power data or a vBBU power requirement colour change indicates CPU overload or overheating.
  • An embodiment comprises using said power controller to allocate a unique vBBU colour drawn from a resource pool, wherein each vBBU is assigned a said power requirement colour corresponding to specific power requirements; the unique vBBU colour remains referenced to a resource as long as the corresponding flow is active.
  • the CPU power data comprises CPU overheating or overload indicators, in particular CPU overheating alarms, further in particular CPU overheating alarms L1, L2, L3 and L4.
  • An embodiment comprises using said power controller for modelling thread assignment using graph colouring.
  • An embodiment comprises using said power controller, with access to physical CPU power status for CPU monitoring, to trigger resource migration through orchestrated workflows.
  • controlling data traffic forwarding between physical and virtual domains comprises diverting traffic between network interfaces to avoid physical NIC overload.
  • each said VNF comprises an additional traffic sensor arranged as a virtual device of a corresponding vBBU, the method further comprising using said I/O controller to: acquire additional traffic volume data from said additional traffic sensors; control data traffic forwarding between different virtual networking plugins and the physical NICs, using the acquired traffic volume data to avoid physical NIC overload; load-balance and route traffic using the acquired traffic volume data.
  • the traffic data from said traffic sensors comprises traffic load data, computational resource data, and network plugin availability data.
  • An embodiment comprises the step of storing the acquired traffic volume data in a traffic volume data storage.
  • each of the physical NICs comprises a traffic sensor arranged as a virtual device of a corresponding vBBU.
  • the VIM is arranged to control and manage hypervisor physical resource allocation between said vBBUs, said physical resources comprising computational, storage and network resources.
  • a system for optimizing resource and traffic management of a computer execution environment for a vRAN, virtualized radio access network, comprising: a plurality of physical computer servers comprising physical NICs, network interface cards, to be used as host machines; a plurality of vBBUs, virtual baseband units, connected using digitized radio frequency data to RRUs, remote radio units, and said vBBUs being virtual machines comprising virtual NICs made available with corresponding virtual network plugins, emulated as physical machines by said physical computer servers; a hypervisor for running and managing said vBBUs; a VIM, virtualised infrastructure manager, for running and managing hypervisor resource allocation between said vBBUs; an I/O controller, input/output controller, comprised in said hypervisor for controlling data traffic forwarding between physical and virtual domains; an orchestrator for managing connections and workload in said computer execution environment; each of the vBBUs providing computing resources for a plurality of VNFs, virtual network function, devices; wherein said
  • a system for optimizing resource and traffic management of a computer execution environment for a vRAN, virtualized radio access network, comprising: a plurality of physical computer servers comprising physical CPUs, central processing units, to be used as host machines; a plurality of vBBUs, virtual baseband units, connected using digitized radio frequency data to RRUs, remote radio units, and said vBBUs being virtual machines comprising virtual CPUs, emulated as physical machines by said physical computer servers; a hypervisor for running and managing said vBBUs; a VIM, virtualised infrastructure manager, for running and managing hypervisor resource allocation between said vBBUs; a power controller comprised in said hypervisor for controlling CPU power parameters comprising CPU power thresholds; an orchestrator for managing connections and workload in said computer execution environment; each of the vBBUs providing computing resources for a plurality of VNFs, virtual network function, devices; wherein said VNFs are allocated by said VIM; said VIM being connected with the orchestrator
  • Figure 1 shows a block diagram of an embodiment of the described system, where: 10 represents the system, 12 represents a capture agent, 13 represents a data processor, 14 represents a data assignment unit, and 16 represents a dashboard.
  • the capture agent (12) obtains data from a cloud platform that is to be processed by the data processor (13). Processed data is used to automate the data assignment (14) and is provided to the dashboard (16), for example for presentation to a user.
  • the capture agent (12) uses a resource metrics API to obtain the real-time status of computational resources from the computing platform, or the traffic exchanged on the Cloud Native Interface (CNI), and sends it to the data processor unit (13).
  • the data processor (13) correlates the ingested data with current assigned resources to generate new data assignment models (14).
  • the new data assignments are ingested into analytics for display using the orchestrator dashboard (16).
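  • The Figure 1 pipeline could be skeletonized as below; all class and method names are hypothetical, and a real capture agent would call the platform's metrics API or CNI statistics instead of returning canned values.

```python
# Skeleton of the Figure 1 flow: capture agent (12) -> data processor (13)
# -> data assignment (14) -> dashboard (16). Everything here is illustrative.

class CaptureAgent:                       # component 12
    def collect(self) -> dict:
        # Stand-in for a resource metrics API / CNI traffic query.
        return {"pod_threads": {"pod-a": 12, "pod-b": 30}, "cni_mbps": 740.0}

class DataProcessor:                      # component 13
    def correlate(self, metrics: dict, assigned: dict) -> dict:
        # New assignment model: flag pods whose thread count exceeds the quota.
        return {pod: n for pod, n in metrics["pod_threads"].items()
                if n > assigned.get(pod, 16)}

metrics = CaptureAgent().collect()
reassign = DataProcessor().correlate(metrics, assigned={"pod-a": 16, "pod-b": 16})
print("reassignment candidates:", reassign)   # feeds data assignment (14)
print("dashboard payload:", metrics)          # displayed on dashboard (16)
```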
  • Figure 2 shows a flowchart representation of the algorithm used in an embodiment of the described system, where 20 represents an algorithm, 21 represents a resource status data, 22 represents computing resource data, 23 represents traffic volume data, 24 represents resource reassignment, and 25 represents data to dashboard.
  • the algorithm (20) starts at the first step (21), where resource status data is obtained from one or more virtual clusters, e.g., PODs, or real-time traffic status obtained from sensor agents installed in both virtual clusters and Network Interface Cards (NICs) of a cloud computing system.
  • the data obtained in the first step (21) represents the number of threads assigned to a POD per CPU.
  • the analytics data obtained in the first step (21) may also take the form of traffic volumes exchanged on CNI or NIC interfaces.
  • the CPU resource update is obtained from one or more CPU managers that periodically write resource updates through the Container Runtime Interface (CRI).
  • CRI Container Runtime Interface
  • the resource data comprises POD CPU policies, CPU affinity and scheduling latency.
  • the captured data may be enriched with Completely Fair Scheduler (CFS) processes for better provision of CPU utilization status.
  • CFS Completely Fair Scheduler
  • the first (21) and second (22) steps of the algorithm (20) are implemented using the capture agent (12) of the system (10).
  • the traffic volume data are ingested into a processor unit (13) of the system (10).
  • the processed data is used to generate new resources assignments, for example using the resource reassignment (14) of the system (10).
  • the data provided includes commands to the cloud computing engine, e.g., the Kubernetes scheduler, using a component named "Power Controller", changes in cluster interfaces with I/O plugins as configured by the "I/O Traffic Controller", etc.
  • the latter processes are triggered by sensing information obtained from the third step (23).
  • the processed data may be provided to a dashboard, e.g., the dashboard (16), through an embedded analytics engine.
  • At least one dashboard enables visualisation of the network status and enables configuration of the system.
  • the capture agents collect the CPU status from the physical and virtual components of the provisioned cloud. This data must preferably include the CPU Power Limits (PL) for physical hosting servers, especially PL3 or above. Overloading a CPU can drive the power consumption higher, causing the power limits to grow to PL2, PL3, and even PL4. However, CPUs normally operate in the PL1 region, even though thermal meltdown does not occur until the PL4 limits are passed.
  • PL CPU Power Limits
  • Power level PL1 usually refers to a power state or profile corresponding to the stock (marketed) power state, which the CPU can sustain indefinitely.
  • Power level PL2 usually refers to a power state or profile which can be sustained for short periods of time, normally around hundreds of seconds.
  • Power level PL3 usually refers to a power state or profile which can be sustained for normally up to 10 ms.
  • Power level PL4 usually refers to a power state or profile which can be sustained for normally up to 10 ms with predetermined duty cycles.
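  • For illustration, the power-level descriptions above can be encoded as data that a capture agent might use when checking for "PL3 or above"; the numeric sustain durations are indicative only and vendor-specific in practice.

```python
# Power levels as data, following the descriptions above. Durations are
# indicative assumptions (PL2 "hundreds of seconds" shown as 100 s).

POWER_LEVELS = {
    "PL1": {"sustain_s": float("inf"), "note": "stock state, sustainable indefinitely"},
    "PL2": {"sustain_s": 100.0, "note": "short periods, ~hundreds of seconds"},
    "PL3": {"sustain_s": 0.010, "note": "normally up to 10 ms"},
    "PL4": {"sustain_s": 0.010, "note": "up to 10 ms with predetermined duty cycles"},
}

def at_or_above(level: str, threshold: str = "PL3") -> bool:
    """True when a reported level is at or above the capture threshold."""
    order = ["PL1", "PL2", "PL3", "PL4"]
    return order.index(level) >= order.index(threshold)

print([lvl for lvl in POWER_LEVELS if at_or_above(lvl)])  # ['PL3', 'PL4']
```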
  • the captured data is ingested into the processor unit (13), which also forms the data storage unit of the system (10), thereby implementing the fourth step (24) of the algorithm (20).
  • the analytics store and power manager collectively form the data processor (13) of the system (10) and can be used to implement the fourth step (24) of the algorithm (20).
  • Figure 3 shows a schematic representation of a framework for a hypervisor employing the power controller for thread assignment in a data centre that consists of multiple servers.
  • Figure 4 shows a schematic representation of a hypervisor employing the disclosed power controller for thread assignment in a data centre comprising multiple virtualized servers.
  • Figure 5 shows a schematic representation of an embodiment of the disclosed power controller for configuring CPU power parameters comprising CPU power thresholds.
  • Figure 6 shows a schematic representation of an embodiment of the disclosed traffic controller for configuring routing between NICs/VNICs and/or network plugins.
  • Flow diagrams of particular embodiments of the presently disclosed methods are depicted in the figures. The flow diagrams illustrate the functional information one of ordinary skill in the art requires to perform said methods in accordance with the present disclosure. It will be appreciated by those of ordinary skill in the art that, unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the disclosure. Thus, unless otherwise stated, the steps described are unordered, meaning that, when possible, the steps can be performed in any convenient or desirable order.
  • the disclosed methods can be implemented in code (e.g., a software algorithm or program) or firmware stored on a computer useable medium having control logic for enabling execution on a computer system having a computer processor, such as any of the servers described herein.
  • Such a computer system typically includes memory storage configured to provide output from execution of the code which configures a processor in accordance with the execution.
  • the code can be arranged as firmware or software, and can be organized as a set of modules, including the various modules and algorithms described herein, such as discrete code modules, function calls, procedure calls or objects in an object-oriented programming environment. If implemented using modules, the code can comprise a single module or a plurality of modules that operate in cooperation with one another to configure the machine in which it is executed to perform the associated functions, as described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Sources (AREA)

Abstract

Disclosed are a method and system for optimizing the resource and traffic management of a computer execution environment for a vRAN, virtualized radio access network, said computer execution environment comprising an I/O controller, input/output controller, comprised in said hypervisor for controlling data traffic forwarding between physical and virtual domains, and an orchestrator for managing connections and workload in said environment; using said I/O controller to acquire traffic data from said traffic sensors; to compare the capacity of a virtual network plugin against the incoming traffic volume; and to control data traffic forwarding between virtual networking plugins and the physical NIC, using the acquired traffic volume to avoid physical NIC overload, in particular load-balancing and routing traffic to avoid physical NIC overload. Also disclosed are a method and system comprising a power controller for controlling CPU power parameters, acquiring CPU power consumption data from said CPU power sensors, comparing the CPU power capacity against the CPU power consumption to compute an increased power margin, and increasing the CPU power by the computed margin.
PCT/PT2022/050007 2021-02-22 2022-02-22 Method and system for optimizing resource and traffic management of a computer execution environment in a vRAN WO2022177455A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163152070P 2021-02-22 2021-02-22
US63/152,070 2021-02-22

Publications (1)

Publication Number Publication Date
WO2022177455A1 (fr)

Family

ID=80819924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/PT2022/050007 WO2022177455A1 (fr) 2021-02-22 2022-02-22 Method and system for optimizing resource and traffic management of a computer execution environment in a vRAN

Country Status (1)

Country Link
WO (1) WO2022177455A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4636625B2 * 2008-01-25 2011-02-23 株式会社日立情報システムズ NIC connection control method for a virtual network system, NIC connection control system for a virtual network, and program
WO2016197848A1 * 2015-06-09 2016-12-15 华为技术有限公司 Method, apparatus and system for managing a network interface card
US20190041967A1 (en) * 2018-09-20 2019-02-07 Intel Corporation System, Apparatus And Method For Power Budget Distribution For A Plurality Of Virtual Machines To Execute On A Processor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ENES JONATAN ET AL: "Power Budgeting of Big Data Applications in Container-based Clusters", 2020 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING (CLUSTER), IEEE, 14 September 2020 (2020-09-14), pages 281 - 287, XP033846988, DOI: 10.1109/CLUSTER49012.2020.00038 *
PAPADOGIANNAKI EVA ET AL: "Efficient Software Packet Processing on Heterogeneous and Asymmetric Hardware Architectures", IEEE /ACM TRANSACTIONS ON NETWORKING, IEEE / ACM, NEW YORK, NY, US, vol. 25, no. 3, 1 June 2017 (2017-06-01), pages 1593 - 1606, XP011653060, ISSN: 1063-6692, [retrieved on 20170614], DOI: 10.1109/TNET.2016.2642338 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024091862A1 * 2022-10-28 2024-05-02 Intel Corporation Artificial intelligence/machine learning (AI/ML) models for determining power consumption in virtual network function instances

Similar Documents

Publication Publication Date Title
US9967136B2 (en) System and method for policy-based smart placement for network function virtualization
Jennings et al. Resource management in clouds: Survey and research challenges
US20160216994A1 (en) Method, system, computer program and computer program product for monitoring data packet flows between virtual machines, vms, within a data centre
US11237862B2 (en) Virtualized network function deployment
Sun et al. Rose: Cluster resource scheduling via speculative over-subscription
US20200174844A1 (en) System and method for resource partitioning in distributed computing
Struhár et al. React: Enabling real-time container orchestration
WO2017010922A1 Allocation of cloud computing resources
KR20150011815A Connectivity service orchestrator
CN105159775A Management system and management method for a cloud computing data centre based on a load balancer
WO2011119444A2 Managing power provisioning in distributed computing
CN110221920B Deployment method, apparatus, storage medium and system
US9184982B2 (en) Balancing the allocation of virtual machines in cloud systems
Adhikary et al. Quality of service aware cloud resource provisioning for social multimedia services and applications
US20220006879A1 (en) Intelligent scheduling apparatus and method
Kulkarni et al. Context aware VM placement optimization technique for heterogeneous IaaS cloud
EP3698246A1 Management of a virtual network function
Remesh Babu et al. Service‐level agreement–aware scheduling and load balancing of tasks in cloud
Shifrin et al. Optimal control of VNF deployment and scheduling
Tiwari et al. Dynamic weighted virtual machine live migration mechanism to manages load balancing in cloud computing
WO2022177455A1 (fr) Method and system for optimizing resource and traffic management of a computer execution environment in a vRAN
CA3190966A1 Automatic node fungibility between compute and infrastructure nodes in edge zones
Parakh et al. SLA-aware virtual machine scheduling in openstack-based private cloud
Paulos et al. Priority-enabled load balancing for dispersed computing
US10621006B2 (en) Method for monitoring the use capacity of a partitioned data-processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22711696

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22711696

Country of ref document: EP

Kind code of ref document: A1