WO2024091862A1 - Artificial intelligence/machine learning (AI/ML) models for determining energy consumption in virtual network function instances - Google Patents


Info

Publication number
WO2024091862A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
vnf
examples
network
term
Prior art date
Application number
PCT/US2023/077505
Other languages
French (fr)
Inventor
Joey Chou
Yizhi Yao
John Browne
Niall POWER
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Publication of WO2024091862A1 publication Critical patent/WO2024091862A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0806 Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H04L 41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 41/40 Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L 43/20 Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV

Definitions

  • FIG. 1 depicts an example model training architecture for VNF energy consumption
  • Figure 2 depicts an example MDA inference function for VNF energy consumption predictions
  • Figure 3 depicts example data samples used for the architecture of Figure 1
  • Figure 4 depicts an example VNF energy consumption prediction procedure
  • Figures 5 and 6 depict example wireless networks
  • Figure 7 depicts example hardware resources
  • Figure 8 depicts an example of management services (MnS)
  • Figure 9 depicts an example AI/ML functional framework
  • Figure 10 depicts an example AI/ML- assisted communication architecture
  • Figures 11 and 12 depict example processes for practicing the various embodiments discussed herein.
  • the present disclosure is generally related to wireless communications technologies, cloud computing, edge computing, artificial intelligence (Al) and machine learning (ML), and in particular, to technologies to predict energy consumption of VNF instances.
  • Embodiments of the present disclosure address the aforementioned issues and other issues by utilizing artificial intelligence (Al) and/or machine learning (ML) models to predict energy consumption of VNF instances.
  • Examples herein use AI/ML models to predict VNF instance energy consumption based on virtual compute usage, virtual memory usage, and/or virtual disk usage measurements.
  • FIG. 1 shows an example architecture 100, which involves training an ML model used by an inference function (see e.g., inference engines 915 and 1045 of Figures 9 and 10) to predict VNF energy consumption.
  • a virtualized network entity (NE) 101 (see e.g., [TS28500]) contains Network Function Virtualization Infrastructure (NFVI) 102 (see e.g., ETSI GS NFV 003 and/or ETSI GR NFV 003) in/on which one or more VNF instances 103 (e.g., VNF 103-1 to VNF 103-n, where n is a number) are deployed.
  • the NFVI comprises various resources, such as hypervisor/VMM, compute, storage/memory, networking, and/or other hardware (HW) resources.
  • Power, energy, environmental (PEE) sensor(s) 120 are used to collect energy consumption data (ECD) 122 of the VNE 101, which is then provided to a model training function (MTF) 125.
  • PEE sensors 120 are discussed infra with respect to (w.r.t.) Figure 7.
  • Some or all of the PEE sensor(s) 120 may be built-in or embedded inside the VNE 101, NFVI 102, or in individual components of the NFVI 102, or may be external to the VNE 101 and/or NFVI 102.
  • some or all of the PEE sensor(s) 120 may be part of power distribution frame, power supply system, junction box, electrical panel, and/or the like.
  • a power input 121 to the PEE sensor(s) 120 is provided to the VNE 101 to power the VNE 101. Additionally or alternatively, the PEE sensor(s) 120 collect data based on the power input 121, which may be provided as part of the VNE ECD 122.
  • FIG. 1 also shows virtual resource usage data (VRUD) 112 for VNFs 103 (see e.g., clauses 5.7.1, 5.7.2, and/or 6.2 in [TS28552]) being collected/utilized by the MTF 125.
  • VRUD 112-1 is provided by VNF 103-1
  • VRUD 112-n is provided by VNF 103-n.
  • the VRUD 112 includes statistics, measurements, metrics, and/or other data related to a respective VNF’s usage of compute (e.g., mean virtual CPU usage, peak virtual CPU usage, and/or the like), memory (e.g., mean virtual memory usage, peak virtual memory usage, and/or the like), disk/storage (e.g., mean virtual disk usage, peak virtual disk/storage usage, and/or the like), network (e.g., connection data volumes of NFs, number of incoming and/or outgoing packets, and/or the like), and/or other resources.
  • the VRUD 112 can be calculated, generated, measured, and/or collected according to [TS28532], [TS28552], [TS28554], ETSI GS NFV-IFA 027 (e.g., ETSI GS NFV-IFA 027 v4.4.1 (2023-03)), and/or as discussed in Intel® VTuneTM Profiler User Guide, Intel Corp., version 2023.1 (31 Mar. 2023).
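The VRUD fields listed above can be pictured as a simple per-VNF record. The sketch below is illustrative only: the field names are hypothetical stand-ins, not the measurement names defined in [TS28552].

```python
from dataclasses import dataclass

@dataclass
class VrudSample:
    """One virtual-resource-usage sample for a VNF instance.

    Field names are illustrative; the actual measurements are defined
    in [TS28552] clause 5.7.1 (virtual CPU, memory, and disk usage).
    """
    vnf_id: str
    timestamp: float            # seconds since epoch
    mean_vcpu_usage: float      # percent
    peak_vcpu_usage: float      # percent
    mean_vmemory_usage: float   # percent
    peak_vmemory_usage: float   # percent
    mean_vdisk_usage: float     # percent
    peak_vdisk_usage: float     # percent

def to_feature_vector(s: VrudSample) -> list[float]:
    """Flatten a sample into a fixed feature order for model training."""
    return [s.mean_vcpu_usage, s.peak_vcpu_usage,
            s.mean_vmemory_usage, s.peak_vmemory_usage,
            s.mean_vdisk_usage, s.peak_vdisk_usage]
```

A fixed feature order matters because the same ordering must be reused at inference time when the trained model is applied to new VRUD samples.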
  • the MTF 125 uses relatively large volume dataset(s), including VRUD 112 for VNFs 103 and the VNE ECD 122 to compute the parameters of an ML model that is (or will be) used to predict the energy consumption for VNF instance(s) 103.
  • the training dataset(s) include data samples (or features) of the VRUD 112 and data labels of the VNE ECD 122.
  • the MTF 125 may be the same or similar as the MTF 910 and/or MLTF 1045 of Figures 9 and 10, a model training logical function (MTLF) of an NWDAF 562, and/or the AI/ML model entities discussed in U.S. App. No. 18/358,288 filed on 25 Jul. 2023.
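The training step described above (VRUD 112 as data samples/features, VNE ECD 122 as data labels) can be sketched as follows. The disclosure does not fix a model family, so this sketch assumes a simple linear model fitted by ordinary least squares on synthetic stand-in data; the coefficients and data are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-interval VRUD features (six usage measurements,
# as in the VrudSample sketch) and measured VNE energy consumption labels.
n_samples, n_features = 200, 6
X = rng.uniform(0.0, 100.0, size=(n_samples, n_features))   # usage in %
true_w = np.array([0.8, 0.1, 0.5, 0.05, 0.2, 0.02])         # Wh per % (made up)
y = X @ true_w + rng.normal(0.0, 0.5, size=n_samples)       # VNE ECD labels (Wh)

# "Compute the parameters of an ML model": here, ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted parameters would then be deployed to an inference engine.
predicted = X @ w
```

In a real deployment the MTF would train on large volumes of collected measurements rather than synthetic data, and could use any regression model, not necessarily a linear one.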
  • FIG. 2 shows an example MDA architecture 200 where an MDA inference function is used for VNF energy consumption prediction.
  • an MDA function (MDAF) 851 may be deployed as an AI/ML inference function 145 using one or more ML models 250 to predict the VNF energy consumption of VNFs 103.
  • the AI/ML inference function 245 utilizes the ML model(s) 250 to predict the ECD 212 for respective VNFs 103 (e.g., VNF ECD 212-1 for VNF 103-1, and so forth, to VNF ECD 212-n for VNF 103-n) based on the VRUD 112-1 to 112-n, and VNE ECD 122.
  • the AI/ML inference function 245 and/or the ML model(s) 250 may predict the ECD 212 according to any suitable energy efficiency (EE) and/or energy consumption (EC) metrics, such as those discussed herein (see e.g., sections 1.2.3 and 1.2.4, infra), discussed in [TS28554], and/or the like.
  • FIG. 3 shows example samples 300 of VNE ECD 122 and the VRUD 112 for VNFs 103 (see e.g., clause 5.7.1.1.1-3 in [TS28552]), which fluctuate continuously over time.
  • the VRUD 112 are time synchronized with the VNE ECD 122 so they can be used as data samples (or features) and data labels for model training.
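The time synchronization described above can be sketched as nearest-timestamp matching between the two streams within a tolerance. This is an illustrative alignment only; in practice the PM job's granularity period governs how samples and labels line up.

```python
def align(features, labels, tolerance=1.0):
    """Pair each feature sample with the nearest-in-time label.

    features: list of (timestamp, feature_vector), e.g., VRUD samples
    labels:   list of (timestamp, energy_Wh),      e.g., VNE ECD samples
    Both inputs are assumed sorted by timestamp.
    Returns (X, y) containing only pairs within `tolerance` seconds.
    """
    X, y = [], []
    j = 0
    for t, feat in features:
        # Advance j to the label whose timestamp is closest to t.
        while j + 1 < len(labels) and abs(labels[j + 1][0] - t) <= abs(labels[j][0] - t):
            j += 1
        if labels and abs(labels[j][0] - t) <= tolerance:
            X.append(feat)
            y.append(labels[j][1])
    return X, y
```

Pairs outside the tolerance are dropped rather than interpolated, which keeps mislabeled training examples out of the dataset at the cost of discarding some samples.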
  • Figure 4 shows an example of a procedure 400 for VNF energy consumption prediction.
  • the procedure 400 includes an offline training phase (operations 1-6) and an inference phase (operations 7-8).
  • the procedure 400 may operate as follows.
  • the MTF 125 requests a producer of performance assurance management service (MnS) 150 to create a performance management (PM) job to collect measurement data (PM data) from VNFs 103 and/or VNE 101 (see e.g., [TS28550]).
  • the PM job request can include any of the parameters discussed herein (see e.g., Table 1.2.2.3-1, infra) and/or as discussed in [TS28550] (e.g., IOC name, IOC instance list, measurement category list and/or list of measurement/KPI type names (e.g., VNE measurement data and/or VNF measurement data), granularity period, reporting period, start time, end/stop time, schedule, stream target, priority, reliability, and/or the like).
  • the producer of performance assurance MnS 150 sends the VNE ECD 122 to the MTF 125.
  • the producer of performance assurance MnS 150 sends the VRUD 112 for VNFs 103-1 to 103-n to the MTF 125.
  • the MTF 125 performs model training using data samples of the VRUD 112 for VNFs 103-1 to 103-n, and data labels of the VNE ECD 122.
  • the MTF 125 deploys the model to a model inference engine 245.
  • the model inference engine 245 is, or is part of, an MDAF 851 and/or is the same or similar as the inference engine 915 and/or inference engine 1045 of Figures 9 and 10.
  • the inference engine 245 requests the producer of performance assurance MnS 150 to create PM job to collect measurement data from VNFs 103-1 to 103-n.
  • the PM job request can include any of the parameters discussed herein (see e.g., Table 1.2.2.3-1, infra) and/or as discussed in [TS28550] (e.g., IOC name, IOC instance list, measurement category list and/or list of measurement/KPI type names (e.g., VNF measurement data), granularity period, reporting period, start time, end/stop time, schedule, stream target, priority, reliability, and/or the like).
  • the producer of performance assurance MnS 150 sends the VRUD 112 for VNFs 103-1 to 103-n to the inference engine 245.
  • the model inference engine 245 (e.g., in the MDAF 851) uses respective VRUDs 112 to predict the energy consumption for corresponding VNFs 103, for example, using the VRUD 112 of VNF 103-1 to predict the energy consumption of VNF 103-1, and so forth, to using the VRUD 112 of VNF 103-n to predict the energy consumption of VNF 103-n.
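Operation 8 (predicting each VNF's energy consumption from its own VRUD) reduces to applying the deployed model per VNF. The sketch below assumes a linear model whose weights came from a hypothetical training run; the weight values and VNF identifiers are made up.

```python
def predict_vnf_ec(weights, vrud_by_vnf):
    """Predict per-VNF energy consumption (Wh) from virtual resource usage.

    weights:      model parameters deployed by the MTF (illustrative linear model)
    vrud_by_vnf:  {vnf_id: [feature, ...]} virtual resource usage per VNF
    Returns {vnf_id: predicted_energy_Wh}, i.e., the VNF ECD per instance.
    """
    return {
        vnf_id: sum(w * x for w, x in zip(weights, feats))
        for vnf_id, feats in vrud_by_vnf.items()
    }

weights = [0.8, 0.5, 0.2]  # hypothetical: Wh per % of vCPU, vMemory, vDisk usage
vrud = {"vnf-103-1": [40.0, 20.0, 10.0], "vnf-103-2": [80.0, 60.0, 30.0]}
predictions = predict_vnf_ec(weights, vrud)
```

Each VNF's prediction uses only that VNF's own usage data, matching the one-to-one mapping between VRUD 112-n and VNF ECD 212-n described above.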
  • Energy saving is achieved by activating the energy saving mode of the NR capacity booster cell or 5GC NFs (e.g., UPF and/or the like).
  • the energy saving decision making is typically based on the load information of the related cells/UPFs, the energy saving policies set by operators and the energy saving recommendations provided by MDAS producer (e.g., MDAS-P or MDA MnS-P).
  • MDA can be used to assist the MDAS consumer (MDAS-C or MDA MnS-C) to make energy saving decisions.
  • an MDAS-C can determine where the energy efficiency issues (e.g., high energy consumption, low energy efficiency) exist, and the cause of the energy efficiency issues.
  • MDA 851 to correlate and analyze energy saving related performance measurements (e.g., PDCP data volume of cells, power consumption, and/or the like) and network analysis data (e.g., observed service experience related network data analytics) to provide the analytics results which indicate current network energy efficiency.
  • MDA MnS-Cs may expect to reduce energy consumption to save energy.
  • the MDA MnS-C may request the MDAS-P to report only high energy consumption issue related analytics results.
  • the related issue may instead be low energy efficiency.
  • in that case, the consumer may request analytics results related to the low energy efficiency issue. The target could then be to enhance the performance of an NF for a given energy consumption, which results in higher energy efficiency of the network.
  • MDAS-C determines which Energy Efficiency (EE) KPI related factor(s) (e.g., traffic load, end-to-end latency, active UE numbers, and/or the like) are affected or potentially affected.
  • the MDAS-P can utilize historical data to predict the EE KPI related factors (e.g., load variation of cells at some future time, and/or the like). The prediction results for this information can then be used by operators to make energy saving decisions that guarantee the service experience.
  • the MDAS-P may also provide energy saving related recommendation with the energy saving state to the MDAS-C. Under the energy saving state, the required network performance and network experience should be guaranteed. Therefore, it is important to formulate appropriate energy saving policies (start time, dynamic threshold setting, base station parameter configuration, and/or the like).
  • the MDAS-C may take the recommendations with the energy saving state into account for making analysis or making energy saving decisions. After the recommendations have been executed, the MDA producer may start evaluating and further analyzing network management data to optimize the recommendations.
  • energy saving analysis is used to determine when to take energy saving actions for 5G/NR and/or 5GC NFs that can be triggered by energy efficiency related measurements and/or KPIs. For example, when the energy efficiency measurements/metrics is/are below a certain threshold, the energy saving analysis can be triggered and/or initiated to analyze the cause and determine the mitigation actions.
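The trigger condition described above (start the energy saving analysis when an energy-efficiency measurement falls below a threshold) can be expressed as a simple check over the monitored cells/NFs. The threshold value and metric names below are illustrative, not values from the disclosure.

```python
def cells_needing_analysis(ee_by_cell: dict[str, float], threshold: float) -> list[str]:
    """Return the cells/NFs whose energy-efficiency metric is below the
    threshold, i.e., the candidates for energy saving analysis."""
    return [cell for cell, ee in ee_by_cell.items() if ee < threshold]

# Example: one cell's EE KPI has dropped below an operator-configured threshold.
flagged = cells_needing_analysis({"cell-1": 0.91, "cell-2": 0.62}, threshold=0.80)
```

For each flagged entity, the analysis would then examine the cause and determine mitigation actions, as described above.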
  • the energy efficiency for 5G/NR and/or 5GC NFs are tightly coupled to the VNF ECD and/or VNE ECD 122.
  • the energy saving analysis (e.g., as performed by the AI/ML entity 245) is used to predict the energy consumption for VNF instance(s) 103 (e.g., VNF ECD 212), based on virtual resource usage data (e.g., VRUD 112), such as compute usage, virtual memory usage, and virtual disk usage measurements (see e.g., clause 5.7.1.1 in [TS28552]).
  • DEFINITIONS FOR MDA ASSISTED ENERGY SAVING
  • the MDA type for energy saving analysis is: MDAAssistedEnergySaving.EnergySavingAnalysis.
  • the PredictedVnfEC data type specifies the type of predicted VNF energy consumption.
  • Table 1.2.2.3-1 shows example IEs/parameters/data elements for the PredictedVnfEC data type.
  • the PEE related measurements defined herein are valid for a 5G Physical Network Function (PNF).
  • the NR NRM is defined in [TS28541] .
  • VNF Energy Consumption a) The VNF energy consumption measurement provides the energy consumption of the virtualized NE (see e.g., [TS28500]) that contains the NFVI (see e.g., ETSI GS NFV 003 and/or ETSI GR NFV 003) where VNF instance(s) are deployed. b) OM. c) This measurement is obtained by mapping the energy consumption E(Tr) received from the PEE sensor(s) 120 (see e.g., clause 4.4.3.1 in [ES202336-12]) to the ManagedElement MOI representing the VNE 101. In some examples, the energy consumption E(Tr) can be calculated according to equation 1.2.3.1-1:

    E(Tr) = Σj P(j) × Ta, where P(j) = u(j) × i(j)   (1.2.3.1-1)

    wherein:
  • Tr is a record period (see e.g., clause 4.4.3.4 in [ES202336-12])
  • P(j) is a measurement of power
  • u(j) and i(j) are values of voltage and current acquired over a sampling period Ta by analog-digital conversion equipment of measurements at the AC or DC power interface of the NE and/or VNE 101 under measurement (see e.g., table 1 in [ES202336-12]).
  • E(Tr) represents the energy consumed within a granularity period (e.g., granularityPeriod as discussed in clause 6.1.1.2 of [TS28550]).
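The energy calculation of equation 1.2.3.1-1, which sums the sampled power P(j) = u(j) × i(j) over the record period, can be worked through numerically. The voltage/current values below are made up for illustration.

```python
def energy_wh(voltage_v, current_a, ta_s):
    """E(Tr) = sum_j P(j) * Ta, with P(j) = u(j) * i(j).

    voltage_v, current_a: samples u(j), i(j) acquired every Ta seconds
    ta_s: sampling period Ta in seconds
    Returns energy over the record period Tr = len(samples) * Ta, in Wh.
    """
    joules = sum(u * i * ta_s for u, i in zip(voltage_v, current_a))
    return joules / 3600.0  # 1 Wh = 3600 J

# A constant 48 V / 2 A draw sampled once per second for one hour:
e = energy_wh([48.0] * 3600, [2.0] * 3600, 1.0)  # 96 W for 1 h -> 96 Wh
```

The conversion from Joules to Watt-hours matches the units noted for this measurement (Wh or kWh).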
  • this measurement is a real value in Watt-hours (Wh), kiloWatt-hours (kWh), or the like.
  • the unit of this measurement may be in kWh.
  • additionally or alternatively, the unit of this measurement may be in Joules (J).
  • the measurement name has the form of “PEE.VirtualizedNeEC”.
  • this measurement object is a ManagedElement.
  • this measurement is valid for packet switched traffic.
  • this measurement can be used in a 5GS (see e.g., Figure 5).
  • One example usage of this measurement is in the VNF energy consumption prediction.
  • Predicted VNF energy consumption KPI a) The name of this KPI may be “PredictedEC-VNF”. b) This KPI describes the predicted energy consumption for a VNF instance 103. c) This KPI is mapped from the predicted VNF energy consumption that was provided by the MDA function 851. The value of this KPI is in kilowatt-hours (kWh). In some examples, the PredictedEC-VNF can be expressed as shown by equation 1.2.4.1-1.
  • PredictedEC-VNF = predictedVnfEnergyConsumption   (1.2.4.1-1)
  • the object of this KPI is a ManagedFunction.
  • Figure 5 depicts an example network architecture 500.
  • the network 500 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems.
  • the example embodiments are not limited in this regard and the described examples may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
  • the network 500 includes a UE 502, which is any mobile or non-mobile computing device designed to communicate with a RAN 504 via an over-the-air connection.
  • the UE 502 is communicatively coupled with the RAN 504 by a Uu interface, which may be applicable to both LTE and NR systems.
  • Examples of the UE 502 include, but are not limited to, a smartphone, tablet computer, wearable device (e.g., smart watch, fitness tracker, smart glasses, smart clothing/fabrics, head-mounted displays, smart shoes, and/or the like), desktop computer, workstation, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, machine-to-machine (M2M), device-to-device (D2D), machine-type communication (MTC) device, Internet of Things (IoT) device, smart appliance, flying drone or unmanned aerial vehicle (UAV), terrestrial drone or autonomous vehicle, robot, electronic signage, single-board computer (SBC) (e.g., Raspberry Pi, iOS, Intel Edison, and the like).
  • the network 500 may include a set of UEs 502 coupled directly with one another via a device-to-device (D2D), proximity services (ProSe), PC5, and/or sidelink (SL) interface, and/or any other suitable interface such as any of those discussed herein.
  • UEs 502 may be M2M, D2D, MTC, and/or IoT devices, and/or V2X systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, and the like.
  • the UE 502 may perform blind decoding attempts of SL channels/links according to the various examples herein.
  • the UE 502 may additionally communicate with an AP 506 via an over- the-air (OTA) connection.
  • the AP 506 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 504.
  • the connection between the UE 502 and the AP 506 may be consistent with any IEEE 802.11 protocol.
  • the UE 502, RAN 504, and AP 506 may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP).
  • Cellular-WLAN aggregation may involve the UE 502 being configured by the RAN 504 to utilize both cellular radio resources and WLAN resources.
  • the RAN 504 includes one or more network access nodes (NANs) 514 (also referred to as “RAN nodes 514”).
  • the NANs 514 terminate air-interface(s) for the UE 502 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols.
  • RRC radio resource control
  • PDCP packet data convergence protocol
  • RLC radio link control
  • MAC medium access control
  • PHY/L1 physical layer (layer 1)
  • the NAN 514 enables data/voice connectivity between a core network (CN) 540 and the UE 502.
  • the NANs 514 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof.
  • a NAN 514 may be referred to as a base station (BS), next generation nodeB (gNB), RAN node, eNodeB (eNB), next generation (ng)-eNB, NodeB, RSU, TRP, and/or the like.
  • BS base station
  • gNB next generation NodeB
  • eNB eNodeB
  • ng-eNB next generation eNB
  • RSU roadside unit
  • TRP transmission reception point
  • One example implementation is a “CU/DU split” architecture where the NANs 514 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB- Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like).
  • the one or more RUs may be individual RSUs.
  • the CU/DU split may include an ng-eNB-CU and one or more ng-eNB-DUs instead of, or in addition to, the gNB-CU and gNB- DUs, respectively.
  • the NANs 514 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
  • the set of NANs 514 are coupled with one another via respective Xn interfaces if the RAN 504 is a NG-RAN 504.
  • the X2/Xn interfaces, which may be separated into control/user plane interfaces in some examples, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, and the like.
  • the ANs of the RAN 504 may each manage one or more cells, cell groups, component carriers, and the like to provide the UE 502 with an air interface for network access.
  • the UE 502 may be simultaneously connected with a set of cells provided by the same or different NANs 514 of the RAN 504.
  • the UE 502 and RAN 504 may use carrier aggregation to allow the UE 502 to connect with a set of component carriers, each corresponding to a PCell or SCell.
  • a first NAN 514 may be a master node that provides an MCG and a second NAN 514 may be secondary node that provides an SCG.
  • the first/second NANs 514 may be any combination of eNB, gNB, ng-eNB, and the like.
  • the RAN 504 may provide the air interface over a licensed spectrum or an unlicensed spectrum.
  • the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/SCells.
  • Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
  • individual UEs 502 provide radio information to one or more NANs 514 and/or one or more edge compute nodes (e.g., edge servers/hosts, and the like).
  • the radio information may be in the form of one or more measurement reports, and/or may include, for example, signal strength measurements, signal quality measurements, and/or the like.
  • Each measurement report is tagged with a timestamp and the location of the measurement (e.g., the current location of the UE 502).
  • the measurements collected by the UEs 502 and/or included in the measurement reports may include one or more of the following: bandwidth (BW), network or cell load, latency, jitter, round trip time (RTT), number of interrupts, out-of-order delivery of data packets, transmission power, bit error rate, bit error ratio (BER), Block Error Rate (BLER), packet error ratio (PER), packet loss rate, packet reception rate (PRR), data rate, peak data rate, end-to-end (e2e) delay, signal-to-noise ratio (SNR), signal-to-noise and interference ratio (SINR), signal-plus-noise-plus-distortion to noise-plus-distortion (SINAD) ratio, carrier-to-interference plus noise ratio (CINR), Additive White Gaussian Noise (AWGN), energy per bit to noise power density ratio (Eb/No), energy per chip to interference power density ratio (Ec/Io), energy per chip to noise power density ratio (Ec/N0), and/or the like.
  • the RSRP, RSSI, and/or RSRQ measurements may include RSRP, RSSI, and/or RSRQ measurements of cell-specific reference signals, channel state information reference signals (CSI-RS), and/or synchronization signals (SS) or SS blocks for 3GPP networks (e.g., LTE or 5G/NR), and RSRP, RSSI, RSRQ, RCPI, RSNI, and/or ANPI measurements of various beacon, Fast Initial Link Setup (FILS) discovery frames, or probe response frames for WLAN/WiFi (e.g., [IEEE80211]) networks.
  • measurements may be additionally or alternatively used, such as those discussed in 3GPP TS 36.214, 3GPP TS 38.215 (“[TS38215]”), 3GPP TS 38.314, IEEE Standard for Information Technology - Telecommunications and Information Exchange between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std 802.11-2020, pp. 1-4379 (26 Feb. 2021) (“[IEEE80211]”), and/or the like. Additionally or alternatively, any of the aforementioned measurements (or combination of measurements) may be collected by one or more NANs 514 and provided to the edge compute node(s).
  • the measurements can include one or more of the following measurements: measurements related to Data Radio Bearer (DRB) (e.g., number of DRBs attempted to setup, number of DRBs successfully setup, number of released active DRBs, insession activity time for DRB, number of DRBs attempted to be resumed, number of DRBs successfully resumed, and the like); measurements related to RRC (e.g., mean number of RRC connections, maximum number of RRC connections, mean number of stored inactive RRC connections, maximum number of stored inactive RRC connections, number of attempted, successful, and/or failed RRC connection establishments, and the like); measurements related to UE Context (UECNTX); measurements related to Radio Resource Utilization (RRU) (e.g., DL total PRB usage, UL total PRB usage, distribution of DL total PRB usage, distribution of UL total PRB usage, DL PRB used for data traffic, UL PRB used for data traffic, DL total available PRBs, UL total available PRBs, and the like
  • the radio information may be reported in response to a trigger event and/or on a periodic basis. Additionally or alternatively, individual UEs 502 report radio information either at a low periodicity or a high periodicity depending on a data transfer that is to take place, and/or other information about the data transfer. Additionally or alternatively, the edge compute node(s) may request the measurements from the NANs 514 at low or high periodicity, or the NANs 514 may provide the measurements to the edge compute node(s) at low or high periodicity.
  • the edge compute node(s) may obtain other relevant data from other edge compute node(s), core network functions (NFs), application functions (AFs), and/or other UEs 502 such as Key Performance Indicators (KPIs), with the measurement reports or separately from the measurement reports.
  • pre-processing of data obtained from UEs 502, one or more RAN nodes, and/or core network NFs may be performed to supplement the obtained observation data such as, for example, substituting values from previous reports and/or historical data, applying an extrapolation filter, and/or the like.
  • acceptable bounds for the observation data may be predetermined or configured. For example, CQI and MCS measurements may be configured to lie only within ranges defined by suitable 3GPP standards.
  • where a reported data value does not make sense (e.g., the value exceeds an acceptable range/bounds, or the like), such values may be dropped for the current learning/training episode or epoch.
  • packet delivery delay bounds may be defined or configured, and packets determined to have been received after the packet delivery delay bound may be dropped.
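The bounds-checking and delay-based dropping described above can be sketched as a simple pre-processing filter. The field names, the bound values (e.g., a 4-bit CQI index range), and the delay bound are illustrative assumptions, not values taken from any 3GPP specification.

```python
def filter_observations(samples, bounds, delay_bound_ms):
    """Drop samples whose values fall outside configured bounds or whose
    packet delivery delay exceeds the configured delay bound."""
    kept = []
    for s in samples:
        lo, hi = bounds.get(s["metric"], (float("-inf"), float("inf")))
        if not (lo <= s["value"] <= hi):
            continue  # value does not make sense; drop for this epoch
        if s.get("delivery_delay_ms", 0) > delay_bound_ms:
            continue  # received after the packet delivery delay bound
        kept.append(s)
    return kept

# Hypothetical per-metric bounds (e.g., 4-bit CQI index 0..15).
BOUNDS = {"cqi": (0, 15), "mcs": (0, 28)}
```

In this sketch, out-of-range or late samples are simply excluded from the current training epoch rather than corrected.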
  • the UE 502 can also perform reference signal (RS) measurement and reporting procedures to provide the network with information about the quality of one or more wireless channels and/or the communication media in general, and this information can be used to optimize various aspects of the communication system.
  • the measurement and reporting procedures performed by the UE 502 can include those discussed in 3GPP TS 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.214, [TS38215], 3GPP TS 38.101-1, 3GPP TS 38.104, 3GPP TS 38.133, [TS38331], and/or the like.
  • the physical signals and/or reference signals can include demodulation reference signals (DM-RS), phase-tracking reference signals (PT-RS), positioning reference signal (PRS), channel-state information reference signal (CSI-RS), synchronization signal block (SSB), primary synchronization signal (PSS), secondary synchronization signal (SSS), and sounding reference signal (SRS).
  • any suitable data collection and/or measurement mechanism(s) may be used to collect the observation data, such as data marking (e.g., sequence numbering, and the like), packet tracing (e.g., signal measurement, data sampling, and/or timestamping techniques), and/or the like.
  • the collection of data may be based on occurrence of events that trigger collection of the data. Additionally or alternatively, data collection may take place at the initiation or termination of an event.
  • the data collection can be continuous, discontinuous, and/or have start and stop times.
  • the data collection techniques/mechanisms may be specific to a HW configuration/implementation or non-HW-specific, or may be based on various software parameters (e.g., OS type and version, and the like). Various configurations may be used to define any of the aforementioned data collection parameters.
  • Such configurations may be defined by suitable specifications/standards, such as 3GPP (e.g., [5GEdge]), ETSI (e.g., [MEC]), O-RAN (e.g., [O-RAN]), Intel® Smart Edge Open (e.g., [ISEO]), IETF (e.g., [MAMS]), IEEE/WiFi (e.g., [IEEE80211] and the like), and/or any other like standards such as those discussed herein.
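The event-triggered collection with start/stop times described above can be sketched as a small event-driven collector. The event names and the collector interface are illustrative assumptions, not taken from any of the referenced standards.

```python
class EventTriggeredCollector:
    """Collect measurements only while collection is active, i.e., between a
    configured start event and stop event; collection may thus be continuous,
    discontinuous, or bounded by explicit start and stop times."""

    def __init__(self, start_event, stop_event):
        self.start_event = start_event
        self.stop_event = stop_event
        self.active = False
        self.samples = []

    def on_event(self, event):
        # Occurrence of an event initiates or terminates data collection.
        if event == self.start_event:
            self.active = True
        elif event == self.stop_event:
            self.active = False

    def on_measurement(self, value):
        if self.active:
            self.samples.append(value)
```

Measurements arriving outside the active window are simply discarded in this sketch.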
  • the RAN 504 is an E-UTRAN with one or more eNBs, and provides an LTE air interface (Uu) with the parameters and characteristics at least as discussed in 3GPP TS 36.300.
  • the RAN 504 is a next generation (NG)-RAN 504 with a set of RAN nodes 514 (including gNBs 514a and ng-eNBs 514b).
  • Each gNB 514a connects with 5G-enabled UEs 502 using a 5G-NR Uu interface with parameters and characteristics as discussed in [TS38300], among many other 3GPP standards, including any of those discussed herein.
  • the one or more ng-eNBs 514b connect with a UE 502 via the 5G Uu and/or LTE Uu interface.
  • the gNBs 514a and the ng-eNBs 514b connect with the 5GC 540 through respective NG interfaces, which include an N2 interface, an N3 interface, and/or other interfaces.
  • the gNBs 514a and the ng-eNBs 514b are connected with each other over an Xn interface. Additionally, individual gNBs 514a are connected to one another via respective Xn interfaces, and individual ng-eNBs 514b are connected to one another via respective Xn interfaces.
  • the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 504 and a UPF 548 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 504 and an AMF 544 (e.g., N2 interface).
  • the NG-RAN 504 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data.
  • the 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface.
  • the 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking.
  • the 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz.
  • the 5G-NR air interface may include an SSB that is an area of a DL resource grid that includes PSS/SSS/PBCH.
  • the 5G-NR air interface may utilize BWPs for various purposes.
  • BWP can be used for dynamic adaptation of the SCS.
  • the UE 502 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 502, the SCS of the transmission is changed as well.
  • Another use case example of BWP is related to power saving.
  • multiple BWPs can be configured for the UE 502 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios.
  • a BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 502 and in some cases at the gNB 514a.
  • a BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
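The load-dependent BWP usage described above can be sketched as a simple selection rule: pick the narrowest configured BWP that covers the offered load, so light traffic lands on a small BWP and allows power saving. The BWP configurations and the selection policy below are hypothetical, not a 3GPP-specified algorithm.

```python
def select_bwp(bwp_configs, offered_load_prbs):
    """Pick the configured BWP with the fewest PRBs that still covers the
    offered load (favoring power saving under light traffic); fall back to
    the widest BWP when the load exceeds every configuration."""
    candidates = [b for b in bwp_configs if b["num_prbs"] >= offered_load_prbs]
    if candidates:
        return min(candidates, key=lambda b: b["num_prbs"])
    return max(bwp_configs, key=lambda b: b["num_prbs"])

# Hypothetical UE BWP configurations (each may also carry a different SCS).
BWPS = [
    {"id": 0, "num_prbs": 24, "scs_khz": 15},
    {"id": 1, "num_prbs": 106, "scs_khz": 30},
    {"id": 2, "num_prbs": 273, "scs_khz": 30},
]
```

Because each configuration can carry a different SCS, indicating a BWP change under this rule would also change the SCS of the transmission, as noted above.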
  • individual gNBs 514a can include a gNB-CU and a set of gNB-DUs. Additionally or alternatively, gNBs 514a can include one or more RUs. In these implementations, the gNB-CU may be connected to each gNB-DU via respective F1 interfaces. In case of network sharing with multiple cell ID broadcast(s), each cell identity associated with a subset of PLMNs corresponds to a gNB-DU and the gNB-CU to which it is connected, and the corresponding gNB-DUs share the same physical layer cell resources. For resiliency, a gNB-DU may be connected to multiple gNB-CUs by appropriate implementation.
  • a gNB-CU can be separated into gNB-CU control plane (gNB-CU-CP) and gNB-CU user plane (gNB-CU-UP) functions.
  • the gNB-CU-CP is connected to a gNB-DU through an F1 control plane interface (F1-C)
  • the gNB-CU-UP is connected to the gNB-DU through an F1 user plane interface (F1-U)
  • the gNB-CU-UP is connected to the gNB-CU-CP through an E1 interface.
  • one gNB-DU is connected to only one gNB-CU-CP
  • one gNB-CU-UP is connected to only one gNB-CU-CP.
  • a gNB-DU and/or a gNB-CU-UP may be connected to multiple gNB-CU-CPs by appropriate implementation.
  • One gNB-DU can be connected to multiple gNB-CU-UPs under the control of the same gNB-CU-CP, and one gNB-CU-UP can be connected to multiple DUs under the control of the same gNB-CU-CP.
  • Data forwarding between gNB-CU-UPs during intra-gNB-CU-CP handover within a gNB may be supported by Xn-U.
  • individual ng-eNBs 514b can include an ng-eNB-CU and a set of ng-eNB-DUs.
  • the ng-eNB-CU and each ng-eNB-DU are connected to one another via respective W1 interfaces.
  • An ng-eNB can include an ng-eNB-CU-CP, one or more ng-eNB-CU-UP(s), and one or more ng-eNB-DU(s).
  • An ng-eNB-CU-CP and an ng-eNB-CU-UP are connected via the E1 interface.
  • An ng-eNB-DU is connected to an ng-eNB-CU-CP via the W1-C interface, and to an ng-eNB-CU-UP via the W1-U interface.
  • the general principle described herein w.r.t. gNB aspects also applies to ng-eNB aspects and the corresponding E1 and W1 interfaces, if not explicitly specified otherwise.
  • the node hosting the user plane part of the PDCP protocol layer (e.g., gNB-CU, gNB-CU-UP, and for EN-DC, MeNB or SgNB depending on the bearer split) performs user inactivity monitoring and further informs its inactivity or (re)activation to the node having the control plane connection towards the core network (e.g., over E1, X2, or the like).
  • the node hosting the RLC protocol layer (e.g., gNB-DU) may perform user inactivity monitoring and further inform its inactivity or (re)activation to the node hosting the control plane (e.g., gNB-CU or gNB-CU-CP).
  • the NG-RAN 504 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL).
  • the NG-RAN 504 architecture (e.g., the NG-RAN logical nodes and interfaces between them) is part of the RNL.
  • for each NG-RAN interface (e.g., NG, Xn, F1, and the like), the related TNL protocol and functionality are specified; the TNL provides services for user plane transport and/or signaling transport.
  • each NG-RAN node is connected to all AMFs 544 of AMF sets within an AMF region supporting at least one slice also supported by the NG-RAN node.
  • the AMF Set and the AMF Region are defined in [TS23501].
  • the RAN 504 is communicatively coupled to CN 540 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 502).
  • the components of the CN 540 may be implemented in one physical node or separate physical nodes.
  • NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 540 onto physical compute/storage resources in servers, switches, and the like.
  • a logical instantiation of the CN 540 may be referred to as a network slice, and a logical instantiation of a portion of the CN 540 may be referred to as a network sub-slice.
  • the CN 540 is a 5GC 540 including an Authentication Server Function (AUSF) 542, Access and Mobility Management Function (AMF) 544, Session Management Function (SMF) 546, User Plane Function (UPF) 548, Network Slice Selection Function (NSSF) 550, Network Exposure Function (NEF) 552, Network Repository Function (NRF) 554, Policy Control Function (PCF) 556, Unified Data Management (UDM) 558, Unified Data Repository (UDR), Application Function (AF) 560, and Network Data Analytics Function (NWDAF) 562 coupled with one another over various interfaces as shown.
  • the NWDAF 562 is an NF capable of collecting data from UEs 502, other NF(s) in the 5GC 540, Operations, Administration, and Maintenance (OAM) entities/functions, MnS (see e.g., Figure 8), MnF (see e.g., Figure 8), AFs 560, DNs 536, server(s) 538, cloud computing services, edge compute nodes and/or edge networks, and/or other entities/elements that can be used for analytics.
  • the NWDAF 562 includes one or more of the following functionalities: support data collection from NFs and AFs 560; support data collection from OAM; NWDAF service registration and metadata exposure to NFs and AFs 560; support analytics information provisioning to NFs and AFs 560; support ML model training and provisioning to NWDAF(s) 562 (e.g., those containing analytics logical function). Some or all of the NWDAF functionalities can be supported in a single instance of an NWDAF 562.
  • the NWDAF 562 also includes an analytics reporting capability, which comprises means that allow discovery of the type of analytics that can be consumed by an external party and/or the request for consumption of analytics information generated by the NWDAF 562.
  • the NWDAF 562 can collect data from NF(s) and/or other entities/elements/functions over an Nnf service-based interface associated with the NF(s) and/or other entities/elements/functions.
  • the NWDAF 562 belongs to the same PLMN as the NF that provides the data.
  • the Nnf interface is defined for the NWDAF 562 to request subscription to data delivery for a particular context, cancel subscription to data delivery, and request a specific report of data for a particular context.
  • the 5GS architecture also allows the NWDAF 562 to retrieve management data from an OAM entity by invoking OAM services.
  • the NWDAF 562 interacts with different entities for different purposes, such as one or more of the following: data collection based on subscription to events provided by AMF 544, SMF 546, PCF 556, UDM 558, NSACF, AF 560 (directly or via NEF 552) and OAM; analytics and data collection using the Data Collection Coordination Function (DCCF); retrieval of information from data repositories (e.g., UDR via UDM 558 for subscriber-related information); data collection of location information from the LCS system; storage and retrieval of information from an Analytics Data Repository Function (ADRF); analytics and data collection from a Messaging Framework Adaptor Function (MFAF); retrieval of information about NFs (e.g., from NRF 554 for NF-related information); on-demand provision of analytics to consumers, as specified in clause 6 of [TS23288]; and/or provision of bulked data related to analytics ID(s). NWDAF discovery and selection procedures are discussed in clause 6.3.13 of [TS23501] and clause 5.2 of [TS23288].
  • a single instance or multiple instances of NWDAF 562 may be deployed in a PLMN. If multiple NWDAF 562 instances are deployed, the architecture supports deploying the NWDAF 562 as a central NF, as a collection of distributed NFs, or as a combination of both. If multiple NWDAF 562 instances are deployed, an NWDAF 562 can act as an aggregate point (e.g., aggregator NWDAF 562) and collect analytics information from other NWDAFs 562, which may have different serving areas, to produce the aggregated analytics (e.g., per analytics ID), possibly with analytics generated by itself. When multiple NWDAFs 562 exist, not all of them need to be able to provide the same type of analytics results.
  • NWDAFs 562 can be specialized in providing certain types of analytics.
  • An analytics ID information element is used to identify the type of supported analytics that NWDAF 562 can generate.
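The aggregator-NWDAF behavior described above can be sketched as combining per-analytics-ID reports collected from several NWDAF instances into one result. The report layout and the sample-count-weighted averaging are illustrative assumptions, not the [TS23288] aggregation procedures.

```python
from collections import defaultdict

def aggregate_analytics(reports):
    """Combine analytics reports collected from multiple NWDAF instances
    (possibly with different serving areas) into one result per analytics ID,
    here by averaging weighted by each report's sample count."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in reports:
        t = totals[r["analytics_id"]]
        t[0] += r["value"] * r["samples"]
        t[1] += r["samples"]
    return {aid: s / n for aid, (s, n) in totals.items()}
```

An aggregator could equally fold in analytics it generated itself by appending its own report to the input list.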
  • NWDAF 562 instance(s) can be collocated with a 5GS NF.
  • the NWDAF 562 may contain an analytics logical function (AnLF) and/or a model training logical function (MTLF).
  • the NWDAF 562 can contain only an MTLF, only an AnLF, or both logical functions.
  • the 5GS architecture allows an NWDAF containing an AnLF (referred to herein as “NWDAF-ANLF”) to use trained ML model provisioning services from the same or different NWDAF containing an MTLF (also referred to herein as “NWDAF-MTLF”).
  • the Nnwdaf interface is used by the NWDAF-AnLF to request and subscribe to trained ML model provisioning services provided by the NWDAF-MTLF.
  • the NWDAF 562 provides an Nnwdaf_MLModelProvision service that enables an NF service consumer (NFc) to receive a notification when an ML model matching the subscription parameters becomes available in the NWDAF-MTLF (see e.g., clause 7.5 of [TS23288]).
  • the NWDAF 562 provides an Nnwdaf_MLModelInfo service that enables an NFc to request and receive ML model information from the NWDAF-MTLF (see e.g., clause 7.6 of [TS23288]).
  • the AnLF is a logical function in the NWDAF 562 that performs inference, derives analytics information (e.g., derives statistics, inferences, and/or predictions based on analytics consumer requests) and exposes analytics services (e.g., Nnwdaf_AnalyticsSubscription or Nnwdaf_AnalyticsInfo). Analytics information is either statistical information about past events or predictive information.
  • the MTLF is a logical function in the NWDAF 562 that trains AI/ML models and exposes new training services (e.g., providing trained ML models) as defined in clauses 7.5 and 7.6 of [TS23288].
  • an NFc can utilize the NRF 554 to discover NWDAF 562 instance(s) unless NWDAF information is available by other means (e.g., locally configured on NFcs). NFcs may make an additional query to the UDM 558, when supported.
  • NWDAF selection function in an NFc selects an NWDAF 562 (or NWDAF-MTLF and/or NWDAF-AnLF) instance based on the available NWDAF 562 instances, a list of supported analytics ID(s) (e.g., possibly per supported service) stored/from an NRF 554, NWDAF capabilities (e.g., analytics aggregation capability, analytics metadata provisioning capability, ML model training capabilities, ML model deployment capabilities, and/or the like), and/or other NRF 554 registration elements of the NF profile. Additional aspects of NWDAF 562 functionality are defined in 3GPP TS 23.288 (“[TS23288]”).
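The selection logic described above can be sketched as filtering NF profiles retrieved from the NRF 554 by supported analytics ID and required capabilities, then preferring lower supported analytics delay. The profile fields and capability names below are a simplified, hypothetical subset of an NF profile, not the [TS23288] registration elements.

```python
def select_nwdaf(profiles, analytics_id, needed_capabilities=()):
    """Return NWDAF instances whose NF profile advertises the requested
    analytics ID and all needed capabilities (e.g., 'ml-model-training',
    'analytics-aggregation'), ordered by supported analytics delay."""
    matches = [
        p for p in profiles
        if analytics_id in p["supported_analytics_ids"]
        and set(needed_capabilities) <= set(p.get("capabilities", []))
    ]
    # Prefer instances that can produce the analytics report sooner.
    return sorted(
        matches,
        key=lambda p: p.get("supported_analytics_delay_s", float("inf")),
    )
```

An NFc would typically take the first entry of the returned list, falling back to locally configured NWDAF information when the NRF query yields nothing.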
  • the AUSF 542 stores data for authentication of UE 502 and handles authentication-related functionality.
  • the AUSF 542 may facilitate a common authentication framework for various access types.
  • the AMF 544 allows other functions of the 5GC 540 to communicate with the UE 502 and the RAN 504 and to subscribe to notifications about mobility events w.r.t the UE 502.
  • the AMF 544 is also responsible for registration management (e.g., for registering UE 502), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization.
  • the AMF 544 provides transport for SM messages between the UE 502 and the SMF 546, and acts as a transparent proxy for routing SM messages.
  • AMF 544 also provides transport for SMS messages between UE 502 and an SMSF.
  • AMF 544 interacts with the AUSF 542 and the UE 502 to perform various security anchor and context management functions.
  • AMF 544 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 504 and the AMF 544.
  • the AMF 544 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
  • the AMF 544 also supports NAS signaling with the UE 502 over an N3IWF interface.
  • the N3IWF provides access to untrusted entities.
  • N3IWF may be a termination point for the N2 interface between the (R)AN 504 and the AMF 544 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 504 and the UPF 548 for the user plane.
  • the N3IWF handles N2 signaling from the SMF 546 (relayed by the AMF 544) relating to PDU sessions and QoS, encapsulates/de-encapsulates packets for IPsec and N3 tunneling, marks N3 user-plane packets in the UL, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2.
  • N3IWF may also relay UL and DL control-plane NAS signaling between the UE 502 and AMF 544 via an N1 reference point between the UE 502 and the AMF 544, and relay UL and DL user-plane packets between the UE 502 and UPF 548.
  • the N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 502.
  • the AMF 544 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 544 and an N17 reference point between the AMF 544 and a 5G-EIR (not shown by Figure 5).
  • the AMF 544 may provide support for Network Slice restriction and Network Slice instance restriction based on NWDAF analytics.
  • the SMF 546 is responsible for SM (e.g., session establishment, tunnel management between UPF 548 and NAN 514); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 548 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; DL data notification; initiating AN specific SM information, sent via AMF 544 over N2 to NAN 514; and determining SSC mode of a session.
  • the SMF 546 may also include the following functionalities to support edge computing enhancements (see e.g., [TS23548]): selection of EASDF 561 and provision of its address to the UE as the DNS server for the PDU session; usage of EASDF 561 services as defined in [TS23548]; and for supporting the application layer architecture defined in [TS23558], provision and updates of ECS address configuration information to the UE.
  • Discovery and selection procedures for EASDFs 561 are discussed in clause 6.3.23 of [TS23501].
  • the UPF 548 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 536, and a branching point to support multihomed PDU session.
  • the UPF 548 also performs packet routing and forwarding, performs packet inspection, enforces the user plane part of policy rules, lawfully intercepts packets (UP collection), performs traffic usage reporting, performs QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs UL traffic verification (e.g., SDF-to-QoS flow mapping), performs transport-level packet marking in the UL and DL, and performs DL packet buffering and DL data notification triggering.
  • UPF 548 may include a UL classifier to support routing traffic flows to a data network.
  • the NSSF 550 selects a set of network slice instances serving the UE 502.
  • the NSSF 550 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed.
  • the NSSF 550 also determines an AMF set to be used to serve the UE 502, or a list of candidate AMFs 544 based on a suitable configuration and possibly by querying the NRF 554.
  • the selection of a set of network slice instances for the UE 502 may be triggered by the AMF 544 with which the UE 502 is registered by interacting with the NSSF 550; this may lead to a change of AMF 544.
  • the NSSF 550 interacts with the AMF 544 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
  • the NEF 552 securely exposes services and capabilities provided by 3GPP NFs for third parties, internal exposure/re-exposure, AFs 560, edge computing networks/frameworks, and the like.
  • the NEF 552 may authenticate, authorize, or throttle the AFs 560.
  • the NEF 552 stores/retrieves information as structured data using the Nudr interface to a Unified Data Repository (UDR).
  • the NEF 552 also translates information exchanged with the AF 560 and information exchanged with internal NFs.
  • the NEF 552 may translate between an AF-Service-Identifier and internal 5GC information, such as DNN and S-NSSAI, as described in clause 5.6.7 of [TS23501].
  • the NEF 552 handles masking of network and user sensitive information to external AFs 560 according to the network policy.
  • the NEF 552 also receives information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 552 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 552 to other NFs and AFs, or used for other purposes such as analytics.
  • NWDAF analytics may be securely exposed by the NEF 552 for external party, as specified in [TS23288].
  • data provided by an external party may be collected by the NWDAF 562 via the NEF 552 for analytics generation purpose.
  • the NEF 552 handles and forwards requests and notifications between the NWDAF 562 and AF(s) 560, as specified in [TS23288].
  • the NRF 554 supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances.
  • the NRF 554 also maintains NF profiles of available NF instances and their supported services.
  • the NF profile of NF instance maintained in the NRF 554 includes the following information: NF instance ID; NF type; PLMN ID in the case of PLMN, PLMN ID + NID in the case of SNPN; Network Slice related Identifier(s) (e.g., S-NSSAI, NSI ID); an NF’s network address(es) (e.g., FQDN, IP address, and/or the like), NF capacity information, NF priority information (e.g., for AMF selection), NF set ID, NF service set ID of the NF service instance; NF specific service authorization information; names of supported services, if applicable; endpoint address(es) of instance(s) of each supported service; identification of stored data/information (e.
  • the NF profile includes: supported analytics ID(s), possibly per service, NWDAF serving area information (e.g., a list of TAIs for which the NWDAF can provide services and/or data), Supported Analytics Delay per Analytics ID (if available), NF types of the NF data sources, NF Set IDs of the NF data sources, if available, analytics aggregation capability (if available), analytics metadata provisioning capability (if available), ML model filter information parameters S-NSSAI(s) and area(s) of interest for the trained ML model(s) per analytics ID(s) (if available), federated learning (FL) capability type (e.g., FL server or FL client, if available), Time interval supporting FL (if available).
  • the Serving Area information of the NWDAF 562 is common to all of its supported analytics IDs.
  • the analytics IDs supported by the NWDAF 562 may be associated with a supported analytics delay; for example, the analytics report can be generated within a time (including data collection delay and inference delay) less than or equal to the supported analytics delay.
  • the determination of the supported analytics delay, and how the NWDAF 562 avoids frequently updating its Supported Analytics Delay in the NRF, may be NWDAF-implementation specific.
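The supported-analytics-delay condition above amounts to a simple feasibility check on the total report-generation time. The function below is an illustrative sketch under that reading, not an NWDAF implementation.

```python
def can_meet_delay(data_collection_delay_s, inference_delay_s, supported_delay_s):
    """An analytics request is feasible when the total report-generation time
    (data collection delay plus inference delay) is less than or equal to the
    supported analytics delay advertised for the analytics ID."""
    return (data_collection_delay_s + inference_delay_s) <= supported_delay_s
```

A consumer could apply this check against each candidate instance's advertised delay before subscribing.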
  • the PCF 556 provides policy rules to control plane functions to enforce them, and may also support unified policy framework to govern network behavior.
  • the PCF 556 may also implement a front end to access subscription information relevant for policy decisions in a UDR 559 of the UDM 558.
  • the PCF 556 exhibits an Npcf service-based interface.
  • the UDM 558 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of UE 502. For example, subscription data may be communicated via an N8 reference point between the UDM 558 and the AMF 544.
  • the UDM 558 may include two parts, an application front end and a UDR.
  • the UDR may store subscription data and policy data for the UDM 558 and the PCF 556, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 502) for the NEF 552.
  • the Nudr service-based interface may be exhibited by the UDR to allow the UDM 558, PCF 556, and NEF 552 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR.
  • the UDM 558 may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions.
  • the UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management.
  • the UDM 558 may exhibit the Nudm service-based interface.
  • EASDF Edge Application Server Discovery Function
  • the EASDF 561 exhibits an Neasdf service-based interface, and is connected to the SMF 546 via an N88 interface.
  • One or multiple EASDF instances may be deployed within a PLMN, and interactions between 5GC NF(s) and the EASDF 561 take place within a PLMN.
  • the EASDF 561 includes one or more of the following functionalities: registering to NRF 554 for EASDF 561 discovery and selection; handling the DNS messages according to the instruction from the SMF 546; and/or terminating DNS security, if used.
  • Handling the DNS messages according to the instruction from the SMF 546 includes one or more of the following functionalities: receiving DNS message handling rules and/or Baseline DNS Pattern from the SMF 546; exchanging DNS messages from/with the UE 502; forwarding DNS messages to C-DNS or L-DNS for DNS query; adding EDNS client subnet (ECS) option into DNS query for an FQDN; reporting to the SMF 546 the information related to the received DNS messages; and/or buffering/discarding DNS messages from the UE 502 or DNS Server.
  • the EASDF has direct user plane connectivity (e.g., without any NAT) with the PSA UPF over N6 for the transmission of DNS signaling exchanged with the UE.
  • the deployment of a NAT between EASDF 561 and PSA UPF 548 may or may not be supported. Additional aspects of the EASDF 561 are discussed in [TS23548].
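The rule-driven DNS handling described above can be sketched as applying the first matching SMF-provided rule to a UE DNS query. The rule fields (FQDN suffix, target server, ECS subnet, report flag) are simplified, hypothetical stand-ins for the [TS23548] DNS message handling rules.

```python
def handle_dns_query(fqdn, rules):
    """Apply the first matching SMF-provided rule to a UE DNS query:
    pick the DNS server (central C-DNS or local L-DNS), optionally attach
    an EDNS client subnet (ECS) option, and flag whether to report the
    query information back to the SMF."""
    for rule in rules:
        if fqdn.endswith(rule["fqdn_suffix"]):
            return {
                "server": rule["server"],
                "ecs_option": rule.get("ecs_subnet"),
                "report_to_smf": rule.get("report", False),
            }
    # Default: forward to the central DNS with no ECS option or reporting.
    return {"server": "c-dns", "ecs_option": None, "report_to_smf": False}
```

Buffering or discarding of messages, and DNS security termination, would sit around this dispatch in a fuller sketch.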
  • AF 560 provides application influence on traffic routing, provide access to NEF 552, and interact with the policy framework for policy control.
  • the AF 560 may influence UPF 548 (re)selection and traffic routing.
  • the network operator may permit AF 560 to interact directly with relevant NFs.
  • the AF 560 is used for edge computing implementations.
  • An NF that needs to collect data from an AF 560 may subscribe/unsubscribe to notifications regarding data collected from an AF 560, either directly from the AF 560 or via NEF 552.
  • the data collected from an AF 560 is used as input for analytics by the NWDAF 562.
  • the details for the data collected from an AF 560 as well as interactions between NEF 552, AF 560 and NWDAF 562 are described in [TS23288].
  • the 5GC 540 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 502 is attached to the network. This may reduce latency and load on the network.
  • the 5GC 540 may select a UPF 548 close to the UE 502 and execute traffic steering from the UPF 548 to DN 536 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 560, which allows the AF 560 to influence UPF (re)selection and traffic routing.
  • the data network (DN) 536 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server 538.
  • the DN 536 may be an operator external public, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services.
  • the app server 538 can be coupled to an IMS via an S-CSCF or the I-CSCF.
  • the DN 536 may represent one or more local area DNs (LADNs), which are DNs 536 (or DN names (DNNs)) that is/are accessible by a UE 502 in one or more specific areas. Outside of these specific areas, the UE 502 is not able to access the LADN/DN 536.
  • the DN 536 may be an edge DN 536, which is a (local) DN that supports the architecture for enabling edge applications.
  • the app server 538 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s).
  • the app/content server 538 provides an edge hosting environment that provides support required for Edge Application Server's execution.
  • the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic.
  • the edge compute nodes may be included in, or co-located with one or more RANs 504 or RAN nodes 514.
  • the edge compute nodes can provide a connection between the RAN 504 and UPF 548 in the 5GC 540.
  • the edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 514 and UPF 548.
  • the edge compute nodes provide a distributed computing environment for application and service hosting, and also provide storage and processing resources so that data and/or content can be processed in close proximity to subscribers (e.g., users of UEs 502) for faster response times.
  • the edge compute nodes also support multitenancy runtime and hosting environment(s) for applications, including virtual appliance applications that may be delivered as packaged virtual machine (VM) images, middleware application and infrastructure services, content delivery services including content caching, mobile big data analytics, and computational offloading, among others.
  • Computational offloading involves offloading computational tasks, workloads, applications, and/or services to the edge compute nodes from the UEs 502, CN 540, DN 536, and/or server(s) 538, or vice versa.
  • a device application or client application operating in a UE 502 may offload application tasks or workloads to one or more edge compute nodes.
  • an edge compute node may offload application tasks or workloads to a set of UEs 502 (e.g., for distributed machine learning computation and/or the like).
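A computational offloading decision of the kind described above can be illustrated with a simple cost comparison. The function and parameter names are hypothetical, and the model (remote execution time plus payload transfer time versus local execution time) is a deliberately simplified sketch:

```python
def should_offload(local_exec_s, remote_exec_s, payload_bits, uplink_bps):
    """Return True when offloading finishes sooner than local execution.

    Compares local execution time against remote execution time plus the
    time to transfer the task payload over the radio link.
    """
    transfer_s = payload_bits / uplink_bps
    return remote_exec_s + transfer_s < local_exec_s

# A 10 Mbit task: 2.0 s locally vs 0.5 s at the edge plus 0.2 s uplink transfer.
should_offload(2.0, 0.5, 10e6, 50e6)  # -> True
```

Real offloading policies would also weigh energy consumption, edge load, and link variability; this sketch captures only the latency trade-off.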
  • the edge compute nodes may include or be part of an edge system that employs one or more edge computing technologies (ECTs) (also referred to as an “edge computing framework” or the like).
  • the edge compute nodes may also be referred to as “edge hosts” or “edge servers.”
  • the edge system includes a collection of edge servers and edge management systems (not shown) necessary to run edge computing applications within an operator network or a subset of an operator network.
  • the edge servers are physical computer systems that may include an edge platform and/or virtualization infrastructure, and provide compute, storage, and network resources to edge computing applications.
  • Each of the edge servers is disposed at an edge of a corresponding access network, and is arranged to provide computing resources and/or various services (e.g., computational task and/or workload offloading, cloud-computing capabilities, IT services, and other like resources and/or services as discussed herein) in relatively close proximity to UEs 502.
  • the VI of the edge compute nodes provide virtualized environments and virtualized resources for the edge hosts, and the edge computing applications may run as VMs and/or application containers on top of the VI.
  • the ECT is and/or operates according to the MEC framework, as discussed in ETSI GR MEC 001, ETSI GS MEC 003, ETSI GS MEC 009, ETSI GS MEC 010-1, ETSI GS MEC 010-2, ETSI GS MEC 011, ETSI GS MEC 012, ETSI GS MEC 013, ETSI GS MEC 014, ETSI GS MEC 015, ETSI GS MEC 016, ETSI GS MEC 021, ETSI GR MEC 024, ETSI GS MEC 028, ETSI GS MEC 029, ETSI GS MEC 030, and ETSI GR MEC 031 (collectively referred to herein as “[MEC]”).
  • This example implementation may also include NFV and/or other like virtualization technologies such as those discussed in ETSI GR NFV 001, ETSI GS NFV 002, ETSI GR NFV 003, ETSI GS NFV 006, ETSI GS NFV-INF 001, ETSI GS NFV-INF 003, ETSI GS NFV-INF 004, ETSI GS NFV-MAN 001, and/or Israel et al., OSM Release FIVE Technical Overview, ETSI OPEN SOURCE MANO, OSM White Paper, 1st ed. (Jan. 2019).
  • the ECT is and/or operates according to the O-RAN framework, as described in O-RAN Working Group 1 (Use Cases and Overall Architecture): O-RAN Architecture Description, O-RAN ALLIANCE WG1, O-RAN Architecture Description v09.00, Release R003 (Jun. 2023); O-RAN Working Group 2 (Non-RT RIC and A1 interface WG) A1 interface: Application Protocol, v04.00, R003 (Mar. 2023); O-RAN Working Group 2 (Non-RT RIC and A1 interface WG) A1 interface: General Aspects and Principles, v03.01, Release R003 (Mar.
  • O-RAN Working Group 2: AI/ML Workflow Description and Requirements, v01.03, O-RAN ALLIANCE WG2 (Oct. 2021);
  • O-RAN Working Group 2 (Non-RT RIC and A1 interface WG): R1 interface: General Aspects and Principles 5.0, v05.00, R003 (Jun. 2023);
  • O-RAN Working Group 2 (Non-RT RIC and A1 interface WG): Non-RT RIC Architecture, v03.00, Release R003 (Jun. 2023);
  • O-RAN Working Group 3 Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles, v03.01, Release R003 (Jun.
  • O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM) RAN Function Network Interface (NI) v01.00 (Feb. 2020); O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM) RAN Control v03.00, Release R003 (Jun. 2023); O-RAN Working Group 3 (Near-Real-time RAN Intelligent Controller and E2 Interface Working Group): Near-RT RIC Architecture, v04.00, Release R003 (Mar. 2023) (collectively referred to as “[O-RAN]”).
  • the ECT is and/or operates according to the 3rd Generation Partnership Project (3GPP) System Aspects Working Group 6 (SA6) Architecture for enabling Edge Applications (referred to as “3GPP edge computing”) as discussed in 3GPP TS 23.222, 3GPP TS 23.401, 3GPP TS 23.434, 3GPP TS 23.501 (“[TS23501]”), 3GPP TS 23.502 (“[TS23502]”), 3GPP TS 23.548 (“[TS23548]”), 3GPP TS 23.558 (“[TS23558]”), 3GPP TS 23.682, 3GPP TR 23.700-98, 3GPP TS 28.104 (“[TS28104]”), 3GPP TS 28.105 (“[TS28105]”), 3GPP TS 28.312, 3GPP TS 28.532 (“[TS28532]”), 3GPP TS 28.533 (“[TS28533]”), 3GPP TS 28.535, and other 3GPP specifications.
  • the ECT is and/or operates according to the Intel® Smart Edge Open framework (formerly known as OpenNESS) as discussed in Intel® Smart Edge Open Developer Guide, version 21.09 (30 Sep. 2021), available at: https://smart-edge- open.github.io/ (“[ISEO]”).
  • OpenNESS Intel® Smart Edge Open framework
  • the ECT operates according to the Multi-Access Management Services (MAMS) framework as discussed in Kanugovi et al., Multi-Access Management Services (MAMS), INTERNET ENGINEERING TASK FORCE (IETF), Request for Comments (RFC) 8743 (Mar. 2020), Ford et al., TCP Extensions for Multipath Operation with Multiple Addresses, IETF RFC 8684, (Mar.
  • the edge computing frameworks/ECTs and services deployment examples are only illustrative examples of ECTs, and the present disclosure may be applicable to many other or additional edge computing/networking technologies in various combinations and layouts of devices located at the edge of a network including the various edge computing networks/systems described herein. Further, the techniques disclosed herein may relate to other IoT edge network systems and configurations, and other intermediate processing entities and architectures may also be applicable to the present disclosure.
  • edge computing/networking technologies examples include [MEC]; [O-RAN]; [ISEO]; [5GEdge]; Content Delivery Networks (CDNs) (also referred to as “Content Distribution Networks” or the like); Mobility Service Provider (MSP) edge computing and/or Mobility as a Service (MaaS) provider systems (e.g., used in AECC architectures); Nebula edge-cloud systems; Fog computing systems; Cloudlet edge-cloud systems; Mobile Cloud Computing (MCC) systems; Central Office Rearchitected as a Datacenter (CORD), mobile CORD (M-CORD) and/or Converged Multi-Access and Core (COMAC) systems; and/or the like.
  • the interfaces of the 5GC 540 include reference points and service-based interfaces.
  • a reference point is a point at the conjunction of two non-overlapping functional groups, elements, or entities.
  • the reference points include: N1 (between the UE 502 and the AMF 544), N2 (between RAN 514 and AMF 544), N3 (between RAN 514 and UPF 548), N4 (between the SMF 546 and UPF 548), N5 (between PCF 556 and AF 560), N6 (between UPF 548 and DN 536), N7 (between SMF 546 and PCF 556), N8 (between UDM 558 and AMF 544), N9 (between two UPFs 548), N10 (between the UDM 558 and the SMF 546), N11 (between the AMF 544 and the SMF 546), N12 (between AUSF 542 and AMF 544), N13 (between AUSF 542 and UDM 558), N14 (between two AMFs 544; not
  • the service-based representation of Figure 5 depicts NFs within the control plane that enable other authorized NFs to access their services.
  • a service-based interface (SBI), at least in some examples, is an interface over which an NF can access the services of one or more other NFs.
  • the service-based interfaces are API-based interfaces (e.g., northbound APIs, southbound APIs, HTTP/2, RESTful, SOAP, A1AP, E2AP, and/or any other API, web service, application layer and/or other communication protocol, such as any of those discussed herein) that can be used by an NF to call or invoke a particular service or service operation.
  • the SBIs include: Namf (SBI exhibited by AMF 544), Nsmf (SBI exhibited by SMF 546), Nnef (SBI exhibited by NEF 552), Npcf (SBI exhibited by PCF 556), Nudm (SBI exhibited by the UDM 558), Naf (SBI exhibited by AF 560), Nnrf (SBI exhibited by NRF 554), Nnssf (SBI exhibited by NSSF 550), Nausf (SBI exhibited by AUSF 542).
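Requests over these service-based interfaces follow the general 3GPP SBI URI pattern {apiRoot}/{apiName}/{apiVersion}/{resource}, for example the NRF's nnrf-disc API. The helper below is an illustrative sketch of building such a URI, not an actual 3GPP-defined client, and the host name is hypothetical:

```python
from urllib.parse import urlencode

def sbi_uri(api_root, api_name, api_version, resource, query=None):
    """Build a 3GPP-style SBI request URI:
    {apiRoot}/{apiName}/{apiVersion}/{resource}[?query]."""
    uri = "/".join([api_root, api_name, api_version, resource])
    if query:
        uri += "?" + urlencode(query)
    return uri

# e.g., an AMF discovering SMF instances via the NRF's Nnrf_NFDiscovery service:
uri = sbi_uri("https://nrf.example", "nnrf-disc", "v1", "nf-instances",
              {"target-nf-type": "SMF", "requester-nf-type": "AMF"})
```

The resulting string would then be issued as an HTTP/2 GET by the consumer NF; the same pattern applies to Namf, Nsmf, and the other SBIs listed above.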
  • Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) may also be present.
  • the system 500 may also include NFs that are not shown such as, for example, UDR, Unstructured Data Storage Function (UDSF), Network Slice Admission Control Function (NSACF), Network Slice-specific and Stand-alone Non-Public Network (SNPN) Authentication and Authorization Function (NSSAAF), UE radio Capability Management Function (UCMF), 5G-Equipment Identity Register (5G-EIR), CHarging Function (CHF), Time Sensitive Networking (TSN) AF 560, Time Sensitive Communication and Time Synchronization Function (TSCTSF), DCCF, Analytics Data Repository Function (ADRF), MFAF, Non-Seamless WLAN Offload Function (NSWOF), Service Communication Proxy (SCP), Security Edge Protection Proxy (SEPP), Non-3GPP InterWorking Function (N3IWF), Trusted Non-3GPP Gateway Function (TNGF), Wireline Access Gateway Function (W-AGF), and/or Trusted WLAN Interworking Function (TWIF) as discussed in [TS23501].
  • FIG. 6 schematically illustrates a wireless network 600.
  • the wireless network 600 includes a UE 602 in wireless communication with a NAN 604.
  • the UE 602 may be the same or similar to, and substantially interchangeable with, any of the UEs discussed herein such as, for example, UE 502, hardware resources 700, and/or any other UE discussed herein.
  • the NAN 604 may be the same or similar to, and substantially interchangeable with any of the NANs discussed herein such as, for example, AP 506, NANs 514, RAN 504, hardware resources 700, and/or any other NAN(s) discussed herein.
  • the UE 602 may be communicatively coupled with the NAN 604 via connection 606.
  • the connection 606 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
  • the UE 602 includes a host platform 608 coupled with a modem platform 610.
  • the host platform 608 includes application processing circuitry 612, which may be coupled with protocol processing circuitry 614 of the modem platform 610.
  • the application processing circuitry 612 may run various applications for the UE 602 that source/sink application data.
  • the application processing circuitry 612 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations include transport (for example, UDP) and Internet (e.g., IP) operations.
  • the protocol processing circuitry 614 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 606.
  • the layer operations implemented by the protocol processing circuitry 614 include, for example, MAC, RLC, PDCP, RRC and NAS operations.
  • the modem platform 610 may further include digital baseband circuitry 616 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 614 in a network protocol stack. These operations include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which includes one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
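Of the PHY operations listed above, scrambling/descrambling has a compact illustration: the payload bits are XORed with a pseudo-random sequence, and applying the same sequence again restores the original bits. The toy 7-bit LFSR below is a hypothetical stand-in for illustration; NR actually derives its scrambling sequences from length-31 Gold sequences:

```python
def prbs(seed, n):
    """Generate n pseudo-random bits from a toy 7-bit LFSR (taps for illustration)."""
    state = seed & 0x7F
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = (state ^ (state >> 1)) & 1          # feedback from the two low taps
        state = (state >> 1) | (fb << 6)         # shift right, feed back into bit 6
    return out

def scramble(bits, seed=0x5A):
    """XOR the payload with the PRBS; applying it twice restores the input."""
    return [b ^ p for b, p in zip(bits, prbs(seed, len(bits)))]

data = [1, 0, 1, 1, 0, 0, 1, 0]
assert scramble(scramble(data)) == data          # descrambling = scrambling again
```

The symmetry (scrambler and descrambler are the same operation with the same seed) is why transmitter and receiver only need to agree on the sequence initialization.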
  • the modem platform 610 may further include transmit circuitry 618, receive circuitry 620, RF circuitry 622, and RF front end (RFFE) 624, which includes or connects to one or more antenna panels 626.
  • the transmit circuitry 618 includes a digital-to-analog converter, mixer, intermediate frequency (IF) components, and/or the like
  • the receive circuitry 620 includes an analog-to-digital converter, mixer, IF components, and/or the like
  • the RF circuitry 622 includes a low-noise amplifier, a power amplifier, power tracking components, and/or the like
  • RFFE 624 includes filters (e.g., surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (e.g., phase-array antenna components), and/or the like
  • the selection and arrangement of the components of the transmit circuitry 618, receive circuitry 620, RF circuitry 622, RFFE 624, and antenna panels 626 (referred to generically as “transmit/receive components”) may be specific to details of a particular implementation.
  • the protocol processing circuitry 614 includes one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
  • a UE reception may be established by and via the antenna panels 626, RFFE 624, RF circuitry 622, receive circuitry 620, digital baseband circuitry 616, and protocol processing circuitry 614.
  • the antenna panels 626 may receive a transmission from the NAN 604 by receive-beamforming signals received by a set of antennas/antenna elements of the one or more antenna panels 626.
  • a UE transmission may be established by and via the protocol processing circuitry 614, digital baseband circuitry 616, transmit circuitry 618, RF circuitry 622, RFFE 624, and antenna panels 626.
  • the transmit components of the UE 602 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 626.
  • the NAN 604 includes a host platform 628 coupled with a modem platform 630.
  • the host platform 628 includes application processing circuitry 632 coupled with protocol processing circuitry 634 of the modem platform 630.
  • the modem platform 630 may further include digital baseband circuitry 636, transmit circuitry 638, receive circuitry 640, RF circuitry 642, RFFE circuitry 644, and antenna panels 646.
  • the components of the AN 604 may be similar to and substantially interchangeable with like-named components of the UE 602.
  • the components of the AN 604 may perform various logical functions that include, for example, RNC functions such as radio bearer management, UL and DL dynamic radio resource management, and data packet scheduling.
  • Examples of the antenna elements of the antenna panels 626 and/or the antenna elements of the antenna panels 646 include planar inverted-F antennas (PIFAs), monopole antennas, dipole antennas, loop antennas, patch antennas, Yagi antennas, parabolic dish antennas, omni-directional antennas, and/or the like.
  • Figure 7 illustrates components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methodologies discussed herein.
  • Figure 7 shows a diagrammatic representation of hardware resources 700 including one or more processors (or processor cores) 710, one or more memory/storage devices 720, and one or more communication resources 730, each of which may be communicatively coupled via a bus 740 or other interface circuitry.
  • Where node virtualization (e.g., NFV) is utilized, a hypervisor 702 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 700.
  • the processors 710 may include, for example, a processor 712 and a processor 714.
  • the processors 710 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
  • the memory/storage devices 720 may include main memory, disk storage, or any suitable combination thereof.
  • the memory/storage devices 720 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, and/or the like.
  • the communication resources 730 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 704 or one or more databases 706 or other network elements via a network 708.
  • the communication resources 730 may include wired communication components (e.g., for coupling via USB, Ethernet, and/or the like), cellular communication components, NFC components, Bluetooth® components, WiFi® components, and other communication components.
  • Instructions 750 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 710 to perform any one or more of the methodologies discussed herein.
  • the instructions 750 may reside, completely or partially, within at least one of the processors 710 (e.g., within the processor’s cache memory), the memory/storage devices 720, or any suitable combination thereof.
  • any portion of the instructions 750 may be transferred to the hardware resources 700 from any combination of the peripheral devices 704 or the databases 706. Accordingly, the memory of processors 710, the memory/storage devices 720, the peripheral devices 704, and the databases 706 are examples of computer-readable and machine-readable media.
  • the peripheral devices 704 may represent one or more sensors (also referred to as “sensor circuitry”).
  • the sensor circuitry includes devices, modules, or subsystems whose purpose is to detect events or changes in their environment and send the information (sensor data) about the detected events to some other device, module, subsystem, and/or the like.
  • Individual sensors may be exteroceptive sensors (e.g., sensors that capture and/or measure environmental phenomena and/or external states), proprioceptive sensors (e.g., sensors that capture and/or measure internal states of a compute node or platform and/or individual components of a compute node or platform), and/or exproprioceptive sensors (e.g., sensors that capture, measure, or correlate internal states and external states).
  • sensors include, inter alia, inertial measurement units (IMUs) comprising accelerometers, gyroscopes, and/or magnetometers; microelectromechanical systems (MEMS) or nanoelectromechanical systems (NEMS) comprising 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers; level sensors; flow sensors; temperature sensors (e.g., thermistors, including sensors for measuring the temperature of internal components and sensors for measuring temperature external to the compute node or platform); pressure sensors; barometric pressure sensors; gravimeters; altimeters; image capture devices (e.g., cameras); light detection and ranging (LiDAR) sensors; proximity sensors (e.g., infrared radiation detector and the like); depth sensors; ambient light sensors; optical light sensors; ultrasonic transceivers; microphones; and the like.
  • the sensor circuitry includes the PEE sensor(s) 120 discussed previously.
  • the PEE sensors can include energy/power meters (e.g., analog, digital, and/or smart electric meters, wattmeter including current coils and voltage coils, volt-ampere meters, reactive power meters, power quality analyzers, and/or the like), voltage meters (e.g., analog voltmeters, digital voltmeters, moving-coil voltmeters, moving-iron voltmeters, electrostatic voltmeters, vacuum tube voltmeters, digital storage oscilloscopes, high-voltage probes, AC voltage sensors, digital panel meters, and/or the like), alternating current (AC) and/or direct current (DC) meters/sensors (e.g., open-loop and/or closed-loop hall effect sensors, Rogowski coil sensors, current transformers, shunt resistors, resistor-based current sensors, zero-flux current sensors, digital current sensors, fiber optic current sensors, and/or the like), AC frequency measurement sensors (e.g.,
  • Examples of temperature sensors include resistance temperature detectors (RTDs), thermocouples, thermistors (e.g., negative temperature coefficient (NTC) and/or positive temperature coefficient (PTC) thermistors), IR sensors, bimetallic temperature sensors, fiber optic temperature sensors, digital temperature sensors (IC sensors), gas thermometers, hygrothermometers, and/or the like.
  • Examples of humidity sensors include capacitive humidity sensors, resistive humidity sensors, gravimetric hygrometers, dew point sensors, and hygrothermometers.
  • the PEE sensor(s) 120 can include environmental monitoring sensors, which may include temperature sensors, humidity sensors, pressure sensors, light sensors and/or photodetectors (e.g., photodiodes, phototransistors, photovoltaic cells, photomultiplier tubes, light-dependent resistors, charge-coupled devices (CCDs), active-pixel sensors, avalanche photodiodes, photonic sensors, pyroelectric sensors, radiometers, and/or the like), air quality sensors (e.g., particulate matter (e.g., PM2.5 and PM10) sensors, gas sensors, particle counters, temperature and/or humidity sensors, and/or the like), and/or any other suitable sensor(s).
  • gas sensors include carbon monoxide sensors, carbon dioxide sensors, ozone sensors, volatile organic compound sensors, nitrogen dioxide sensors, sulfur dioxide sensors, ammonia sensors, and/or the like.
  • the peripheral devices 704 may represent one or more actuators, which allow a compute node, platform, machine, device, mechanism, system, or other object to change its state, position, and/or orientation, or move or control a compute node (e.g., node 700), platform, machine, device, mechanism, system, or other object.
  • the actuators comprise electrical and/or mechanical devices for moving or controlling a mechanism or system, and convert energy (e.g., electric current or moving air and/or liquid) into some kind of motion.
  • the actuators can be or include any number and combination of the following: soft actuators (e.g., actuators that changes its shape in response to a stimuli such as, for example, mechanical, thermal, magnetic, and/or electrical stimuli), hydraulic actuators, pneumatic actuators, mechanical actuators, electromechanical actuators (EMAs), microelectromechanical actuators, electrohydraulic actuators, linear actuators, linear motors, rotary motors, DC motors, stepper motors, servomechanisms, electromechanical switches, electromechanical relays (EMRs), power switches, valve actuators, piezoelectric actuators and/or biomorphs, thermal biomorphs, solid state actuators, solid state relays (SSRs), shape-memory alloy-based actuators, electroactive polymer-based actuators, relay driver integrated circuits (ICs), solenoids, impactive actuators/mechanisms (e.g., jaws, claws, tweezers, clamps, hooks, mechanical fingers, humaniform dexterous robotic hands
  • the compute node 700 may be configured to operate one or more actuators based on one or more captured events, instructions, control signals, and/or configurations received from a service provider, client device, and/or other components of a compute node or platform. Additionally or alternatively, the actuators are used to change the operational state, position, and/or orientation of the sensors.
  • FIG. 8 depicts an example of management services (MnS) deployment 800.
  • the MnS framework follows a Service Based Management Architecture (SBMA).
  • An MnS is a set of offered management capabilities (e.g., capabilities for management and orchestration (MANO) of network and services).
  • the entity producing an MnS is referred to as an MnS producer (MnS-P) and the entity consuming an MnS is referred to as an MnS consumer (MnS-C).
  • An MnS provided by an MnS-P can be consumed by any entity with appropriate authorization and authentication.
  • the MnS-P offers its services via a standardized service interface composed of individually specified MnS components (e.g., MnS-C).
  • a MnS is specified using different independent components.
  • a concrete MnS includes at least two of these components.
  • Three different component types are defined, including MnS component type A, MnS component type B, and MnS component type C.
  • the MnS component type A is a group of management operations and/or notifications that is agnostic with regard to the entities managed. The operations and notifications as such are hence not involving any information related to the managed network. These operations and notifications are called generic or network agnostic. For example, operations for creating, reading, updating and deleting managed object instances, where the managed object instance to be manipulated is specified only in the signature of the operation, are generic.
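The network-agnostic character of type A operations can be sketched as a generic CRUD interface whose operations know nothing about the managed network; the managed object instance to manipulate is named only in the operation signature (here via a distinguished-name string). Class and method names below are hypothetical illustrations:

```python
class MoiStore:
    """Toy network-agnostic CRUD operations on managed object instances (MOIs).

    The MOI to manipulate is identified only by the DN passed in the operation
    signature, so the operations themselves carry no managed-network specifics.
    """

    def __init__(self):
        self._moi = {}

    def create_moi(self, dn, attributes):
        self._moi[dn] = dict(attributes)

    def get_moi(self, dn):
        return self._moi[dn]

    def update_moi(self, dn, changes):
        self._moi[dn].update(changes)

    def delete_moi(self, dn):
        del self._moi[dn]

store = MoiStore()
store.create_moi("SubNetwork=1,ManagedElement=ME-1", {"vendorName": "ExampleCo"})
store.update_moi("SubNetwork=1,ManagedElement=ME-1", {"swVersion": "2.1"})
```

What makes the operations "generic" is visible here: the same four operations work unchanged whatever NRM (type B component) defines the object classes being manipulated.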
  • MnS component type B refers to management information represented by information models representing the managed entities.
  • a MnS component type B is also called Network Resource Model (NRM).
  • Examples of MnS component type B include the generic network resource models (see e.g., [TS28622]) and the 5G network resource models (see e.g., [TS28541]).
  • MnS component type C is performance information of the managed entity and fault information of the managed entity. Examples of management service component type C include alarm information (see e.g., [TS28532] and [TS28545]) and performance data (see e.g., [TS28552], [TS28554], and [TS32425]).
  • An MnS-P is described by a set of metadata called an MnS-P profile.
  • the profile holds information about the supported MnS components and their version numbers. This may also include information about support of optional features. For example, a read operation on a complete subtree of managed object instances may support applying filters on the scoped set of objects as an optional feature. In this case, the MnS-P profile should indicate whether filtering is supported.
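An MnS-P profile of this shape can be sketched as a small data structure; the component names, version strings, and feature name below are hypothetical placeholders, not values from any 3GPP registry:

```python
from dataclasses import dataclass, field

@dataclass
class MnsProfile:
    """Toy MnS-P profile: supported components, versions, optional features."""
    components: dict = field(default_factory=dict)     # component -> version number
    optional_features: set = field(default_factory=set)

    def supports(self, feature):
        """Let a consumer check an optional capability before invoking it."""
        return feature in self.optional_features

profile = MnsProfile(
    components={"typeA-provisioning": "17.1.0", "typeB-genericNRM": "17.2.0"},
    optional_features={"scoped-read-filtering"},
)
profile.supports("scoped-read-filtering")  # -> True
```

A consumer would query the profile first and fall back to an unfiltered subtree read when the filtering feature is absent.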
  • FIG 8 also depicts an example management function (MnF) deployment 810.
  • the MnF is a logical entity playing the roles of MnS-C and/or MnS-P.
  • An MnF with the role of management service exposure governance is referred to as an “Exposure governance management function” or “exposure governance MnF”.
  • An MnS produced by an MnF 810 may have multiple consumers.
  • the MnF 810 may consume multiple MnS from one or multiple MnS-Ps.
  • the MnF plays both roles (e.g., MnS-P and MnS-C).
  • An MnF can be deployed as a separate entity or embedded in an NF to provide MnS(s).
  • MnF deployment scenario 820 shows an example where the MnF is deployed as a separate entity to provide MnS(s), and MnF deployment scenario 830 shows an example in which an MnF is embedded in an NF to provide MnS(s).
  • the MnFs may interact by consuming MnS produced by other MnFs.
  • FIG. 8 also depicts an example MDA service (MDAS or MDA MnS) deployment 850.
  • the MDA provides a capability of processing and analysing data related to network and service events and status including, for example, performance measurements, KPIs, trace data, minimization of drive tests (MDT) reports, radio link failure (RLF) reports, RRC connection establishment failure event (RCEF) reports, QoE reports, alarms, configuration data, network analytics data, and service experience data from AFs 560, and/or the like, to provide analytics output (e.g., statistics, predictions, inferences, root cause analysis of issues, and/or the like), and may also include recommendations to enable necessary actions for network and service operations.
  • the MDA output is provided by the MDAS-P to the corresponding consumer(s) (e.g., MDAS-C/MDA MnS- C) that requested the analytics.
  • the MDA can identify ongoing issues impacting the performance of the network and services, and help to identify in advance potential issues that may cause potential failure and/or performance degradation.
  • the MDA can also assist in predicting the network and service demand to enable timely resource provisioning and deployments, which would allow fast time-to-market network and service deployments.
  • the MDAS includes the services exposed by the MDA, which can be consumed by various consumers including, for example, MnFs (e.g., MnS-Ps and/or MnS- Cs for network and service management), NFs (e.g., NWDAF 562 and/or any other NFs/NEs discussed herein), SON functions, network and service optimization tools/functions, SLS assurance functions, human operators, AFs 560, and/or the like.
  • MDAS in the context of SBMA enables any authorized consumer to request and receive analytics.
  • a management function may play the roles of MDA MnS-P, MDA MnS-C, other MnS-C, NWDAF consumer, and Location Management Function (LMF) service consumer, and may also interact with other non-3GPP management systems.
  • the internal business logic related to MDA leverages current and historical data related to: performance measurements (PM) as per [TS28552] and Key Performance Indicators (KPIs) as per [TS28554]; trace data, including MDT/RLF/RCEF, as per 3GPP TS 32.422 and TS 32.423; QoE and service experience data as per 3GPP TS 28.405 and 3GPP TS 28.406; analytics data offered by an NWDAF 562 as per [TS23288] including 5GC data and external web/app-based information (e.g., web crawler that provides online news) from an AF 560; alarm information and notifications as per [TS28532]; CM information and notifications; UE location information provided by LMF as per 3GPP TS 23.273; MDA reports from other MDA MnS producers; and management data from non-3GPP systems. Additionally or alternatively, the MDAF and/or the MDA internal business logic includes the MDA capability for the energy saving analysis discussed herein.
  • Analytics output from the MDA internal business logic are made available by the management functions (MDAFs) playing the role of MDA MnS-Ps to authorized consumers including, but not limited to, other MnFs, NFs/NEs, NWDAF 562, SON functions, optimization tools, and human operators.
  • Historical analytics reports may be saved and retrieved for use at later times by a MDA MnS-C, and historical analytics input (enabling) data (along with current analytics input data) may be used for analytics by MDA MnS-P.
  • Such a historical data usage may be applicable to both or one of the MDA MnS-P and MDA MnS-C side.
  • “historical data” refers to (a) historical analytics reports that have been produced in the past, and (b) historical analytics input (enabling) data that had been collected in the past.
  • the MDA process may utilize AI/ML technologies, such as any of those discussed herein.
  • An MDAF may optionally be deployed as one or more AI/ML inference function(s) in which the relevant ML entities are used for inference per the corresponding MDA capability. Specifications for MDA ML entity training to enable ML entity deployments are given in [TS28105].
  • FIG. 9 depicts an example functional framework 900 for RAN and/or NF intelligence.
  • the functional framework 900 includes a data collection function 905, which is a function that provides input data to the model training function (MTF) 910 and model inference function (MIF) 915.
  • Examples of input data may include measurements from UEs 502, RAN nodes 514, and/or additional or alternative network entities; feedback from actor 920; and/or output(s) from AI/ML model(s).
  • the input data fed to the MTF 910 is training data
  • the input data fed to the MIF 915 is inference data.
  • the MTF 910 is a function that performs the AI/ML model training, validation, and testing.
  • the MTF 910 may generate model performance metrics as part of the model testing procedure and/or as part of the model validation procedure. Examples of the model performance metrics are discussed infra.
  • the MTF 910 may also be responsible for data preparation (e.g., data preprocessing and cleaning, formatting, and transformation) based on training data delivered by the data collection function 905, if required.
  • the MTF 910 performs model deployment/updates, wherein the MTF 910 initially deploys a trained, validated, and tested AI/ML model (e.g., ML models 250 discussed previously) to the MIF 915, and/or delivers updated model(s) to the MIF 915. Examples of the model deployments and updates are discussed herein.
  • the MTF 910 corresponds to the MTF 125 of Figure 1 and/or the MLTF 1025 of Figure 10.
  • the MIF 915 is a function that provides AI/ML model inference output (e.g., statistical inferences, predictions, decisions, probabilities and/or probability distributions, actions, configurations, policies, data analytics, outcomes, optimizations, and/or the like).
  • the MIF 915 corresponds to the inference function 245 of Figure 2 and/or the inference engine 1045 of Figure 10.
  • the MIF 915 produces an inference output, which is the inferences generated or otherwise produced when the MIF 915 operates the AI/ML model using the inference data.
  • the MIF 915 provides the inference output to the actor 920. Details of inference output are use case specific and may be based on the specific type of AI/ML model being used.
  • the MIF 915 may also be responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by the data collection function 905, if required.
  • the MIF 915 may provide model performance feedback to the MTF 910 when applicable.
  • the model performance feedback may include various performance metrics (e.g., any of those discussed herein) related to producing inferences.
  • the model performance feedback may be used for monitoring the performance of the AI/ML model, when available.
  • the actor 920 is a function that receives the inference output from the MIF 915, and triggers or otherwise performs corresponding actions based on the inference output.
  • the actor 920 may trigger actions directed to other entities and/or to itself.
  • the actor 920 is a network energy saving (NES) function, a mobility robustness optimization (MRO) function, a load balancing optimization (LBO) function, handover (HO) optimization and/or conditional HO (CHO) optimization, physical cell identifier (PCI) configuration, automatic neighbor relation (ANR) management, random access channel (RACH) optimization, radio resource management (RRM) optimization, and/or some other SON function.
  • the inference output is related to NES, MRO, LBO, HO/CHO optimization, PCI configuration, ANR management, RACH optimization, RRM optimization, and/or related to some other SON function.
  • the actor 920 is one or more RAN nodes 514, UEs 502, VNE 101, PEE sensors 120, one or more NFs, and/or some other entities/elements discussed herein that perform various operations based on the output inferences.
  • the output inferences may be the VNF ECD 212 and the actor may be the VNE 101, PEE sensors 120, one or more NFs, and/or some other entities/elements discussed herein.
  • the actor 920 may also provide feedback to the data collection function 905 for storage.
  • the feedback includes information related to the actions performed by the actor 920.
  • the feedback may include any information that may be needed to derive training data (and/or testing data and/or validation data), inference data, and/or data to monitor the performance of the AI/ML model and its impact to the network through updating of KPIs, performance counters, and the like.
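The closed loop among the data collection function 905, MTF 910, MIF 915, and actor 920 described above can be sketched with trivial stand-ins. The class and function names, the record format, and the "mean label" model below are all hypothetical simplifications, not any specified interface:

```python
class DataCollection:
    """Stand-in for function 905: provides training and inference data."""
    def __init__(self):
        self.records = []  # hypothetical format: {"value": float, "label": float | None}
    def training_data(self):
        return [r for r in self.records if r["label"] is not None]
    def inference_data(self):
        return [r for r in self.records if r["label"] is None]

def model_training(training_data):
    """MTF stand-in: 'train' a trivial model (mean of labels) and deploy it."""
    labels = [r["label"] for r in training_data]
    mean = sum(labels) / len(labels)
    return lambda record: mean  # deployed model

def model_inference(model, inference_data):
    """MIF stand-in: run the deployed model on inference data."""
    return [model(r) for r in inference_data]

def actor(inference_output, data_collection):
    """Actor stand-in: act on inferences, then feed results back for storage."""
    for out in inference_output:
        data_collection.records.append({"value": out, "label": out})

dc = DataCollection()
dc.records = [{"value": 1.0, "label": 2.0},
              {"value": 3.0, "label": 4.0},
              {"value": 5.0, "label": None}]
model = model_training(dc.training_data())        # train + deploy
outputs = model_inference(model, dc.inference_data())
actor(outputs, dc)                                # feedback closes the loop
```

The feedback record appended by the actor becomes available as future training data, mirroring the storage path from actor 920 back to data collection 905.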
  • Figure 10 depicts an example AI/ML-assisted communication network, which includes communication between an ML function (MLF) 1002 and an MLF 1004. More specifically, as described in further detail below, AI/ML models may be used or leveraged to facilitate wired and/or over-the-air communication between the MLF 1002 and the MLF 1004.
  • the MLF 1002 and the MLF 1004 operate in a manner consistent with 3GPP technical specifications and/or technical reports for 5G and/or 6G systems.
  • the communication mechanisms between the MLF 1002 and the MLF 1004 include any suitable access technologies and/or RATs, such as any of those discussed herein. Additionally, the communication mechanisms in Figure 10 may be part of, or operate concurrently with networks 500, 600, 708, and/or some other network described herein, and/or concurrently with deployments 100, 200, and/or some other deployment described herein.
  • the MLFs 1002, 1004 may correspond to any of the entities/elements discussed herein.
  • the MLF 1002 corresponds to an MnF and/or MnS-P and the MLF 1004 corresponds to another MnF and/or an MnS-C, or vice versa.
  • the MLF 1002 corresponds to a set of the MLFs of Figures 1-4, 8, and/or 9 and the MLF 1004 corresponds to a different set of the MLFs of Figures 1-4, 8, and/or 9.
  • the sets of MLFs may be mutually exclusive, or some or all of the MLFs in each of the sets of MLFs may overlap or be shared.
  • the MLF 1002 and/or the MLF 1004 is/are implemented by respective UEs (e.g., UE 502, UE 602). Additionally or alternatively, the MLF 1002 and/or the MLF 1004 is/are implemented by a same UE or different UEs. In another example, the MLF 1002 and/or the MLF 1004 is/are implemented by respective RANs (e.g., RAN 504) or respective NANs (e.g., AP 506, NAN 514, NAN 604). Additionally or alternatively, the MLF 1002 is implemented as or by a UE and the MLF 1004 is implemented as or by a RAN node, or vice versa.
  • the MLF 1002 and the MLF 1004 include various AI/ML-related components, functions, elements, or entities, which may be implemented as hardware, software, firmware, and/or some combination thereof.
  • one or more of the AI/ML-related elements are implemented as part of the same hardware (e.g., IC, chip, or multi-processor chip), software (e.g., program, process, engine, and/or the like), or firmware as at least one other component, function, element, or entity.
  • the AI/ML-related elements of MLF 1002 may be the same or similar to the AI/ML-related elements of MLF 1004.
  • description of the various elements is provided from the point of view of the MLF 1002; however, it will be understood that such description applies to like named/numbered elements of MLF 1004, unless explicitly stated otherwise.
  • the data repository 1015 is responsible for data collection and storage.
  • the data repository 1015 may collect and store RAN configuration parameters, NF configuration parameters, measurement data, RLM data, key performance indicators (KPIs), SLAs, model performance metrics, knowledge base data, ground truth data, ML model parameters, hyperparameters, and/or other data for model training, update, and inference.
  • the collected data is stored into the repository 1015, and the stored data can be discovered and extracted by other elements from the data repository 1015.
  • the inference data selection/filter 1050 may retrieve data from the data repository 1015 and provide that data to the inference engine 1045 for generating/determining inferences.
  • the MLF 1002 is configured to discover and request data from the data repository 1015 in the MLF 1004, and/or vice versa.
  • the data repository 1015 of the MLF 1002 may be communicatively coupled with the data repository 1015 of the MLF 1004 such that the respective data repositories 1015 may share collected data with one another.
  • the MLF 1002 and/or MLF 1004 is/are configured to discover and request data from one or more external sources and/or data storage systems/devices.
  • the training data selection/filter 1020 is configured to generate training, validation, and testing datasets for ML training (MLT) (or ML model training). One or more of these datasets may be extracted or otherwise obtained from the data repository 1015. Data may be selected/filtered based on the specific AI/ML model to be trained. Data may optionally be transformed, augmented, and/or pre-processed (e.g., normalized) before being loaded into datasets.
  • the training data selection/filter 1020 may label data in datasets for supervised learning, or the data may remain unlabeled for unsupervised learning. The produced datasets may then be fed into the MLT function (MLTF) 1025.
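The dataset-generation role of the training data selection/filter 1020 can be sketched as a shuffle-and-split over collected samples. The function name, split fractions, and seed below are hypothetical choices, not specified behavior:

```python
import random

def split_dataset(samples, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle samples and split them into training/validation/testing datasets."""
    rng = random.Random(seed)   # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

samples = list(range(100))                # stand-in for extracted repository data
train, val, test = split_dataset(samples)
```

Any labeling (for supervised learning) or transformation/augmentation step would be applied to the samples before or after this split, depending on the AI/ML model to be trained.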
  • the MLTF 1025 is responsible for training and updating (e.g., tuning and/or re-training) AI/ML models.
  • a selected model (or set of models) may be trained using the fed-in datasets (including training, validation, testing) from the training data selection/filtering 1020.
  • the MLTF 1025 produces trained and tested AI/ML models that are ready for deployment.
  • the produced trained and tested models can be stored in a model repository 1035.
  • the MLTF 1025 corresponds to the MTF 125 and/or model training function 910.
  • the model repository 1035 is responsible for storage and exposure of AI/ML models (both trained and un-trained).
  • Various model data can be stored in the model repository 1035.
  • the model data can include, for example, trained/updated model(s), model parameters, hyperparameters, and/or model metadata, such as model performance metrics, hardware platform/configuration data, model execution parameters/conditions, and/or the like.
  • the model data can also include inferences made when operating the ML model. Examples of AI/ML models and other ML model aspects are discussed herein.
  • the model data may be discovered and requested by other MLF components (e.g., the training data selection/filter 1020 and/or the MLTF 1025).
  • the MLF 1002 can discover and request model data from the model repository 1035 of the MLF 1004. Additionally or alternatively, the MLF 1004 can discover and/or request model data from the model repository 1035 of the MLF 1002. In some examples, the MLF 1004 may configure models, model parameters, hyperparameters, model execution parameters/conditions, and/or other ML model aspects in the model repository 1035 of the MLF 1002.
  • the model management function 1040 is responsible for management of the AI/ML model produced by the MLTF 1025. Such management functions may include deployment of a trained model, monitoring ML entity performance, reporting ML entity validation and/or performance data, and/or the like. In model deployment, the model management 1040 may allocate and schedule hardware and/or software resources for inference, based on received trained and tested models.
  • the term “inference” refers to the process of using trained AI/ML model(s) to generate statistical inferences, predictions, decisions, probabilities and/or probability distributions, actions, configurations, policies, data analytics, outcomes, optimizations, and/or the like based on new, unseen data (e.g., “input inference data”).
  • the inference process can include feeding input inference data into the ML model (e.g., inference engine 1045), forward passing the input inference data through the ML model’s architecture/topology wherein the ML model performs computations on the data using its learned parameters (e.g., weights and biases), and predictions output.
  • the inference process can include data transformation before the forward pass, wherein the input inference data is pre-processed or transformed to match the format required by the ML model.
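The inference steps described above (pre-transformation, then a forward pass through learned parameters) can be illustrated with a one-feature sketch. The normalization constants and the (weight, bias) pair below are hypothetical learned values:

```python
def preprocess(raw_value, mean, std):
    """Transform raw input inference data into the scale the model expects."""
    return (raw_value - mean) / std

def forward(model_params, x):
    """Minimal forward pass: weighted sum plus bias using learned parameters."""
    w, b = model_params
    return w * x + b

model = (2.0, 0.5)                          # hypothetical learned (weight, bias)
raw = 12.0                                  # new, unseen input inference data
x = preprocess(raw, mean=10.0, std=2.0)     # data transformation before the pass
prediction = forward(model, x)              # prediction output
```

In a real deployment the forward pass would traverse the full model architecture/topology; the single affine layer here only stands in for that computation.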
  • the model management 1040 may decide to terminate the running model, start model re-training and/or tuning, select another model, and/or the like.
  • the model management 1040 of the MLF 1004 may be able to configure model management policies in the MLF 1002, and vice versa.
  • the inference data selection/filter 1050 is responsible for generating datasets for model inference at the inference engine 1045, as described infra. For example, inference data may be extracted from the data repository 1015. The inference data selection/filter 1050 may select and/or filter the data based on the deployed AI/ML model. Data may be transformed, augmented, and/or pre-processed in a same or similar manner as the transformation, augmentation, and/or pre-processing described w.r.t. the training data selection/filter 1020. The produced inference dataset may be fed into the inference engine 1045.
  • the inference engine 1045 is responsible for executing inference as described herein.
  • the inference engine 1045 may consume the inference dataset provided by the inference data selection/filter 1050, and generate one or more inferences.
  • the inferences may be or include, for example, statistical inferences, predictions, decisions, probabilities and/or probability distributions, actions, configurations, policies, data analytics, outcomes, optimizations, and/or the like.
  • the inference(s)/outcome(s) may be provided to the performance measurement function 1030.
  • the performance measurement function 1030 is configured to measure model performance metrics (e.g., accuracy, momentum, precision, quantile, recall/sensitivity, model bias, run-time latency, resource consumption, and/or other suitable metrics/measures, such as any of those discussed herein) of deployed and executing models based on the inference(s) for monitoring purposes.
  • Model performance data may be stored in the data repository 1015 and/or reported according to the validation reporting mechanisms discussed herein.
  • the performance metrics that may be measured and/or predicted by the performance measurement function 1030 may be based on the particular AI/ML task and the other inputs/parameters of the ML entity.
  • the performance metrics may include model-based metrics and platform-based metrics.
  • the model-based metrics are metrics related to the performance of the model itself and/or without considering the underlying hardware platform.
  • the platform-based metrics are metrics related to the performance of the underlying hardware platform when operating the ML model.
  • the model-based metrics may be based on the particular type of AI/ML model and/or the AI/ML domain.
  • regression-related metrics may be predicted for regression-based ML models.
  • regression-related metrics include error value, mean error, mean absolute error (MAE), mean reciprocal rank (MRR), mean squared error (MSE), root MSE (RMSE), correlation coefficient (R), coefficient of determination (R²), Golbraikh and Tropsha criterion, and/or other like regression-related metrics such as those discussed in Naser et al., Insights into Performance Fitness and Error Metrics for Machine Learning, arXiv:2006.00887v1 (17 May 2020) (“[Naser]”).
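Several of the regression-related metrics listed above follow directly from their textbook definitions; a minimal sketch (using standard formulas, not any implementation from this disclosure):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE, and R² from true and predicted values."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - (mse * n) / ss_tot                    # 1 - SS_res / SS_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

m = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```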
  • correlation-related metrics may be predicted for classification-based ML models.
  • correlation-related metrics include accuracy, precision (also referred to as positive predictive value (PPV)), mean average precision (mAP), negative predictive value (NPV), recall (also referred to as true positive rate (TPR) or sensitivity), specificity (also referred to as true negative rate (TNR) or selectivity), false positive rate, false negative rate, F score (e.g., F1 score, F2 score, Fβ score, and/or the like), Matthews Correlation Coefficient (MCC), markedness, receiver operating characteristic (ROC), area under the ROC curve (AUC), distance score, and/or other like correlation-related metrics such as those discussed in [Naser].
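The confusion-matrix-derived metrics in that list can likewise be sketched from their standard definitions for a binary classifier (this is textbook arithmetic, not a specified implementation):

```python
def classification_metrics(y_true, y_pred):
    """Compute precision, recall (TPR), F1, and accuracy for binary labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    precision = tp / (tp + fp)                           # PPV
    recall = tp / (tp + fn)                              # TPR / sensitivity
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    accuracy = (tp + tn) / len(pairs)
    return {"precision": precision, "recall": recall, "F1": f1, "accuracy": accuracy}

m = classification_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```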
  • Additional or alternative model-based metrics may also be predicted such as, for example, cumulative gain (CG), discounted CG (DCG), normalized DCG (NDCG), signal-to-noise ratio (SNR), peak SNR (PSNR), structural similarity (SSIM), Intersection over Union (IoU), perplexity, bilingual evaluation understudy (BLEU) score, inception score, Wasserstein metric, Frechet inception distance (FID), string metric, edit distance, Levenshtein distance, Damerau-Levenshtein distance, number of evaluation instances (e.g., iterations, epochs, or episodes), learning rate (e.g., the speed at which the algorithm reaches (converges to) optimal weights), learning rate decay (or weight decay), number and/or type of computations, number and/or type of multiply and accumulates (MACs), number and/or type of multiply adds (MAdds) operations, and/or other like performance metrics related to the performance of the ML model.
  • Examples of the platform-based metrics include latency, response time, throughput (e.g., rate of processing work of a processor or platform/system), availability and/or reliability, power consumption (e.g., performance per Watt, and/or the like), transistor count, execution time (e.g., amount of time to obtain an inference, and/or the like), memory footprint, memory utilization, processor utilization, processor time, number of computations, instructions per second (IPS), floating point operations per second (FLOPS), and/or other like performance metrics related to the performance of the ML model and/or the underlying hardware platform to be used to operate the ML model.
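Two of the platform-based metrics above (run-time latency per inference and throughput) can be measured with a simple wall-clock sketch; the function name and the stand-in "model" are hypothetical:

```python
import time

def measure_inference(model_fn, inputs):
    """Measure average latency and throughput of an inference function."""
    inputs = list(inputs)
    start = time.perf_counter()
    outputs = [model_fn(x) for x in inputs]
    elapsed = time.perf_counter() - start
    latency = elapsed / len(inputs)      # average seconds per inference
    throughput = len(inputs) / elapsed   # inferences per second
    return outputs, latency, throughput

# hypothetical "model": doubles its input
outputs, latency, throughput = measure_inference(lambda x: x * 2, range(1000))
```

Memory footprint, processor utilization, FLOPS, and the other listed platform metrics would come from platform-specific counters rather than wall-clock timing.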
  • in some examples, proxy metrics (e.g., a metric or attribute used as a stand-in or substitute for another metric or attribute) can be used for predicting the ML model performance.
  • the total, mean, and/or some other distribution of such metrics may be predicted and/or measured using any suitable data collection and/or measurement mechanism(s).
  • FIG. 11 shows an example process 1100 to be performed by a producer of performance assurance MnS 150.
  • the process 1100 includes receiving, from an MTF 125, a request to collect data from a VNF 103 within VNE 101 (1101); and sending VNE ECD 122 to the MTF 125 that includes an indication of a measurement associated with the VNF 103, such as VRUD 112 of the VNF 103 (1102).
  • Figure 12 shows an example process 1200 to be performed by an MTF 125.
  • the process 1200 includes requesting a producer of performance assurance MnS 150 to create a performance measurement (PM) job to collect measurement data from VNF instance(s) 103 and VNE 101 (1201); receiving, from the producer of performance assurance MnS 150, VNF measurement data 112 and VNE measurement data 122 (1202); performing model training based on the VNF measurement data 112 and the VNE measurement data 122 (1203); and deploying the model to MDAF 851 (1204).
  • processes 1100-1200 can be arranged in different orders, one or more of the depicted operations may be combined and/or divided/split into multiple operations, depicted operations may be omitted, and/or additional or alternative operations may be included in any of the depicted processes.
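The sequence of operations 1201-1204 can be pictured with a minimal sketch. The class names (PerformanceAssuranceMnSP, MTF, MDAF), the data values, and the one-parameter linear model fitted here are hypothetical stand-ins, not any specified interface:

```python
class PerformanceAssuranceMnSP:
    """Stand-in producer of performance assurance MnS: runs PM jobs, returns data."""
    def __init__(self, vnf_vrud, vne_ecd):
        self._vnf_vrud, self._vne_ecd = vnf_vrud, vne_ecd
        self.jobs = []
    def create_pm_job(self, targets):
        self.jobs.append(targets)             # step 1201: PM job creation
    def collect(self):
        return self._vnf_vrud, self._vne_ecd  # step 1202: measurement data

class MTF:
    """Stand-in model training function."""
    def train(self, vrud, ecd):
        # step 1203: least-squares fit of ECD as w * (virtual resource usage)
        w = sum(e * v for v, e in zip(vrud, ecd)) / sum(v * v for v in vrud)
        return lambda usage: w * usage        # trained model
    def deploy(self, model, mdaf):
        mdaf.model = model                    # step 1204: deployment to MDAF

class MDAF:
    model = None

producer = PerformanceAssuranceMnSP(vnf_vrud=[1.0, 2.0, 3.0],
                                    vne_ecd=[2.0, 4.0, 6.0])
mtf, mdaf = MTF(), MDAF()
producer.create_pm_job(["VNF-103", "VNE-101"])
vrud, ecd = producer.collect()
mtf.deploy(mtf.train(vrud, ecd), mdaf)
```

After deployment, the MDAF-hosted model can map a new virtual resource usage value to a predicted energy consumption value.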
  • Additional examples of the presently described methods, devices, systems, and networks discussed herein include the following, non-limiting example implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
  • Example 1 includes a method of operating a model training function, the method comprising: receiving, from one or more performance assurance management service producers (MnS-Ps), virtualized network function (VNF) measurement data related to one or more VNF instances and virtualized network entity (VNE) measurement data related to a VNE; training a machine learning (ML) model to predict, based on the VNF measurement data and the VNE measurement data, VNF energy consumption data (ECD) for respective VNF instances of the one or more VNF instances; and deploying the ML model to a model inference function to generate predictions of VNF ECD.
  • Example 2 includes the method of example 1 and/or some other example(s) herein, wherein the method includes: sending, to respective performance assurance MnS-Ps, a request to create a performance management job to collect the VNF measurement data from the one or more VNF instances and the VNE measurement data from the VNE.
  • Example 3 includes the method of examples 1-2 and/or some other example(s) herein, wherein the VNF measurement data includes virtual resource usage data (VRUD) for individual VNF instances of the one or more VNF instances.
  • Example 4 includes the method of example 3 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual compute usage data, virtual memory usage data, and virtual disk usage data.
  • Example 5 includes the method of example 4 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual network usage data.
  • Example 6 includes the method of examples 1-5 and/or some other example(s) herein, wherein the VNE measurement data includes VNE ECD collected from one or more power, energy, environmental (PEE) sensors.
  • Example 7 includes the method of example 6 and/or some other example(s) herein, wherein the VNE ECD is generated by mapping energy consumption metrics received from the one or more PEE sensors to a managed element representing the VNE.
  • Example 8 includes the method of examples 1-7 and/or some other example(s) herein, wherein the VNF measurement data and VNE measurement data are collected at a same interval.
  • Example 9 includes the method of example 8 and/or some other example(s) herein, wherein the VNF measurement data and VNE measurement data are time synchronized.
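Collection at the same interval with time synchronization (examples 8-9) amounts to joining the two measurement series on shared timestamps. The record layout below, pairs of (timestamp, value) at a hypothetical 900-second interval, is an illustrative simplification:

```python
def align_by_timestamp(vnf_series, vne_series):
    """Join VNF and VNE measurements on shared collection timestamps,
    keeping only timestamps present in both series."""
    vne_by_ts = dict(vne_series)
    return [(ts, vnf_val, vne_by_ts[ts])
            for ts, vnf_val in vnf_series if ts in vne_by_ts]

# hypothetical 900 s (15 min) collection intervals
vnf = [(0, 0.2), (900, 0.5), (1800, 0.7)]       # (timestamp, mean vCPU usage)
vne = [(0, 120.0), (900, 180.0), (2700, 90.0)]  # (timestamp, VNE power draw)
aligned = align_by_timestamp(vnf, vne)
```

Each aligned triple pairs a VNF resource-usage sample with the VNE energy sample taken at the same time, which is what allows the VRUD to serve as features and the VNE ECD as labels in example 10.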
  • Example 10 includes the method of examples 1-9 and/or some other example(s) herein, wherein the training includes: training the ML model using the VRUD of the respective VNF instances as data features and the VNE ECD as data labels to compute model parameters of the ML model.
  • Example 11 includes a method of operating a model inference function, the method comprising: receiving, from a machine learning (ML) model training function, an ML model trained to predict virtualized network function (VNF) energy consumption data (ECD); receiving, from one or more performance assurance management service producers (MnS-Ps), VNF measurement data related to one or more VNF instances and measurement data related to the VNF ECD; and generating, using the trained ML model, predicted VNF ECD for respective VNF instances of the one or more VNF instances based on the VNF measurement data.
  • ML machine learning
  • MnS-Ps performance assurance management service producers
  • Example 12 includes the method of example 11 and/or some other example(s) herein, wherein the method includes: sending, to respective performance assurance MnS-Ps of the one or more performance assurance MnS-Ps, a request to create a performance management job to collect the VNF measurement data from the one or more VNF instances.
  • Example 13 includes the method of examples 11-12 and/or some other example(s) herein, wherein the VNF measurement data includes virtual resource usage data (VRUD) for individual VNF instances of the one or more VNF instances.
  • Example 14 includes the method of example 13 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual compute usage data, virtual memory usage data, and virtual disk usage data.
  • Example 15 includes the method of example 14 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual network usage data.
  • Example 16a includes the method of examples 11-15 and/or some other example(s) herein, wherein a KPI is generated from mapping to the predicted VNF ECD.
  • Example 16b includes the method of examples 11-15 and/or some other example(s) herein, wherein a KPI is generated for the predicted VNF ECD.
  • Example 17 includes the method of examples 16a-16b and/or some other example(s) herein, wherein the KPI is a measure of energy consumption of the respective VNF instances.
  • Example 18 includes the method of examples 1-17 and/or some other example(s) herein, wherein the predicted VNF ECD is expressed in kilowatt-hours.
  • Example 19 includes the method of examples 1-18 and/or some other example(s) herein, wherein the model inference function is operated by a management data analytic function (MDAF).
  • Example 20 includes the method of examples 1-19 and/or some other example(s) herein, wherein the model training function is a machine learning training MnS-P and the model inference function is a machine learning training management service consumer (MnS-C).
  • Example 21 includes the method of examples 1-19 and/or some other example(s) herein, wherein the model training function is a machine learning training function (MLTF) contained by a Network Data Analytics Function (NWDAF).
  • Example 22 includes an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-21, or any other method or process described herein.
  • Example 23 includes one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-21, or any other method or process described herein.
  • Example 24 includes an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-21, or any other method or process described herein.
  • Example 25 includes a method, technique, or process as described in or related to any of examples 1-21, or portions or parts thereof.
  • Example 26 includes an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, technique, or process as described in or related to any of examples 1-21, or portions thereof.
  • Example 27 includes a signal as described in or related to any of examples 1-21, or portions or parts thereof.
  • Example 28 includes a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-21, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 29 includes a signal encoded with data as described in or related to any of examples 1-21, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 30 includes a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-21, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 31 includes an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, technique, or process as described in or related to any of examples 1-21, or portions thereof.
  • Example 32 includes a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, technique, or process as described in or related to any of examples 1-21, or portions thereof.
  • Example 33 includes a signal in a wireless network as shown and described herein.
  • Example 34 includes a method of communicating in a wireless network as shown and described herein.
  • Example 35 includes a system for providing wireless communication as shown and described herein.
  • Example 36 includes a device for providing wireless communication as shown and described herein.
  • ETSI GR NFV 003, ETSI ES 202 336-12 (“[ES202336-12]”), and 3GPP TS 28.500 (“[TS28500]”) may also be applicable to the examples and embodiments discussed herein.
  • the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
  • the phrase “X(s)” means one or more X or a set of X.
  • the description may use the phrases “in an embodiment,” “in some embodiments,” “in one implementation,” “in some implementations,” “in some examples,” and the like, each of which may refer to one or more of the same or different embodiments, implementations, and/or examples.
  • the terms “comprising,” “including,” “having,” and the like, as used with respect to the present disclosure are synonymous.
  • master and slave at least in some examples refers to a model of asymmetric communication or control where one device, process, element, or entity (the “master”) controls one or more other device, process, element, or entity (the “slaves”).
  • “master” and “slave” are used in this disclosure only for their technical meaning.
  • master or “grandmaster” may be substituted with any of the following terms: “main”, “source”, “primary”, “initiator”, “requestor”, “transmitter”, “host”, “maestro”, “controller”, “provider”, “producer”, “client”, “mix”, “parent”, “chief”, “manager”, “reference” (e.g., as in “reference clock” or the like), and/or the like.
  • slave may be substituted with any of the following terms: “receiver”, “secondary”, “subordinate”, “replica”, “target”, “responder”, “device”, “performer”, “agent”, “standby”, “consumer”, “peripheral”, “follower”, “server”, “child”, “helper”, “worker”, “node”, and/or the like.
  • Coupled may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
  • directly coupled may mean that two or more elements are in direct contact with one another.
  • communicatively coupled may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
  • establish or “establishment” at least in some examples refers to (partial or in full) acts, tasks, operations, and the like, related to bringing, or readying the bringing of, something into existence either actively or passively (e.g., exposing a device identity or entity identity). Additionally or alternatively, the term “establish” or “establishment” at least in some examples refers to (partial or in full) acts, tasks, operations, and the like, related to initiating, starting, or warming communication or initiating, starting, or warming a relationship between two entities or elements (e.g., establish a session and the like).
  • the term “establish” or “establishment” at least in some examples refers to initiating something to a state of working readiness.
  • the term “established” at least in some examples refers to a state of being operational or ready for use (e.g., full establishment).
  • any definition for the term “establish” or “establishment” defined in any specification or standard can be used for purposes of the present disclosure and such definitions are not disavowed by any of the aforementioned definitions.
  • the term “obtain” at least in some examples refers to (partial or in full) acts, tasks, operations, and the like, of intercepting, movement, copying, retrieval, or acquisition (e.g., from a memory, an interface, or a buffer), on the original packet stream or on a copy (e.g., a new instance) of the packet stream.
  • Other aspects of obtaining or receiving may involve instantiating, enabling, or controlling the ability to obtain or receive a stream of packets (or the following parameters and templates or template values).
  • the term “receipt” at least in some examples refers to any action (or set of actions) involved with receiving or obtaining an object, data, data unit, and the like, and/or the fact of the object, data, data unit, and the like being received.
  • the term “receipt” at least in some examples refers to an object, data, data unit, and the like, being pushed to a device, system, element, and the like (e.g., often referred to as a push model), pulled by a device, system, element, and the like (e.g., often referred to as a pull model), and/or the like.
  • element at least in some examples refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, engines, components, and so forth, or combinations thereof.
  • entity at least in some examples refers to a distinct element of a component, architecture, platform, device, and/or system. Additionally or alternatively, the term “entity” at least in some examples refers to information transferred as a payload.
  • the term “measurement” at least in some examples refers to the observation and/or quantification of attributes of an object, event, or phenomenon. Additionally or alternatively, the term “measurement” at least in some examples refers to a set of operations having the object of determining a measured value or measurement result, and/or the actual instance or execution of operations leading to a measured value. Additionally or alternatively, the term “measurement” at least in some examples refers to data recorded during testing.
  • the term “metric” at least in some examples refers to a quantity produced in an assessment of a measured value. Additionally or alternatively, the term “metric” at least in some examples refers to data derived from a set of measurements.
  • the term “metric” at least in some examples refers to set of events combined or otherwise grouped into one or more values. Additionally or alternatively, the term “metric” at least in some examples refers to a combination of measures or set of collected data points. Additionally or alternatively, the term “metric” at least in some examples refers to a standard definition of a quantity, produced in an assessment of performance and/or reliability of the network, which has an intended utility and is carefully specified to convey the exact meaning of a measured value.
  • signal at least in some examples refers to an observable change in a quality and/or quantity. Additionally or alternatively, the term “signal” at least in some examples refers to a function that conveys information about an object, event, or phenomenon. Additionally or alternatively, the term “signal” at least in some examples refers to any time-varying voltage, current, or electromagnetic wave that may or may not carry information.
  • digital signal at least in some examples refers to a signal that is constructed from a discrete set of waveforms of a physical quantity so as to represent a sequence of discrete values.
  • identifier at least in some examples refers to a value, or a set of values, that uniquely identify an identity in a certain scope. Additionally or alternatively, the term “identifier” at least in some examples refers to a sequence of characters that identifies or otherwise indicates the identity of a unique object, element, or entity, or a unique class of objects, elements, or entities. Additionally or alternatively, the term “identifier” at least in some examples refers to a sequence of characters used to identify or refer to an application, program, session, object, element, entity, variable, set of data, and/or the like.
  • sequence of characters mentioned previously at least in some examples refers to one or more names, labels, words, numbers, letters, symbols, and/or any combination thereof.
  • identifier at least in some examples refers to a name, address, label, distinguishing index, and/or attribute. Additionally or alternatively, the term “identifier” at least in some examples refers to an instance of identification.
  • persistent identifier at least in some examples refers to an identifier that is reused by a device or by another device associated with the same person or group of persons for an indefinite period.
  • identification at least in some examples refers to a process of recognizing an identity as distinct from other identities in a particular scope or context, which may involve processing identifiers to reference an identity in an identity database.
  • app identifier refers to an identifier that can be mapped to a specific application or application instance.
  • an “application identifier” at least in some examples refers to an identifier that can be mapped to a specific application traffic detection rule.
  • circuitry at least in some examples refers to a circuit or system of multiple circuits configured to perform a particular function in an electronic device.
  • the circuit or system of circuits may be part of, or include one or more hardware components, such as a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), programmable logic controller (PLC), single-board computer (SBC), system on chip (SoC), system in package (SiP), multi-chip package (MCP), digital signal processor (DSP), and the like, that are configured to provide the described functionality.
  • circuitry may also refer to a combination of one or more hardware elements with the program code used to carry out the functionality of that program code. Some types of circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. Such a combination of hardware elements and program code may be referred to as a particular type of circuitry.
  • processor circuitry at least in some examples refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data.
  • processor circuitry at least in some examples refers to one or more application processors, one or more baseband processors, a physical CPU, a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes.
  • application circuitry and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
  • memory and/or “memory circuitry” at least in some examples refers to one or more hardware devices for storing data, including random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), conductive bridge random access memory (CB-RAM), spin transfer torque (STT)-MRAM, phase change RAM (PRAM), core memory, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), flash memory, non-volatile RAM (NVRAM), magnetic disk storage mediums, optical storage mediums, flash memory devices, or other machine-readable mediums for storing data.
  • computer-readable medium includes, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
  • interface circuitry at least in some examples refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices.
  • interface circuitry at least in some examples refers to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
  • infrastructure processing unit or “IPU” at least in some examples refers to an advanced networking device with hardened accelerators and network connectivity (e.g., Ethernet or the like) that accelerates and manages infrastructure functions using tightly coupled, dedicated, programmable cores.
  • an IPU offers full infrastructure offload and provides an extra layer of security by serving as a control point of a host for running infrastructure applications.
  • An IPU is capable of offloading the entire infrastructure stack from the host and can control how the host attaches to this infrastructure. This gives service providers an extra layer of security and control, enforced in hardware by the IPU.
  • the term “device” at least in some examples refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity.
  • the term “controller” at least in some examples refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.
  • the term “scheduler” at least in some examples refers to an entity or element that assigns resources (e.g., processor time, network links, memory space, and/or the like) to perform tasks.
  • network scheduler at least in some examples refers to a node, element, or entity that manages network packets in transmit and/or receive queues of one or more protocol stacks of network access circuitry (e.g., a network interface controller (NIC), baseband processor, and the like).
  • network scheduler at least in some examples can be used interchangeably with the terms “packet scheduler”, “queueing discipline” or “qdisc”, and/or “queueing algorithm”.
  • compute node or “compute device” at least in some examples refers to an identifiable entity implementing an aspect of computing operations, whether part of a larger system, distributed collection of systems, or a standalone apparatus.
  • a compute node may be referred to as a “computing device”, “computing system”, or the like, whether in operation as a client, server, or intermediate entity.
  • Specific implementations of a compute node may be incorporated into a server, base station, gateway, road side unit, on-premise unit, user equipment, end consuming device, appliance, or the like.
  • the term “node” at least in some examples refers to and/or is interchangeable with the terms “device”, “component”, “sub-system”, and/or the like.
  • the term “computer system” at least in some examples refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the terms “computer system” and/or “system” at least in some examples refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” at least in some examples refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
  • user equipment at least in some examples refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network.
  • the term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, station, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, and the like.
  • user equipment or “UE” includes any type of wireless/wired device or any computing device including a wireless communications interface.
  • Examples of UEs, client devices, and the like include desktop computers, workstations, laptop computers, mobile data terminals, smartphones, tablet computers, wearable devices, machine-to-machine (M2M) devices, machine-type communication (MTC) devices, Internet of Things (loT) devices, embedded systems, sensors, autonomous vehicles, drones, robots, in-vehicle infotainment systems, instrument clusters, onboard diagnostic devices, dashtop mobile equipment, electronic engine management systems, electronic/engine control units/modules, microcontrollers, control module, server devices, network appliances, head-up display (HUD) devices, helmet-mounted display devices, augmented reality (AR) devices, virtual reality (VR) devices, mixed reality (MR) devices, and/or other like systems or devices.
  • station at least in some examples refers to a logical entity that is a singly addressable instance of a medium access control (MAC) and physical layer (PHY) interface to the wireless medium (WM).
  • wireless medium at least in some examples refers to the medium used to implement the transfer of protocol data units (PDUs) between peer physical layer (PHY) entities of a wireless local area network (LAN).
  • network element at least in some examples refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services.
  • network element may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, network access node (NAN), base station, access point (AP), RAN device, RAN node, gateway, server, network appliance, network function (NF), virtualized NF (VNF), and/or the like.
  • network controller at least in some examples refers to a functional block that centralizes some or all of the control and management functionality of a network domain and may provide an abstract view of the network domain to other functional blocks via an interface.
  • network access node at least in some examples refers to a network element in a radio access network (RAN) responsible for the transmission and reception of radio signals in one or more cells or coverage areas to or from a UE or station.
  • a “network access node” or “NAN” can have an integrated antenna or may be connected to an antenna array by feeder cables.
  • a “network access node” or “NAN” includes specialized digital signal processing, network function hardware, and/or compute hardware to operate as a compute node.
  • a “network access node” or “NAN” may be split into multiple functional blocks operating in software for flexibility, cost, and performance.
  • a “network access node” or “NAN” may be a base station (e.g., an evolved Node B (eNB) or a next generation Node B (gNB)), an access point and/or wireless network access point, router, switch, hub, radio unit or remote radio head, Transmission Reception Point (TRP), a gateway device (e.g., Residential Gateway, Wireline 5G Access Network, Wireline 5G Cable Access Network, Wireline BBF Access Network, and the like), network appliance, and/or some other network access hardware.
  • the term “access point” or “AP” at least in some examples refers to an entity that contains one station (STA) and provides access to the distribution services, via the wireless medium (WM) for associated STAs.
  • An AP comprises a STA and a distribution system access function (DSAF).
  • cell at least in some examples refers to a radio network object that can be uniquely identified by a UE from an identifier (e.g., cell ID) that is broadcasted over a geographical area from a network access node (NAN). Additionally or alternatively, the term “cell” at least in some examples refers to a geographic area covered by a NAN.
  • serving cell at least in some examples refers to a primary cell (PCell) for a UE in a connected mode or state (e.g., RRC_CONNECTED) and not configured with carrier aggregation (CA) and/or dual connectivity (DC).
  • the term “serving cell” at least in some examples refers to a set of cells comprising zero or more special cells and one or more secondary cells for a UE in a connected mode or state (e.g., RRC_CONNECTED) and configured with CA.
  • the term “primary cell” or “PCell” at least in some examples refers to a Master Cell Group (MCG) cell, operating on a primary frequency, in which a UE either performs an initial connection establishment procedure or initiates a connection re-establishment procedure.
  • Secondary Cell or “SCell” at least in some examples refers to a cell providing additional radio resources on top of a special cell (SpCell) for a UE configured with CA.
  • the term “special cell” or “SpCell” at least in some examples refers to a PCell for non-DC operation or refers to a PCell of an MCG or a PSCell of an SCG for DC operation.
  • the term “Master Cell Group” or “MCG” at least in some examples refers to a group of serving cells associated with a “Master Node” comprising a SpCell (PCell) and optionally one or more SCells.
  • the term “Secondary Cell Group” or “SCG” at least in some examples refers to a subset of serving cells comprising a Primary SCell (PSCell) and zero or more SCells for a UE configured with DC.
  • Primary SCG Cell refers to the SCG cell in which a UE performs random access when performing a reconfiguration with sync procedure for DC operation.
  • the term “handover” at least in some examples refers to the transfer of a user's connection from one radio channel to another (can be the same or different cell). Additionally or alternatively, the term “handover” at least in some examples refers to the process in which a radio access network changes the radio transmitters, radio access mode, and/or radio system used to provide the bearer services, while maintaining a defined bearer service QoS.
  • Master Node or “MN” at least in some examples refers to a NAN that provides control plane connection to a core network.
  • Secondary Node or “SN” at least in some examples refers to a NAN providing resources to the UE in addition to the resources provided by an MN and/or a NAN with no control plane connection to a core network.
  • E-UTRAN NodeB or “eNB” refers to a RAN node providing E-UTRA user plane (e.g., PDCP, RLC, MAC, PHY) and control plane (e.g., RRC) protocol terminations towards a UE, and connected via an S1 interface to the Evolved Packet Core (EPC).
  • Two or more eNBs are interconnected with each other (and/or with one or more en-gNBs) by means of an X2 interface.
  • next generation eNB or “ng-eNB” at least in some examples refers to a RAN node providing E-UTRA user plane and control plane protocol terminations towards a UE, and connected via the NG interface to the 5GC.
  • Two or more ng-eNBs are interconnected with each other (and/or with one or more gNBs) by means of an Xn interface.
  • “Next Generation NodeB”, “gNodeB”, or “gNB” at least in some examples refers to a RAN node providing NR user plane and control plane protocol terminations towards a UE, and connected via the NG interface to the 5GC.
  • E-UTRA-NR gNB or “en-gNB” at least in some examples refers to a RAN node providing NR user plane and control plane protocol terminations towards a UE, and acting as a Secondary Node in E-UTRA-NR Dual Connectivity (EN-DC) scenarios (see e.g., 3GPP TS 37.340).
  • next Generation RAN node or “NG-RAN node” at least in some examples refers to either a gNB or an ng-eNB.
  • IAB-node at least in some examples refers to a RAN node that supports new radio (NR) access links to user equipment (UEs) and NR backhaul links to parent nodes and child nodes.
  • IAB-donor at least in some examples refers to a RAN node (e.g., a gNB) that provides network access to UEs via a network of backhaul and access links.
  • the term “Central Unit” or “CU” at least in some examples refers to a logical node hosting radio resource control (RRC), Service Data Adaptation Protocol (SDAP), and/or Packet Data Convergence Protocol (PDCP) protocols/layers of an NG-RAN node, or RRC and PDCP protocols of the en-gNB that controls the operation of one or more DUs; a CU terminates an F1 interface connected with a DU and may be connected with multiple DUs.
  • the term “Distributed Unit” or “DU” at least in some examples refers to a logical node hosting Backhaul Adaptation Protocol (BAP), F1 application protocol (F1AP), radio link control (RLC), medium access control (MAC), and physical (PHY) layers of the NG-RAN node or en-gNB, and its operation is partly controlled by a CU; one DU supports one or multiple cells, and one cell is supported by only one DU; and a DU terminates the F1 interface connected with a CU.
  • the term “Radio Unit” or “RU” at least in some examples refers to a logical node hosting PHY layer or Low-PHY layer and radiofrequency (RF) processing based on a lower layer functional split.
  • split architecture at least in some examples refers to an architecture in which a CU, DU, and/or RU are physically separated from one another. Additionally or alternatively, the term “split architecture” at least in some examples refers to a RAN architecture such as those discussed in 3GPP TS 38.401, 3GPP TS 38.410, and 3GPP TS 38.473.
  • integrated architecture at least in some examples refers to an architecture in which an RU and DU are implemented on one platform, and/or an architecture in which a DU and a CU are implemented on one platform.
  • the term “Residential Gateway” or “RG” at least in some examples refers to a device providing, for example, voice, data, broadcast video, video on demand, to other devices in customer premises.
  • the term “Wireline 5G Access Network” or “W-5GAN” at least in some examples refers to a wireline AN that connects to a 5GC via N2 and N3 reference points.
  • the W-5GAN can be either a W-5GBAN or W-5GCAN.
  • the term “Wireline 5G Cable Access Network” or “W-5GCAN” at least in some examples refers to an Access Network defined in/by CableLabs.
  • the term “Wireline BBF Access Network” or “W-5GBAN” at least in some examples refers to an Access Network defined in/by the Broadband Forum (BBF).
  • 5G-RG at least in some examples refers to an RG capable of connecting to a 5GC playing the role of a user equipment with regard to the 5GC; it supports a secure element and exchanges N1 signaling with the 5GC.
  • the 5G-RG can be either a 5G-BRG or 5G-CRG.
  • Primary Cell refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
  • Primary SCG Cell refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
  • Secondary Cell refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.
  • Secondary Cell Group refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.
  • the term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
  • the term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA.
  • the term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
  • edge computing at least in some examples refers to an implementation or arrangement of distributed computing elements that move processing activities and resources (e.g., compute, storage, acceleration, and/or network resources) towards the “edge” of the network in an effort to reduce latency and increase throughput for endpoint users (client devices, user equipment, and the like). Additionally or alternatively, the term “edge computing” at least in some examples refers to a set of services hosted relatively close to a client/UE’s access point of attachment to a network to achieve relatively efficient service delivery through reduced end-to-end latency and/or load on the transport network. In some examples, edge computing implementations involve the offering of services and/or resources in cloud-like systems, functions, applications, and subsystems, from one or multiple locations accessible via wireless networks.
  • edge computing at least in some examples refers to the concept, as described in [TS23501], that enables operator and 3rd party services to be hosted close to a UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network.
  • edge compute node or “edge compute device” at least in some examples refers to an identifiable entity implementing an aspect of edge computing operations, whether part of a larger system, distributed collection of systems, or a standalone apparatus.
  • a compute node may be referred to as an “edge node”, “edge device”, or “edge system”, whether in operation as a client, server, or intermediate entity.
  • edge compute node at least in some examples refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, or component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network. However, references to an “edge computing system” generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, which is organized to accomplish or offer some aspect of services or resources in an edge computing setting.
  • edge computing platform or “edge platform” at least in some examples refers to a collection of functionality that is used to instantiate, execute, or run edge applications on a specific edge compute node (e.g., virtualization infrastructure and/or the like), enable such edge applications to provide and/or consume edge services, and/or otherwise provide one or more edge services.
  • edge application or “edge app” at least in some examples refers to an application that can be instantiated on, or executed by, an edge compute node within an edge computing network, system, or framework, and can potentially provide and/or consume edge computing services.
  • edge service at least in some examples refers to a service provided via an edge compute node and/or edge platform, either by the edge platform itself and/or by an edge application.
  • cloud computing or “cloud” at least in some examples refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self- service provisioning and administration on-demand and without active management by users.
  • Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like).
  • network function or “NF” at least in some examples refers to a functional block within a network infrastructure that has one or more external interfaces and a defined functional behavior.
  • network instance at least in some examples refers to information identifying a domain; in some examples, a network instance is used by a UPF for traffic detection and routing.
  • network service or “NS” at least in some examples refers to a composition or collection of NF(s) and/or network service(s), defined by its functional and behavioral specification(s).
  • NF service instance at least in some examples refers to an identifiable instance of the NF service.
  • NF instance at least in some examples refers to an identifiable instance of an NF.
  • NF service at least in some examples refers to functionality exposed by an NF through a service-based interface and consumed by other authorized NFs.
  • NF service operation at least in some examples refers to an elementary unit that an NF service is composed of.
  • NF service set at least in some examples refers to a group of interchangeable NF service instances of the same service type within an NF instance; in some examples, the NF service instances in the same NF service set have access to the same context data.
  • NF set at least in some examples refers to a group of interchangeable NF instances of the same type, supporting the same services and the same network slice(s) ; in some examples, the NF instances in the same NF Set may be geographically distributed but have access to the same context data.
  • RAN function refers to a functional block within a RAN architecture that has one or more external interfaces and a defined behavior related to the operation of a RAN or RAN node. Additionally or alternatively, the term “RAN function” or “RANF” at least in some examples refers to a set of functions and/or NFs that are part of a RAN.
  • the term “Application Function” or “AF” at least in some examples refers to an element or entity that interacts with a 3GPP core network in order to provide services. Additionally or alternatively, the term “Application Function” or “AF” at least in some examples refers to an edge compute node or ECT framework from the perspective of a 5G core network.
  • management function at least in some examples refers to a logical entity playing the roles of a service consumer and/or a service producer.
  • management service at least in some examples refers to a set of offered management capabilities.
  • network function virtualization or “NFV” at least in some examples refers to the principle of separating network functions from the hardware they run on by using virtualization techniques and/or virtualization technologies.
  • virtualized network function or “VNF” at least in some examples refers to an implementation of an NF that can be deployed on a Network Function Virtualization Infrastructure (NFVI).
  • NFVI Network Function Virtualization Infrastructure
  • VIM Virtualized Infrastructure Manager
  • VIM functional block that is responsible for controlling and managing the NFVI compute, storage and network resources, usually within one operator's infrastructure domain.
  • virtualization container, “execution container”, or “container” at least in some examples refers to a partition of a compute node that provides an isolated virtualized computation environment.
  • OS container at least in some examples refers to a virtualization container utilizing a shared Operating System (OS) kernel of its host, where the host providing the shared OS kernel can be a physical compute node or another virtualization container.
  • container at least in some examples refers to a standard unit of software (or a package) including code and its relevant dependencies, and/or an abstraction at the application layer that packages code and dependencies together.
  • container or container image at least in some examples refers to a lightweight, standalone, executable software package that includes everything needed to run an application such as, for example, code, runtime environment, system tools, system libraries, and settings.
  • VM virtual machine
  • hypervisor at least in some examples refers to a software element that partitions the underlying physical resources of a compute node, creates VMs, manages resources for VMs, and isolates individual VMs from each other.
  • Data Network at least in some examples refers to a network hosting data-centric services such as, for example, operator services, the internet, third-party services, or enterprise networks. Additionally or alternatively, a DN at least in some examples refers to service networks that belong to an operator or third party, which are offered as a service to a client or user equipment (UE). DNs are sometimes referred to as “Packet Data Networks” or “PDNs”.
  • the term “Local Area Data Network” or “LADN” at least in some examples refers to a DN that is accessible by the UE only in specific locations, that provides connectivity to a specific DNN, and whose availability is provided to the UE.
  • the term “Internet of Things” or “loT” at least in some examples refers to a system of interrelated computing devices, mechanical and digital machines capable of transferring data with little or no human interaction, and may involve technologies such as real-time analytics, machine learning and/or Al, embedded systems, wireless sensor networks, control systems, automation (e.g., smarthome, smart building and/or smart city technologies), and the like. loT devices are usually low-power devices without heavy compute or storage capabilities.
  • protocol at least in some examples refers to a predefined procedure or method of performing one or more operations. Additionally or alternatively, the term “protocol” at least in some examples refers to a common means for unrelated objects to communicate with each other (sometimes also called interfaces).
  • communication protocol at least in some examples refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like.
  • a “protocol” and/or a “communication protocol” may be represented using a protocol stack, a finite state machine (FSM), and/or any other suitable data structure.
  • standard protocol at least in some examples refers to a protocol whose specification is published and known to the public and is controlled by a standards body.
  • protocol stack or “network stack” at least in some examples refers to an implementation of a protocol suite or protocol family.
  • a protocol stack includes a set of protocol layers, where the lowest protocol deals with low-level interaction with hardware and/or communications interfaces and each higher layer adds additional capabilities.
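  The layering relationship described above can be sketched in a few lines of Python; all class, layer, and header names here are illustrative rather than taken from any protocol specification, and a real stack would of course carry binary headers, not strings:

```python
# Minimal sketch of a layered protocol stack: on transmit, each layer
# encapsulates the payload handed down from the layer above by adding
# its own header (its "additional capabilities").
# All names are illustrative, not from any standard.

class Layer:
    def __init__(self, name: str, header: str):
        self.name = name
        self.header = header

    def encapsulate(self, payload: str) -> str:
        # Prepend this layer's header to the upper layer's data.
        return f"[{self.header}]{payload}"

# Highest layer first; the lowest layer deals with the hardware interface.
stack = [
    Layer("application", "HTTP"),
    Layer("transport", "TCP"),
    Layer("network", "IP"),
    Layer("link", "ETH"),
]

def transmit(data: str) -> str:
    # Traverse the stack top-down, wrapping the data at each layer.
    for layer in stack:
        data = layer.encapsulate(data)
    return data

print(transmit("hello"))  # [ETH][IP][TCP][HTTP]hello
```

  Receiving would simply traverse the same list in reverse, stripping one header per layer.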
  • the term “protocol” at least in some examples refers to a formal set of procedures that are adopted to ensure communication between two or more functions within the same layer of a hierarchy of functions.
  • the term “application layer” at least in some examples refers to an abstraction layer that specifies shared communications protocols and interfaces used by hosts in a communications network. Additionally or alternatively, the term “application layer” at least in some examples refers to an abstraction layer that interacts with software applications that implement a communicating component, and includes identifying communication partners, determining resource availability, and synchronizing communication.
  • Examples of application layer protocols include HTTP, HTTPS, File Transfer Protocol (FTP), Dynamic Host Configuration Protocol (DHCP), Internet Message Access Protocol (IMAP), Lightweight Directory Access Protocol (LDAP), MQTT (MQ Telemetry Transport), Remote Authentication Dial-In User Service (RADIUS), Diameter protocol, Extensible Authentication Protocol (EAP), RDMA over Converged Ethernet version 2 (RoCEv2), Real-time Transport Protocol (RTP), RTP Control Protocol (RTCP), Real Time Streaming Protocol (RTSP), SBMV Protocol, Skinny Client Control Protocol (SCCP), Session Initiation Protocol (SIP), Session Description Protocol (SDP), Simple Mail Transfer Protocol (SMTP), Simple Network Management Protocol (SNMP), Simple Service Discovery Protocol (SSDP), Small Computer System Interface (SCSI), Internet SCSI (iSCSI), iSCSI Extensions for RDMA (iSER), Transport Layer Security (TLS), voice over IP (VoIP), Virtual Private Network (VPN), Extensible Messaging and Presence Protocol (XMPP), and/or the like.
  • session layer at least in some examples refers to an abstraction layer that controls dialogues and/or connections between entities or elements, and may include establishing, managing and terminating the connections between the entities or elements.
  • transport layer at least in some examples refers to a protocol layer that provides end-to-end (e2e) communication services such as, for example, connection-oriented communication, reliability, flow control, and multiplexing.
  • transport layer protocols include datagram congestion control protocol (DCCP), fibre channel protocol (FBC), Generic Routing Encapsulation (GRE), GPRS Tunneling Protocol (GTP), Micro Transport Protocol (μTP), Multipath TCP (MPTCP), MultiPath QUIC (MPQUIC), Multipath UDP (MPUDP), Quick UDP Internet Connections (QUIC), Remote Direct Memory Access (RDMA), Resource Reservation Protocol (RSVP), Stream Control Transmission Protocol (SCTP), transmission control protocol (TCP), user datagram protocol (UDP), and/or the like.
  • DCCP datagram congestion control protocol
  • FBC fibre channel protocol
  • GRE Generic Routing Encapsulation
  • GTP GPRS Tunneling Protocol
  • network layer at least in some examples refers to a protocol layer that includes means for transferring network packets from a source to a destination via one or more networks. Additionally or alternatively, the term “network layer” at least in some examples refers to a protocol layer that is responsible for packet forwarding and/or routing through intermediary nodes. Additionally or alternatively, the term “network layer” or “internet layer” at least in some examples refers to a protocol layer that includes interworking methods, protocols, and specifications that are used to transport network packets across a network.
  • the network layer protocols include internet protocol (IP), IP security (IPsec), Internet Control Message Protocol (ICMP), Internet Group Management Protocol (IGMP), Open Shortest Path First protocol (OSPF), Routing Information Protocol (RIP), RDMA over Converged Ethernet version 2 (RoCEv2), Subnetwork Access Protocol (SNAP), and/or some other internet or network protocol layer.
  • IP internet protocol
  • IPsec IP security
  • ICMP Internet Control Message Protocol
  • IGMP Internet Group Management Protocol
  • OSPF Open Shortest Path First protocol
  • RIP Routing Information Protocol
  • RoCEv2 RDMA over Converged Ethernet version 2
  • SNAP Subnetwork Access Protocol
  • link layer or “data link layer” at least in some examples refers to a protocol layer that transfers data between nodes on a network segment across a physical layer.
  • link layer protocols include logical link control (LLC), medium access control (MAC), Ethernet, RDMA over Converged Ethernet version 1 (RoCEvl), and/or the like.
  • RRC layer refers to a protocol layer or sublayer that performs system information handling; paging; establishment, maintenance, and release of RRC connections; security functions; establishment, configuration, maintenance and release of Signalling Radio Bearers (SRBs) and Data Radio Bearers (DRBs); mobility functions/services; QoS management; and some sidelink specific services and functions over the Uu interface (see e.g., 3GPP TS 36.331 and 3GPP TS 38.331 (“[TS38331]”)).
  • SRBs Signalling Radio Bearers
  • DRBs Data Radio Bearers
  • SDAP layer refers to a protocol layer or sublayer that performs mapping between QoS flows and a data radio bearers (DRBs) and marking QoS flow IDs (QFI) in both DL and UL packets (see e.g., 3GPP TS 37.324).
  • DRBs data radio bearers
  • QFI QoS flow IDs
  • Packet Data Convergence Protocol refers to a protocol layer or sublayer that performs transfer of user plane or control plane data; maintains PDCP sequence numbers (SNs); header compression and decompression using the Robust Header Compression (ROHC) and/or Ethernet Header Compression (EHC) protocols; ciphering and deciphering; integrity protection and integrity verification; provides timer based SDU discard; routing for split bearers; duplication and duplicate discarding; reordering and in-order delivery; and/or out-of-order delivery (see e.g., 3GPP TS 36.323 and/or 3GPP TS 38.323).
  • ROHC Robust Header Compression
  • EHC Ethernet Header Compression
  • radio link control layer refers to a protocol layer or sublayer that performs transfer of upper layer PDUs; sequence numbering independent of the one in PDCP; error correction through ARQ; segmentation and/or re-segmentation of RLC SDUs; reassembly of SDUs; duplicate detection; RLC SDU discarding; RLC re-establishment; and/or protocol error detection (see e.g., 3GPP TS 36.322 and 3GPP TS 38.322).
  • medium access control protocol refers to a protocol that governs access to the transmission medium in a network, to enable the exchange of data between stations in a network.
  • medium access control layer refers to a protocol layer or sublayer that performs functions to provide frame-based, connectionless-mode (e.g., datagram style) data transfer between stations or devices.
  • the term “medium access control layer”, “MAC layer”, or “MAC” at least in some examples refers to a protocol layer or sublayer that performs mapping between logical channels and transport channels; multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels; scheduling information reporting; error correction through HARQ (one HARQ entity per cell in case of CA); priority handling between UEs by means of dynamic scheduling; priority handling between logical channels of one UE by means of logical channel prioritization; priority handling between overlapping resources of one UE; and/or padding (see e.g., 3GPP TS 36.321 and 3GPP TS 38.321).
  • the term “physical layer”, “PHY layer”, or “PHY” at least in some examples refers to a protocol layer or sublayer that includes capabilities to transmit and receive modulated signals for communicating in a communications network (see e.g., 3GPP TS 36.201 and 3GPP TS 38.201).
  • the term “access technology” at least in some examples refers to the technology used for the underlying physical connection to a communication network.
  • the term “radio access technology” or “RAT” at least in some examples refers to the technology used for the underlying physical connection to a radio based communication network.
  • the term “radio technology” at least in some examples refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer.
  • the term “RAT type” at least in some examples may identify a transmission technology and/or communication protocol used in an access network. Examples of access technologies include wireless access technologies/RATs, wireline, wireline-cable, wireline broadband forum (wireline-BBF), Ethernet (see e.g., IEEE Standard for Ethernet, IEEE Std 802.3-2018 (31 Aug. 2018)),
  • fiber optics networks e.g., ITU-T G.651, ITU-T G.652, Optical Transport Network (OTN), Synchronous optical networking (SONET) and synchronous digital hierarchy (SDH), and the like
  • OTN Optical Transport Network
  • SONET Synchronous optical networking
  • SDH synchronous digital hierarchy
  • DSL digital subscriber line
  • DOCSIS Data Over Cable Service Interface Specification
  • HFC hybrid fiber-coaxial
  • RATs or RAT types
  • communications protocols include Advanced Mobile Phone System (AMPS) technologies (e.g., Digital AMPS (D-AMPS), Total Access Communication System (TACS) and variants thereof, such as Extended TACS (ETACS), and the like); Global System for Mobile Communications (GSM) technologies (e.g., Circuit Switched Data (CSD), High-Speed CSD (HSCSD), General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE)); Third Generation Partnership Project (3GPP) technologies (e.g., Universal Mobile Telecommunications System (UMTS) and variants thereof (e.g., UMTS Terrestrial Radio Access (UTRA), Wideband Code Division Multiple Access (W-CDMA), Freedom of Multimedia Access (FOMA), Time Division-Code Division Multiple Access (TD-CDMA), Time Division- Synchronous Code Division Multiple Access (TD-SCDMA), and the like), Generic Access Network (GAN) / Unlicensed Mobile Access (UMA), High Speed Packet Access
  • GAN Generic Access Network
  • IEEE802 (“[IEEE802]”), [IEEE80211], IEEE 802.15 technologies (e.g., IEEE 802.15.4 and variants thereof (e.g., ZigBee, WirelessHART, MiWi, ISA100.11a, Thread, IPv6 over Low power WPAN (6LoWPAN), and the like), IEEE 802.15.6 and/or the like), WLAN V2X RATs (e.g., [IEEE80211], IEEE Wireless Access in Vehicular Environments (WAVE) Architecture (IEEE 1609.0), IEEE 802.11bd, Dedicated Short Range Communications (DSRC), and/or the like), Worldwide Interoperability for Microwave Access (WiMAX) (e.g., IEEE 802.16), Mobile Broadband Wireless Access (MBWA)/iBurst (e.g., IEEE 802.20 and variants thereof), Wireless Gigabit Alliance (WiGig) standards (e.g., IEEE 802.
  • Integrated Digital Enhanced Network and variants thereof (e.g., Wideband Integrated Digital Enhanced Network (WiDEN)); millimeter wave (mmWave) technologies/standards (e.g., wireless systems operating at 10-300 GHz and above 3GPP 5G); short-range and/or wireless personal area network (WPAN) technologies/standards (e.g., IEEE 802.15 technologies (e.g., as mentioned previously); Bluetooth and variants thereof (e.g., Bluetooth 5.3, Bluetooth Low Energy (BLE), and the like), WiFi-direct, Miracast, ANT/ANT+, Z-Wave, Universal Plug and Play (UPnP), low power Wide Area Networks (LPWANs), Long Range Wide Area Network (LoRA or LoRaWAN™), and the like); optical and/or visible light communication (VLC) technologies/standards (e.g., IEEE Std 802.15.7 and/or the like); Sigfox; Mobitex; 3GPP
  • any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the ETSI, among others.
  • ITU International Telecommunication Union
  • ETSI European Telecommunications Standards Institute
  • channel at least in some examples refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream.
  • channel may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated.
  • link at least in some examples refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
  • carrier at least in some examples refers to a modulated waveform conveying one or more physical channels (e.g., 5G/NR, E-UTRA, UTRA, and/or GSM/EDGE physical channels).
  • carrier frequency at least in some examples refers to the center frequency of a cell.
  • subframe at least in some examples refers to a time interval during which a signal is signaled. In some implementations, a subframe is equal to 1 millisecond (ms).
  • time slot at least in some examples refers to an integer multiple of consecutive subframes.
  • superframe at least in some examples refers to a time interval comprising two time slots.
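  Under the example timings above (a 1 ms subframe, a slot as an integer multiple of consecutive subframes, and a superframe of two slots), the durations compose by simple multiplication; the slot multiple of 5 used below is an illustrative choice, not a value given in the text:

```python
# Worked example of the timing relationships defined above.
# The 1 ms subframe comes from the text; the slot multiple of 5
# subframes is illustrative only.

SUBFRAME_MS = 1              # a subframe equals 1 millisecond
SLOT_MULTIPLE = 5            # illustrative: one slot = 5 consecutive subframes

slot_ms = SLOT_MULTIPLE * SUBFRAME_MS   # duration of one time slot
superframe_ms = 2 * slot_ms             # a superframe comprises two time slots

print(slot_ms)        # 5
print(superframe_ms)  # 10
```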
  • network address at least in some examples refers to an identifier for a node or host in a computer network, and may be a unique identifier across a network and/or may be unique to a locally administered portion of the network.
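  The idea that a network address identifies a host and may be unique only within a locally administered portion of the network can be illustrated with Python's standard `ipaddress` module; the addresses below are from a documentation range, not real hosts:

```python
# Illustration of a network address as a host identifier within a
# network portion, using the standard library's ipaddress module.
# 192.0.2.0/24 is a reserved documentation range (RFC 5737).
import ipaddress

addr = ipaddress.ip_address("192.0.2.17")    # identifier for one host
net = ipaddress.ip_network("192.0.2.0/24")   # locally administered portion

print(addr in net)   # True: the host address falls within this network
print(addr.version)  # 4
```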
  • application or “app” at least in some examples refers to a computer program designed to carry out a specific task other than one relating to the operation of the computer itself. Additionally or alternatively, the term “application” or “app” at least in some examples refers to a complete and deployable package or environment to achieve a certain function in an operational environment.
  • process at least in some examples refers to an instance of a computer program that is being executed by one or more threads. In some implementations, a process may be made up of multiple threads of execution that execute instructions concurrently.
  • algorithm at least in some examples refers to an unambiguous specification of how to solve a problem or a class of problems by performing calculations, input/output operations, data processing, automated reasoning tasks, and/or the like.
  • API application programming interface
  • API refers to a set of subroutine definitions, communication protocols, and tools for building software. Additionally or alternatively, the term “application programming interface” or “API” at least in some examples refers to a set of clearly defined methods of communication among various components. In some examples, an API may be defined or otherwise used for a web-based system, operating system, database system, computer hardware, software library, and/or the like.
  • instantiate refers to the creation of an instance.
  • instance refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
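  The two definitions above are easy to see in code: instantiation creates a concrete, identifiable occurrence of an object during execution. The class name below is illustrative only:

```python
# "Instantiate" = create an instance: a concrete occurrence of an
# object that comes into being during execution of program code.

class NetworkFunction:               # illustrative class, not a 3GPP API
    def __init__(self, nf_type: str):
        self.nf_type = nf_type

# Two instantiations yield two distinct, identifiable instances,
# even though both are concrete occurrences of the same class.
nf_a = NetworkFunction("AMF")
nf_b = NetworkFunction("AMF")

print(isinstance(nf_a, NetworkFunction))  # True
print(nf_a is nf_b)                       # False: separate instances
```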
  • reference point at least in some examples refers to a conceptual point at the conjunction of two non-overlapping functional groups, elements, or entities.
  • service based interface at least in some examples refers to a representation of how a set of services is provided and/or exposed by a particular NF.
  • Use case at least in some examples refers to a description of a system from a user's perspective. Use cases sometimes treat a system as a black box, and the interactions with the system, including system responses, are perceived as from outside the system. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert.
  • the term “user” at least in some examples refers to an abstract representation of any entity issuing commands, requests, and/or data to a compute node or system, and/or otherwise consumes or uses services. Additionally or alternatively, the term “user” at least in some examples refers to an entity, not part of the 3GPP System, which uses 3GPP System services (e.g., a person using a 3GPP system mobile station as a portable telephone).
  • the term “user profile” at least in some examples refers to a set of information to provide a user with a consistent, personalized service environment, irrespective of the user's location or the terminal used (within the limitations of the terminal and the serving network).
  • service consumer or “consumer” at least in some examples refers to an entity that consumes one or more services.
  • service producer or “producer” at least in some examples refers to an entity that offers, serves, or otherwise provides one or more services.
  • service provider or “provider” at least in some examples refers to an organization or entity that provides one or more services to at least one service consumer.
  • service provider and “service producer” may be used interchangeably even though these terms may refer to different concepts.
  • service providers examples include cloud service provider (CSP), network service provider (NSP), application service provider (ASP) (e.g., Application software service provider in a service-oriented architecture (ASSP)), internet service provider (ISP), telecommunications service provider (TSP), online service provider (OSP), payment service provider (PSP), managed service provider (MSP), storage service providers (SSPs), SAML service provider, and/or the like.
  • CSP cloud service provider
  • NSP network service provider
  • ASP application service provider
  • ISP internet service provider
  • TSP telecommunications service provider
  • OSP online service provider
  • PSP payment service provider
  • MSP managed service provider
  • SSPs storage service providers
  • SAML service provider and/or the like.
  • datagram at least in some examples refers to a basic transfer unit associated with a packet-switched network; a datagram may be structured to have header and payload sections.
  • datagram at least in some examples may be synonymous with any of the following terms, even though they may refer to different aspects: “data unit”, a “protocol data unit” or “PDU”, a “service data unit” or “SDU”, “frame”, “packet”, a “network packet”, “segment”, “block”, “cell”, “chunk”, “Type Length Value” or “TLV”, and/or the like.
  • Examples of datagrams, network packets, and the like include internet protocol (IP) packet, Internet Control Message Protocol (ICMP) packet, UDP packet, TCP packet, SCTP packet, Ethernet frame, RRC messages/packets, SDAP PDU, SDAP SDU, PDCP PDU, PDCP SDU, MAC PDU, MAC SDU, BAP PDU, BAP SDU, RLC PDU, RLC SDU, WiFi frames as discussed in an IEEE 802 protocol/standard (e.g., [IEEE80211] or the like), Type Length Value (TLV), and/or other like data units.
  • IP internet protocol
  • ICMP Internet Control Message Protocol
  • UDP user datagram protocol
  • TCP transmission control protocol
  • SCTP Stream Control Transmission Protocol
  • packet at least in some examples refers to an information unit identified by a label at layer 3 of the OSI reference model.
  • a “packet” may also be referred to as a “network protocol data unit” or “NPDU”.
  • protocol data unit at least in some examples refers to a unit of data specified in an (N)-protocol layer and includes (N)-protocol control information and possibly (N)-user data.
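  A PDU as defined above, i.e., (N)-protocol control information plus (N)-user data, can be sketched as a small data type; the field names and the toy serialization format below are illustrative, not from any 3GPP specification:

```python
# Sketch of a protocol data unit (PDU): protocol control information
# (a header, e.g. a sequence number) plus user data (the SDU handed
# down from the layer above). Names and wire format are illustrative.

from dataclasses import dataclass

@dataclass
class PDU:
    control_info: dict   # (N)-protocol control information
    user_data: bytes     # (N)-user data (the upper layer's SDU)

    def serialize(self) -> bytes:
        # Toy wire format: textual header followed by the raw payload.
        header = f"seq={self.control_info['seq']};".encode()
        return header + self.user_data

pdu = PDU(control_info={"seq": 7}, user_data=b"payload")
print(pdu.serialize())  # b'seq=7;payload'
```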
  • information element refers to a structural element containing one or more fields. Additionally or alternatively, the term “information element” or “IE” at least in some examples refers to a field or set of fields defined in a standard or specification that is used to convey data and/or protocol information.
  • field at least in some examples refers to individual contents of an information element, or a data element that contains content.
  • “data frame”, “data field”, or “DF” at least in some examples refers to a data type that contains more than one data element in a predefined order.
  • data element or “DE” at least in some examples refers to a data type that contains one single data.
  • data element at least in some examples refers to an atomic state of a particular object with at least one specific property at a certain point in time, and may include one or more of a data element name or identifier, a data element definition, one or more representation terms, enumerated values or codes (e.g., metadata), and/or a list of synonyms to data elements in other metadata registries.
  • a “data element” at least in some examples refers to a data type that contains one single data. Data elements may store data, which may be referred to as the data element’s content (or “content items”).
  • Content items may include text content, attributes, properties, and/or other elements referred to as “child elements.” Additionally or alternatively, data elements may include zero or more properties and/or zero or more attributes, each of which may be defined as database objects (e.g., fields, records, and the like), object instances, and/or other data elements.
  • An “attribute” at least in some examples refers to a markup construct including a name-value pair that exists within a start tag or empty element tag. Attributes contain data related to their element and/or control the element’s behavior.
  • reference at least in some examples refers to data useable to locate other data and may be implemented in a variety of ways (e.g., a pointer, an index, a handle, a key, an identifier, a hyperlink, and/or the like).
  • configuration refers to a machine-readable information object that contains instructions, conditions, parameters, criteria, data, metadata, and/or other information that is/are relevant to a component, device, system, network, service producer, service consumer, and/or other element/entity.
  • data set at least in some examples refers to a collection of data; a “data set” or “dataset” may be formed or arranged in any type of data structure.
  • one or more characteristics can define or influence the structure and/or properties of a dataset such as the number and types of attributes and/or variables, and various statistical measures (e.g., standard deviation, kurtosis, and/or the like).
  • data structure at least in some examples refers to a data organization, management, and/or storage format. Additionally or alternatively, the term “data structure” at least in some examples refers to a collection of data values, the relationships among those data values, and/or the functions, operations, tasks, and the like, that can be applied to the data.
  • Examples of data structures include primitives (e.g., Boolean, character, floating-point numbers, fixed-point numbers, integers, reference or pointers, enumerated type, and/or the like), composites (e.g., arrays, records, strings, union, tagged union, and/or the like), abstract data types (e.g., data container, list, tuple, associative array, map, dictionary, set (or dataset), multiset or bag, stack, queue, graph (e.g., tree, heap, and the like), and/or the like), routing table, symbol table, quad-edge, blockchain, purely-functional data structures (e.g., stack, queue, (multi)set, random access list, hash consing, zipper data structure, and/or the like).
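A few of the data structures listed above can be illustrated in Python; the variable names and values are illustrative only:

```python
from collections import deque

# Primitives: Boolean, integer, floating-point number
flag: bool = True
count: int = 42
ratio: float = 0.5

# Composites: record (as a mapping), string, array (as a list)
record = {"name": "vnf-1", "vcpu_mean": 0.42}   # record of named fields
samples = [0.40, 0.45, 0.41]                    # array of measurements

# Abstract data types: stack (LIFO), queue (FIFO), set, map/dictionary
stack = []
stack.append("a")
stack.append("b")
top = stack.pop()        # last in, first out -> "b"

queue = deque()
queue.append("x")
queue.append("y")
first = queue.popleft()  # first in, first out -> "x"

unique_ids = {"vnf-1", "vnf-2", "vnf-1"}  # a set collapses duplicates
```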
  • association at least in some examples refers to a model of relationships between Managed Objects. Associations can be implemented in several ways, such as: (1) name bindings, (2) reference attributes, and (3) association objects.
  • Information Object Class or “IOC” at least in some examples refers to a representation of the management aspect of a network resource. Additionally or alternatively, the term “Information Object Class” or “IOC” at least in some examples refers to a description of the information that can be passed/used in management interfaces. In some examples, their representations are technology agnostic software objects. Additionally or alternatively, an IOC has attributes that represents the various properties of the class of objects.
  • IOC can support operations providing network management services invocable on demand for that class of objects. Additionally or alternatively, an IOC may support notifications that report event occurrences relevant for that class of objects. In some examples, an IOC is modelled using the stereotype "Class" in the UML meta-model.
  • Managed Object at least in some examples refers to an instance of a Managed Object Class (MOC) representing the management aspects of a network resource. Its representation is a technology specific software object.
  • MO is called an “MO instance” or “MOI”.
  • an MOC is the same as an IOC except that the former is defined in technology specific terms and the latter is defined in technology agnostic terms. MOCs are used/defined in SS level specifications. In some examples, IOCs and/or MOCs are used/defined in IS level specifications.
  • MIB Management Information Base
  • an MIB includes a name space (describing the MO containment hierarchy in the MIB through Distinguished Names), a number of MOs with their attributes, and a number of associations between the MOs.
  • name space at least in some examples refers to a collection of names.
  • a name space is restricted to a hierarchical containment structure, including its simplest form - the one-level, flat name space.
  • all MOs in an MIB are included in the corresponding name space and the MIB/name space shall only support a strict hierarchical containment structure (with one root object).
  • An MO that contains another is said to be the superior (parent); the contained MO is referred to as the subordinate (child).
  • the parent of all MOs in a single name space is called a Local Root.
  • the ultimate parent of all MOs of all managed systems is called the Global Root.
  • network resource at least in some examples refers to a discrete entity represented by an IOC for the purpose of network and service management.
  • a network resource may represent intelligence, information, hardware and/or software of a telecommunication network.
  • Network Resource Model or “NRM” at least in some examples refers to a collection of IOCS, inclusive of their associations, attributes and operations, representing a set of network resources under management.
  • SON self-organizing network (see e.g., 3GPP TS 32.500, 3GPP TS 32.522, 3GPP TS 32.541, 3GPP TS 32.551, 3GPP TS 28.310, 3GPP TS 28.313, 3GPP TS 28.627, and 3GPP TS 28.628).
  • performance indicator at least in some examples refers to performance data aggregated over a group of NFs that is derived from performance measurements collected at the NFs that belong to the group.
  • performance indicators are derived, collected or aggregated according to an aggregation method identified in a performance indicator definition.
  • artificial intelligence at least in some examples refers to any intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Additionally or alternatively, the term “artificial intelligence” or “AI” at least in some examples refers to the study of “intelligent agents” and/or any device that perceives its environment and takes actions that maximize its chance of successfully achieving a goal.
  • artificial neural network refers to an ML technique comprising a collection of connected artificial neurons or nodes that (loosely) model neurons in a biological brain that can transmit signals to other artificial neurons or nodes, where connections (or edges) between the artificial neurons or nodes are (loosely) modeled on synapses of a biological brain.
  • the artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection.
  • Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold.
  • the artificial neurons can be aggregated or grouped into one or more layers where different layers may perform different transformations on their inputs.
  • NNs are usually used for supervised learning, but can be used for unsupervised learning as well.
  • Examples of NNs include deep NN, feed forward NN (FFN), deep FFN (DFF), convolutional NN (CNN), deep CNN (DCN), deconvolutional NN (DNN), a deep belief NN, a perceptron NN, recurrent NN (RNN) (e.g., including Long Short Term Memory (LSTM) algorithm, gated recurrent unit (GRU), echo state network (ESN), and the like), spiking NN (SNN), deep stacking network (DSN), Markov chain, generative adversarial network (GAN), transformers, stochastic NNs (e.g., Bayesian Network (BN), Bayesian belief network (BBN), a Bayesian NN (BNN), Deep BNN (DBNN), Dynamic BN (DBN), and/or the like), and/or the like.
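The weight/threshold behavior described above can be sketched with a single artificial neuron; the AND-gate weights below are an illustrative choice, not something specified in the disclosure:

```python
def neuron(inputs, weights, bias, threshold=0.0):
    """Fire (output 1) only if the weighted aggregate signal crosses the threshold."""
    aggregate = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if aggregate > threshold else 0

# A two-input neuron whose weights happen to implement a logical AND
and_weights = [1.0, 1.0]
and_bias = -1.5
outputs = [neuron([a, b], and_weights, and_bias) for a in (0, 1) for b in (0, 1)]
# outputs == [0, 0, 0, 1]: only the (1, 1) input crosses the threshold
```

Increasing a weight strengthens that input's contribution to the aggregate signal, which is exactly what training adjusts.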
  • machine learning at least in some examples refers to the use of computer systems to optimize a performance criterion using example (training) data and/or past experience.
  • ML involves using algorithms to perform specific task(s) without using explicit instructions to perform the specific task(s), and/or relying on patterns, predictions, and/or inferences.
  • ML uses statistics to build ML model(s) (also referred to as “models”) in order to make predictions or decisions based on sample data (e.g., training data).
  • machine learning model or “ML model” at least in some examples refers to an application, program, process, algorithm, and/or function that is capable of making predictions, inferences, or decisions based on an input data set and/or is capable of detecting patterns based on an input data set. Additionally or alternatively, the term “machine learning model” or “ML model” at least in some examples refers to a mathematical algorithm that can be "trained” by data (or otherwise learn from data) and/or human expert input as examples to replicate a decision an expert would make when provided that same information. In some examples, a “machine learning model” or “ML model” is trained on a training data to detect patterns and/or make predictions, inferences, and/or decisions.
  • a “machine learning model” or “ML model” is based on a mathematical and/or statistical model.
  • the terms “ML model”, “AI model”, “AI/ML model”, and the like may be used interchangeably.
  • the term “mathematical model” at least in some examples refers to a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs including governing equations, assumptions, and constraints.
  • the term “statistical model” at least in some examples refers to a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data and/or similar data from a population; in some examples, a “statistical model” represents a data-generating process.
  • machine learning algorithm or “ML algorithm” at least in some examples refers to an application, program, process, algorithm, and/or function that builds or estimates an ML model based on sample data or training data. Additionally or alternatively, the term “machine learning algorithm” or “ML algorithm” at least in some examples refers to a program, process, algorithm, and/or function that learns from experience w.r.t some task(s) and some performance measure(s)/metric(s), and an ML model is an object or data structure created after an ML algorithm is trained with training data.
  • the terms “ML algorithm”, “AI algorithm”, “AI/ML algorithm”, and the like may be used interchangeably. Additionally, although the term “ML algorithm” may refer to different concepts than the term “ML model,” these terms may be used interchangeably for the purposes of the present disclosure.
  • machine learning application or “ML application” at least in some examples refers to an application, program, process, algorithm, and/or function that contains some AI/ML model(s) and application-level descriptions. Additionally or alternatively, the term “machine learning application” or “ML application” at least in some examples refers to a complete and deployable application and/or package that includes at least one ML model and/or other data capable of achieving a certain function and/or performing a set of actions or tasks in an operational environment.
  • the terms “ML application”, “AI application”, “AI/ML application”, and the like may be used interchangeably.
  • machine learning entity or “ML entity” at least in some examples refers to an entity that is either an ML model or contains an ML model and ML model-related metadata that can be managed as a single composite entity.
  • metadata may include, for example, the applicable runtime context for the ML model.
  • “AI decision entity”, “machine learning decision entity”, or “ML decision entity” at least in some examples refers to an entity that applies non-AI and/or non-ML based logic for making decisions that can be managed as a single composite entity.
  • machine learning training at least in some examples refers to capabilities and associated end-to-end (e2e) processes to enable an ML training function to perform ML model training (e.g., as defined herein).
  • ML training capabilities include interaction with other parties/entities to collect and/or format the data required for ML model training.
  • machine learning model training or “ML model training” at least in some examples refers to capabilities of an ML training function to take data, run the data through an ML model, derive associated loss, optimization, and/or objective/goal, and adjust the parameterization of the ML model based on the computed loss, optimization, and/or objective/goal.
  • machine learning training function at least in some examples refers to a function with MLT capabilities.
  • AI/ML inference function or “ML inference function” at least in some examples refers to a function (or set of functions) that employs an ML model and/or AI decision entity to conduct inference. Additionally or alternatively, the term “AI/ML inference function” or “ML inference function” at least in some examples refers to an inference framework used to run a compiled model in the inference host. In some examples, an “AI/ML inference function” or “ML inference function” may also be referred to as a “model inference engine”, “ML inference engine”, or “inference engine”.
  • model parameter in the context of ML, at least in some examples refer to values, characteristics, and/or properties that are learnt during training. Additionally or alternatively, “model parameter” and/or “parameter” in the context of ML, at least in some examples refer to a configuration variable that is internal to the model and whose value can be estimated from the given data. Model parameters are usually required by a model when making predictions, and their values define the skill of the model on a particular problem.
  • model parameters / parameters include weights (e.g., in an ANN); constraints; support vectors in a support vector machine (SVM); coefficients in a linear regression and/or logistic regression; word frequency, sentence length, noun or verb distribution per sentence, the number of specific character n-grams per word, lexical diversity, and the like, for natural language processing (NLP) and/or natural language understanding (NLU); and/or the like.
  • NLP natural language processing
  • NLU natural language understanding
  • hyperparameter at least in some examples refers to characteristics, properties, and/or parameters for an ML process that cannot be learnt during a training process. Hyperparameters are usually set before training takes place, and may be used in processes to help estimate model parameters.
  • Examples of hyperparameters include model size (e.g., in terms of memory space, bytes, number of layers, and the like); training data shuffling (e.g., whether to do so and by how much); number of evaluation instances, iterations, epochs (e.g., a number of iterations or passes over the training data), or episodes; number of passes over training data; regularization; learning rate (e.g., the speed at which the algorithm reaches (converges to) optimal weights); learning rate decay (or weight decay); momentum; number of hidden layers; size of individual hidden layers; weight initialization scheme; dropout and gradient clipping thresholds; the C value and sigma value for SVMs; the k in k-nearest neighbors; number of branches in a decision tree; number of clusters in a clustering algorithm; vector size; word vector size for NLP and NLU; and/or the like.
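The split between model parameters (learnt during training) and hyperparameters (fixed before training starts) can be sketched with a toy gradient-descent line fit; all names and values are illustrative:

```python
def train_linear(xs, ys, learning_rate=0.05, epochs=200):
    """Fit y ≈ w*x + b by gradient descent on mean squared error.

    w and b are model parameters (estimated from the data); learning_rate
    and epochs are hyperparameters (set before training takes place).
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated by y = 2x + 1
w, b = train_linear(xs, ys)  # w and b converge near 2 and 1
```

Changing `learning_rate` or `epochs` changes how training proceeds, but only `w` and `b` are stored in the resulting model.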
  • objective function at least in some examples refers to a function to be maximized or minimized for a specific optimization problem.
  • an objective function is defined by its decision variables and an objective.
  • the objective is the value, target, or goal to be optimized, such as maximizing profit or minimizing usage of a particular resource.
  • the specific objective function chosen depends on the specific problem to be solved and the objectives to be optimized. Constraints may also be defined to restrict the values the decision variables can assume thereby influencing the objective value (output) that can be achieved.
  • an objective function’s decision variables are often changed or manipulated within the bounds of the constraints to improve the objective function’s values. In general, the difficulty in solving an objective function increases as the number of decision variables included in that objective function increases.
  • the term “decision variable” refers to a variable that represents a decision to be made.
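A minimal sketch of an objective function over two decision variables restricted by constraints; the coefficients and bounds are illustrative:

```python
def objective(x, y):
    """Objective: maximize 3x + 4y over the decision variables x and y."""
    return 3 * x + 4 * y

def feasible(x, y):
    """Constraints restricting the values the decision variables can assume."""
    return x >= 0 and y >= 0 and x + 2 * y <= 8 and 3 * x + y <= 9

# Brute-force search over integer decision variables within the constraint bounds
best = max(
    ((x, y) for x in range(10) for y in range(10) if feasible(x, y)),
    key=lambda p: objective(*p),
)
# best == (2, 3), where the two constraints intersect; objective value 18
```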
  • optimization at least in some examples refers to an act, process, or methodology of making something (e.g., a design, system, or decision) as fully perfect, functional, or effective as possible. Optimization usually includes mathematical procedures such as finding the maximum or minimum of a function.
  • the term “optimal” at least in some examples refers to a most desirable or satisfactory end, outcome, or output.
  • the term “optimum” at least in some examples refers to an amount or degree of something that is most favorable to some end.
  • optima at least in some examples refers to a condition, degree, amount, or compromise that produces a best possible result. Additionally or alternatively, the term “optima” at least in some examples refers to a most favorable or advantageous outcome or result.
  • precision at least in some examples refers to the closeness of two or more measurements to each other.
  • precision may also be referred to as “positive predictive value”.
  • accuracy at least in some examples refers to the closeness of one or more measurements to a specific value.
  • quantile at least in some examples refers to a cut point(s) dividing a range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way.
  • quantile function at least in some examples refers to a function that is associated with a probability distribution of a random variable, and that specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability.
  • quantile function may also be referred to as a percentile function, percent-point function, or inverse cumulative distribution function.
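As a sketch of quantiles as cut points, Python's standard library can compute quartiles directly; the data values are illustrative:

```python
import statistics

data = [2, 4, 4, 5, 7, 9, 11, 12]

# Quartiles: three cut points dividing the sample into four intervals
# with (approximately) equal probabilities
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
# q2 is the median of the sample
```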
  • recall at least in some examples refers to the fraction of relevant instances that were retrieved, or the number of true positive predictions or inferences divided by the number of true positives plus false negative predictions or inferences.
  • recall may also be referred to as “sensitivity”.
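The precision and recall definitions above can be sketched as a short computation over illustrative binary labels:

```python
def precision_recall(actual, predicted):
    """Precision (positive predictive value) and recall (sensitivity)
    for binary labels, per the definitions above."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    precision = tp / (tp + fp)   # of the positives predicted, how many were right
    recall = tp / (tp + fn)      # of the actual positives, how many were retrieved
    return precision, recall

actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]
p, r = precision_recall(actual, predicted)   # p == 0.75, r == 0.75
```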
  • regression algorithm and/or “regression analysis” in the context of ML at least in some examples refers to a set of statistical processes for estimating the relationships between a dependent variable (often referred to as the “outcome variable”) and one or more independent variables (often referred to as “predictors”, “covariates”, or “features”).
  • regression algorithms/models include logistic regression, linear regression, gradient descent (GD), stochastic GD (SGD), and the like.
  • RL reinforcement learning
  • reinforcement learning at least in some examples refers to a goal-oriented learning technique based on interaction with an environment.
  • an agent aims to optimize a long-term objective by interacting with the environment based on a trial and error process.
  • Examples of RL algorithms include Markov decision process, Markov chain, Q-learning, multi-armed bandit learning, temporal difference learning, and deep RL.
  • the term “reward function”, in the context of RL, at least in some examples refers to a function that outputs a reward value based on one or more reward variables; the reward value provides feedback for an RL policy so that an RL agent can learn a desirable behavior.
  • reward shaping in the context of RL, at least in some examples refers to adjusting or altering a reward function to output a positive reward for desirable behavior and a negative reward for undesirable behavior.
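The trial-and-error interaction described above can be sketched with a minimal, deterministic multi-armed bandit agent; the reward values are illustrative:

```python
def greedy_bandit(true_rewards, steps=50):
    """Trial-and-error learning on a deterministic multi-armed bandit.

    Value estimates start optimistically high, so the greedy agent tries each
    arm once before settling on the arm whose (fixed) reward is largest; the
    reward function here simply returns the chosen arm's payout.
    """
    estimates = [1.0] * len(true_rewards)   # optimistic initial value estimates
    counts = [0] * len(true_rewards)
    pulls = []
    for _ in range(steps):
        arm = max(range(len(true_rewards)), key=lambda a: estimates[a])
        reward = true_rewards[arm]                                 # reward function
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        pulls.append(arm)
    return estimates, pulls

estimates, pulls = greedy_bandit([0.1, 0.9, 0.4])
# after one exploratory pull of each arm, every later pull exploits arm 1
```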
  • supervised learning at least in some examples refers to an ML technique that aims to learn a function or generate an ML model that produces an output given a labeled data set.
  • Supervised learning algorithms build models from a set of data that contains both the inputs and the desired outputs.
  • supervised learning involves learning a function or model that maps an input to an output based on example input-output pairs or some other form of labeled training data including a set of training examples.
  • Each input-output pair includes an input object (e.g., a vector) and a desired output object or value (referred to as a “supervisory signal”).
  • Supervised learning can be grouped into classification algorithms, regression algorithms, and instance-based algorithms.
  • tuning at least in some examples refers to a process of adjusting model parameters or hyperparameters of an ML model in order to improve its performance. Additionally or alternatively, the term “tuning” or “tune” at least in some examples refers to optimizing an ML model’s model parameters and/or hyperparameters.
  • unsupervised learning at least in some examples refers to an ML technique that aims to learn a function to describe a hidden structure from unlabeled data and/or builds/generates models from a set of data that contains only inputs and no desired output labels.
  • unsupervised learning approaches/methods include K-means clustering, hierarchical clustering, mixture models, density-based spatial clustering of applications with noise (DBSCAN), ordering points to identify the clustering structure (OPTICS), anomaly detection methods (e.g., local outlier factor, isolation forest, and/or the like), expectation-maximization algorithm (EM), method of moments, topic modeling, and blind signal separation techniques (e.g., principal component analysis (PCA), independent component analysis, non-negative matrix factorization, singular value decomposition).
  • unsupervised training methods include backpropagation, Hopfield learning rule, Boltzmann learning rule, contrastive divergence, wake sleep, variational inference, maximum likelihood, maximum a posteriori, Gibbs sampling, backpropagating reconstruction errors, and hidden state reparameterizations.
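As a sketch of one unsupervised method named above, a minimal K-means loop on illustrative one-dimensional data (no labels are used at any point):

```python
def kmeans(points, init_centroids, iters=10):
    """Plain K-means on 1-D points: alternate cluster assignment and
    centroid-update steps for a fixed number of passes."""
    centroids = list(init_centroids)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # assignment step: each point joins its nearest centroid
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: each centroid moves to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated groups of samples; initial guesses bracket them
centroids = kmeans([1.0, 1.2, 0.8, 8.0, 8.2, 7.8], init_centroids=[0.0, 10.0])
# centroids settle near 1.0 and 8.0
```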
  • semi-supervised learning at least in some examples refers to ML algorithms that develop ML models from incomplete training data, where a portion of the sample input does not include labels.
  • inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is in fact disclosed.
  • specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown.
  • This disclosure is intended to cover any and all adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Environmental & Geological Engineering (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure is related to artificial intelligence (AI) and machine learning (ML), and in particular, to technologies for predicting energy consumption of virtualized network function (VNF) instances. AI/ML models are trained to predict energy consumption of VNF instances based on VNF measurement data and virtualized network entity (VNE) measurement data. The VNF measurement data may include virtual resource usage data and the VNE measurement data may include VNE energy consumption data collected by one or more power, energy, environmental (PEE) sensors. Other embodiments may be described and/or claimed.

Description

ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML) MODELS FOR DETERMINING ENERGY CONSUMPTION IN VIRTUAL NETWORK FUNCTION INSTANCES
CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims priority to U.S. Provisional App. No. 63/420,471 filed October 28, 2022, the contents of which are hereby incorporated by reference in their entirety.
BACKGROUND
Fast-growing mobile communications services are consuming more energy than ever before. For operators, energy conservation and emission reductions are not just a social responsibility, but also a critical requirement for energy cost savings. The rapid increase in wholesale energy prices is further forcing operators to prioritize the topic of energy efficiency. Since most network functions (NFs) in fifth generation systems (5GS) are virtualized, the energy efficiency for the 5GS and fifth generation core network (5GC) NFs are tightly coupled to the energy consumption of virtualized network function (VNF) instances.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which: Figure 1 depicts an example model training architecture for VNF energy consumption; Figure 2 depicts example data samples used for the architecture of Figure 1; Figure 3 depicts an example MDA inference function for VNF energy consumption predictions; Figure 4 depicts an example VNF energy consumption prediction procedure; Figures 5 and 6 depict example wireless networks; Figure 7 depicts example hardware resources; Figure 8 depicts an example of management services (MnS); Figure 9 depicts an example AI/ML functional framework; Figure 10 depicts an example AI/ML-assisted communication architecture; and Figures 11 and 12 depict example processes for practicing the various embodiments discussed herein.
DETAILED DESCRIPTION
1. VNF ENERGY CONSUMPTION PREDICTION AND MDA ASSISTED ENERGY SAVING ASPECTS
The present disclosure is generally related to wireless communications technologies, cloud computing, edge computing, artificial intelligence (AI) and machine learning (ML), and in particular, to technologies to predict energy consumption of VNF instances. Embodiments of the present disclosure address the aforementioned issues and other issues by utilizing artificial intelligence (AI) and/or machine learning (ML) models to predict energy consumption of VNF instances. For example, some embodiments use AI/ML models to predict VNF instance energy consumption based on virtual compute usage, virtual memory usage, and/or virtual disk usage measurements.
1.1. VNF ENERGY CONSUMPTION PREDICTION ASPECTS
Figure 1 shows an example architecture 100, which involves training an ML model used by an inference function (see e.g., inference engines 915 and 1045 of Figures 9 and 10) to predict VNF energy consumption. In this example, a virtualized network entity (NE) 101 (see e.g., [TS28500]) contains Network Function Virtualization Infrastructure (NFVI) 102 (see e.g., ETSI GS NFV 003 and/or ETSI GR NFV 003) in/on which one or more VNF instances 103 (e.g., VNF 103-1 to VNF 103-n, where n is a number) are deployed. The NFVI 102 comprises various resources, such as hypervisor/VMM, compute, storage/memory, networking, and/or other hardware (HW) resources.
Power, energy, environmental (PEE) sensor(s) 120 (see e.g., [ES202336-12]) are used to collect energy consumption data (ECD) 122 of the VNE 101, which is then provided to a model training function (MTF) 125. Examples of PEE sensors 120 are discussed infra with respect to (w.r.t) Figure 7. Some or all of the PEE sensor(s) 120 may be built-in or embedded inside the VNE 101, NFVI 102, or in individual components of the NFVI 102, or may be external to the VNE 101 and/or NFVI 102. Additionally or alternatively, some or all of the PEE sensor(s) 120 may be part of a power distribution frame, power supply system, junction box, electrical panel, and/or the like. In some examples, a power input 121 to the PEE sensor(s) 120 is provided to the VNE 101 to power the VNE 101. Additionally or alternatively, the PEE sensor(s) 120 collect data based on the power input 121, which may be provided as part of the VNE ECD 122.
Figure 1 also shows virtual resource usage data (VRUD) 112 for VNFs 103 (see e.g., clauses 5.7.1, 5.7.2, and/or 6.2 in [TS28552]) being collected/utilized by the MTF 125. In this example, VRUD 112-1 is provided by VNF 103-1 and VRUD 112-n is provided by VNF 103-n. The VRUD 112 includes statistics, measurements, metrics, and/or other data related to a respective VNF’s usage of compute (e.g., mean virtual CPU usage, peak virtual CPU usage, and/or the like), memory (e.g., mean virtual memory usage, peak virtual memory usage, and/or the like), disk/storage (e.g., mean virtual disk usage, peak virtual disk/storage usage, and/or the like), network (e.g., connection data volumes of NFs, number of incoming and/or outgoing packets, and/or the like), and/or other resources. The VRUD 112 can be calculated, generated, measured, and/or collected according to [TS28532], [TS28552], [TS28554], ETSI GS NFV-IFA 027 (e.g., ETSI GS NFV-IFA 027 v4.4.1 (2023-03)), and/or as discussed in Intel® VTune™ Profiler User Guide, Intel Corp., version 2023.1 (31 Mar. 2023).
The MTF 125 uses relatively large volume dataset(s), including VRUD 112 for VNFs 103 and the VNE ECD 122 to compute the parameters of an ML model that is (or will be) used to predict the energy consumption for VNF instance(s) 103. In some examples, the training dataset(s) include data samples (or features) of the VRUD 112 and data labels of the VNE ECD 122. As examples, the MTF 125 may be the same or similar as the MTF 910 and/or MLTF 1045 of Figures 9 and 10, a model training logical function (MTLF) of an NWDAF 562, and/or the AI/ML model entities discussed in U.S. App. No. 18/358,288 filed on 25 Jul. 2023.
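As a simplified sketch of the training idea (not the disclosed MTF 125 implementation), the relationship between a virtual CPU usage feature and a VNE energy label can be fit with ordinary least squares; the function names and synthetic numbers below are illustrative:

```python
def fit_energy_model(vcpu_usage, vne_energy):
    """Least-squares line: predicted_energy = slope * vcpu + intercept.

    vcpu_usage[i] and vne_energy[i] form one time-synchronized
    feature/label pair (a VRUD sample and the VNE ECD reading
    for the same granularity period).
    """
    n = len(vcpu_usage)
    mx = sum(vcpu_usage) / n
    my = sum(vne_energy) / n
    sxx = sum((x - mx) ** 2 for x in vcpu_usage)
    sxy = sum((x - mx) * (y - my) for x, y in zip(vcpu_usage, vne_energy))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def predict_energy(model, vcpu_usage):
    slope, intercept = model
    return slope * vcpu_usage + intercept

# Synthetic samples: energy (W) grows linearly with mean vCPU usage (%)
vcpu = [10.0, 25.0, 40.0, 55.0, 70.0]
energy = [120.0, 150.0, 180.0, 210.0, 240.0]   # = 2*vcpu + 100
model = fit_energy_model(vcpu, energy)
```

A production MTF would use many more features (memory, disk, network usage per VNF) and a richer model class, but the feature-to-label structure is the same.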
Figure 2 shows an example MDA architecture 200 where an MDA inference function is used for VNF energy consumption prediction. Here, an MDA function (MDAF) 851 may be deployed as an AI/ML inference function 245 using one or more ML models 250 to predict the VNF energy consumption of VNFs 103. In the example of Figure 2, the AI/ML inference function 245 utilizes the ML model(s) 250 to predict the ECD 212 for respective VNFs 103 (e.g., VNF ECD 212-1 for VNF 103-1, and so forth, to VNF ECD 212-n for VNF 103-n) based on the VRUD 112-1 to 112-n, and VNE ECD 122. As examples, the AI/ML inference function 245 and/or the ML model(s) 250 may predict the ECD 212 according to any suitable energy efficiency (EE) and/or energy consumption (EC) metrics, such as those discussed herein (see e.g., sections 1.2.3 and 1.2.4, infra), discussed in [TS28554], and/or the like.
Figure 3 shows example samples 300 of VNE ECD 122 and the VRUD 112 for VNFs 103 (see e.g., clause 5.7.1.1.1-3 in [TS28552]), which fluctuate continuously over time. In this example, the VRUD 112 are time synchronized with the VNE ECD 122 so they can be used as data samples (or features) and data labels for model training.
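The time synchronization described above can be sketched as a simple timestamp join that pairs each VRUD feature vector with the VNE ECD label collected at the same time. The data layout and function name below are illustrative assumptions:

```python
def build_training_set(vrud_series, ecd_series):
    """Pair VRUD feature vectors with VNE ECD labels that share the same
    collection timestamp; both fluctuate continuously over time, so only
    time-synchronized pairs are usable as (data sample, data label) pairs."""
    labels_by_ts = {ts: ec for ts, ec in ecd_series}
    samples, labels = [], []
    for ts, features in vrud_series:
        if ts in labels_by_ts:
            samples.append(features)
            labels.append(labels_by_ts[ts])
    return samples, labels

# Timestamps in seconds; one VRUD sample lacks a matching ECD label (600),
# and one ECD label lacks a matching VRUD sample (900).
vrud_series = [(0, [0.3, 0.5]), (300, [0.6, 0.7]), (600, [0.4, 0.2])]
ecd_series = [(0, 4.1), (300, 6.8), (900, 3.0)]
X, y = build_training_set(vrud_series, ecd_series)
# X == [[0.3, 0.5], [0.6, 0.7]], y == [4.1, 6.8]
```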
Figure 4 shows an example of a procedure 400 for VNF energy consumption prediction. The procedure 400 includes an offline training phase (operations 1-6) and an offline inference phase (operations 7-8). The procedure 400 may operate as follows.
At operation 1, the MTF 125 requests a producer of performance assurance management service (MnS) 150 to create a performance management (PM) job to collect measurement data (PM data) from VNFs 103 and/or VNE 101 (see e.g., [TS28550]). In some examples, the PM job request can include any of the parameters discussed herein (see e.g., Table 1.2.2.3-1, infra) and/or as discussed in [TS28550] (e.g., IOC name, IOC instance list, measurement category list and/or list of measurement/KPI type names (e.g., VNE measurement data and/or VNF measurement data), granularity period, reporting period, start time, end/stop time, schedule, stream target, priority, reliability, and/or the like). At operation 2, the producer of performance assurance MnS 150 sends the VNE ECD 122 to the MTF 125. At operations 3.1 to 3.n, the producer of performance assurance MnS 150 sends the VRUD 112 for VNFs 103-1 to 103-n to the MTF 125. At operation 4, the MTF 125 performs model training using data samples of the VRUD 112 for VNFs 103-1 to 103-n, and data labels of the VNE ECD 122.
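A PM job creation request of the kind described in operation 1 might carry parameters along the following lines. The attribute names loosely follow the [TS28550] parameters listed above, but this is not an exact 3GPP schema, and the virtual resource usage measurement type names are assumptions:

```python
# Hypothetical PM job creation request (operation 1). Parameter names
# loosely follow the attributes discussed in [TS28550]; the "VR.*"
# measurement type names for virtual resource usage are illustrative.
pm_job_request = {
    "iocName": "ManagedElement",
    "iocInstanceList": ["VNE-101", "VNF-103-1", "VNF-103-n"],
    "measurementCategoryList": [
        "PEE.VirtualizedNeEC",   # VNE energy consumption data 122
        "VR.VCpuUsageMean",      # virtual resource usage data 112
        "VR.VMemoryUsageMean",
        "VR.VDiskUsageMean",
    ],
    "granularityPeriod": 300,    # seconds
    "reportingPeriod": 900,      # seconds
    "startTime": "2023-10-27T00:00:00Z",
    "stopTime": "2023-10-28T00:00:00Z",
}
```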
At operation 5, the MTF 125 deploys the model to a model inference engine 245. In some examples, the model inference engine 245 is, or is part of, an MDAF 851 and/or is the same or similar as the inference engine 915 and/or inference engine 1045 of Figures 9 and 10. At operation 6, the inference engine 245 requests the producer of performance assurance MnS 150 to create a PM job to collect measurement data from VNFs 103-1 to 103-n. In some examples, the PM job request can include any of the parameters discussed herein (see e.g., Table 1.2.2.3-1, infra) and/or as discussed in [TS28550] (e.g., IOC name, IOC instance list, measurement category list and/or list of measurement/KPI type names (e.g., VNF measurement data), granularity period, reporting period, start time, end/stop time, schedule, stream target, priority, reliability, and/or the like).
At operations 7.1 to 7.n, the producer of performance assurance MnS 150 sends the VRUD 112 for VNFs 103-1 to 103-n to the inference engine 245. At operation 8, the model inference engine 245 (e.g., in the MDAF 851) uses respective VRUDs 112 to predict the energy consumption for corresponding VNFs 103, for example, using the VRUD 112 of VNF 103-1 to predict the energy consumption of VNF 103-1, and so forth to using the VRUD 112 of VNF 103-n to predict the energy consumption of VNF 103-n.
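Operation 8 can be sketched as applying the trained model parameters to each VNF's VRUD sample. The linear model form and all parameter values below are hypothetical, continuing the simplification that the trained model is a bias term plus per-feature weights:

```python
import numpy as np

def predict_vnf_ec(params, vrud_sample):
    """Apply trained model parameters (bias + per-feature weights) to one
    VNF's virtual resource usage sample to obtain its predicted energy
    consumption for the granularity period."""
    return float(params[0] + np.asarray(vrud_sample) @ params[1:])

# Hypothetical trained parameters: baseline energy plus weights for mean
# vCPU, vMemory, and vDisk usage.
params = np.array([5.0, 2.0, 1.0, 0.5])
ec_vnf1 = predict_vnf_ec(params, [0.4, 0.2, 0.1])   # from VRUD 112-1
ec_vnfn = predict_vnf_ec(params, [0.9, 0.8, 0.6])   # from VRUD 112-n
```

Each prediction corresponds to one VNF ECD 212 value (e.g., VNF ECD 212-1 for VNF 103-1).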
1.2. MDA ASSISTED ENERGY SAVING
1.2.1. MDA ASSISTED ENERGY SAVING USE CASES
Operators are aiming at decreasing power consumption in 5G networks to lower their operational expense with energy saving management solutions. Energy saving is achieved by activating the energy saving mode of the NR capacity booster cell or 5GC NFs (e.g., UPF and/or the like). The energy saving decision making is typically based on the load information of the related cells/UPFs, the energy saving policies set by operators and the energy saving recommendations provided by MDAS producer (e.g., MDAS-P or MDA MnS-P). To achieve an optimized balance between the energy consumption and the network performance, MDA can be used to assist the MDAS consumer (MDAS-C or MDA MnS-C) to make energy saving decisions.
To make energy saving decisions, an MDAS-C can determine where energy efficiency issues (e.g., high energy consumption, low energy efficiency) exist, and the cause of those issues. In these examples, MDA 851 is used to correlate and analyze energy saving related performance measurements (e.g., PDCP data volume of cells, power consumption, and/or the like) and network analysis data (e.g., observed service experience related network data analytics) to provide analytics results that indicate the current network energy efficiency. In some low-traffic scenarios, MDA MnS-Cs may expect to reduce energy consumption to save energy. In this case, the MDA MnS-C may request the MDAS-P to report only analytics results related to high energy consumption issues. When the consumer instead expects to improve energy efficiency, even though doing so may lead to high energy consumption in the network or in certain parts of the network, the related issue is low energy efficiency, and the consumer may request analytics results related to low energy efficiency issues. In that case, the target could be to enhance the performance of an NF for a given energy consumption, resulting in a higher energy efficiency of the network.
To make the energy saving decision, it is necessary for the MDAS-C to determine which Energy Efficiency (EE) KPI related factor(s) (e.g., traffic load, end-to-end latency, active UE numbers, and/or the like) are affected or potentially affected. The MDAS-P can utilize historical data to predict the EE KPI related factors (e.g., load variation of cells at some future time, and/or the like). The prediction results of this information can then be used by operators to make energy-saving decisions to guarantee the service experience.
The MDAS-P may also provide energy saving related recommendation with the energy saving state to the MDAS-C. Under the energy saving state, the required network performance and network experience should be guaranteed. Therefore, it is important to formulate appropriate energy saving policies (start time, dynamic threshold setting, base station parameter configuration, and/or the like). The MDAS-C may take the recommendations with the energy saving state into account for making analysis or making energy saving decisions. After the recommendations have been executed, the MDA producer may start evaluating and further analyzing network management data to optimize the recommendations.
As described previously, energy saving analysis is used to determine when to take energy saving actions for 5G/NR and/or 5GC NFs that can be triggered by energy efficiency related measurements and/or KPIs. For example, when the energy efficiency measurements/metrics is/are below a certain threshold, the energy saving analysis can be triggered and/or initiated to analyze the cause and determine the mitigation actions. The energy efficiency for 5G/NR and/or 5GC NFs is tightly coupled to the VNF ECD and/or VNE ECD 122. The energy saving analysis (e.g., as performed by the AI/ML entity 245) is used to predict the energy consumption for VNF instance(s) 103 (e.g., VNF ECD 212), based on virtual resource usage data (e.g., VRUD 112), such as compute usage, virtual memory usage, and virtual disk usage measurements (see e.g., clause 5.7.1.1 in [TS28552]).
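The threshold-based triggering and issue classification described in this section can be sketched as follows. The thresholds, the EE proxy (data volume per unit energy, in the spirit of the EE KPIs in [TS28554]), and the issue labels are illustrative assumptions:

```python
def classify_ee_issue(energy_consumption_kwh: float, data_volume_gb: float,
                      ec_threshold: float, ee_threshold: float) -> list:
    """Triage energy efficiency issues: flag high energy consumption when
    the measured EC exceeds its threshold, and low energy efficiency when
    the EE proxy (data volume per unit energy) falls below its threshold.
    A non-empty result would trigger energy saving analysis."""
    issues = []
    if energy_consumption_kwh > ec_threshold:
        issues.append("HighEnergyConsumption")
    if data_volume_gb / energy_consumption_kwh < ee_threshold:
        issues.append("LowEnergyEfficiency")
    return issues

# A cell consuming 10 kWh to deliver 50 GB, against thresholds of 8 kWh
# and 10 GB/kWh, exhibits both issues.
issues = classify_ee_issue(10.0, 50.0, ec_threshold=8.0, ee_threshold=10.0)
```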
(Tables: use cases for VNF energy consumption analysis; the table content is provided as images in the original publication.)
1.2.2. DEFINITIONS FOR MDA ASSISTED ENERGY SAVING
The MDA type for energy saving analysis is: MDAAssistedEnergySaving.EnergySavingAnalysis.
1.2.2.1. Enabling data
The enabling data for the MDAAssistedEnergySaving.EnergySavingAnalysis MDA type are provided in table 1.2.2.1-1. For general information about enabling data, see clause 8.2.1 of [TS28104].
Table 1.2.2.1-1: Enabling data for energy saving analysis
1.2.2.2. Analytics output
The specific information elements (IEs) of the analytics output for energy saving analysis, in addition to the common information elements of the analytics outputs (see clause 8.3), are provided in table 1.2.2.2-1 and/or table 8.4.4.1.3-1 of [TS28104].
Table 1.2.2.2-1: Analytics output for energy saving analysis
1.2.2.3. PredictedVnfEC «dataType»
The PredictedVnfEC data type specifies the type of predicted VNF energy consumption.
Table 1.2.2.3-1 shows example IEs/parameters/data elements for the PredictedVnfEC data type.
Table 1.2.2.3-1
1.2.3. POWER, ENERGY AND ENVIRONMENTAL (PEE) MEASUREMENTS
The PEE related measurements defined herein are valid for a 5G Physical Network Function (PNF). The NR NRM is defined in [TS28541].
1.2.3.1. VNF Energy Consumption a) The VNF energy consumption measurement provides the energy consumption of the virtualized NE (see e.g., [TS28500]) that contains the NFVI (see e.g., ETSI GS NFV 003 and/or ETSI GR NFV 003) where VNF instance(s) are deployed. b) OM. c) This measurement is obtained by mapping the energy consumption E(Tr) received from the PEE sensor(s) 120 (see e.g., clause 4.4.3.1 in [ES202336-12]) to the ManagedElement MOI representing the VNE 101. In some examples, the energy consumption E(Tr) can be calculated according to equation 1.2.3.1-1, wherein:
E(Tr) = Σ_{j=1}^{Tr/Ta} P(j) × Ta, where P(j) = u(j) × i(j)     (1.2.3.1-1)
In equation 1.2.3.1-1, Tr is a record period (see e.g., clause 4.4.3.4 in [ES202336-12]), P(j) is a measurement of power, and u(j) and i(j) are values of voltage and current acquired over a sampling period Ta by analog-digital conversion equipment of measurements at the AC or DC power interface of the NE and/or VNE 101 under measurement (see e.g., table 1 in [ES202336-12]). In some examples, E(Tr) represents the energy consumed within a granularity period (e.g., granularityPeriod as discussed in clause 6.1.1.2 of [TS28550]). d) In some examples, this measurement is a real value in Watt-hours (Wh), kilowatt-hours (kWh), or the like. For example, the unit of this measurement may be in kWh. In other examples, the unit of this measurement may be in Joules (J) or some other unit of energy. e) In some examples, the measurement name has the form of “PEE.VirtualizedNeEC”. f) In some examples, this measurement object is a ManagedElement. g) In some examples, this measurement is valid for packet switched traffic. h) In some examples, this measurement can be used in a 5GS (see e.g., Figure 5). i) One example usage of this measurement is in the VNF energy consumption prediction.
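The energy computation described by equation 1.2.3.1-1 can be evaluated numerically as follows, assuming the power samples P(j) are formed as the product of the sampled voltage u(j) and current i(j); the function name and the conversion to watt-hours are illustrative:

```python
def energy_consumption_wh(u_samples, i_samples, ta_seconds):
    """Compute E(Tr): sum the sampled power P(j) = u(j) * i(j) over the
    record period, weight each sample by the sampling period Ta to get
    energy in joules (W*s), and convert joules to watt-hours."""
    energy_joules = sum(u * i for u, i in zip(u_samples, i_samples)) * ta_seconds
    return energy_joules / 3600.0

# Constant 48 V / 10 A DC load sampled once per second for one hour:
e_wh = energy_consumption_wh([48.0] * 3600, [10.0] * 3600, 1.0)
# e_wh == 480.0 (a steady 480 W draw for one hour is 480 Wh)
```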
1.2.4. VNF ENERGY CONSUMPTION (EC)
1.2.4.1. Predicted VNF energy consumption KPI a) The name of this KPI may be “PredictedEC-VNF”. b) This KPI describes the predicted energy consumption for a VNF instance 103. c) This KPI is mapped from the predicted VNF energy consumption that was provided by the MDA function 851. The value of this KPI is in kilowatt-hours (kWh). In some examples, the PredictedEC-VNF can be expressed as shown by equation 1.2.4.1-1.
PredictedEC-VNF = predictedVnfEnergyConsumption (1.2.4.1-1)
The predictedVnfEnergyConsumption parameter is described in table 1.2.2.2-1, supra (see also e.g., clause 8.4.4.1.3 in [TS28104]). d) In some examples, this KPI is a ManagedFunction.
2. NETWORK, SYSTEM, AND DEVICE CONFIGURATIONS AND ARRANGEMENTS
Figure 5 depicts an example network architecture 500. The network 500 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described examples may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
The network 500 includes a UE 502, which is any mobile or non-mobile computing device designed to communicate with a RAN 504 via an over-the-air connection. The UE 502 is communicatively coupled with the RAN 504 by a Uu interface, which may be applicable to both LTE and NR systems. Examples of the UE 502 include, but are not limited to, a smartphone, tablet computer, wearable device (e.g., smart watch, fitness tracker, smart glasses, smart clothing/fabrics, head-mounted displays, smart shoes, and/or the like), desktop computer, workstation, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, machine-to-machine (M2M), device-to-device (D2D), machine-type communication (MTC) device, Internet of Things (IoT) device, smart appliance, flying drone or unmanned aerial vehicle (UAV), terrestrial drone or autonomous vehicle, robot, electronic signage, single-board computer (SBC) (e.g., Raspberry Pi, Arduino, Intel Edison, and the like), plug computers, and/or any type of computing device such as any of those discussed herein. The UE 502 may be the same or similar to any of the other UEs discussed herein such as, for example, UE 602, hardware resources 700, and/or any other UE discussed herein.
The network 500 may include a set of UEs 502 coupled directly with one another via a device-to-device (D2D), proximity services (ProSe), PC5, and/or sidelink (SL) interface, and/or any other suitable interface such as any of those discussed herein. These UEs 502 may be M2M, D2D, MTC, and/or IoT devices, and/or V2X systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, and the like. The UE 502 may perform blind decoding attempts of SL channels/links according to the various examples herein.
In some examples, the UE 502 may additionally communicate with an AP 506 via an over-the-air (OTA) connection. The AP 506 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 504. The connection between the UE 502 and the AP 506 may be consistent with any IEEE 802.11 protocol. Additionally, the UE 502, RAN 504, and AP 506 may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP). Cellular-WLAN aggregation may involve the UE 502 being configured by the RAN 504 to utilize both cellular radio resources and WLAN resources.
The RAN 504 includes one or more network access nodes (NANs) 514 (also referred to as “RAN nodes 514”). The NANs 514 terminate air-interface(s) for the UE 502 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols. In this manner, the NAN 514 enables data/voice connectivity between a core network (CN) 540 and the UE 502. The NANs 514 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof. In these implementations, a NAN 514 may be referred to as a base station (BS), next generation nodeB (gNB), RAN node, eNodeB (eNB), next generation (ng)-eNB, NodeB, RSU, TRP, and/or the like.
One example implementation is a “CU/DU split” architecture where the NANs 514 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like). In some implementations, the one or more RUs may be individual RSUs. In some implementations, the CU/DU split may include an ng-eNB-CU and one or more ng-eNB-DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively. The NANs 514 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
The set of NANs 514 are coupled with one another via respective Xn interfaces if the RAN 504 is an NG-RAN 504. The X2/Xn interfaces, which may be separated into control/user plane interfaces in some examples, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, and the like.
The ANs of the RAN 504 may each manage one or more cells, cell groups, component carriers, and the like to provide the UE 502 with an air interface for network access. The UE 502 may be simultaneously connected with a set of cells provided by the same or different NANs 514 of the RAN 504. For example, the UE 502 and RAN 504 may use carrier aggregation to allow the UE 502 to connect with a set of component carriers, each corresponding to a PCell or SCell. In dual connectivity scenarios, a first NAN 514 may be a master node that provides an MCG and a second NAN 514 may be a secondary node that provides an SCG. The first/second NANs 514 may be any combination of eNB, gNB, ng-eNB, and the like.
The RAN 504 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/SCells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
Additionally or alternatively, individual UEs 502 provide radio information to one or more NANs 514 and/or one or more edge compute nodes (e.g., edge servers/hosts, and the like). The radio information may be in the form of one or more measurement reports, and/or may include, for example, signal strength measurements, signal quality measurements, and/or the like. Each measurement report is tagged with a timestamp and the location of the measurement (e.g., the current location of the UE 502). As examples, the measurements collected by the UEs 502 and/or included in the measurement reports may include one or more of the following: bandwidth (BW), network or cell load, latency, jitter, round trip time (RTT), number of interrupts, out-of-order delivery of data packets, transmission power, bit error rate, bit error ratio (BER), Block Error Rate (BLER), packet error ratio (PER), packet loss rate, packet reception rate (PRR), data rate, peak data rate, end-to-end (e2e) delay, signal-to-noise ratio (SNR), signal-to-noise and interference ratio (SINR), signal-plus-noise-plus-distortion to noise-plus-distortion (SINAD) ratio, carrier-to-interference plus noise ratio (CINR), Additive White Gaussian Noise (AWGN), energy per bit to noise power density ratio (Eb/No), energy per chip to interference power density ratio (Ec/Io), energy per chip to noise power density ratio (Ec/N0), peak-to-average power ratio (PAPR), reference signal received power (RSRP), reference signal received quality (RSRQ), received signal strength indicator (RSSI), received channel power indicator (RCPI), received signal to noise indicator (RSNI), Received Signal Code Power (RSCP), average noise plus interference (ANPI), GNSS timing of cell frames for UE positioning for E-UTRAN or 5G/NR (e.g., a timing between an AP 506 or RAN node 514 reference time and a GNSS-specific reference time for a given GNSS), GNSS code measurements (e.g., the GNSS code phase (integer and fractional parts) of the spreading code of 
the ith GNSS satellite signal), GNSS carrier phase measurements (e.g., the number of carrier-phase cycles (integer and fractional parts) of the ith GNSS satellite signal, measured since locking onto the signal; also called Accumulated Delta Range (ADR)), channel interference measurements, thermal noise power measurements, received interference power measurements, power histogram measurements, channel load measurements, STA statistics, and/or other like measurements. The RSRP, RSSI, and/or RSRQ measurements may include RSRP, RSSI, and/or RSRQ measurements of cell-specific reference signals, channel state information reference signals (CSI-RS), and/or synchronization signals (SS) or SS blocks for 3GPP networks (e.g., LTE or 5G/NR), and RSRP, RSSI, RSRQ, RCPI, RSNI, and/or ANPI measurements of various beacon, Fast Initial Link Setup (FILS) discovery frames, or probe response frames for WLAN/WiFi (e.g., [IEEE80211]) networks. Other measurements may be additionally or alternatively used, such as those discussed in 3GPP TS 36.214, 3GPP TS 38.215 (“[TS38215]”), 3GPP TS 38.314, IEEE Standard for Information Technology - Telecommunications and Information Exchange between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std 802.11-2020, pp. 1-4379 (26 Feb. 2021) (“[IEEE80211]”), and/or the like. Additionally or alternatively, any of the aforementioned measurements (or combination of measurements) may be collected by one or more NANs 514 and provided to the edge compute node(s).
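One way to represent the tagged measurement reports described above is a simple container holding a timestamp, a location, and a map of named measurements. This structure is an illustrative assumption, not a 3GPP-defined format:

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementReport:
    """Container for a UE radio measurement report; as described above,
    every report is tagged with a timestamp and the location of the
    measurement (e.g., the current location of the UE)."""
    timestamp: float                 # e.g., seconds since epoch
    location: tuple                  # e.g., (latitude, longitude)
    measurements: dict = field(default_factory=dict)

report = MeasurementReport(
    timestamp=1698364800.0,          # hypothetical collection time
    location=(37.39, -121.96),       # hypothetical UE location
    measurements={"RSRP": -95.0, "RSRQ": -11.5, "SINR": 13.2},
)
```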
Additionally or alternatively, the measurements can include one or more of the following measurements: measurements related to Data Radio Bearer (DRB) (e.g., number of DRBs attempted to setup, number of DRBs successfully setup, number of released active DRBs, in-session activity time for DRB, number of DRBs attempted to be resumed, number of DRBs successfully resumed, and the like); measurements related to RRC (e.g., mean number of RRC connections, maximum number of RRC connections, mean number of stored inactive RRC connections, maximum number of stored inactive RRC connections, number of attempted, successful, and/or failed RRC connection establishments, and the like); measurements related to UE Context (UECNTX); measurements related to Radio Resource Utilization (RRU) (e.g., DL total PRB usage, UL total PRB usage, distribution of DL total PRB usage, distribution of UL total PRB usage, DL PRB used for data traffic, UL PRB used for data traffic, DL total available PRBs, UL total available PRBs, and the like); measurements related to Registration Management (RM); measurements related to Session Management (SM) (e.g., number of PDU sessions requested to setup; number of PDU sessions successfully setup; number of PDU sessions failed to setup, and the like); measurements related to GTP Management (GTP); measurements related to IP Management (IP); measurements related to Policy Association (PA); measurements related to Mobility Management (MM) (e.g., for inter-RAT, intra-RAT, and/or Intra/Inter-frequency handovers and/or conditional handovers: number of requested, successful, and/or failed handover preparations; number of requested, successful, and/or failed handover resource allocations; number of requested, successful, and/or failed handover executions; mean and/or maximum time of requested handover executions; number of successful and/or failed handover executions per beam pair, and the like); measurements related to Virtualized Resource(s) (VR); measurements related 
to Carrier (CARR); measurements related to QoS Flows (QF) (e.g., number of released active QoS flows, number of QoS flows attempted to release, in-session activity time for QoS flow, in-session activity time for a UE 502, number of QoS flows attempted to setup, number of QoS flows successfully established, number of QoS flows failed to setup, number of initial QoS flows attempted to setup, number of initial QoS flows successfully established, number of initial QoS flows failed to setup, number of QoS flows attempted to modify, number of QoS flows successfully modified, number of QoS flows failed to modify, and the like); measurements related to Application Triggering (AT); measurements related to Short Message Service (SMS); measurements related to Power, Energy and Environment (PEE); measurements related to NF service (NFS); measurements related to Packet Flow Description (PFD); measurements related to Random Access Channel (RACH); measurements related to Measurement Report (MR); measurements related to Layer 1 Measurement (L1M); measurements related to Network Slice Selection (NSS); measurements related to Paging (PAG); measurements related to Non-IP Data Delivery (NIDD); measurements related to external parameter provisioning (EPP); measurements related to traffic influence (TI); measurements related to Connection Establishment (CE); measurements related to Service Parameter Provisioning (SPP); measurements related to Background Data Transfer Policy (BDTP); measurements related to Data Management (DM); and/or any other performance measurements such as those discussed in 3GPP TS 28.552 (“[TS28552]”), 3GPP TS 32.425 (“[TS32425]”), and/or the like.
The radio information may be reported in response to a trigger event and/or on a periodic basis. Additionally or alternatively, individual UEs 502 report radio information either at a low periodicity or a high periodicity depending on a data transfer that is to take place, and/or other information about the data transfer. Additionally or alternatively, the edge compute node(s) may request the measurements from the NANs 514 at low or high periodicity, or the NANs 514 may provide the measurements to the edge compute node(s) at low or high periodicity. Additionally or alternatively, the edge compute node(s) may obtain other relevant data from other edge compute node(s), core network functions (NFs), application functions (AFs), and/or other UEs 502 such as Key Performance Indicators (KPIs), with the measurement reports or separately from the measurement reports.
Additionally or alternatively, in cases where there is a discrepancy in the observation data from one or more UEs, one or more RAN nodes, and/or core network NFs (e.g., missing reports, erroneous data, and the like), simple imputations may be performed to supplement the obtained observation data such as, for example, substituting values from previous reports and/or historical data, applying an extrapolation filter, and/or the like. Additionally or alternatively, acceptable bounds for the observation data may be predetermined or configured. For example, CQI and MCS measurements may be configured to only be within ranges defined by suitable 3GPP standards. In cases where a reported data value does not make sense (e.g., the value exceeds an acceptable range/bounds, or the like), such values may be dropped for the current learning/training episode or epoch. For example, packet delivery delay bounds may be defined or configured, and packets determined to have been received after the packet delivery delay bound may be dropped.
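The bounds checking and simple imputation described above can be sketched as follows. The range values and the substitution policy (reuse the most recent in-range report) are illustrative assumptions:

```python
def sanitize_observations(values, lower, upper, history):
    """Bounds-check observation data: drop values outside the configured
    acceptable range for the current training episode; impute missing
    values (None) by substituting the most recent historical report."""
    cleaned = []
    for v in values:
        if v is None:                  # missing report: simple imputation
            if history:
                cleaned.append(history[-1])
            continue                   # no history available: leave the gap
        if lower <= v <= upper:        # in-range value: keep and record
            cleaned.append(v)
            history.append(v)
        # out-of-range values are dropped (not appended)
    return cleaned

# CQI-like values bounded to [0, 31]: 15 kept, None imputed from the last
# in-range report (15), 40 dropped as out of bounds, 3 kept.
obs = sanitize_observations([15, None, 40, 3], lower=0, upper=31, history=[12])
# obs == [15, 15, 3]
```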
The UE 502 can also perform reference signal (RS) measurement and reporting procedures to provide the network with information about the quality of one or more wireless channels and/or the communication media in general, and this information can be used to optimize various aspects of the communication system. As examples, the measurement and reporting procedures performed by the UE 502 can include those discussed in 3GPP TS 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.214, [TS38215], 3GPP TS 38.101-1, 3GPP TS 38.104, 3GPP TS 38.133, [TS38331], and/or the like. The physical signals and/or reference signals can include demodulation reference signals (DM-RS), phase-tracking reference signals (PT-RS), positioning reference signal (PRS), channel-state information reference signal (CSI-RS), synchronization signal block (SSB), primary synchronization signal (PSS), secondary synchronization signal (SSS), and sounding reference signal (SRS).
In any of the examples discussed herein, any suitable data collection and/or measurement mechanism(s) may be used to collect the observation data. For example, data marking (e.g., sequence numbering, and the like), packet tracing, signal measurement, data sampling, and/or timestamping techniques may be used to determine any of the aforementioned metrics/observations. The collection of data may be based on occurrence of events that trigger collection of the data. Additionally or alternatively, data collection may take place at the initiation or termination of an event. The data collection can be continuous, discontinuous, and/or have start and stop times. The data collection techniques/mechanisms may be specific to a HW configuration/implementation or non-HW-specific, or may be based on various software parameters (e.g., OS type and version, and the like). Various configurations may be used to define any of the aforementioned data collection parameters. Such configurations may be defined by suitable specifications/standards, such as 3GPP (e.g., [5GEdge]), ETSI (e.g., [MEC]), O-RAN (e.g., [O-RAN]), Intel® Smart Edge Open (e.g., [ISEO]), IETF (e.g., [MAMS]), IEEE/WiFi (e.g., [IEEE80211] and the like), and/or any other like standards such as those discussed herein.
In some examples, the RAN 504 is an E-UTRAN with one or more eNBs, and provides an LTE air interface (Uu) with the parameters and characteristics at least as discussed in 3GPP TS 36.300. In some examples, the RAN 504 is a next generation (NG)-RAN 504 with a set of RAN nodes 514 (including gNBs 514a and ng-eNBs 514b). Each gNB 514a connects with 5G-enabled UEs 502 using a 5G-NR Uu interface with parameters and characteristics as discussed in [TS38300], among many other 3GPP standards, including any of those discussed herein. Where the NG-RAN 504 includes a set of ng-eNBs 514b, the one or more ng-eNBs 514b connect with a UE 502 via the 5G Uu and/or LTE Uu interface. The gNBs 514a and the ng-eNBs 514b connect with the 5GC 540 through respective NG interfaces, which include an N2 interface, an N3 interface, and/or other interfaces. The gNBs 514a and the ng-eNBs 514b are connected with each other over an Xn interface. Additionally, individual gNBs 514a are connected to one another via respective Xn interfaces, and individual ng-eNBs 514b are connected to one another via respective Xn interfaces. In some examples, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 504 and a UPF 548 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 504 and an AMF 544 (e.g., N2 interface).
The NG-RAN 504 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a DL resource grid that includes PSS/SSS/PBCH. The 5G-NR air interface may utilize BWPs for various purposes. For example, BWP can be used for dynamic adaptation of the SCS. For example, the UE 502 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 502, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE 502 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 502 and in some cases at the gNB 514a. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
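The BWP-based power saving adaptation described above might be sketched as a simple load-driven policy. The PRB counts and the buffer threshold below are illustrative values, not configuration from any specification:

```python
def select_bwp(buffered_bytes: int, small_bwp_prbs: int = 24,
               large_bwp_prbs: int = 273, threshold_bytes: int = 100_000) -> int:
    """Hypothetical BWP adaptation policy: use a narrow BWP (fewer PRBs)
    for light traffic to allow power saving at the UE (and in some cases
    at the gNB), and a wide BWP for heavy traffic."""
    return small_bwp_prbs if buffered_bytes < threshold_bytes else large_bwp_prbs
```

For example, a UE with only 1 kB buffered would stay on the narrow BWP, while one with 500 kB buffered would be switched to the wide BWP.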
In some implementations, individual gNBs 514a can include a gNB-CU and a set of gNB-DUs. Additionally or alternatively, gNBs 514a can include one or more RUs. In these implementations, the gNB-CU may be connected to each gNB-DU via respective F1 interfaces. In case of network sharing with multiple cell ID broadcast(s), each cell identity associated with a subset of PLMNs corresponds to a gNB-DU and the gNB-CU to which it is connected, and the corresponding gNB-DUs share the same physical layer cell resources. For resiliency, a gNB-DU may be connected to multiple gNB-CUs by appropriate implementation. Additionally, a gNB-CU can be separated into gNB-CU control plane (gNB-CU-CP) and gNB-CU user plane (gNB-CU-UP) functions. The gNB-CU-CP is connected to a gNB-DU through an F1 control plane interface (F1-C), the gNB-CU-UP is connected to the gNB-DU through an F1 user plane interface (F1-U), and the gNB-CU-UP is connected to the gNB-CU-CP through an E1 interface. In some implementations, one gNB-DU is connected to only one gNB-CU-CP, and one gNB-CU-UP is connected to only one gNB-CU-CP. For resiliency, a gNB-DU and/or a gNB-CU-UP may be connected to multiple gNB-CU-CPs by appropriate implementation. One gNB-DU can be connected to multiple gNB-CU-UPs under the control of the same gNB-CU-CP, and one gNB-CU-UP can be connected to multiple DUs under the control of the same gNB-CU-CP. Data forwarding between gNB-CU-UPs during intra-gNB-CU-CP handover within a gNB may be supported by Xn-U. Similarly, individual ng-eNBs 514b can include an ng-eNB-CU and a set of ng-eNB-DUs. In these implementations, the ng-eNB-CU and each ng-eNB-DU are connected to one another via respective W1 interfaces. An ng-eNB can include an ng-eNB-CU-CP, one or more ng-eNB-CU-UP(s), and one or more ng-eNB-DU(s). An ng-eNB-CU-CP and an ng-eNB-CU-UP are connected via the E1 interface. An ng-eNB-DU is connected to an ng-eNB-CU-CP via the W1-C interface, and to an ng-eNB-CU-UP via the W1-U interface.
The general principles described herein with respect to gNB aspects also apply to ng-eNB aspects and the corresponding E1 and W1 interfaces, unless explicitly specified otherwise.
The node hosting user plane part of the PDCP protocol layer (e.g., gNB-CU, gNB-CU-UP, and for EN-DC, MeNB or SgNB depending on the bearer split) performs user inactivity monitoring and further informs its inactivity or (re)activation to the node having control plane connection towards the core network (e.g., over El, X2, or the like). The node hosting the RLC protocol layer (e.g., gNB-DU) may perform user inactivity monitoring and further inform its inactivity or (re)activation to the node hosting the control plane (e.g., gNB-CU or gNB-CU-CP).
In these implementations, the NG-RAN 504 is layered into a Radio Network Layer (RNL) and a Transport Network Layer (TNL). The NG-RAN 504 architecture (e.g., the NG-RAN logical nodes and interfaces between them) is part of the RNL. For each NG-RAN interface (e.g., NG, Xn, F1, and the like), the related TNL protocol and functionality are specified. The TNL provides services for user plane transport and/or signaling transport. In NG-Flex configurations, each NG-RAN node is connected to all AMFs 544 of AMF sets within an AMF region supporting at least one slice also supported by the NG-RAN node. The AMF Set and the AMF Region are defined in [TS23501].
The RAN 504 is communicatively coupled to CN 540 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 502). The components of the CN 540 may be implemented in one physical node or separate physical nodes. In some examples, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 540 onto physical compute/storage resources in servers, switches, and the like. A logical instantiation of the CN 540 may be referred to as a network slice, and a logical instantiation of a portion of the CN 540 may be referred to as a network sub-slice.
In the example of Figure 5, the CN 540 is a 5GC 540 including an Authentication Server Function (AUSF) 542, Access and Mobility Management Function (AMF) 544, Session Management Function (SMF) 546, User Plane Function (UPF) 548, Network Slice Selection Function (NSSF) 550, Network Exposure Function (NEF) 552, Network Repository Function (NRF) 554, Policy Control Function (PCF) 556, Unified Data Management (UDM) 558, Unified Data Repository (UDR), Application Function (AF) 560, and Network Data Analytics Function (NWDAF) 562 coupled with one another over various interfaces as shown. The NFs in the 5GC 540 are briefly introduced as follows.
The NWDAF 562 is an NF capable of collecting data from UEs 502, other NF(s) in the 5GC 540, Operations, Administration and Maintenance (OAM) entities/functions, MnS (see e.g., Figure 8), MnF (see e.g., Figure 8), AFs 560, DNs 536, server(s) 538, cloud computing services, edge compute nodes and/or edge networks, and/or other entities/elements that can be used for analytics. The NWDAF 562 includes one or more of the following functionalities: support data collection from NFs and AFs 560; support data collection from OAM; NWDAF service registration and metadata exposure to NFs and AFs 560; support analytics information provisioning to NFs and AFs 560; and support ML model training and provisioning to NWDAF(s) 562 (e.g., those containing an analytics logical function). Some or all of the NWDAF functionalities can be supported in a single instance of an NWDAF 562. The NWDAF 562 also includes an analytics reporting capability, which comprises means that allow discovery of the type of analytics that can be consumed by an external party and/or the request for consumption of analytics information generated by the NWDAF 562. The NWDAF 562 can collect data from NF(s) and/or other entities/elements/functions over an Nnf service-based interface associated with the NF(s) and/or other entities/elements/functions. The NWDAF 562 belongs to the same PLMN as the NF that provides the data. The Nnf interface is defined for the NWDAF 562 to request subscription to data delivery for a particular context, cancel subscription to data delivery, and request a specific report of data for a particular context. The 5GS architecture also allows the NWDAF 562 to retrieve management data from an OAM entity by invoking OAM services.
The NWDAF 562 interacts with different entities for different purposes, such as one or more of the following: data collection based on subscription to events provided by the AMF 544, SMF 546, PCF 556, UDM 558, NSACF, AF 560 (directly or via the NEF 552), and OAM; analytics and data collection using the Data Collection Coordination Function (DCCF); retrieval of information from data repositories (e.g., the UDR via the UDM 558 for subscriber-related information); data collection of location information from the LCS system; storage and retrieval of information from an Analytics Data Repository Function (ADRF); analytics and data collection from a Messaging Framework Adaptor Function (MFAF); retrieval of information about NFs (e.g., from the NRF 554 for NF-related information); on-demand provision of analytics to consumers, as specified in clause 6 of [TS23288]; and/or provision of bulked data related to analytics ID(s). NWDAF discovery and selection procedures are discussed in clause 6.3.13 of [TS23501] and clause 5.2 of [TS23288].
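The Nnf-style data collection operations described above (subscribe to data delivery for a particular context, cancel a subscription, and request a specific report of data for a context) can be illustrated with a minimal in-memory stand-in. The class name, method names, and data shapes below are illustrative assumptions, not the actual 3GPP service operations.

```python
import itertools

class DataCollectionService:
    """Minimal in-memory stand-in for Nnf-style data collection
    (subscribe / cancel / one-shot report). Hypothetical API for
    illustration only."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._subscriptions = {}   # subscription_id -> (context, callback)
        self._data = {}            # context -> list of collected samples

    def subscribe(self, context: str, callback) -> int:
        """Subscribe to data delivery for a particular context."""
        sub_id = next(self._ids)
        self._subscriptions[sub_id] = (context, callback)
        return sub_id

    def cancel(self, sub_id: int) -> None:
        """Cancel a previously created subscription."""
        self._subscriptions.pop(sub_id, None)

    def publish(self, context: str, sample: dict) -> None:
        """Data source side: record a sample and notify subscribers."""
        self._data.setdefault(context, []).append(sample)
        for ctx, cb in self._subscriptions.values():
            if ctx == context:
                cb(sample)

    def report(self, context: str) -> list:
        """One-shot report of data collected for a particular context."""
        return list(self._data.get(context, []))
```

In use, a consumer subscribes with a callback for notifications, cancels when done, and can still request a full report of everything collected for the context.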
A single instance or multiple instances of the NWDAF 562 may be deployed in a PLMN. If multiple NWDAF 562 instances are deployed, the architecture supports deploying the NWDAF 562 as a central NF, as a collection of distributed NFs, or as a combination of both. If multiple NWDAF 562 instances are deployed, an NWDAF 562 can act as an aggregation point (e.g., an aggregator NWDAF 562) and collect analytics information from other NWDAFs 562, which may have different serving areas, to produce the aggregated analytics (e.g., per analytics ID), possibly together with analytics generated by itself. When multiple NWDAFs 562 exist, not all of them need to be able to provide the same type of analytics results. For example, some of the NWDAFs 562 can be specialized in providing certain types of analytics. An analytics ID information element is used to identify the type of supported analytics that an NWDAF 562 can generate. In some implementations, NWDAF 562 instance(s) can be collocated with a 5GS NF. The NWDAF 562 may contain an analytics logical function (AnLF) and/or a model training logical function (MTLF). The NWDAF 562 can contain only an MTLF, only an AnLF, or both logical functions. The 5GS architecture allows an NWDAF containing an AnLF (referred to herein as “NWDAF-AnLF”) to use trained ML model provisioning services from the same or a different NWDAF containing an MTLF (also referred to herein as “NWDAF-MTLF”). The Nnwdaf interface is used by the NWDAF-AnLF to request and subscribe to trained ML model provisioning services provided by the NWDAF-MTLF. The NWDAF 562 provides an Nnwdaf_MLModelProvision service that enables an NF service consumer (NFc) to receive a notification when an ML model matching the subscription parameters becomes available in the NWDAF-MTLF (see e.g., clause 7.5 of [TS23288]). The NWDAF 562 provides an Nnwdaf_MLModelInfo service that enables an NFc to request and get ML model information from the NWDAF-MTLF (see e.g., clause 7.6 of [TS23288]).
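The aggregator NWDAF behavior described above (combining per-analytics-ID results collected from NWDAFs with different serving areas) can be sketched as a small aggregation step. The report shape and the choice of averaging as the combination rule are illustrative assumptions; real aggregation logic is analytics-ID specific.

```python
from collections import defaultdict

def aggregate_analytics(reports):
    """Combine per-instance analytics reports into one result per analytics
    ID by averaging the reported values. Each report is assumed to be a
    (analytics_id, serving_area, value) tuple (hypothetical shape)."""
    grouped = defaultdict(list)
    for analytics_id, _serving_area, value in reports:
        grouped[analytics_id].append(value)
    # One aggregated value per analytics ID across all serving areas.
    return {aid: sum(vals) / len(vals) for aid, vals in grouped.items()}
```

For example, two NF-load reports from different serving areas fold into a single aggregated NF-load figure, while an analytics ID reported by only one instance passes through unchanged.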
The AnLF is a logical function in the NWDAF 562 that performs inference, derives analytics information (e.g., derives statistics, inferences, and/or predictions based on analytics consumer requests), and exposes analytics services (e.g., Nnwdaf_AnalyticsSubscription or Nnwdaf_AnalyticsInfo). Analytics information is either statistical information about past events or predictive information. The MTLF is a logical function in the NWDAF 562 that trains AI/ML models and exposes new training services (e.g., providing trained ML models) as defined in clauses 7.5 and 7.6 of [TS23288].
Since multiple NWDAF 562 instances may be deployed in a network, an NFc can utilize the NRF 554 to discover NWDAF 562 instance(s) unless NWDAF information is available by other means (e.g., locally configured on NFcs). NFcs may make an additional query to the UDM 558, when supported. An NWDAF selection function in an NFc selects an NWDAF 562 (or NWDAF-MTLF and/or NWDAF-AnLF) instance based on the available NWDAF 562 instances, a list of supported analytics ID(s) (e.g., possibly per supported service) stored in/obtained from the NRF 554, NWDAF capabilities (e.g., analytics aggregation capability, analytics metadata provisioning capability, ML model training capabilities, ML model deployment capabilities, and/or the like), and/or other NRF 554 registration elements of the NF profile. Additional aspects of NWDAF 562 functionality are defined in 3GPP TS 23.288 (“[TS23288]”).
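The NWDAF selection step described above (filtering candidate instances by supported analytics IDs and capabilities taken from NRF registration data) can be sketched as a filter over profiles. The profile fields and capability names below are illustrative assumptions, a small subset of the real NF profile.

```python
from dataclasses import dataclass, field

@dataclass
class NwdafProfile:
    """Subset of an NRF registration used for NWDAF selection
    (assumed, simplified fields)."""
    instance_id: str
    analytics_ids: set
    capabilities: set = field(default_factory=set)

def select_nwdaf(profiles, analytics_id, required_caps=frozenset()):
    """Return NWDAF instances that support the requested analytics ID
    and offer at least the required capabilities."""
    return [
        p for p in profiles
        if analytics_id in p.analytics_ids and required_caps <= p.capabilities
    ]
```

An NFc would typically apply further criteria (serving area, load, supported analytics delay) to the surviving candidates before picking one.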
The AUSF 542 stores data for authentication of the UE 502 and handles authentication-related functionality. The AUSF 542 may facilitate a common authentication framework for various access types.
The AMF 544 allows other functions of the 5GC 540 to communicate with the UE 502 and the RAN 504 and to subscribe to notifications about mobility events with respect to the UE 502. The AMF 544 is also responsible for registration management (e.g., for registering UE 502), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 544 provides transport for SM messages between the UE 502 and the SMF 546, and acts as a transparent proxy for routing SM messages. The AMF 544 also provides transport for SMS messages between the UE 502 and an SMSF. The AMF 544 interacts with the AUSF 542 and the UE 502 to perform various security anchor and context management functions. Furthermore, the AMF 544 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 504 and the AMF 544. The AMF 544 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
The AMF 544 also supports NAS signaling with the UE 502 over an N3IWF interface. The N3IWF provides access to untrusted entities. The N3IWF may be a termination point for the N2 interface between the (R)AN 504 and the AMF 544 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 504 and the UPF 548 for the user plane. As such, the N3IWF handles N2 signaling from the SMF 546 and the AMF 544 for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPsec and N3 tunneling, marks N3 user-plane packets in the UL, and enforces QoS corresponding to N3 packet marking taking into account QoS requirements associated with such marking received over N2. The N3IWF may also relay UL and DL control-plane NAS signaling between the UE 502 and the AMF 544 via an N1 reference point between the UE 502 and the AMF 544, and relay UL and DL user-plane packets between the UE 502 and UPF 548. The N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 502. The AMF 544 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 544 and an N17 reference point between the AMF 544 and a 5G-EIR (not shown by Figure 5). In addition to the functionality of the AMF 544 described herein, the AMF 544 may provide support for Network Slice restriction and Network Slice instance restriction based on NWDAF analytics.
The SMF 546 is responsible for SM (e.g., session establishment, tunnel management between UPF 548 and NAN 514); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 548 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; DL data notification; initiating AN specific SM information, sent via AMF 544 over N2 to NAN 514; and determining SSC mode of a session. SM refers to management of a PDU session, and a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 502 and the DN 536. The SMF 546 may also include the following functionalities to support edge computing enhancements (see e.g., [TS23548]): selection of EASDF 561 and provision of its address to the UE as the DNS server for the PDU session; usage of EASDF 561 services as defined in [TS23548]; and for supporting the application layer architecture defined in [TS23558], provision and updates of ECS address configuration information to the UE. Discovery and selection procedures for EASDFs 561 are discussed in [TS23501] § 6.3.23.
The UPF 548 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to the data network 536, and a branching point to support multi-homed PDU sessions. The UPF 548 also performs packet routing and forwarding, performs packet inspection, enforces the user plane part of policy rules, lawfully intercepts packets (UP collection), performs traffic usage reporting, performs QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs UL traffic verification (e.g., SDF-to-QoS flow mapping), performs transport level packet marking in the UL and DL, and performs DL packet buffering and DL data notification triggering. The UPF 548 may include an UL classifier to support routing traffic flows to a data network.
The NSSF 550 selects a set of network slice instances serving the UE 502. The NSSF 550 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 550 also determines an AMF set to be used to serve the UE 502, or a list of candidate AMFs 544 based on a suitable configuration and possibly by querying the NRF 554. The selection of a set of network slice instances for the UE 502 may be triggered by the AMF 544 with which the UE 502 is registered by interacting with the NSSF 550; this may lead to a change of AMF 544. The NSSF 550 interacts with the AMF 544 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
The NEF 552 securely exposes services and capabilities provided by 3GPP NFs for third party, internal exposure/re-exposure, AFs 560, edge computing networks/frameworks, and the like. In such examples, the NEF 552 may authenticate, authorize, or throttle the AFs 560. The NEF 552 stores/retrieves information as structured data using the Nudr interface to a Unified Data Repository (UDR). The NEF 552 also translates information exchanged with the AF 560 and information exchanged with internal NFs. For example, the NEF 552 may translate between an AF-Service-Identifier and internal 5GC information, such as DNN and S-NSSAI, as described in clause 5.6.7 of [TS23501]. In particular, the NEF 552 handles masking of network and user sensitive information to external AFs 560 according to the network policy. The NEF 552 also receives information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 552 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 552 to other NFs and AFs, or used for other purposes such as analytics. For example, NWDAF analytics may be securely exposed by the NEF 552 for an external party, as specified in [TS23288]. Furthermore, data provided by an external party may be collected by the NWDAF 562 via the NEF 552 for analytics generation purposes. The NEF 552 handles and forwards requests and notifications between the NWDAF 562 and AF(s) 560, as specified in [TS23288].
The NRF 554 supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances. The NRF 554 also maintains NF profiles of available NF instances and their supported services. The NF profile of an NF instance maintained in the NRF 554 includes the following information: NF instance ID; NF type; PLMN ID in the case of PLMN, PLMN ID + NID in the case of SNPN; Network Slice related Identifier(s) (e.g., S-NSSAI, NSI ID); an NF's network address(es) (e.g., FQDN, IP address, and/or the like); NF capacity information; NF priority information (e.g., for AMF selection); NF set ID; NF service set ID of the NF service instance; NF specific service authorization information; names of supported services, if applicable; endpoint address(es) of instance(s) of each supported service; identification of stored data/information (e.g., for UDR profile and/or other NF profiles); other service parameter(s) (e.g., DNN or DNN list, LADN DNN or LADN DNN list, notification endpoint for each type of notification that the NF service is interested in receiving, and/or the like); location information for the NF instance (e.g., geographical location, data center, and/or the like); TAI(s); NF load information; Routing Indicator and Home Network Public Key identifier, for UDM 558 and AUSF 542; for UDM 558, AUSF 542, and NSSAAF in the case of access to an SNPN using credentials owned by a Credentials Holder with AAA Server, identification of the Credentials Holder (e.g., the realm of the Network Specific Identifier based SUPI); for UDM 558 and AUSF 542, if UDM 558/AUSF 542 is used for access to an SNPN using credentials owned by a Credentials Holder, identification of the Credentials Holder (e.g., the realm if network specific identifier based SUPI is used, or the MCC and MNC if IMSI based SUPI is used); for AUSF 542 and NSSAAF in the case of SNPN Onboarding using a DCS with AAA server, identification of the DCS (e.g., the realm of the Network Specific Identifier based SUPI); for UDM 558 and AUSF 542, if UDM 558/AUSF 542 is used as DCS in the case of SNPN Onboarding, identification of the DCS (e.g., the realm if Network Specific Identifier based SUPI is used, or the MCC and MNC if IMSI based SUPI is used); one or more GUAMI(s), in the case of AMF 544; for the UPF 548, see clause 5.2.7.2.2 of [TS23502]; UDM Group ID, range(s) of SUPIs, range(s) of GPSIs, range(s) of internal group identifiers, range(s) of external group identifiers for UDM 558; UDR Group ID, range(s) of SUPIs, range(s) of GPSIs, range(s) of external group identifiers for UDR; AUSF Group ID, range(s) of SUPIs for AUSF 542; PCF Group ID, range(s) of SUPIs for PCF 556; HSS Group ID, set(s) of IMPIs, set(s) of IMPUs, set(s) of IMSIs, set(s) of PSIs, set(s) of MSISDNs for HSS; event ID(s) supported by AFs 560, in the case of NEF 552; event exposure service supported event ID(s) by UPF 548; application identifier(s) supported by AFs 560, in the case of NEF 552; range(s) of external identifiers, or range(s) of external group identifiers, or the domain names served by the NEF, in the case of NEF 552 (e.g., used when the NEF 552 exposes AF information for analytics purposes as detailed in [TS23288]); additionally, the NRF 554 may store a mapping between UDM Group ID and SUPI(s), UDR Group ID and SUPI(s), AUSF Group ID and SUPI(s), and PCF Group ID and SUPI(s), to enable discovery of UDM 558, UDR, AUSF 542, and PCF 556 using SUPI or SUPI ranges as specified in clause 6.3 of [TS23501], and/or interact with the UDR to resolve the UDM Group ID/UDR Group ID/AUSF Group ID/PCF Group ID based on UE identity (e.g., SUPI); IP domain list as described in clause 6.1.6.2.21 of 3GPP TS 29.510, range(s) of (UE) IPv4 addresses or range(s) of (UE) IPv6 prefixes, range(s) of SUPIs or range(s) of GPSIs or a BSF Group ID, in the case of BSF; SCP Domain the NF belongs to; DCCF Serving Area information, NF types of the data sources, NF Set IDs of the data sources, if available, in the case of DCCF; supported DNAI list, in the case of SMF 546; for SNPN, capability to support SNPN Onboarding in the case of AMF 544 and capability to support User Plane Remote Provisioning in the case of SMF 546; IP address range and DNAI for UPF 548; additional V2X related NF profile parameters are defined in 3GPP TS 23.287; additional ProSe related NF profile parameters are defined in 3GPP TS 23.304; additional MBS related NF profile parameters are defined in 3GPP TS 23.247; additional UAS related NF profile parameters are defined in 3GPP TS 23.256; among many others discussed in [TS23501]. In some examples, service authorization information provided by an OAM system is also included in the NF profile in the case that, for example, an NF instance has exceptional service authorization information.
For the NWDAF 562, the NF profile includes: supported analytics ID(s), possibly per service; NWDAF serving area information (e.g., a list of TAIs for which the NWDAF can provide services and/or data); Supported Analytics Delay per Analytics ID (if available); NF types of the NF data sources; NF Set IDs of the NF data sources, if available; analytics aggregation capability (if available); analytics metadata provisioning capability (if available); ML model filter information parameters S-NSSAI(s) and area(s) of interest for the trained ML model(s) per analytics ID(s) (if available); federated learning (FL) capability type (e.g., FL server or FL client, if available); and time interval supporting FL (if available). The Serving Area information of the NWDAF 562 is common to all its supported analytics IDs. The analytics IDs supported by the NWDAF 562 may be associated with a supported analytics delay; that is, the analytics report can be generated within a time (including data collection delay and inference delay) less than or equal to the supported analytics delay. The determination of the supported analytics delay, and how the NWDAF 562 avoids updating its Supported Analytics Delay in the NRF frequently, may be NWDAF-implementation specific.
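The Supported Analytics Delay constraint above reduces to a simple budget check: an analytics report is feasible when data collection delay plus inference delay does not exceed the registered supported delay. The function below is a sketch of that check; the parameter names are assumptions.

```python
def meets_supported_delay(data_collection_delay_s: float,
                          inference_delay_s: float,
                          supported_analytics_delay_s: float) -> bool:
    """An analytics request is feasible when the total time to produce
    the report (data collection plus inference) does not exceed the
    Supported Analytics Delay registered for the analytics ID."""
    total_delay = data_collection_delay_s + inference_delay_s
    return total_delay <= supported_analytics_delay_s
```

A consumer-side selection function could use such a check to discard NWDAF instances whose registered delay cannot satisfy the consumer's deadline.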
The PCF 556 provides policy rules to control plane functions to enforce them, and may also support a unified policy framework to govern network behavior. The PCF 556 may also implement a front end to access subscription information relevant for policy decisions in a UDR 559 of the UDM 558. In addition to communicating with functions over reference points as shown, the PCF 556 exhibits an Npcf service-based interface.
The UDM 558 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of UE 502. For example, subscription data may be communicated via an N8 reference point between the UDM 558 and the AMF 544. The UDM 558 may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 558 and the PCF 556, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 502) for the NEF 552. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 558, PCF 556, and NEF 552 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM 558 may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 558 may exhibit the Nudm service-based interface.
The Edge Application Server Discovery Function (EASDF) 561 exhibits an Neasdf service-based interface, and is connected to the SMF 546 via an N88 interface. One or multiple EASDF instances may be deployed within a PLMN, and interactions between 5GC NF(s) and the EASDF 561 take place within a PLMN. The EASDF 561 includes one or more of the following functionalities: registering to the NRF 554 for EASDF 561 discovery and selection; handling DNS messages according to the instruction from the SMF 546; and/or terminating DNS security, if used. Handling DNS messages according to the instruction from the SMF 546 includes one or more of the following functionalities: receiving DNS message handling rules and/or Baseline DNS Pattern from the SMF 546; exchanging DNS messages from/with the UE 502; forwarding DNS messages to C-DNS or L-DNS for DNS query; adding an EDNS client subnet (ECS) option into a DNS query for an FQDN; reporting to the SMF 546 the information related to the received DNS messages; and/or buffering/discarding DNS messages from the UE 502 or DNS server. The EASDF has direct user plane connectivity (e.g., without any NAT) with the PSA UPF over N6 for the transmission of DNS signaling exchanged with the UE. The deployment of a NAT between the EASDF 561 and the PSA UPF 548 may or may not be supported. Additional aspects of the EASDF 561 are discussed in [TS23548].
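The SMF-instructed DNS message handling above can be sketched as first-match rule processing over incoming queries. The rule fields, action names, and default behavior below are illustrative assumptions and do not reproduce the actual rule structure of [TS23548].

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DnsHandlingRule:
    """Hypothetical shape of an SMF-provided DNS message handling rule."""
    fqdn_suffix: str                  # match criterion on the queried FQDN
    action: str                       # e.g., "forward_local", "forward_central", "discard"
    ecs_subnet: Optional[str] = None  # ECS option to add to the query, if any
    report_to_smf: bool = False       # whether to report this query to the SMF

def handle_dns_query(fqdn: str, rules) -> dict:
    """Apply the first matching rule to a DNS query; by default, forward
    unmatched queries to the central DNS without an ECS option."""
    for rule in rules:
        if fqdn.endswith(rule.fqdn_suffix):
            return {"action": rule.action,
                    "ecs": rule.ecs_subnet,
                    "report": rule.report_to_smf}
    return {"action": "forward_central", "ecs": None, "report": False}
```

A rule targeting an edge application domain can thus steer matching queries to a local DNS resolver, attach an ECS option, and trigger reporting to the SMF, while all other traffic follows the default path.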
The AF 560 provides application influence on traffic routing, provides access to the NEF 552, and interacts with the policy framework for policy control. The AF 560 may influence UPF 548 (re)selection and traffic routing. Based on operator deployment, when the AF 560 is considered to be a trusted entity, the network operator may permit the AF 560 to interact directly with relevant NFs. In some implementations, the AF 560 is used for edge computing implementations.
An NF that needs to collect data from an AF 560 may subscribe/unsubscribe to notifications regarding data collected from an AF 560, either directly from the AF 560 or via the NEF 552. The data collected from an AF 560 is used as input for analytics by the NWDAF 562. The details for the data collected from an AF 560, as well as interactions between the NEF 552, AF 560, and NWDAF 562, are described in [TS23288].
The 5GC 540 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 502 is attached to the network. This may reduce latency and load on the network. In edge computing implementations, the 5GC 540 may select a UPF 548 close to the UE 502 and execute traffic steering from the UPF 548 to DN 536 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 560, which allows the AF 560 to influence UPF (re)selection and traffic routing.
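The UPF selection for edge computing described above (choosing a UPF geographically close to the UE's point of attachment to reduce latency) can be sketched as nearest-candidate selection. The planar coordinates and candidate structure below are illustrative assumptions, not a real 5GC location format.

```python
import math

def select_closest_upf(ue_location, upf_candidates):
    """Pick the candidate UPF with the smallest Euclidean distance to the
    UE's point of attachment. Coordinates are illustrative planar (x, y)
    positions; a real selection would also weigh subscription data, UE
    location reports, and AF-provided influence."""
    def distance(upf):
        (ux, uy), (px, py) = ue_location, upf["location"]
        return math.hypot(ux - px, uy - py)
    return min(upf_candidates, key=distance)
```

After selecting the nearby UPF, traffic would be steered from that UPF to the DN via the N6 interface, as described above.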
The data network (DN) 536 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server 538. The DN 536 may be an operator external public or private PDN, or an intra-operator packet data network, for example, for provision of IMS services. In this example, the app server 538 can be coupled to an IMS via an S-CSCF or the I-CSCF. In some implementations, the DN 536 may represent one or more local area DNs (LADNs), which are DNs 536 (or DN names (DNNs)) that is/are accessible by a UE 502 in one or more specific areas. Outside of these specific areas, the UE 502 is not able to access the LADN/DN 536.
Additionally or alternatively, the DN 536 may be an edge DN 536, which is a (local) DN that supports the architecture for enabling edge applications. In these examples, the app server 538 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s). In some examples, the app/content server 538 provides an edge hosting environment that provides the support required for an Edge Application Server's execution.
In some examples, the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic. In these examples, the edge compute nodes may be included in, or co-located with, one or more RANs 504 or RAN nodes 514. For example, the edge compute nodes can provide a connection between the RAN 504 and UPF 548 in the 5GC 540. The edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 504 and UPF 548.
In some implementations, the edge compute nodes provide a distributed computing environment for application and service hosting, and also provide storage and processing resources so that data and/or content can be processed in close proximity to subscribers (e.g., users of UEs 502) for faster response times. The edge compute nodes also support multitenancy runtime and hosting environment(s) for applications, including virtual appliance applications that may be delivered as packaged virtual machine (VM) images, middleware application and infrastructure services, content delivery services including content caching, mobile big data analytics, and computational offloading, among others. Computational offloading involves offloading computational tasks, workloads, applications, and/or services to the edge compute nodes from the UEs 502, CN 540, DN 536, and/or server(s) 538, or vice versa. For example, a device application or client application operating in a UE 502 may offload application tasks or workloads to one or more edge compute nodes. In another example, an edge compute node may offload application tasks or workloads to a set of UEs 502 (e.g., for distributed machine learning computation and/or the like).
The edge compute nodes may include or be part of an edge system that employs one or more edge computing technologies (ECTs) (also referred to as an “edge computing framework” or the like). The edge compute nodes may also be referred to as “edge hosts” or “edge servers.” The edge system includes a collection of edge servers and edge management systems (not shown) necessary to run edge computing applications within an operator network or a subset of an operator network. The edge servers are physical computer systems that may include an edge platform and/or virtualization infrastructure, and provide compute, storage, and network resources to edge computing applications. Each of the edge servers is disposed at an edge of a corresponding access network, and is arranged to provide computing resources and/or various services (e.g., computational task and/or workload offloading, cloud-computing capabilities, IT services, and other like resources and/or services as discussed herein) in relatively close proximity to UEs 502. The VI of the edge compute nodes provides virtualized environments and virtualized resources for the edge hosts, and the edge computing applications may run as VMs and/or application containers on top of the VI.
In one example implementation, the ECT is and/or operates according to the MEC framework, as discussed in ETSI GR MEC 001, ETSI GS MEC 003, ETSI GS MEC 009, ETSI GS MEC 010-1, ETSI GS MEC 010-2, ETSI GS MEC 011, ETSI GS MEC 012, ETSI GS MEC 013, ETSI GS MEC 014, ETSI GS MEC 015, ETSI GS MEC 016, ETSI GS MEC 021, ETSI GR MEC 024, ETSI GS MEC 028, ETSI GS MEC 029, ETSI GS MEC 030, and ETSI GR MEC 031 (collectively referred to herein as “[MEC]”). This example implementation (and/or any other example implementation discussed herein) may also include NFV and/or other like virtualization technologies such as those discussed in ETSI GR NFV 001, ETSI GS NFV 002, ETSI GR NFV 003, ETSI GS NFV 006, ETSI GS NFV-INF 001, ETSI GS NFV-INF 003, ETSI GS NFV-INF 004, ETSI GS NFV-MAN 001, and/or Israel et al., OSM Release FIVE Technical Overview, ETSI OPEN SOURCE MANO, OSM White Paper, 1st ed. (Jan. 2019). Other virtualization technologies and/or service orchestration and automation platforms may be used such as, for example, those discussed in E2E Network Slicing Architecture, GSMA, Official Doc. NG.127, v1.0 (03 Jun. 2021); Open Network Automation Platform (ONAP) documentation, Release Istanbul, v9.0.1 (17 Feb. 2022); and/or the 3GPP Service Based Management Architecture (SBMA) as discussed in [TS28533].
In another example implementation, the ECT is and/or operates according to the O-RAN framework, as described in O-RAN Working Group 1 (Use Cases and Overall Architecture): O-RAN Architecture Description, O-RAN ALLIANCE WG1, O-RAN Architecture Description v09.00, Release R003 (Jun. 2023); O-RAN Working Group 2 (Non-RT RIC and A1 interface WG) A1 interface: Application Protocol, v04.00, Release R003 (Mar. 2023); O-RAN Working Group 2 (Non-RT RIC and A1 interface WG) A1 interface: General Aspects and Principles, v03.01, Release R003 (Mar. 2023); O-RAN Working Group 2 AI/ML workflow description and requirements, v01.03, O-RAN ALLIANCE WG2 (Oct. 2021); O-RAN Working Group 2 (Non-RT RIC and A1 interface WG): R1 interface: General Aspects and Principles 5.0, v05.00, Release R003 (Jun. 2023); O-RAN Working Group 2 (Non-RT RIC and A1 interface WG) Non-RT RIC Architecture, v03.00, Release R003 (Jun. 2023); O-RAN Working Group 3 Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles, v03.01, Release R003 (Jun. 2023); O-RAN Working Group 3, Near-Real-time Intelligent Controller, E2 Application Protocol (E2AP), v03.01, Release R003 (Jun. 2023); O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM), v03.01, Release R003 (Jun. 2023); O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM) KPM, v03.00, Release R003 (Mar. 2023); O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM), Cell Configuration and Control, v01.01, Release R003 (Mar. 2023); O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM) RAN Function Network Interface (NI), v01.00 (Feb. 2020); O-RAN Working Group 3 Near-Real-time Intelligent Controller E2 Service Model (E2SM) RAN Control, v03.00, Release R003 (Jun. 2023); O-RAN Working Group 3 (Near-Real-time RAN Intelligent Controller and E2 Interface Working Group): Near-RT RIC Architecture, v04.00, Release R003 (Mar. 2023) (collectively referred to as “[O-RAN]”).
In another example implementation, the ECT is and/or operates according to the 3rd Generation Partnership Project (3GPP) System Aspects Working Group 6 (SA6) Architecture for enabling Edge Applications (referred to as “3GPP edge computing”) as discussed in 3GPP TS 23.222, 3GPP TS 23.401, 3GPP TS 23.434, 3GPP TS 23.501 (“[TS23501]”), 3GPP TS 23.502 (“[TS23502]”), 3GPP TS 23.548 (“[TS23548]”), 3GPP TS 23.558 (“[TS23558]”), 3GPP TS 23.682, 3GPP TR 23.700-98, 3GPP TS 28.104 (“[TS28104]”), 3GPP TS 28.105 (“[TS28105]”), 3GPP TS 28.312, 3GPP TS 28.532 (“[TS28532]”), 3GPP TS 28.533 (“[TS28533]”), 3GPP TS 28.535, 3GPP TS 28.536, 3GPP TS 28.538, 3GPP TS 28.541 (“[TS28541]”), 3GPP TS 28.545 (“[TS28545]”), 3GPP TS 28.550 (“[TS28550]”), 3GPP TS 28.554 (“[TS28554]”), 3GPP TS 28.622 (“[TS28622]”), 3GPP TS 29.122, 3GPP TS 29.222, 3GPP TS 29.522, 3GPP TR 28.908, 3GPP TS 33.122 (collectively referred to as “[5GEdge]”).
In another example implementation, the ECT is and/or operates according to the Intel® Smart Edge Open framework (formerly known as OpenNESS) as discussed in Intel® Smart Edge Open Developer Guide, version 21.09 (30 Sep. 2021), available at: https://smart-edge-open.github.io/ (“[ISEO]”).
In another example implementation, the ECT operates according to the Multi-Access Management Services (MAMS) framework as discussed in Kanugovi et al., Multi-Access Management Services (MAMS), INTERNET ENGINEERING TASK FORCE (IETF), Request for Comments (RFC) 8743 (Mar. 2020); Ford et al., TCP Extensions for Multipath Operation with Multiple Addresses, IETF RFC 8684 (Mar. 2020); De Coninck et al., Multipath Extensions for QUIC (MP-QUIC), IETF DRAFT-DECONINCK-QUIC-MULTIPATH-07, IETF, QUIC Working Group (03-May-2021); Zhu et al., User-Plane Protocols for Multiple Access Management Service, IETF DRAFT-ZHU-INTAREA-MAMS-USER-PROTOCOL-09, IETF, INTAREA (04-Mar-2020); and Zhu et al., Generic Multi-Access (GMA) Convergence Encapsulation Protocols, IETF RFC 9188 (Feb. 2022) (collectively referred to as “[MAMS]”).
It should be understood that the aforementioned edge computing frameworks/ECTs and services deployment examples are only illustrative examples of ECTs, and that the present disclosure may be applicable to many other or additional edge computing/networking technologies in various combinations and layouts of devices located at the edge of a network, including the various edge computing networks/systems described herein. Examples of such edge computing/networking technologies include [MEC]; [O-RAN]; [ISEO]; [5GEdge]; Content Delivery Networks (CDNs) (also referred to as “Content Distribution Networks” or the like); Mobility Service Provider (MSP) edge computing and/or Mobility as a Service (MaaS) provider systems (e.g., used in AECC architectures); Nebula edge-cloud systems; Fog computing systems; Cloudlet edge-cloud systems; Mobile Cloud Computing (MCC) systems; Central Office Re-architected as a Datacenter (CORD), mobile CORD (M-CORD), and/or Converged Multi-Access and Core (COMAC) systems; and/or the like. Further, the techniques disclosed herein may relate to other IoT edge network systems and configurations, and other intermediate processing entities and architectures may also be used for purposes of the present disclosure.
The interfaces of the 5GC 540 include reference points and service-based interfaces. A reference point, at least in some examples, is a point at the conjunction of two non-overlapping functional groups, elements, or entities. The reference points include: N1 (between the UE 502 and the AMF 544), N2 (between RAN 514 and AMF 544), N3 (between RAN 514 and UPF 548), N4 (between the SMF 546 and UPF 548), N5 (between PCF 556 and AF 560), N6 (between UPF 548 and DN 536), N7 (between SMF 546 and PCF 556), N8 (between UDM 558 and AMF 544), N9 (between two UPFs 548), N10 (between the UDM 558 and the SMF 546), N11 (between the AMF 544 and the SMF 546), N12 (between AUSF 542 and AMF 544), N13 (between AUSF 542 and UDM 558), N14 (between two AMFs 544; not shown), N15 (between PCF 556 and AMF 544 in case of a non-roaming scenario, or between the PCF 556 in a visited network and AMF 544 in case of a roaming scenario), N16 (between two SMFs 546; not shown), and N22 (between AMF 544 and NSSF 550). Other reference point representations not shown in Figure 5 can also be used, such as any of those discussed in [TS23501].
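The reference-point topology described above can be captured as a simple lookup table. A minimal sketch (endpoint names are transcribed from the description above, with the reference numerals dropped; the helper function is illustrative):

```python
# Mapping of the 5GC reference points listed above to the pair of
# endpoints they connect (non-roaming case). N9, N14, and N16 connect
# two instances of the same NF type.
REFERENCE_POINTS = {
    "N1": ("UE", "AMF"),   "N2": ("RAN", "AMF"),  "N3": ("RAN", "UPF"),
    "N4": ("SMF", "UPF"),  "N5": ("PCF", "AF"),   "N6": ("UPF", "DN"),
    "N7": ("SMF", "PCF"),  "N8": ("UDM", "AMF"),  "N9": ("UPF", "UPF"),
    "N10": ("UDM", "SMF"), "N11": ("AMF", "SMF"), "N12": ("AUSF", "AMF"),
    "N13": ("AUSF", "UDM"), "N14": ("AMF", "AMF"), "N15": ("PCF", "AMF"),
    "N16": ("SMF", "SMF"), "N22": ("AMF", "NSSF"),
}

def interfaces_of(nf: str) -> list:
    """List the reference points that terminate at a given NF type."""
    return sorted(rp for rp, ends in REFERENCE_POINTS.items() if nf in ends)
```

For example, `interfaces_of("UPF")` yields N3, N4, N6, and N9, matching the enumeration above.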
The service-based representation of Figure 5 represents NFs within the control plane that enable other authorized NFs to access their services. A service-based interface (SBI), at least in some examples, is an interface over which an NF can access the services of one or more other NFs. In some implementations, the service-based interfaces are API-based interfaces (e.g., northbound APIs, southbound APIs, HTTP/2, RESTful, SOAP, A1AP, E2AP, and/or any other API, web service, application layer and/or other communication protocol, such as any of those discussed herein) that can be used by an NF to call or invoke a particular service or service operation. The SBIs include: Namf (SBI exhibited by AMF 544), Nsmf (SBI exhibited by SMF 546), Nnef (SBI exhibited by NEF 552), Npcf (SBI exhibited by PCF 556), Nudm (SBI exhibited by the UDM 558), Naf (SBI exhibited by AF 560), Nnrf (SBI exhibited by NRF 554), Nnssf (SBI exhibited by NSSF 550), Nausf (SBI exhibited by AUSF 542). Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) not shown in Figure 5 can also be used, such as any of those discussed in [TS23501].
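Because the SBIs are API-based, a service operation is typically invoked by constructing an HTTP request path of the form {service}/{version}/{resource}. The sketch below builds such a path; the discovery query parameters shown are an assumption for illustration, not a normative definition:

```python
from urllib.parse import urlencode

def sbi_request_uri(nf_service, api_version, resource, query=None):
    """Build an HTTP request path for a service-based interface call
    following the generic {service}/{version}/{resource}?{params}
    pattern; query parameter names here are illustrative."""
    path = "/{}/{}/{}".format(nf_service, api_version, resource)
    return path + ("?" + urlencode(query) if query else "")

# Example: a hypothetical NF discovery call on the Nnrf SBI, where an
# AMF asks the NRF for SMF instances.
uri = sbi_request_uri("nnrf-disc", "v1", "nf-instances",
                      {"target-nf-type": "SMF", "requester-nf-type": "AMF"})
```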
Although not shown by Figure 5, the system 500 may also include NFs that are not shown such as, for example, UDR, Unstructured Data Storage Function (UDSF), Network Slice Admission Control Function (NSACF), Network Slice-specific and Stand-alone Non-Public Network (SNPN) Authentication and Authorization Function (NSSAAF), UE radio Capability Management Function (UCMF), 5G-Equipment Identity Register (5G-EIR), CHarging Function (CHF), Time Sensitive Networking (TSN) AF 560, Time Sensitive Communication and Time Synchronization Function (TSCTSF), DCCF, Analytics Data Repository Function (ADRF), MFAF, Non-Seamless WLAN Offload Function (NSWOF), Service Communication Proxy (SCP), Security Edge Protection Proxy (SEPP), Non-3GPP InterWorking Function (N3IWF), Trusted Non-3GPP Gateway Function (TNGF), Wireline Access Gateway Function (W-AGF), and/or Trusted WLAN Interworking Function (TWIF) as discussed in [TS23501].
Figure 6 schematically illustrates a wireless network 600. The wireless network 600 includes a UE 602 in wireless communication with a NAN 604. The UE 602 may be the same or similar to, and substantially interchangeable with, any of the UEs discussed herein such as, for example, UE 502, hardware resources 700, and/or any other UE discussed herein. The NAN 604 may be the same or similar to, and substantially interchangeable with, any of the NANs discussed herein such as, for example, AP 506, NANs 514, RAN 504, hardware resources 700, and/or any other NAN(s) discussed herein.
The UE 602 may be communicatively coupled with the NAN 604 via connection 606. The connection 606 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
The UE 602 includes a host platform 608 coupled with a modem platform 610. The host platform 608 includes application processing circuitry 612, which may be coupled with protocol processing circuitry 614 of the modem platform 610. The application processing circuitry 612 may run various applications for the UE 602 that source/sink application data. The application processing circuitry 612 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations include transport (e.g., UDP) and Internet (e.g., IP) operations.
The protocol processing circuitry 614 may implement one or more layer operations to facilitate transmission or reception of data over the connection 606. The layer operations implemented by the protocol processing circuitry 614 include, for example, MAC, RLC, PDCP, RRC, and NAS operations.
The modem platform 610 may further include digital baseband circuitry 616 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 614 in a network protocol stack. These operations include, for example, PHY operations including one or more of HARQ-ACK functions; scrambling/descrambling; encoding/decoding; layer mapping/de-mapping; modulation symbol mapping; received symbol/bit metric determination; multi-antenna port precoding/decoding, which includes one or more of space-time, space-frequency, or spatial coding; reference signal generation/detection; preamble sequence generation and/or decoding; synchronization sequence generation/detection; control channel signal blind decoding; and other related functions.
The modem platform 610 may further include transmit circuitry 618, receive circuitry 620, RF circuitry 622, and RF front end (RFFE) 624, which includes or connects to one or more antenna panels 626. Briefly, the transmit circuitry 618 includes a digital-to-analog converter, mixer, intermediate frequency (IF) components, and/or the like; the receive circuitry 620 includes an analog-to-digital converter, mixer, IF components, and/or the like; the RF circuitry 622 includes a low-noise amplifier, a power amplifier, power tracking components, and/or the like; and the RFFE 624 includes filters (e.g., surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (e.g., phase-array antenna components), and/or the like. The selection and arrangement of the components of the transmit circuitry 618, receive circuitry 620, RF circuitry 622, RFFE 624, and antenna panels 626 (referred to generically as “transmit/receive components” or “Tx/Rx components”) may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, and/or the like. In some examples, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, and/or the like.
In some examples, the protocol processing circuitry 614 includes one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
A UE reception may be established by and via the antenna panels 626, RFFE 624, RF circuitry 622, receive circuitry 620, digital baseband circuitry 616, and protocol processing circuitry 614. In some examples, the antenna panels 626 may receive a transmission from the NAN 604 by receive-beamforming of signals received by a set of antennas/antenna elements of the one or more antenna panels 626.
A UE transmission may be established by and via the protocol processing circuitry 614, digital baseband circuitry 616, transmit circuitry 618, RF circuitry 622, RFFE 624, and antenna panels 626. In some examples, the transmit components of the UE 602 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 626.
Similar to the UE 602, the NAN 604 includes a host platform 628 coupled with a modem platform 630. The host platform 628 includes application processing circuitry 632 coupled with protocol processing circuitry 634 of the modem platform 630. The modem platform 630 may further include digital baseband circuitry 636, transmit circuitry 638, receive circuitry 640, RF circuitry 642, RFFE circuitry 644, and antenna panels 646. The components of the NAN 604 may be similar to and substantially interchangeable with like-named components of the UE 602. In addition to performing data transmission/reception as described above, the components of the NAN 604 may perform various logical functions that include, for example, RNC functions such as radio bearer management, UL and DL dynamic radio resource management, and data packet scheduling.
Examples of the antenna elements of the antenna panels 626 and/or the antenna elements of the antenna panels 646 include planar inverted-F antennas (PIFAs), monopole antennas, dipole antennas, loop antennas, patch antennas, Yagi antennas, parabolic dish antennas, omni-directional antennas, and/or the like.
Figure 7 illustrates components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and performing any one or more of the methodologies discussed herein. Specifically, Figure 7 shows a diagrammatic representation of hardware resources 700 including one or more processors (or processor cores) 710, one or more memory/storage devices 720, and one or more communication resources 730, each of which may be communicatively coupled via a bus 740 or other interface circuitry. For examples where node virtualization (e.g., NFV) is utilized, a hypervisor 702 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 700.
The processors 710 may include, for example, a processor 712 and a processor 714. The processors 710 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
The memory/storage devices 720 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 720 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, and/or the like.
The communication resources 730 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 704 or one or more databases 706 or other network elements via a network 708. For example, the communication resources 730 may include wired communication components (e.g., for coupling via USB, Ethernet, and/or the like), cellular communication components, NFC components, Bluetooth® components, WiFi® components, and other communication components.
Instructions 750 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 710 to perform any one or more of the methodologies discussed herein. The instructions 750 may reside, completely or partially, within at least one of the processors 710 (e.g., within the processor’s cache memory), the memory/storage devices 720, or any suitable combination thereof. Furthermore, any portion of the instructions 750 may be transferred to the hardware resources 700 from any combination of the peripheral devices 704 or the databases 706. Accordingly, the memory of processors 710, the memory/storage devices 720, the peripheral devices 704, and the databases 706 are examples of computer-readable and machine-readable media.
In some implementations, the peripheral devices 704 may represent one or more sensors (also referred to as “sensor circuitry”). The sensor circuitry includes devices, modules, or subsystems whose purpose is to detect events or changes in its environment and send the information (sensor data) about the detected events to some other device, module, subsystem, and/or the like. Individual sensors may be exteroceptive sensors (e.g., sensors that capture and/or measure environmental phenomena and/or external states), proprioceptive sensors (e.g., sensors that capture and/or measure internal states of a compute node or platform and/or individual components of a compute node or platform), and/or exproprioceptive sensors (e.g., sensors that capture, measure, or correlate internal states and external states). Examples of such sensors include, inter alia, inertia measurement units (IMU) comprising accelerometers, gyroscopes, and/or magnetometers; microelectromechanical systems (MEMS) or nanoelectromechanical systems (NEMS) comprising 3-axis accelerometers, 3-axis gyroscopes, and/or magnetometers; level sensors; flow sensors; temperature sensors (e.g., thermistors, including sensors for measuring the temperature of internal components and sensors for measuring temperature external to the compute node or platform); pressure sensors; barometric pressure sensors; gravimeters; altimeters; image capture devices (e.g., cameras); light detection and ranging (LiDAR) sensors; proximity sensors (e.g., infrared radiation detectors and the like); depth sensors; ambient light sensors; optical light sensors; ultrasonic transceivers; microphones; and the like.
Additionally or alternatively, the sensor circuitry includes the PEE sensor(s) 120 discussed previously. As examples, the PEE sensors can include energy/power meters (e.g., analog, digital, and/or smart electric meters, wattmeters including current coils and voltage coils, volt-ampere meters, reactive power meters, power quality analyzers, and/or the like), voltage meters (e.g., analog voltmeters, digital voltmeters, moving-coil voltmeters, moving-iron voltmeters, electrostatic voltmeters, vacuum tube voltmeters, digital storage oscilloscopes, high-voltage probes, AC voltage sensors, digital panel meters, and/or the like), alternating current (AC) and/or direct current (DC) meters/sensors (e.g., open-loop and/or closed-loop hall effect sensors, Rogowski coil sensors, current transformers, shunt resistors, resistor-based current sensors, zero-flux current sensors, digital current sensors, fiber optic current sensors, and/or the like), AC frequency measurement sensors (e.g., electromagnetic frequency meters and/or induction-based AC frequency measurement sensors, DSP-based sensors, vibration frequency sensors, piezoelectric sensors, optical frequency sensors, frequency counters, phase-locked loop (PLL) frequency detectors, resonant circuits, microcontroller-based frequency sensors, and/or the like), true power factor measurement devices, thermal environment sensors (e.g., temperature sensors, humidity sensors, and/or the like), and/or any other types of sensor(s) discussed herein. Examples of the temperature sensors include resistance temperature detectors (RTDs), thermocouples, thermistors (e.g., negative temperature coefficient (NTC) and/or positive temperature coefficient (PTC) thermistors), IR sensors, bimetallic temperature sensors, fiber optic temperature sensors, digital temperature sensors (IC sensors), gas thermometers, hygrothermometers, and/or the like.
Examples of humidity sensors include capacitive humidity sensors, resistive humidity sensors, gravimetric hygrometers, dew point sensors, and hygrothermometers. Additionally or alternatively, the PEE sensor(s) 120 can include environmental monitoring sensors, which may include temperature sensors, humidity sensors, pressure sensors, light sensors and/or photodetectors (e.g., photodiodes, phototransistors, photovoltaic cells, photomultiplier tubes, light-dependent resistors, charge-coupled devices (CCDs), active-pixel sensors, avalanche photodiodes, photonic sensors, pyroelectric sensors, radiometers, and/or the like), air quality sensors (e.g., particulate matter (e.g., PM2.5 and PM10) sensors, gas sensors, particle counters, temperature and/or humidity sensors, and/or the like), and/or any other suitable sensor(s). Examples of gas sensors include carbon monoxide sensors, carbon dioxide sensors, ozone sensors, volatile organic compound sensors, nitrogen dioxide sensors, sulfur dioxide sensors, ammonia sensors, and/or the like.
Additionally or alternatively, the peripheral devices 704 may represent one or more actuators, which allow a compute node, platform, machine, device, mechanism, system, or other object to change its state, position, and/or orientation, or move or control a compute node (e.g., node 700), platform, machine, device, mechanism, system, or other object. The actuators comprise electrical and/or mechanical devices for moving or controlling a mechanism or system, and convert energy (e.g., electric current or moving air and/or liquid) into some kind of motion. As examples, the actuators can be or include any number and combination of the following: soft actuators (e.g., actuators that change their shape in response to stimuli such as, for example, mechanical, thermal, magnetic, and/or electrical stimuli), hydraulic actuators, pneumatic actuators, mechanical actuators, electromechanical actuators (EMAs), microelectromechanical actuators, electrohydraulic actuators, linear actuators, linear motors, rotary motors, DC motors, stepper motors, servomechanisms, electromechanical switches, electromechanical relays (EMRs), power switches, valve actuators, piezoelectric actuators and/or biomorphs, thermal biomorphs, solid state actuators, solid state relays (SSRs), shape-memory alloy-based actuators, electroactive polymer-based actuators, relay driver integrated circuits (ICs), solenoids, impactive actuators/mechanisms (e.g., jaws, claws, tweezers, clamps, hooks, mechanical fingers, humaniform dexterous robotic hands, and/or other gripper mechanisms that physically grasp by direct impact upon an object), propulsion actuators/mechanisms, projectile actuators/mechanisms, audible sound generators, visual warning devices, and/or other like electromechanical components.
The compute node 700 may be configured to operate one or more actuators based on one or more captured events, instructions, control signals, and/or configurations received from a service provider, client device, and/or other components of a compute node or platform. Additionally or alternatively, the actuators are used to change the operational state, position, and/or orientation of the sensors.
Figure 8 depicts an example of management services (MnS) deployment 800. The MnS deployment 800 is based on a Service Based Management Architecture (SBMA). An MnS is a set of offered management capabilities (e.g., capabilities for management and orchestration (MANO) of networks and services). The entity producing an MnS is referred to as an MnS producer (MnS-P) and the entity consuming an MnS is referred to as an MnS consumer (MnS-C). An MnS provided by an MnS-P can be consumed by any entity with appropriate authorization and authentication. As shown by Figure 8, the MnS-P offers its services via a standardized service interface composed of individually specified MnS components.
A MnS is specified using different independent components, and a concrete MnS includes at least two of these components. Three different component types are defined: MnS component type A, MnS component type B, and MnS component type C. MnS component type A is a group of management operations and/or notifications that is agnostic with regard to the entities managed. The operations and notifications as such hence do not involve any information related to the managed network. These operations and notifications are called generic or network agnostic. For example, operations for creating, reading, updating, and deleting managed object instances, where the managed object instance to be manipulated is specified only in the signature of the operation, are generic.
MnS component type B refers to management information represented by information models representing the managed entities. A MnS component type B is also called a Network Resource Model (NRM). Examples of MnS component type B include the generic network resource models (see e.g., [TS28622]) and the 5G network resource models (see e.g., [TS28541]). MnS component type C is performance information of the managed entity and fault information of the managed entity. Examples of management service component type C include alarm information (see e.g., [TS28532] and [TS28545]) and performance data (see e.g., [TS28552], [TS28554], and [TS32425]).
An MnS-P is described by a set of metadata called the MnS-P profile. The profile holds information about the supported MnS components and their version numbers, and may also include information about support of optional features. For example, a read operation on a complete subtree of managed object instances may support applying filters on the scoped set of objects as an optional feature. In this case, the MnS-P profile should indicate whether filtering is supported.
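The profile metadata described above can be modeled as a small record of supported components plus optional-feature flags. The following sketch is a hypothetical representation; the component names, version strings, and feature identifiers are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MnSProducerProfile:
    """Hypothetical MnS-P profile: supported MnS components mapped to
    version numbers, plus a set of supported optional features (e.g.,
    filtering on scoped read operations)."""
    components: dict = field(default_factory=dict)
    optional_features: set = field(default_factory=set)

    def supports(self, feature: str) -> bool:
        """True when the producer advertises the optional feature."""
        return feature in self.optional_features

# Illustrative profile: component and feature names are assumptions.
profile = MnSProducerProfile(
    components={"TypeA-Operations": "17.1.0", "TypeB-NRM": "17.2.0"},
    optional_features={"scoped-read-filtering"},
)
```

A consumer could then check `profile.supports("scoped-read-filtering")` before issuing a filtered scoped read.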
Figure 8 also depicts an example management function (MnF) deployment 810. The MnF is a logical entity playing the roles of MnS-C and/or MnS-P. An MnF with the role of management service exposure governance is referred to as an “Exposure governance management function” or “exposure governance MnF”. An MnS produced by an MnF 810 may have multiple consumers. The MnF 810 may consume multiple MnS from one or multiple MnS-Ps. In the MnF deployment 810, the MnF plays both roles (e.g., MnS-P and MnS-C). An MnF can be deployed as a separate entity or embedded in an NF to provide MnS(s). For example, MnF deployment scenario 820 shows an example where the MnF is deployed as a separate entity to provide MnS(s), and MnF deployment scenario 830 shows an example in which an MnF is embedded in an NF to provide MnS(s). In these examples, the MnFs may interact by consuming MnS produced by other MnFs.
Figure 8 also depicts an example MDA service (MDAS or MDA MnS) deployment 850. Management data analytics (MDA), as a key enabler of automation and intelligence, is considered a foundational capability for mobile networks and services management and orchestration. The MDA provides a capability of processing and analysing data related to network and service events and status including, for example, performance measurements, KPIs, trace data, minimization of drive tests (MDT) reports, radio link failure (RLF) reports, RRC connection establishment failure event (RCEF) reports, QoE reports, alarms, configuration data, network analytics data, service experience data from AFs 560, and/or the like, to provide analytics output (e.g., statistics, predictions, inferences, root cause analysis of issues, and/or the like), and may also include recommendations to enable necessary actions for network and service operations. The MDA output is provided by the MDAS-P to the corresponding consumer(s) (e.g., MDAS-C/MDA MnS-C) that requested the analytics. The MDA can identify ongoing issues impacting the performance of the network and services, and help to identify in advance potential issues that may cause failure and/or performance degradation. The MDA can also assist in predicting network and service demand to enable timely resource provisioning and deployments, which would allow fast time-to-market network and service deployments. The MDAS includes the services exposed by the MDA, which can be consumed by various consumers including, for example, MnFs (e.g., MnS-Ps and/or MnS-Cs for network and service management), NFs (e.g., NWDAF 562 and/or any other NFs/NEs discussed herein), SON functions, network and service optimization tools/functions, SLS assurance functions, human operators, AFs 560, and/or the like. For purposes of the present disclosure, the terms MDAS and MDA MnS may be used interchangeably.
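One of the analytics outputs named above is a prediction over collected performance measurements. As a minimal illustration only, the sketch below forecasts the next sample of a KPI time series with a moving average; real MDA capabilities may use far richer models, and the window size and data here are assumptions:

```python
# Illustrative MDA-style analytics output: predict the next value of a
# KPI time series (e.g., an hourly utilisation percentage) as the mean
# of the most recent samples. Window size and data are hypothetical.

def moving_average_forecast(series, window=3):
    """Predict the next sample as the mean of the last `window` samples."""
    recent = series[-window:]
    return sum(recent) / len(recent)

kpi = [40.0, 42.0, 41.0, 45.0, 47.0, 46.0]
prediction = moving_average_forecast(kpi)  # mean of the last three samples
```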
The MDAS in the context of SBMA enables any authorized consumer to request and receive analytics. A management data analytics function (MDAF) may play the roles of MDA MnS-P, MDA MnS-C, other MnS-C, NWDAF consumer, and Location Management Function (LMF) service consumer, and may also interact with other non-3GPP management systems.
The internal business logic related to MDA leverages current and historical data related to: performance measurements (PM) as per [TS28552] and Key Performance Indicators (KPIs) as per [TS28554]; trace data, including MDT/RLF/RCEF, as per 3GPP TS 32.422 and TS 32.423; QoE and service experience data as per 3GPP TS 28.405 and 3GPP TS 28.406; analytics data offered by an NWDAF 562 as per [TS23288] including 5GC data and external web/app-based information (e.g., web crawler that provides online news) from an AF 560; alarm information and notifications as per [TS28532]; CM information and notifications; UE location information provided by LMF as per 3GPP TS 23.273; MDA reports from other MDA MnS producers; and management data from non-3GPP systems. Additionally or alternatively, the MDAF and/or the MDA internal business logic includes the MDA capability for the energy saving analysis discussed herein.
Analytics output from the MDA internal business logic is made available by the management functions (MDAFs) playing the role of MDA MnS-Ps to authorized consumers including, but not limited to, other MnFs, NFs/NEs, NWDAF 562, SON functions, optimization tools, and human operators. Historical analytics reports may be saved and retrieved for use at later times by an MDA MnS-C, and historical analytics input (enabling) data (along with current analytics input data) may be used for analytics by an MDA MnS-P. Such historical data usage may be applicable to both or one of the MDA MnS-P and MDA MnS-C side. In some examples, “historical data” refers to (a) historical analytics reports that have been produced in the past, and (b) historical analytics input (enabling) data that had been collected in the past.
In some examples, the MDA process may utilize AI/ML technologies, such as any of those discussed herein. An MDAF may optionally be deployed as one or more AI/ML inference function(s) in which the relevant ML entities are used for inference per the corresponding MDA capability. Specifications for MDA ML entity training to enable ML entity deployments are given in [TS28105].
Figure 9 depicts an example functional framework 900 for RAN and/or NF intelligence. The functional framework 900 includes a data collection function 905, which is a function that provides input data to the model training function (MTF) 910 and model inference function (MIF) 915. AI/ML algorithm-specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) may or may not be carried out in the data collection function 905. Examples of input data may include measurements from UEs 502, RAN nodes 514, and/or additional or alternative network entities; feedback from actor 920; and/or output(s) from AI/ML model(s). The input data fed to the MTF 910 is training data, and the input data fed to the MIF 915 is inference data.
The MTF 910 is a function that performs the AI/ML model training, validation, and testing. The MTF 910 may generate model performance metrics as part of the model testing procedure and/or as part of the model validation procedure. Examples of the model performance metrics are discussed infra. The MTF 910 may also be responsible for data preparation (e.g., data preprocessing and cleaning, formatting, and transformation) based on training data delivered by the data collection function 905, if required.
The MTF 910 performs model deployment/updates, wherein the MTF 910 initially deploys a trained, validated, and tested AI/ML model (e.g., ML models 250 discussed previously) to the MIF 915, and/or delivers updated model(s) to the MIF 915. Examples of the model deployments and updates are discussed herein. In some examples, the MTF 910 corresponds to the MTF 125 of Figure 1 and/or the MLTF 1025 of Figure 10.
The MIF 915 is a function that provides AI/ML model inference output (e.g., statistical inferences, predictions, decisions, probabilities and/or probability distributions, actions, configurations, policies, data analytics, outcomes, optimizations, and/or the like). In some examples, the MIF 915 corresponds to the inference function 245 of Figure 2 and/or the inference engine 1045 of Figure 10. The MIF 915 produces an inference output, which is the inferences generated or otherwise produced when the MIF 915 operates the AI/ML model using the inference data. The MIF 915 provides the inference output to the actor 920. Details of inference output are use case specific and may be based on the specific type of AI/ML model being used. The MIF 915 may also be responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by the data collection function 905, if required. The MIF 915 may provide model performance feedback to the MTF 910 when applicable. The model performance feedback may include various performance metrics (e.g., any of those discussed herein) related to producing inferences. The model performance feedback may be used for monitoring the performance of the AI/ML model, when available.
The actor 920 is a function that receives the inference output from the MIF 915, and triggers or otherwise performs corresponding actions based on the inference output. The actor 920 may trigger actions directed to other entities and/or to itself. In some examples, the actor 920 is a network energy saving (NES) function, a mobility robustness optimization (MRO) function, a load balancing optimization (LBO) function, handover (HO) optimization and/or conditional HO (CHO) optimization, physical cell identifier (PCI) configuration, automatic neighbor relation (ANR) management, random access channel (RACH) optimization, radio resource management (RRM) optimization, and/or some other SON function. In these examples, the inference output is related to NES, MRO, LBO, HO/CHO optimization, PCI configuration, ANR management, RACH optimization, RRM optimization, and/or related to some other SON function, and the actor 920 is one or more RAN nodes 514, UEs 502, VNE 101, PEE sensors 120, one or more NFs, and/or some other entities/elements discussed herein that perform various operations based on the output inferences. Additionally or alternatively, the output inferences may be the VNF ECD 212 and the actor may be the VNE 101, PEE sensors 120, one or more NFs, and/or some other entities/elements discussed herein.
The actor 920 may also provide feedback to the data collection function 905 for storage. The feedback includes information related to the actions performed by the actor 920. The feedback may include any information that may be needed to derive training data (and/or testing data and/or validation data), inference data, and/or data to monitor the performance of the AI/ML model and its impact to the network through updating of KPIs, performance counters, and the like.
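The closed loop described above (data collection feeding training and inference, with the actor returning feedback) can be sketched in highly simplified form. All function names and the trivial mean-value "model" below are illustrative stand-ins, not elements of the disclosed framework:

```python
# Illustrative sketch (hypothetical names) of the framework-900 data flow:
# data collection (905) feeds training (910) and inference (915); the actor
# (920) acts on inference output and returns feedback to data collection.

def data_collection(measurements, feedback):
    """905: merge measurements and actor feedback into input data."""
    return list(measurements) + list(feedback)

def model_training(training_data):
    """910: 'train' a trivial stand-in model (the mean of observed values)."""
    values = [d["value"] for d in training_data]
    mean = sum(values) / len(values)
    return {"predict": lambda x: mean}          # deployed "model"

def model_inference(model, inference_data):
    """915: run the deployed model on inference data."""
    return [model["predict"](d["value"]) for d in inference_data]

def actor(inference_output, threshold=5.0):
    """920: trigger an action (e.g., an energy-saving step) per inference."""
    actions = ["save_energy" if y > threshold else "no_op"
               for y in inference_output]
    feedback = [{"value": y, "kind": "feedback"} for y in inference_output]
    return actions, feedback

measurements = [{"value": v, "kind": "measurement"} for v in (4.0, 6.0, 8.0)]
model = model_training(data_collection(measurements, []))
out = model_inference(model, measurements)
actions, fb = actor(out)
```

The feedback produced by the actor would, in the described framework, flow back into data collection 905 for later training or performance monitoring.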
Figure 10 depicts an example AI/ML-assisted communication network, which includes communication between an ML function (MLF) 1002 and an MLF 1004. More specifically, as described in further detail below, AI/ML models may be used or leveraged to facilitate wired and/or over-the-air communication between the MLF 1002 and the MLF 1004. In this example, the MLF 1002 and the MLF 1004 operate in a manner consistent with 3GPP technical specifications and/or technical reports for 5G and/or 6G systems. In some examples, the communication mechanisms between the MLF 1002 and the MLF 1004 include any suitable access technologies and/or RATs, such as any of those discussed herein. Additionally, the communication mechanisms in Figure 10 may be part of, or operate concurrently with networks 500, 600, 708, and/or some other network described herein, and/or concurrently with deployments 100, 200, and/or some other deployment described herein.
The MLFs 1002, 1004 may correspond to any of the entities/elements discussed herein. In one example, the MLF 1002 corresponds to an MnF and/or MnS-P and the MLF 1004 corresponds to another MnF and/or an MnS-C, or vice versa. Additionally or alternatively, the MLF 1002 corresponds to a set of the MLFs of Figures 1-4, 8, and/or 9 and the MLF 1004 corresponds to a different set of the MLFs of Figures 1-4, 8, and/or 9. In this example, the sets of MLFs may be mutually exclusive, or some or all of the MLFs in each of the sets of MLFs may overlap or be shared. In another example, the MLF 1002 and/or the MLF 1004 is/are implemented by respective UEs (e.g., UE 502, UE 602). Additionally or alternatively, the MLF 1002 and/or the MLF 1004 is/are implemented by a same UE or different UEs. In another example, the MLF 1002 and/or the MLF 1004 is/are implemented by respective RANs (e.g., RAN 504) or respective NANs (e.g., AP 506, NAN 514, NAN 604). Additionally or alternatively, the MLF 1002 is implemented as or by a UE and the MLF 1004 is implemented as or by a RAN node, or vice versa.
As shown by Figure 10, the MLF 1002 and the MLF 1004 include various AI/ML-related components, functions, elements, or entities, which may be implemented as hardware, software, firmware, and/or some combination thereof. In some examples, one or more of the AI/ML-related elements are implemented as part of the same hardware (e.g., IC, chip, or multi-processor chip), software (e.g., program, process, engine, and/or the like), or firmware as at least one other component, function, element, or entity. The AI/ML-related elements of MLF 1002 may be the same or similar to the AI/ML-related elements of MLF 1004. For the sake of brevity, description of the various elements is provided from the point of view of the MLF 1002, however it will be understood that such description applies to like named/numbered elements of MLF 1004, unless explicitly stated otherwise.
The data repository 1015 is responsible for data collection and storage. As examples, the data repository 1015 may collect and store RAN configuration parameters, NF configuration parameters, measurement data, RLM data, key performance indicators (KPIs), SLAs, model performance metrics, knowledge base data, ground truth data, ML model parameters, hyperparameters, and/or other data for model training, update, and inference. The collected data is stored into the repository 1015, and the stored data can be discovered and extracted by other elements from the data repository 1015. For example, the inference data selection/filter 1050 may retrieve data from the data repository 1015 and provide that data to the inference engine 1045 for generating/determining inferences. In various examples, the MLF 1002 is configured to discover and request data from the data repository 1015 in the MLF 1004, and/or vice versa. In these examples, the data repository 1015 of the MLF 1002 may be communicatively coupled with the data repository 1015 of the MLF 1004 such that the respective data repositories 1015 may share collected data with one another. Additionally or alternatively, the MLF 1002 and/or MLF 1004 is/are configured to discover and request data from one or more external sources and/or data storage systems/devices.
The training data selection/filter 1020 is configured to generate training, validation, and testing datasets for ML training (MLT) (or ML model training). One or more of these datasets may be extracted or otherwise obtained from the data repository 1015. Data may be selected/filtered based on the specific AI/ML model to be trained. Data may optionally be transformed, augmented, and/or pre-processed (e.g., normalized) before being loaded into datasets. The training data selection/filter 1020 may label data in datasets for supervised learning, or the data may remain unlabeled for unsupervised learning. The produced datasets may then be fed into the MLT function (MLTF) 1025.
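The selection, normalization, and splitting performed by the training data selection/filter 1020 might be sketched as follows. This is a minimal illustration with hypothetical names; real pipelines would filter per-model features rather than bare numbers, and the split fractions are assumptions:

```python
# Hypothetical sketch of the training data selection/filter (1020): select
# usable records, normalize them, and split into training, validation, and
# testing datasets before feeding them to the MLTF (1025).

import random

def make_datasets(records, val_frac=0.2, test_frac=0.2, seed=0):
    data = [r for r in records if r is not None]       # selection/filtering
    lo, hi = min(data), max(data)
    data = [(x - lo) / (hi - lo) for x in data]        # min-max normalization
    random.Random(seed).shuffle(data)
    n_test = int(len(data) * test_frac)
    n_val = int(len(data) * val_frac)
    test = data[:n_test]
    val = data[n_test:n_test + n_val]
    train = data[n_test + n_val:]
    return train, val, test

train, val, test = make_datasets(list(range(10)))
```

For supervised learning, labels would be carried alongside each record through the same selection and split; for unsupervised learning the records remain unlabeled, as the text notes.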
The MLTF 1025 is responsible for training and updating (e.g., tuning and/or re-training) AI/ML models. A selected model (or set of models) may be trained using the fed-in datasets (including training, validation, testing) from the training data selection/filtering 1020. The MLTF 1025 produces trained and tested AI/ML models that are ready for deployment. The produced trained and tested models can be stored in a model repository 1035. In some examples, the MLTF 1025 corresponds to the MTF 125 and/or model training function 910.
The model repository 1035 is responsible for AI/ML models’ (both trained and un-trained) storage and exposure. Various model data can be stored in the model repository 1035. The model data can include, for example, trained/updated model(s), model parameters, hyperparameters, and/or model metadata, such as model performance metrics, hardware platform/configuration data, model execution parameters/conditions, and/or the like. In some examples, the model data can also include inferences made when operating the ML model. Examples of AI/ML models and other ML model aspects are discussed herein. The model data may be discovered and requested by other MLF components (e.g., the training data selection/filter 1020 and/or the MLTF 1025). In some examples, the MLF 1002 can discover and request model data from the model repository 1035 of the MLF 1004. Additionally or alternatively, the MLF 1004 can discover and/or request model data from the model repository 1035 of the MLF 1002. In some examples, the MLF 1004 may configure models, model parameters, hyperparameters, model execution parameters/conditions, and/or other ML model aspects in the model repository 1035 of the MLF 1002.
The model management function 1040 is responsible for management of the AI/ML model produced by the MLTF 1025. Such management functions may include deployment of a trained model, monitoring ML entity performance, reporting ML entity validation and/or performance data, and/or the like. In model deployment, the model management 1040 may allocate and schedule hardware and/or software resources for inference, based on received trained and tested models. For purposes of the present disclosure, the term “inference” refers to the process of using trained AI/ML model(s) to generate statistical inferences, predictions, decisions, probabilities and/or probability distributions, actions, configurations, policies, data analytics, outcomes, optimizations, and/or the like based on new, unseen data (e.g., “input inference data”). In some examples, the inference process can include feeding input inference data into the ML model (e.g., inference engine 1045), forward passing the input inference data through the ML model’s architecture/topology wherein the ML model performs computations on the data using its learned parameters (e.g., weights and biases), and outputting predictions. In some examples, the inference process can include data transformation before the forward pass, wherein the input inference data is pre-processed or transformed to match the format required by the ML model. In performance monitoring, based on model performance KPIs and/or metrics, the model management 1040 may decide to terminate the running model, start model re-training and/or tuning, select another model, and/or the like. In some examples, the model management 1040 of the MLF 1004 may be able to configure model management policies in the MLF 1002, and vice versa.
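The pre-process/forward-pass/predict sequence described above can be illustrated with a toy linear model. The statistics, weights, and bias below are made-up values for illustration only, not learned parameters from any model in this disclosure:

```python
# Toy illustration of the inference process: pre-process the input inference
# data into the model's expected format, forward-pass it through the model's
# learned parameters (weights and bias), and output a prediction.

def preprocess(raw, mean, std):
    """Data transformation before the forward pass (standardization)."""
    return [(x - mean) / std for x in raw]

def forward(features, weights, bias):
    """Forward pass of a minimal linear model using its parameters."""
    return sum(w * x for w, x in zip(weights, features)) + bias

x = preprocess([10.0, 20.0], mean=15.0, std=5.0)   # -> [-1.0, 1.0]
y = forward(x, weights=[0.5, 0.25], bias=1.0)      # prediction output
```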
The inference data selection/filter 1050 is responsible for generating datasets for model inference at the inference engine 1045, as described infra. For example, inference data may be extracted from the data repository 1015. The inference data selection/filter 1050 may select and/or filter the data based on the deployed AI/ML model. Data may be transformed, augmented, and/or pre-processed in a same or similar manner as the transformation, augmentation, and/or pre-processing described w.r.t. the training data selection/filter 1020. The produced inference dataset may be fed into the inference engine 1045.
The inference engine 1045 is responsible for executing inference as described herein. The inference engine 1045 may consume the inference dataset provided by the inference data selection/filter 1050, and generate one or more inferences. The inferences may be or include, for example, statistical inferences, predictions, decisions, probabilities and/or probability distributions, actions, configurations, policies, data analytics, outcomes, optimizations, and/or the like. The inference(s)/outcome(s) may be provided to the performance measurement function 1030.
The performance measurement function 1030 is configured to measure model performance metrics (e.g., accuracy, momentum, precision, quantile, recall/sensitivity, model bias, run-time latency, resource consumption, and/or other suitable metrics/measures, such as any of those discussed herein) of deployed and executing models based on the inference(s) for monitoring purposes. Model performance data may be stored in the data repository 1015 and/or reported according to the validation reporting mechanisms discussed herein.
The performance metrics that may be measured and/or predicted by the performance measurement function 1030 may be based on the particular AI/ML task and the other inputs/parameters of the ML entity. The performance metrics may include model-based metrics and platform-based metrics. The model-based metrics are metrics related to the performance of the model itself and/or without considering the underlying hardware platform. The platform-based metrics are metrics related to the performance of the underlying hardware platform when operating the ML model.
The model-based metrics may be based on the particular type of AI/ML model and/or the AI/ML domain. For example, regression-related metrics may be predicted for regression-based ML models. Examples of regression-related metrics include error value, mean error, mean absolute error (MAE), mean reciprocal rank (MRR), mean squared error (MSE), root MSE (RMSE), correlation coefficient (R), coefficient of determination (R2), Golbraikh and Tropsha criterion, and/or other like regression-related metrics such as those discussed in Naser et al., Insights into Performance Fitness and Error Metrics for Machine Learning, arXiv:2006.00887v1 (17 May 2020) (“[Naser]”).
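For illustration, a few of the regression-related metrics listed above (MAE, MSE, RMSE, and R2) can be computed directly from their standard definitions; the sample values are made up:

```python
# Standard definitions of a few regression metrics named in the text.

import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(mse(y_true, y_pred))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
```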
In another example, correlation-related metrics may be predicted for classification-based ML models. Examples of correlation-related metrics include accuracy, precision (also referred to as positive predictive value (PPV)), mean average precision (mAP), negative predictive value (NPV), recall (also referred to as true positive rate (TPR) or sensitivity), specificity (also referred to as true negative rate (TNR) or selectivity), false positive rate, false negative rate, F score (e.g., F1 score, F2 score, Fβ score, and/or the like), Matthews Correlation Coefficient (MCC), markedness, receiver operating characteristic (ROC), area under the ROC curve (AUC), distance score, and/or other like correlation-related metrics such as those discussed in [Naser].
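A few of these correlation-related metrics can likewise be computed from true/false positive/negative counts; the confusion counts below are made up for illustration:

```python
# Standard definitions of a few classification metrics named in the text,
# computed from true/false positive/negative counts of a toy classifier.

import math

def precision(tp, fp):
    return tp / (tp + fp)            # positive predictive value (PPV)

def recall(tp, fn):
    return tp / (tp + fn)            # true positive rate (TPR)

def f1(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)       # harmonic mean of precision and recall

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

tp, tn, fp, fn = 8, 5, 2, 1          # made-up confusion counts
p = precision(tp, fp)
r = recall(tp, fn)
f = f1(tp, fp, fn)
m = mcc(tp, tn, fp, fn)
```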
Additional or alternative model-based metrics may also be predicted such as, for example, cumulative gain (CG), discounted CG (DCG), normalized DCG (NDCG), signal-to-noise ratio (SNR), peak SNR (PSNR), structural similarity (SSIM), Intersection over Union (IoU), perplexity, bilingual evaluation understudy (BLEU) score, inception score, Wasserstein metric, Fréchet inception distance (FID), string metric, edit distance, Levenshtein distance, Damerau-Levenshtein distance, number of evaluation instances (e.g., iterations, epochs, or episodes), learning rate (e.g., the speed at which the algorithm reaches (converges to) optimal weights), learning rate decay (or weight decay), number and/or type of computations, number and/or type of multiply and accumulates (MACs), number and/or type of multiply adds (MAdds) operations, and/or other like performance metrics related to the performance of the ML model.
Examples of the platform-based metrics include latency, response time, throughput (e.g., rate of processing work of a processor or platform/system), availability and/or reliability, power consumption (e.g., performance per Watt, and/or the like), transistor count, execution time (e.g., amount of time to obtain an inference, and/or the like), memory footprint, memory utilization, processor utilization, processor time, number of computations, instructions per second (IPS), floating point operations per second (FLOPS), and/or other like performance metrics related to the performance of the ML model and/or the underlying hardware platform to be used to operate the ML model.
Additionally or alternatively, proxy metrics (e.g., a metric or attribute used as a stand-in or substitute for another metric or attribute) can be used for predicting the ML model performance. For any of the aforementioned performance metrics, the total, mean, and/or some other distribution of such metrics may be predicted and/or measured using any suitable data collection and/or measurement mechanism(s).
3. EXAMPLE IMPLEMENTATIONS
Figure 11 shows an example process 1100 to be performed by a producer of performance assurance MnS 150. The process 1100 includes receiving, from an MTF 125, a request to collect data from a VNF 103 within VNE 101 (1101); and sending VNE ECD 122 to the MTF 125 that includes an indication of a measurement associated with the VNF 103, such as VRUD 112 of the VNF 103 (1102).
Figure 12 shows an example process 1200 to be performed by an MTF 125. The process 1200 includes requesting a producer of performance assurance MnS 150 to create a performance measurement (PM) job to collect measurement data from VNF instance(s) 103 and VNE 101 (1201); receiving, from the producer of performance assurance MnS 851, VNF measurement data 112 and VNE measurement data 122 (1202); performing model training based on the VNF measurement data 112 and the VNE measurement data 122 (1203); and deploying the model to MDAF 851 (1204).
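An end-to-end sketch of the four operations of process 1200 follows. All functions, data values, and the least-squares line are hypothetical stand-ins — the disclosure does not specify a model type — used only to make the PM-job/collect/train/deploy sequence concrete:

```python
# Highly simplified, hypothetical sketch of process 1200: request a PM job
# (1201), receive VNF measurement data (VRUD) and VNE measurement data (ECD)
# (1202), train a model (1203), and deploy it (1204).

def create_pm_job(targets):
    """1201: ask the performance assurance MnS producer for a PM job."""
    return {"job_id": 1, "targets": targets}

def collect(job):
    """1202: measurement data returned for the PM job (synthetic values)."""
    vnf_vrud = [(0.2, 0.1), (0.6, 0.5), (0.9, 0.8)]   # (cpu, memory) usage
    vne_ecd = [30.0, 70.0, 105.0]                     # energy per interval
    return vnf_vrud, vne_ecd

def train(vrud, ecd):
    """1203: fit energy ~ a*cpu + b by least squares (toy stand-in model)."""
    xs = [cpu for cpu, _mem in vrud]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ecd) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ecd))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda cpu: a * cpu + b

def deploy(model):
    """1204: hand the trained model to the inference function (e.g., an MDAF)."""
    return model

job = create_pm_job(["vnf-103", "vne-101"])
vrud, ecd = collect(job)
model = deploy(train(vrud, ecd))
```

The deployed model then predicts VNF energy consumption from virtual resource usage, which is the role the MIF/MDAF plays in the surrounding examples.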
The example operations of processes 1100-1200 can be arranged in different orders, one or more of the depicted operations may be combined and/or divided/split into multiple operations, depicted operations may be omitted, and/or additional or alternative operations may be included in any of the depicted processes. Additional examples of the presently described methods, devices, systems, and networks discussed herein include the following, non-limiting example implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
Example 1 includes a method of operating a model training function, the method comprising: receiving, from one or more performance assurance management service producers (MnS-Ps), virtualized network function (VNF) measurement data related to one or more VNF instances and virtualized network entity (VNE) measurement data related to a VNE; training a machine learning (ML) model to predict, based on the VNF measurement data and the VNE measurement data, VNF energy consumption data (ECD) for respective VNF instances of the one or more VNF instances; and deploying the ML model to a model inference function to generate predictions of VNF ECD.
Example 2 includes the method of example 1 and/or some other example(s) herein, wherein the method includes: sending, to respective performance assurance MnS-Ps, a request to create a performance management job to collect the VNF measurement data from the one or more VNF instances and the VNE measurement data from the VNE.
Example 3 includes the method of examples 1-2 and/or some other example(s) herein, wherein the VNF measurement data includes virtual resource usage data (VRUD) for individual VNF instances of the one or more VNF instances.
Example 4 includes the method of example 3 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual compute usage data, virtual memory usage data, and virtual disk usage data.
Example 5 includes the method of example 4 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual network usage data.
Example 6 includes the method of examples 1-5 and/or some other example(s) herein, wherein the VNE measurement data includes VNE ECD collected from one or more power, energy, environmental (PEE) sensors.
Example 7 includes the method of example 6 and/or some other example(s) herein, wherein the VNE ECD is generated by mapping energy consumption metrics received from the one or more PEE sensors to a managed element representing the VNE.
Example 8 includes the method of examples 1-7 and/or some other example(s) herein, wherein the VNF measurement data and VNE measurement data are collected at a same interval.
Example 9 includes the method of example 8 and/or some other example(s) herein, wherein the VNF measurement data and VNE measurement data are time synchronized.
Example 10 includes the method of examples 1-9 and/or some other example(s) herein, wherein the training includes: training the ML model using the VRUD of the respective VNF instances as data features and the VNE ECD as data labels to compute model parameters of the ML model.
Example 11 includes a method of operating a model inference function, the method comprising: receiving, from a machine learning (ML) model training function, an ML model trained to predict virtualized network function (VNF) energy consumption data (ECD); receiving, from one or more performance assurance management service producers (MnS-Ps), VNF measurement data related to one or more VNF instances and measurement data related to the VNF ECD; and generating, using the trained ML model, predicted VNF ECD for respective VNF instances of the one or more VNF instances based on the VNF measurement data.
Example 12 includes the method of example 11 and/or some other example(s) herein, wherein the method includes: sending, to respective performance assurance MnS-Ps of the one or more performance assurance MnS-Ps, a request to create a performance management job to collect the VNF measurement data from the one or more VNF instances.
Example 13 includes the method of examples 11-12 and/or some other example(s) herein, wherein the VNF measurement data includes virtual resource usage data (VRUD) for individual VNF instances of the one or more VNF instances.
Example 14 includes the method of example 13 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual compute usage data, virtual memory usage data, and virtual disk usage data.
Example 15 includes the method of example 14 and/or some other example(s) herein, wherein the VRUD includes, for the respective VNF instances, virtual network usage data.
Example 16a includes the method of examples 11-15 and/or some other example(s) herein, wherein a KPI is generated from mapping to the predicted VNF ECD. Example 16b includes the method of examples 11-15 and/or some other example(s) herein, wherein a KPI is generated for the predicted VNF ECD.
Example 17 includes the method of examples 16a-16b and/or some other example(s) herein, wherein the KPI is a measure of energy consumption of the respective VNF instances.
Example 18 includes the method of examples 1-17 and/or some other example(s) herein, wherein the predicted VNF ECD is expressed in kilowatt-hours.
Example 19 includes the method of examples 1-18 and/or some other example(s) herein, wherein the model inference function is operated by a management data analytic function (MDAF).
Example 20 includes the method of examples 1-19 and/or some other example(s) herein, wherein the model training function is a machine learning training MnS-P and the model inference function is a machine learning training management service consumer (MnS-C).
Example 21 includes the method of examples 1-19 and/or some other example(s) herein, wherein the model training function is a machine learning training function (MLTF) contained by a Network Data Analytics Function (NWDAF).
Example 22 includes an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-21, or any other method or process described herein. Example 23 includes one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-21, or any other method or process described herein. Example 24 includes an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-21, or any other method or process described herein. Example 25 includes a method, technique, or process as described in or related to any of examples 1-21, or portions or parts thereof. Example 26 includes an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-21, or portions thereof. Example 27 includes a signal as described in or related to any of examples 1-21, or portions or parts thereof. Example 28 includes a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-21, or portions or parts thereof, or otherwise described in the present disclosure. Example 29 includes a signal encoded with data as described in or related to any of examples 1-21, or portions or parts thereof, or otherwise described in the present disclosure. Example 30 includes a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-21, or portions or parts thereof, or otherwise described in the present disclosure.
Example 31 includes an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-21, or portions thereof. Example 32 includes a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-21, or portions thereof. Example 33 includes a signal in a wireless network as shown and described herein. Example 34 includes a method of communicating in a wireless network as shown and described herein. Example 35 includes a system for providing wireless communication as shown and described herein. Example 36 includes a device for providing wireless communication as shown and described herein.
Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.
4. TERMINOLOGY
For the purposes of the present document, the following terms and definitions are applicable to the examples and embodiments discussed herein. Additionally, the terminology discussed in ETSI GR NFV 003, ETSI ES 202 336-12 (“[ES202336-12]”), and 3GPP TS 28.500 (“[TS28500]”) may also be applicable to the examples and embodiments discussed herein.
As used herein, the singular forms “a,” “an” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, epochs, iterations, stages, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The phrase “X(s)” means one or more X or a set of X. The description may use the phrases “in an embodiment,” “in some embodiments,” “in one implementation,” “in some implementations,” “in some examples”, and the like, each of which may refer to one or more of the same or different embodiments, implementations, and/or examples. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to the present disclosure, are synonymous.
The terms “master” and “slave” at least in some examples refer to a model of asymmetric communication or control where one device, process, element, or entity (the “master”) controls one or more other devices, processes, elements, or entities (the “slaves”). The terms “master” and “slave” are used in this disclosure only for their technical meaning. The term “master” or “grandmaster” may be substituted with any of the following terms: “main”, “source”, “primary”, “initiator”, “requestor”, “transmitter”, “host”, “maestro”, “controller”, “provider”, “producer”, “client”, “mix”, “parent”, “chief”, “manager”, “reference” (e.g., as in “reference clock” or the like), and/or the like. Additionally, the term “slave” may be substituted with any of the following terms: “receiver”, “secondary”, “subordinate”, “replica”, “target”, “responder”, “device”, “performer”, “agent”, “standby”, “consumer”, “peripheral”, “follower”, “server”, “child”, “helper”, “worker”, “node”, and/or the like.
The terms “coupled” and “communicatively coupled,” along with derivatives thereof, are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
The term “establish” or “establishment” at least in some examples refers to (partial or in full) acts, tasks, operations, and the like, related to bringing something into existence, or readying to bring something into existence, either actively or passively (e.g., exposing a device identity or entity identity). Additionally or alternatively, the term “establish” or “establishment” at least in some examples refers to (partial or in full) acts, tasks, operations, and the like, related to initiating, starting, or warming communication or initiating, starting, or warming a relationship between two entities or elements (e.g., establish a session, and the like). Additionally or alternatively, the term “establish” or “establishment” at least in some examples refers to initiating something to a state of working readiness. The term “established” at least in some examples refers to a state of being operational or ready for use (e.g., full establishment). Furthermore, any definition for the term “establish” or “establishment” defined in any specification or standard can be used for purposes of the present disclosure and such definitions are not disavowed by any of the aforementioned definitions.
The term “obtain” at least in some examples refers to (partial or in full) acts, tasks, operations, and the like, of interception, movement, copying, retrieval, or acquisition (e.g., from a memory, an interface, or a buffer), on the original packet stream or on a copy (e.g., a new instance) of the packet stream. Other aspects of obtaining or receiving may involve instantiating, enabling, or controlling the ability to obtain or receive a stream of packets (or the following parameters and templates or template values).
The term “receipt” at least in some examples refers to any action (or set of actions) involved with receiving or obtaining an object, data, data unit, and the like, and/or the fact of the object, data, data unit, and the like being received. The term “receipt” at least in some examples refers to an object, data, data unit, and the like, being pushed to a device, system, element, and the like (e.g., often referred to as a push model), pulled by a device, system, element, and the like (e.g., often referred to as a pull model), and/or the like.
The term “element” at least in some examples refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, engines, components, and so forth, or combinations thereof. The term “entity” at least in some examples refers to a distinct element of a component, architecture, platform, device, and/or system. Additionally or alternatively, the term “entity” at least in some examples refers to information transferred as a payload.
The term “measurement” at least in some examples refers to the observation and/or quantification of attributes of an object, event, or phenomenon. Additionally or alternatively, the term “measurement” at least in some examples refers to a set of operations having the object of determining a measured value or measurement result, and/or the actual instance or execution of operations leading to a measured value. Additionally or alternatively, the term “measurement” at least in some examples refers to data recorded during testing. The term “metric” at least in some examples refers to a quantity produced in an assessment of a measured value. Additionally or alternatively, the term “metric” at least in some examples refers to data derived from a set of measurements. Additionally or alternatively, the term “metric” at least in some examples refers to a set of events combined or otherwise grouped into one or more values. Additionally or alternatively, the term “metric” at least in some examples refers to a combination of measures or set of collected data points. Additionally or alternatively, the term “metric” at least in some examples refers to a standard definition of a quantity, produced in an assessment of performance and/or reliability of the network, which has an intended utility and is carefully specified to convey the exact meaning of a measured value.
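The distinction between a measurement (a recorded observation) and a metric (a value derived from a set of measurements) can be illustrated by the following sketch. The function name and the latency interpretation are hypothetical illustrations, not part of the present disclosure:

```python
# Hypothetical sketch: a "metric" derived from a set of "measurements".
# Here, individual per-request latency measurements (in ms) are combined
# into one derived metric value (the mean); names are illustrative only.
from statistics import mean

def latency_metric(measurements: list[float]) -> float:
    """Combine a set of collected latency measurements into one metric."""
    return mean(measurements)

samples = [10.0, 12.0, 14.0]      # individual measurements
print(latency_metric(samples))    # 12.0, the metric derived from the set
```

Any aggregation (mean, percentile, count of events grouped into a value) fits the definitions above equally well; the mean is used here only for brevity.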
The term “signal” at least in some examples refers to an observable change in a quality and/or quantity. Additionally or alternatively, the term “signal” at least in some examples refers to a function that conveys information about an object, event, or phenomenon. Additionally or alternatively, the term “signal” at least in some examples refers to any time-varying voltage, current, or electromagnetic wave that may or may not carry information. The term “digital signal” at least in some examples refers to a signal that is constructed from a discrete set of waveforms of a physical quantity so as to represent a sequence of discrete values.
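The construction of a digital signal from a continuous quantity, as defined above, can be sketched as mapping each sample to the nearest value in a discrete set of levels. The function name and level values below are assumptions chosen for illustration:

```python
# Hypothetical sketch: constructing a digital signal by quantizing
# sampled values of a continuous quantity into a discrete set of levels.

def quantize(samples: list[float], levels: list[float]) -> list[float]:
    """Map each sample to the nearest value in the discrete level set."""
    return [min(levels, key=lambda v: abs(v - s)) for s in samples]

analog = [0.1, 0.6, 0.9, 0.4]                     # continuous samples
digital = quantize(analog, levels=[0.0, 0.5, 1.0])
print(digital)  # [0.0, 0.5, 1.0, 0.5] -- a sequence of discrete values
```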
The term “identifier” at least in some examples refers to a value, or a set of values, that uniquely identify an identity in a certain scope. Additionally or alternatively, the term “identifier” at least in some examples refers to a sequence of characters that identifies or otherwise indicates the identity of a unique object, element, or entity, or a unique class of objects, elements, or entities. Additionally or alternatively, the term “identifier” at least in some examples refers to a sequence of characters used to identify or refer to an application, program, session, object, element, entity, variable, set of data, and/or the like. The “sequence of characters” mentioned previously at least in some examples refers to one or more names, labels, words, numbers, letters, symbols, and/or any combination thereof. Additionally or alternatively, the term “identifier” at least in some examples refers to a name, address, label, distinguishing index, and/or attribute. Additionally or alternatively, the term “identifier” at least in some examples refers to an instance of identification. The term “persistent identifier” at least in some examples refers to an identifier that is reused by a device or by another device associated with the same person or group of persons for an indefinite period. The term “identification” at least in some examples refers to a process of recognizing an identity as distinct from other identities in a particular scope or context, which may involve processing identifiers to reference an identity in an identity database. The term “application identifier”, “application ID”, or “app ID” at least in some examples refers to an identifier that can be mapped to a specific application or application instance. In the context of 3GPP 5G/NR, an “application identifier” at least in some examples refers to an identifier that can be mapped to a specific application traffic detection rule.
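The notion of uniqueness "in a certain scope" can be made concrete with a small sketch: the same suffix may repeat across scopes, but each identifier is unique within its own scope. The class and method names below are hypothetical:

```python
# Hypothetical sketch: identifiers that are unique within a given scope.
# "session-0" and "app-0" may coexist, but no two identifiers collide
# inside the same scope.
from itertools import count

class ScopedIdAllocator:
    def __init__(self):
        self._counters = {}  # one independent counter per scope

    def allocate(self, scope: str) -> str:
        counter = self._counters.setdefault(scope, count())
        return f"{scope}-{next(counter)}"

alloc = ScopedIdAllocator()
print(alloc.allocate("session"))  # session-0
print(alloc.allocate("session"))  # session-1
print(alloc.allocate("app"))      # app-0
```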
The term “circuitry” at least in some examples refers to a circuit or system of multiple circuits configured to perform a particular function in an electronic device. The circuit or system of circuits may be part of, or include one or more hardware components, such as a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), programmable logic controller (PLC), single-board computer (SBC), system on chip (SoC), system in package (SiP), multi-chip package (MCP), digital signal processor (DSP), and the like, that are configured to provide the described functionality. In addition, the term “circuitry” may also refer to a combination of one or more hardware elements with the program code used to carry out the functionality of that program code. Some types of circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. Such a combination of hardware elements and program code may be referred to as a particular type of circuitry.
The term “processor circuitry” at least in some examples refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. The term “processor circuitry” at least in some examples refers to one or more application processors, one or more baseband processors, a physical CPU, a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
The term “memory” and/or “memory circuitry” at least in some examples refers to one or more hardware devices for storing data, including random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), conductive bridge Random Access Memory (CB-RAM), spin transfer torque (STT)-MRAM, phase change RAM (PRAM), core memory, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), flash memory, nonvolatile RAM (NVRAM), magnetic disk storage mediums, optical storage mediums, flash memory devices or other machine readable mediums for storing data. The term “computer-readable medium” includes, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
The term “interface circuitry” at least in some examples refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” at least in some examples refers to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
The term “infrastructure processing unit” or “IPU” at least in some examples refers to an advanced networking device with hardened accelerators and network connectivity (e.g., Ethernet or the like) that accelerates and manages infrastructure functions using tightly coupled, dedicated, programmable cores. In some implementations, an IPU offers full infrastructure offload and provides an extra layer of security by serving as a control point of a host for running infrastructure applications. An IPU is capable of offloading the entire infrastructure stack from the host and can control how the host attaches to this infrastructure. This gives service providers an extra layer of security and control, enforced in hardware by the IPU.
The term “device” at least in some examples refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. The term “controller” at least in some examples refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move. The term “scheduler” at least in some examples refers to an entity or element that assigns resources (e.g., processor time, network links, memory space, and/or the like) to perform tasks. The term “network scheduler” at least in some examples refers to a node, element, or entity that manages network packets in transmit and/or receive queues of one or more protocol stacks of network access circuitry (e.g., a network interface controller (NIC), baseband processor, and the like). The term “network scheduler” at least in some examples can be used interchangeably with the terms “packet scheduler”, “queueing discipline” or “qdisc”, and/or “queueing algorithm”.
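The scheduler definition above (an entity that assigns resources to perform tasks) can be sketched as follows. The class design, FIFO policy, and names are illustrative assumptions, not part of the present disclosure:

```python
# Hypothetical sketch: a scheduler assigning resources (worker slots)
# to tasks, queueing tasks when no resource is free.
from collections import deque

class Scheduler:
    def __init__(self, num_workers: int):
        self.free_workers = deque(range(num_workers))  # available resources
        self.pending = deque()                         # tasks awaiting a resource
        self.assignments = {}                          # task -> assigned worker

    def submit(self, task: str) -> None:
        """Assign a free worker to the task, or queue the task."""
        if self.free_workers:
            self.assignments[task] = self.free_workers.popleft()
        else:
            self.pending.append(task)

    def complete(self, task: str) -> None:
        """Release the task's worker and hand it to the next queued task."""
        worker = self.assignments.pop(task)
        if self.pending:
            self.assignments[self.pending.popleft()] = worker
        else:
            self.free_workers.append(worker)

sched = Scheduler(num_workers=1)
sched.submit("A")
sched.submit("B")            # no free worker: B is queued
print(sched.assignments)     # {'A': 0}
sched.complete("A")
print(sched.assignments)     # {'B': 0}
```

A network or packet scheduler, as also defined above, follows the same pattern with link transmission opportunities as the resource and queued packets as the tasks.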
The term “compute node” or “compute device” at least in some examples refers to an identifiable entity implementing an aspect of computing operations, whether part of a larger system, distributed collection of systems, or a standalone apparatus. In some examples, a compute node may be referred to as a “computing device”, “computing system”, or the like, whether in operation as a client, server, or intermediate entity. Specific implementations of a compute node may be incorporated into a server, base station, gateway, road side unit, on-premise unit, user equipment, end consuming device, appliance, or the like. For purposes of the present disclosure, the term “node” at least in some examples refers to and/or is interchangeable with the terms “device”, “component”, “sub-system”, and/or the like. The term “computer system” at least in some examples refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the terms “computer system” and/or “system” at least in some examples refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” at least in some examples refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
The term “user equipment” or “UE” at least in some examples refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, station, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, and the like. Furthermore, the term “user equipment” or “UE” includes any type of wireless/wired device or any computing device including a wireless communications interface. Examples of UEs, client devices, and the like, include desktop computers, workstations, laptop computers, mobile data terminals, smartphones, tablet computers, wearable devices, machine-to-machine (M2M) devices, machine-type communication (MTC) devices, Internet of Things (IoT) devices, embedded systems, sensors, autonomous vehicles, drones, robots, in-vehicle infotainment systems, instrument clusters, onboard diagnostic devices, dashtop mobile equipment, electronic engine management systems, electronic/engine control units/modules, microcontrollers, control modules, server devices, network appliances, head-up display (HUD) devices, helmet-mounted display devices, augmented reality (AR) devices, virtual reality (VR) devices, mixed reality (MR) devices, and/or other like systems or devices. The term “station” or “STA” at least in some examples refers to a logical entity that is a singly addressable instance of a medium access control (MAC) and physical layer (PHY) interface to the wireless medium (WM). The term “wireless medium” or “WM” at least in some examples refers to the medium used to implement the transfer of protocol data units (PDUs) between peer physical layer (PHY) entities of a wireless local area network (LAN).
The term “network element” at least in some examples refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, network access node (NAN), base station, access point (AP), RAN device, RAN node, gateway, server, network appliance, network function (NF), virtualized NF (VNF), and/or the like. The term “network controller” at least in some examples refers to a functional block that centralizes some or all of the control and management functionality of a network domain and may provide an abstract view of the network domain to other functional blocks via an interface. The term “network access node” or “NAN” at least in some examples refers to a network element in a radio access network (RAN) responsible for the transmission and reception of radio signals in one or more cells or coverage areas to or from a UE or station. A “network access node” or “NAN” can have an integrated antenna or may be connected to an antenna array by feeder cables. Additionally or alternatively, a “network access node” or “NAN” includes specialized digital signal processing, network function hardware, and/or compute hardware to operate as a compute node. In some examples, a “network access node” or “NAN” may be split into multiple functional blocks operating in software for flexibility, cost, and performance. 
In some examples, a “network access node” or “NAN” may be a base station (e.g., an evolved Node B (eNB) or a next generation Node B (gNB)), an access point and/or wireless network access point, router, switch, hub, radio unit or remote radio head, Transmission Reception Point (TRP), a gateway device (e.g., Residential Gateway, Wireline 5G Access Network, Wireline 5G Cable Access Network, Wireline BBF Access Network, and the like), network appliance, and/or some other network access hardware. The term “access point” or “AP” at least in some examples refers to an entity that contains one station (STA) and provides access to the distribution services, via the wireless medium (WM) for associated STAs. An AP comprises a STA and a distribution system access function (DSAF).
The term “cell” at least in some examples refers to a radio network object that can be uniquely identified by a UE from an identifier (e.g., cell ID) that is broadcasted over a geographical area from a network access node (NAN). Additionally or alternatively, the term “cell” at least in some examples refers to a geographic area covered by a NAN. The term “serving cell” at least in some examples refers to a primary cell (PCell) for a UE in a connected mode or state (e.g., RRC_CONNECTED) and not configured with carrier aggregation (CA) and/or dual connectivity (DC). Additionally or alternatively, the term “serving cell” at least in some examples refers to a set of cells comprising zero or more special cells and one or more secondary cells for a UE in a connected mode or state (e.g., RRC_CONNECTED) and configured with CA. The term “primary cell” or “PCell” at least in some examples refers to a Master Cell Group (MCG) cell, operating on a primary frequency, in which a UE either performs an initial connection establishment procedure or initiates a connection re-establishment procedure. The term “Secondary Cell” or “SCell” at least in some examples refers to a cell providing additional radio resources on top of a special cell (SpCell) for a UE configured with CA. The term “special cell” or “SpCell” at least in some examples refers to a PCell for non-DC operation or refers to a PCell of an MCG or a PSCell of an SCG for DC operation. The term “Master Cell Group” or “MCG” at least in some examples refers to a group of serving cells associated with a “Master Node” comprising a SpCell (PCell) and optionally one or more SCells. The term “Secondary Cell Group” or “SCG” at least in some examples refers to a subset of serving cells comprising a Primary SCell (PSCell) and zero or more SCells for a UE configured with DC. The term “Primary SCG Cell” refers to the SCG cell in which a UE performs random access when performing a reconfiguration with sync procedure for DC operation. 
The term “handover” at least in some examples refers to the transfer of a user's connection from one radio channel to another (can be the same or different cell). Additionally or alternatively, the term “handover” at least in some examples refers to the process in which a radio access network changes the radio transmitters, radio access mode, and/or radio system used to provide the bearer services, while maintaining a defined bearer service QoS.
The term “Master Node” or “MN” at least in some examples refers to a NAN that provides control plane connection to a core network. The term “Secondary Node” or “SN” at least in some examples refers to a NAN providing resources to the UE in addition to the resources provided by an MN and/or a NAN with no control plane connection to a core network. The term “E-UTRAN NodeB”, “eNodeB”, or “eNB” at least in some examples refers to a RAN node providing E-UTRA user plane (e.g., PDCP, RLC, MAC, PHY) and control plane (e.g., RRC) protocol terminations towards a UE, and connected via an S1 interface to the Evolved Packet Core (EPC). Two or more eNBs are interconnected with each other (and/or with one or more en-gNBs) by means of an X2 interface. The term “next generation eNB” or “ng-eNB” at least in some examples refers to a RAN node providing E-UTRA user plane and control plane protocol terminations towards a UE, and connected via the NG interface to the 5GC. Two or more ng-eNBs are interconnected with each other (and/or with one or more gNBs) by means of an Xn interface. The term “Next Generation NodeB”, “gNodeB”, or “gNB” at least in some examples refers to a RAN node providing NR user plane and control plane protocol terminations towards a UE, and connected via the NG interface to the 5GC. Two or more gNBs are interconnected with each other (and/or with one or more ng-eNBs) by means of an Xn interface. The term “E-UTRA-NR gNB” or “en-gNB” at least in some examples refers to a RAN node providing NR user plane and control plane protocol terminations towards a UE, and acting as a Secondary Node in E-UTRA-NR Dual Connectivity (EN-DC) scenarios (see e.g., 3GPP TS 37.340). Two or more en-gNBs are interconnected with each other (and/or with one or more eNBs) by means of an X2 interface.
The term “Next Generation RAN node” or “NG-RAN node” at least in some examples refers to either a gNB or an ng-eNB. The term “IAB-node” at least in some examples refers to a RAN node that supports new radio (NR) access links to user equipment (UEs) and NR backhaul links to parent nodes and child nodes. The term “IAB-donor” at least in some examples refers to a RAN node (e.g., a gNB) that provides network access to UEs via a network of backhaul and access links. The term “Transmission Reception Point” or “TRP” at least in some examples refers to an antenna array with one or more antenna elements available to a network located at a specific geographical location for a specific area. The term “Central Unit” or “CU” at least in some examples refers to a logical node hosting radio resource control (RRC), Service Data Adaptation Protocol (SDAP), and/or Packet Data Convergence Protocol (PDCP) protocols/layers of an NG-RAN node, or RRC and PDCP protocols of the en-gNB that controls the operation of one or more DUs; a CU terminates an F1 interface connected with a DU and may be connected with multiple DUs. The term “Distributed Unit” or “DU” at least in some examples refers to a logical node hosting Backhaul Adaptation Protocol (BAP), F1 application protocol (F1AP), radio link control (RLC), medium access control (MAC), and physical (PHY) layers of the NG-RAN node or en-gNB, and its operation is partly controlled by a CU; one DU supports one or multiple cells, and one cell is supported by only one DU; and a DU terminates the F1 interface connected with a CU. The term “Radio Unit” or “RU” at least in some examples refers to a logical node hosting PHY layer or Low-PHY layer and radiofrequency (RF) processing based on a lower layer functional split. The term “split architecture” at least in some examples refers to an architecture in which a CU, DU, and/or RU are physically separated from one another.
Additionally or alternatively, the term “split architecture” at least in some examples refers to a RAN architecture such as those discussed in 3GPP TS 38.401, 3GPP TS 38.410, and 3GPP TS 38.473. The term “integrated architecture” at least in some examples refers to an architecture in which an RU and DU are implemented on one platform, and/or an architecture in which a DU and a CU are implemented on one platform.
The term “Residential Gateway” or “RG” at least in some examples refers to a device providing, for example, voice, data, broadcast video, and video on demand, to other devices in customer premises. The term “Wireline 5G Access Network” or “W-5GAN” at least in some examples refers to a wireline AN that connects to a 5GC via N2 and N3 reference points. The W-5GAN can be either a W-5GBAN or W-5GCAN. The term “Wireline 5G Cable Access Network” or “W-5GCAN” at least in some examples refers to an Access Network defined in/by CableLabs. The term “Wireline BBF Access Network” or “W-5GBAN” at least in some examples refers to an Access Network defined in/by the Broadband Forum (BBF). The term “Wireline Access Gateway Function” or “W-AGF” at least in some examples refers to a network function in W-5GAN that provides connectivity to a 3GPP 5G Core network (5GC) to 5G-RG and/or FN-RG. The term “5G-RG” at least in some examples refers to an RG capable of connecting to a 5GC playing the role of a user equipment with regard to the 5GC; it supports a secure element and exchanges N1 signaling with the 5GC. The 5G-RG can be either a 5G-BRG or 5G-CRG.
The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell. The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA. The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
The term “edge computing” at least in some examples refers to an implementation or arrangement of distributed computing elements that move processing activities and resources (e.g., compute, storage, acceleration, and/or network resources) towards the “edge” of the network in an effort to reduce latency and increase throughput for endpoint users (client devices, user equipment, and the like). Additionally or alternatively, the term “edge computing” at least in some examples refers to a set of services hosted relatively close to a client/UE’s access point of attachment to a network to achieve relatively efficient service delivery through reduced end-to-end latency and/or load on the transport network. In some examples, edge computing implementations involve the offering of services and/or resources in cloud-like systems, functions, applications, and subsystems, from one or multiple locations accessible via wireless networks. Additionally or alternatively, the term “edge computing” at least in some examples refers to the concept, as described in [TS23501], that enables operator and 3rd party services to be hosted close to a UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network. The term “edge compute node” or “edge compute device” at least in some examples refers to an identifiable entity implementing an aspect of edge computing operations, whether part of a larger system, distributed collection of systems, or a standalone apparatus. In some examples, a compute node may be referred to as an “edge node”, “edge device”, or “edge system”, whether in operation as a client, server, or intermediate entity.
Additionally or alternatively, the term “edge compute node” at least in some examples refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, or component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network. References to an “edge computing system”, however, generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, which is organized to accomplish or offer some aspect of services or resources in an edge computing setting. The term “edge computing platform” or “edge platform” at least in some examples refers to a collection of functionality that is used to instantiate, execute, or run edge applications on a specific edge compute node (e.g., virtualization infrastructure and/or the like), enable such edge applications to provide and/or consume edge services, and/or otherwise provide one or more edge services. The term “edge application” or “edge app” at least in some examples refers to an application that can be instantiated on, or executed by, an edge compute node within an edge computing network, system, or framework, and can potentially provide and/or consume edge computing services. The term “edge service” at least in some examples refers to a service provided via an edge compute node and/or edge platform, either by the edge platform itself and/or by an edge application.
The term “cloud computing” or “cloud” at least in some examples refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users. Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like).
The term “network function” or “NF” at least in some examples refers to a functional block within a network infrastructure that has one or more external interfaces and a defined functional behavior. The term “network instance” at least in some examples refers to information identifying a domain; in some examples, a network instance is used by a UPF for traffic detection and routing. The term “network service” or “NS” at least in some examples refers to a composition or collection of NF(s) and/or network service(s), defined by its functional and behavioral specification(s). The term “NF service instance” at least in some examples refers to an identifiable instance of the NF service. The term “NF instance” at least in some examples refers to an identifiable instance of an NF. The term “NF service” at least in some examples refers to functionality exposed by an NF through a service-based interface and consumed by other authorized NFs. The term “NF service operation” at least in some examples refers to an elementary unit that an NF service is composed of. The term “NF service set” at least in some examples refers to a group of interchangeable NF service instances of the same service type within an NF instance; in some examples, the NF service instances in the same NF service set have access to the same context data. The term “NF set” at least in some examples refers to a group of interchangeable NF instances of the same type, supporting the same services and the same network slice(s); in some examples, the NF instances in the same NF Set may be geographically distributed but have access to the same context data.
The term “RAN function” or “RANF” at least in some examples refers to a functional block within a RAN architecture that has one or more external interfaces and a defined behavior related to the operation of a RAN or RAN node. Additionally or alternatively, the term “RAN function” or “RANF” at least in some examples refers to a set of functions and/or NFs that are part of a RAN. The term “Application Function” or “AF” at least in some examples refers to an element or entity that interacts with a 3GPP core network in order to provide services. Additionally or alternatively, the term “Application Function” or “AF” at least in some examples refers to an edge compute node or ECT framework from the perspective of a 5G core network. The term “management function” at least in some examples refers to a logical entity playing the roles of a service consumer and/or a service producer. The term “management service” at least in some examples refers to a set of offered management capabilities. The term “network function virtualization” or “NFV” at least in some examples refers to the principle of separating network functions from the hardware they run on by using virtualization techniques and/or virtualization technologies. The term “virtualized network function” or “VNF” at least in some examples refers to an implementation of an NF that can be deployed on a Network Function Virtualization Infrastructure (NFVI). The term “Network Functions Virtualization Infrastructure” or “NFVI” at least in some examples refers to the totality of all hardware and software components that build up the environment in which VNFs are deployed. The term “Virtualized Infrastructure Manager” or “VIM” at least in some examples refers to a functional block that is responsible for controlling and managing the NFVI compute, storage and network resources, usually within one operator's infrastructure domain.
The term “virtualization container”, “execution container”, or “container” at least in some examples refers to a partition of a compute node that provides an isolated virtualized computation environment. The term “OS container” at least in some examples refers to a virtualization container utilizing a shared Operating System (OS) kernel of its host, where the host providing the shared OS kernel can be a physical compute node or another virtualization container. Additionally or alternatively, the term “container” at least in some examples refers to a standard unit of software (or a package) including code and its relevant dependencies, and/or an abstraction at the application layer that packages code and dependencies together. Additionally or alternatively, the term “container” or “container image” at least in some examples refers to a lightweight, standalone, executable software package that includes everything needed to run an application such as, for example, code, runtime environment, system tools, system libraries, and settings. The term “virtual machine” or “VM” at least in some examples refers to a virtualized computation environment that behaves in a same or similar manner as a physical computer and/or a server. The term “hypervisor” at least in some examples refers to a software element that partitions the underlying physical resources of a compute node, creates VMs, manages resources for VMs, and isolates individual VMs from each other.
The term “Data Network” or “DN” at least in some examples refers to a network hosting data-centric services such as, for example, operator services, the internet, third-party services, or enterprise networks. Additionally or alternatively, a DN at least in some examples refers to service networks that belong to an operator or third party, which are offered as a service to a client or user equipment (UE). DNs are sometimes referred to as “Packet Data Networks” or “PDNs”. The term “Local Area Data Network” or “LADN” at least in some examples refers to a DN that is accessible by the UE only in specific locations, that provides connectivity to a specific DNN, and whose availability is provided to the UE.
The term “Internet of Things” or “IoT” at least in some examples refers to a system of interrelated computing devices, mechanical and digital machines capable of transferring data with little or no human interaction, and may involve technologies such as real-time analytics, machine learning and/or AI, embedded systems, wireless sensor networks, control systems, automation (e.g., smart home, smart building and/or smart city technologies), and the like. IoT devices are usually low-power devices without heavy compute or storage capabilities.
The term “protocol” at least in some examples refers to a predefined procedure or method of performing one or more operations. Additionally or alternatively, the term “protocol” at least in some examples refers to a common means for unrelated objects to communicate with each other (sometimes also called interfaces). The term “communication protocol” at least in some examples refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and/or the like. In various implementations, a “protocol” and/or a “communication protocol” may be represented using a protocol stack, a finite state machine (FSM), and/or any other suitable data structure. The term “standard protocol” at least in some examples refers to a protocol whose specification is published and known to the public and is controlled by a standards body. The term “protocol stack” or “network stack” at least in some examples refers to an implementation of a protocol suite or protocol family. In various implementations, a protocol stack includes a set of protocol layers, where the lowest protocol deals with low-level interaction with hardware and/or communications interfaces and each higher layer adds additional capabilities. Additionally or alternatively, the term “protocol” at least in some examples refers to a formal set of procedures that are adopted to ensure communication between two or more functions within the same layer of a hierarchy of functions. The term “application layer” at least in some examples refers to an abstraction layer that specifies shared communications protocols and interfaces used by hosts in a communications network.
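The note above that a protocol may be represented using a finite state machine (FSM) can be illustrated with a minimal sketch; the state names, events, and transitions below are hypothetical assumptions for illustration and are not taken from any protocol specification in the present disclosure.

```python
# Minimal FSM sketch of a toy connection-oriented protocol.
# States/events ("CLOSED", "open", etc.) are illustrative only.
TRANSITIONS = {
    ("CLOSED", "open"): "SYN_SENT",
    ("SYN_SENT", "ack"): "ESTABLISHED",
    ("ESTABLISHED", "close"): "CLOSED",
}

def step(state: str, event: str) -> str:
    """Advance the FSM; an unknown (state, event) pair leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "CLOSED"
for ev in ("open", "ack", "close"):
    state = step(state, ev)
```

A real protocol state machine would also carry timers and error events, but the table-driven shape above is the core of an FSM representation.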
Additionally or alternatively, the term “application layer” at least in some examples refers to an abstraction layer that interacts with software applications that implement a communicating component, and includes identifying communication partners, determining resource availability, and synchronizing communication. Examples of application layer protocols include HTTP, HTTPS, File Transfer Protocol (FTP), Dynamic Host Configuration Protocol (DHCP), Internet Message Access Protocol (IMAP), Lightweight Directory Access Protocol (LDAP), MQTT (MQ Telemetry Transport), Remote Authentication Dial-In User Service (RADIUS), Diameter protocol, Extensible Authentication Protocol (EAP), RDMA over Converged Ethernet version 2 (RoCEv2), Real-time Transport Protocol (RTP), RTP Control Protocol (RTCP), Real Time Streaming Protocol (RTSP), SBMV Protocol, Skinny Client Control Protocol (SCCP), Session Initiation Protocol (SIP), Session Description Protocol (SDP), Simple Mail Transfer Protocol (SMTP), Simple Network Management Protocol (SNMP), Simple Service Discovery Protocol (SSDP), Small Computer System Interface (SCSI), Internet SCSI (iSCSI), iSCSI Extensions for RDMA (iSER), Transport Layer Security (TLS), voice over IP (VoIP), Virtual Private Network (VPN), Extensible Messaging and Presence Protocol (XMPP), and/or the like.
The term “session layer” at least in some examples refers to an abstraction layer that controls dialogues and/or connections between entities or elements, and may include establishing, managing and terminating the connections between the entities or elements.
The term “transport layer” at least in some examples refers to a protocol layer that provides end-to-end (e2e) communication services such as, for example, connection-oriented communication, reliability, flow control, and multiplexing. Examples of transport layer protocols include datagram congestion control protocol (DCCP), Fibre Channel Protocol (FCP), Generic Routing Encapsulation (GRE), GPRS Tunneling Protocol (GTP), Micro Transport Protocol (µTP), Multipath TCP (MPTCP), MultiPath QUIC (MPQUIC), Multipath UDP (MPUDP), Quick UDP Internet Connections (QUIC), Remote Direct Memory Access (RDMA), Resource Reservation Protocol (RSVP), Stream Control Transmission Protocol (SCTP), transmission control protocol (TCP), user datagram protocol (UDP), and/or the like.
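As a minimal illustration of a connectionless transport-layer protocol from the list above, the following sketch exchanges a single UDP datagram over the loopback interface using Python's standard socket module; the payload and loopback addressing are assumptions for illustration only.

```python
import socket

# Receiver: bind to loopback; port 0 asks the OS for any free port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()  # (host, assigned port)

# Sender: UDP is connectionless, so a single sendto() suffices.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", addr)

# Receive the datagram along with the sender's address.
data, sender = rx.recvfrom(1024)
tx.close()
rx.close()
```

Unlike TCP, no connection setup, ordering, or retransmission is provided; each datagram stands alone.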
The term “network layer” at least in some examples refers to a protocol layer that includes means for transferring network packets from a source to a destination via one or more networks. Additionally or alternatively, the term “network layer” at least in some examples refers to a protocol layer that is responsible for packet forwarding and/or routing through intermediary nodes. Additionally or alternatively, the term “network layer” or “internet layer” at least in some examples refers to a protocol layer that includes interworking methods, protocols, and specifications that are used to transport network packets across a network. As examples, the network layer protocols include internet protocol (IP), IP security (IPsec), Internet Control Message Protocol (ICMP), Internet Group Management Protocol (IGMP), Open Shortest Path First protocol (OSPF), Routing Information Protocol (RIP), RDMA over Converged Ethernet version 2 (RoCEv2), Subnetwork Access Protocol (SNAP), and/or some other internet or network protocol layer.
The term “link layer” or “data link layer” at least in some examples refers to a protocol layer that transfers data between nodes on a network segment across a physical layer. Examples of link layer protocols include logical link control (LLC), medium access control (MAC), Ethernet, RDMA over Converged Ethernet version 1 (RoCEvl), and/or the like.
The term “radio resource control”, “RRC layer”, or “RRC” at least in some examples refers to a protocol layer or sublayer that performs system information handling; paging; establishment, maintenance, and release of RRC connections; security functions; establishment, configuration, maintenance and release of Signalling Radio Bearers (SRBs) and Data Radio Bearers (DRBs); mobility functions/services; QoS management; and some sidelink specific services and functions over the Uu interface (see e.g., 3GPP TS 36.331 and 3GPP TS 38.331 (“[TS38331]”)).
The term “Service Data Adaptation Protocol”, “SDAP layer”, or “SDAP” at least in some examples refers to a protocol layer or sublayer that performs mapping between QoS flows and data radio bearers (DRBs) and marking QoS flow IDs (QFI) in both DL and UL packets (see e.g., 3GPP TS 37.324).
The term “Packet Data Convergence Protocol”, “PDCP layer”, or “PDCP” at least in some examples refers to a protocol layer or sublayer that performs transfer of user plane or control plane data; maintains PDCP sequence numbers (SNs); header compression and decompression using the Robust Header Compression (ROHC) and/or Ethernet Header Compression (EHC) protocols; ciphering and deciphering; integrity protection and integrity verification; provides timer based SDU discard; routing for split bearers; duplication and duplicate discarding; reordering and in-order delivery; and/or out-of-order delivery (see e.g., 3GPP TS 36.323 and/or 3GPP TS 38.323).
The term “radio link control layer”, “RLC layer”, or “RLC” at least in some examples refers to a protocol layer or sublayer that performs transfer of upper layer PDUs; sequence numbering independent of the one in PDCP; error correction through ARQ; segmentation and/or re-segmentation of RLC SDUs; reassembly of SDUs; duplicate detection; RLC SDU discarding; RLC re-establishment; and/or protocol error detection (see e.g., 3GPP TS 36.322 and 3GPP TS 38.322).
The term “medium access control protocol”, “MAC protocol”, or “MAC” at least in some examples refers to a protocol that governs access to the transmission medium in a network, to enable the exchange of data between stations in a network. Additionally or alternatively, the term “medium access control layer”, “MAC layer”, or “MAC” at least in some examples refers to a protocol layer or sublayer that performs functions to provide frame-based, connectionless-mode (e.g., datagram style) data transfer between stations or devices. Additionally or alternatively, the term “medium access control layer”, “MAC layer”, or “MAC” at least in some examples refers to a protocol layer or sublayer that performs mapping between logical channels and transport channels; multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels; scheduling information reporting; error correction through HARQ (one HARQ entity per cell in case of CA); priority handling between UEs by means of dynamic scheduling; priority handling between logical channels of one UE by means of logical channel prioritization; priority handling between overlapping resources of one UE; and/or padding (see e.g., 3GPP TS 36.321 and 3GPP TS 38.321).
The term “physical layer”, “PHY layer”, or “PHY” at least in some examples refers to a protocol layer or sublayer that includes capabilities to transmit and receive modulated signals for communicating in a communications network (see e.g., 3GPP TS 36.201 and 3GPP TS 38.201).
The term “access technology” at least in some examples refers to the technology used for the underlying physical connection to a communication network. The term “radio access technology” or “RAT” at least in some examples refers to the technology used for the underlying physical connection to a radio-based communication network. The term “radio technology” at least in some examples refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer. The term “RAT type” at least in some examples may identify a transmission technology and/or communication protocol used in an access network. Examples of access technologies include wireless access technologies/RATs, wireline, wireline-cable, wireline broadband forum (wireline-BBF), Ethernet (see e.g., IEEE Standard for Ethernet, IEEE Std 802.3-2018 (31 Aug. 2018) (“[IEEE8023]”)) and variants thereof, fiber optics networks (e.g., ITU-T G.651, ITU-T G.652, Optical Transport Network (OTN), Synchronous optical networking (SONET) and synchronous digital hierarchy (SDH), and the like), digital subscriber line (DSL) and variants thereof, Data Over Cable Service Interface Specification (DOCSIS) technologies, hybrid fiber-coaxial (HFC) technologies, and/or the like.
Examples of RATs (or RAT types) and/or communications protocols include Advanced Mobile Phone System (AMPS) technologies (e.g., Digital AMPS (D-AMPS), Total Access Communication System (TACS) and variants thereof, such as Extended TACS (ETACS), and the like); Global System for Mobile Communications (GSM) technologies (e.g., Circuit Switched Data (CSD), High-Speed CSD (HSCSD), General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE)); Third Generation Partnership Project (3GPP) technologies (e.g., Universal Mobile Telecommunications System (UMTS) and variants thereof (e.g., UMTS Terrestrial Radio Access (UTRA), Wideband Code Division Multiple Access (W-CDMA), Freedom of Multimedia Access (FOMA), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and the like), Generic Access Network (GAN) / Unlicensed Mobile Access (UMA), High Speed Packet Access (HSPA) and variants thereof (e.g., HSPA Plus (HSPA+)), Long Term Evolution (LTE) and variants thereof (e.g., LTE-Advanced (LTE-A), Evolved UTRA (E-UTRA), LTE Extra, LTE-A Pro, LTE LAA, MuLTEfire, and the like), Fifth Generation (5G) or New Radio (NR), narrowband IoT (NB-IoT), 3GPP Proximity Services (ProSe), and/or the like); ETSI RATs (e.g., High Performance Radio Metropolitan Area Network (HiperMAN), Intelligent Transport Systems (ITS) (e.g., ITS-G5, ITS-G5B, ITS-G5C, and the like), and the like); Institute of Electrical and Electronics Engineers (IEEE) technologies and/or WiFi (e.g., IEEE Standard for Local and Metropolitan Area Networks: Overview and Architecture, IEEE Std 802-2014, pp.1-74 (30 Jun.
2014) (“[IEEE802]”), [IEEE80211], IEEE 802.15 technologies (e.g., IEEE 802.15.4 and variants thereof (e.g., ZigBee, WirelessHART, MiWi, ISA100.11a, Thread, IPv6 over Low power WPAN (6LoWPAN), and the like), IEEE 802.15.6 and/or the like), WLAN V2X RATs (e.g., [IEEE80211], IEEE Wireless Access in Vehicular Environments (WAVE) Architecture (IEEE 1609.0), IEEE 802.11bd, Dedicated Short Range Communications (DSRC), and/or the like), Worldwide Interoperability for Microwave Access (WiMAX) (e.g., IEEE 802.16), Mobile Broadband Wireless Access (MBWA)/iBurst (e.g., IEEE 802.20 and variants thereof), Wireless Gigabit Alliance (WiGig) standards (e.g., IEEE 802.11ad, IEEE 802.11ay, and the like), and so forth); Integrated Digital Enhanced Network (iDEN) and variants thereof (e.g., Wideband Integrated Digital Enhanced Network (WiDEN)); millimeter wave (mmWave) technologies/standards (e.g., wireless systems operating at 10-300 GHz and above 3GPP 5G); short-range and/or wireless personal area network (WPAN) technologies/standards (e.g., IEEE 802.15 technologies (e.g., as mentioned previously); Bluetooth and variants thereof (e.g., Bluetooth 5.3, Bluetooth Low Energy (BLE), and the like), WiFi-direct, Miracast, ANT/ANT+, Z-Wave, Universal Plug and Play (UPnP), low power Wide Area Networks (LPWANs), Long Range Wide Area Network (LoRa or LoRaWAN™), and the like); optical and/or visible light communication (VLC) technologies/standards (e.g., IEEE Std 802.15.7 and/or the like); Sigfox; Mobitex; 3GPP2 technologies (e.g., cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), and Evolution-Data Optimized or Evolution-Data Only (EV-DO)); Push-to-talk (PTT), Mobile Telephone System (MTS) and variants thereof (e.g., Improved MTS (IMTS), Advanced MTS (AMTS), and the like); Personal Digital Cellular (PDC); Personal Handy-phone System (PHS); Cellular Digital Packet Data (CDPD); DataTAC; Digital Enhanced Cordless Telecommunications (DECT) and
variants thereof (e.g., DECT Ultra Low Energy (DECT ULE), DECT-2020, DECT-5G, and the like); Ultra High Frequency (UHF) communication; Very High Frequency (VHF) communication; and/or any other suitable RAT or protocol. In addition to the aforementioned RATs/standards, any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the ETSI, among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.
The term “channel” at least in some examples refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” at least in some examples refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
The term “carrier” at least in some examples refers to a modulated waveform conveying one or more physical channels (e.g., 5G/NR, E-UTRA, UTRA, and/or GSM/EDGE physical channels). The term “carrier frequency” at least in some examples refers to the center frequency of a cell.
The term “subframe” at least in some examples refers to a time interval during which a signal is signaled. In some implementations, a subframe is equal to 1 millisecond (ms). The term “time slot” at least in some examples refers to an integer multiple of consecutive subframes. The term “superframe” at least in some examples refers to a time interval comprising two time slots.
The term “network address” at least in some examples refers to an identifier for a node or host in a computer network, and may be a unique identifier across a network and/or may be unique to a locally administered portion of the network.
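The distinction above between an address that is unique across a network and one that is unique only within a locally administered portion of a network can be illustrated with Python's standard ipaddress module; the two example addresses are assumptions chosen for illustration.

```python
import ipaddress

# A globally routable address versus a locally administered (private-range)
# address; the specific values are illustrative only.
public = ipaddress.ip_address("8.8.8.8")
local = ipaddress.ip_address("192.168.1.10")

# is_private reports whether the address belongs to a locally
# administered range (e.g., RFC 1918 for IPv4).
public_scope = public.is_private   # False: unique across the internet
local_scope = local.is_private     # True: unique only locally
```

The same module handles IPv6 addresses and network prefixes with the identical interface.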
The term “application” or “app” at least in some examples refers to a computer program designed to carry out a specific task other than one relating to the operation of the computer itself. Additionally or alternatively, the term “application” or “app” at least in some examples refers to a complete and deployable package or environment used to achieve a certain function in an operational environment. The term “process” at least in some examples refers to an instance of a computer program that is being executed by one or more threads. In some implementations, a process may be made up of multiple threads of execution that execute instructions concurrently. The term “algorithm” at least in some examples refers to an unambiguous specification of how to solve a problem or a class of problems by performing calculations, input/output operations, data processing, automated reasoning tasks, and/or the like.
The term “application programming interface” or “API” at least in some examples refers to a set of subroutine definitions, communication protocols, and tools for building software. Additionally or alternatively, the term “application programming interface” or “API” at least in some examples refers to a set of clearly defined methods of communication among various components. In some examples, an API may be defined or otherwise used for a web-based system, operating system, database system, computer hardware, software library, and/or the like.
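As a sketch of an API in the sense above, i.e., a set of clearly defined methods of communication among components, the following hypothetical key-value interface shows callers depending only on its defined methods; the class and method names are illustrative assumptions, not an API defined in the present disclosure.

```python
class KvStore:
    """Minimal key-value API: callers rely only on put() and get(),
    never on the internal storage representation."""

    def __init__(self):
        self._data = {}  # internal detail hidden behind the API

    def put(self, key, value):
        """Store a value under a key."""
        self._data[key] = value

    def get(self, key, default=None):
        """Return the value for a key, or a default if absent."""
        return self._data.get(key, default)

store = KvStore()
store.put("vnf-1", "running")
status = store.get("vnf-1")
```

Because consumers depend only on the method signatures, the internal dictionary could be swapped for a database without changing any caller.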
The terms “instantiate,” “instantiation,” and the like at least in some examples refers to the creation of an instance. In some examples, the term “instance” refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
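A minimal sketch of instantiation in the sense above, i.e., creating concrete instances of an object during execution of program code; the VnfDescriptor class and its fields are hypothetical illustrations, not objects defined by the present disclosure.

```python
class VnfDescriptor:
    """Hypothetical class; each instantiation yields a distinct instance."""

    def __init__(self, name, vcpus):
        self.name = name
        self.vcpus = vcpus

# Two instantiations of the same class produce two concrete,
# independent occurrences (instances) of the object.
a = VnfDescriptor("upf-1", 4)
b = VnfDescriptor("upf-2", 8)
```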
The term “reference point” at least in some examples refers to a conceptual point at the conjunction of two non-overlapping functional groups, elements, or entities. The term “service based interface” at least in some examples refers to a representation of how a set of services is provided and/or exposed by a particular NF.
The term “use case” at least in some examples refers to a description of a system from a user's perspective. Use cases sometimes treat a system as a black box, and the interactions with the system, including system responses, are perceived as from outside the system. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert.
The term “user” at least in some examples refers to an abstract representation of any entity issuing commands, requests, and/or data to a compute node or system, and/or otherwise consumes or uses services. Additionally or alternatively, the term “user” at least in some examples refers to an entity, not part of the 3GPP System, which uses 3GPP System services (e.g., a person using a 3GPP system mobile station as a portable telephone). The term “user profile” at least in some examples refers to a set of information to provide a user with a consistent, personalized service environment, irrespective of the user's location or the terminal used (within the limitations of the terminal and the serving network).
The term “service consumer” or “consumer” at least in some examples refers to an entity that consumes one or more services. The term “service producer” or “producer” at least in some examples refers to an entity that offers, serves, or otherwise provides one or more services. The term “service provider” or “provider” at least in some examples refers to an organization or entity that provides one or more services to at least one service consumer. For purposes of the present disclosure, the terms “service provider” and “service producer” may be used interchangeably even though these terms may refer to different concepts. Examples of service providers include cloud service provider (CSP), network service provider (NSP), application service provider (ASP) (e.g., Application software service provider in a service-oriented architecture (ASSP)), internet service provider (ISP), telecommunications service provider (TSP), online service provider (OSP), payment service provider (PSP), managed service provider (MSP), storage service providers (SSPs), SAML service provider, and/or the like.
The term “datagram” at least in some examples refers to a basic transfer unit associated with a packet-switched network; a datagram may be structured to have header and payload sections. The term “datagram” at least in some examples may be synonymous with any of the following terms, even though they may refer to different aspects: “data unit”, a “protocol data unit” or “PDU”, a “service data unit” or “SDU”, “frame”, “packet”, a “network packet”, “segment”, “block”, “cell”, “chunk”, “Type Length Value” or “TLV”, and/or the like. Examples of datagrams, network packets, and the like, include internet protocol (IP) packet, Internet Control Message Protocol (ICMP) packet, UDP packet, TCP packet, SCTP packet, Ethernet frame, RRC messages/packets, SDAP PDU, SDAP SDU, PDCP PDU, PDCP SDU, MAC PDU, MAC SDU, BAP PDU, BAP SDU, RLC PDU, RLC SDU, WiFi frames as discussed in an IEEE 802 protocol/standard (e.g., [IEEE80211] or the like), Type Length Value (TLV), and/or other like data structures. The term “packet” at least in some examples refers to an information unit identified by a label at layer 3 of the OSI reference model. In some examples, a “packet” may also be referred to as a “network protocol data unit” or “NPDU”. The term “protocol data unit” at least in some examples refers to a unit of data specified in an (N)-protocol layer and includes (N)-protocol control information and possibly (N)-user data.
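The structure noted above, a datagram having header and payload sections, can be sketched with Python's standard struct module; the 4-byte header layout (version, type, 16-bit payload length) is a hypothetical illustration and does not correspond to any protocol format named in the text.

```python
import struct

# Hypothetical header: 1-byte version, 1-byte type, 2-byte payload length,
# all in network (big-endian) byte order.
HEADER = struct.Struct("!BBH")

def build(version, ptype, payload):
    """Prepend the header section to the payload section."""
    return HEADER.pack(version, ptype, len(payload)) + payload

def parse(datagram):
    """Split a datagram back into header fields and payload bytes."""
    version, ptype, length = HEADER.unpack_from(datagram)
    return version, ptype, datagram[HEADER.size:HEADER.size + length]

dgram = build(1, 7, b"data")
```

Real protocol headers (e.g., UDP's 8-byte header) follow the same pack/unpack pattern with more fields.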
The term “information element” or “IE” at least in some examples refers to a structural element containing one or more fields. Additionally or alternatively, the term “information element” or “IE” at least in some examples refers to a field or set of fields defined in a standard or specification that is used to convey data and/or protocol information. The term “field” at least in some examples refers to individual contents of an information element, or a data element that contains content. The term “data frame”, “data field”, or “DF” at least in some examples refers to a data type that contains more than one data element in a predefined order. The term “data element” or “DE” at least in some examples refers to a data type that contains one single data. Additionally or alternatively, the term “data element” at least in some examples refers to an atomic state of a particular object with at least one specific property at a certain point in time, and may include one or more of a data element name or identifier, a data element definition, one or more representation terms, enumerated values or codes (e.g., metadata), and/or a list of synonyms to data elements in other metadata registries. Additionally or alternatively, a “data element” at least in some examples refers to a data type that contains one single data. Data elements may store data, which may be referred to as the data element’s content (or “content items”). Content items may include text content, attributes, properties, and/or other elements referred to as “child elements.” Additionally or alternatively, data elements may include zero or more properties and/or zero or more attributes, each of which may be defined as database objects (e.g., fields, records, and the like), object instances, and/or other data elements. An “attribute” at least in some examples refers to a markup construct including a name-value pair that exists within a start tag or empty element tag. 
Attributes contain data related to their element and/or control the element's behavior.
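The distinction above between an attribute (a name-value pair within a start tag) and a field (content carried by an information element) can be illustrated with a small XML fragment parsed by Python's standard library; the element and attribute names are hypothetical.

```python
import xml.etree.ElementTree as ET

# "name" is an attribute (name-value pair inside the start tag);
# the <field> element's text is content of the information element.
elem = ET.fromstring('<ie name="plmnId"><field>00101</field></ie>')

attr_value = elem.get("name")          # reads the attribute
field_value = elem.find("field").text  # reads the field content
```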
The term “reference” at least in some examples refers to data useable to locate other data and may be implemented in a variety of ways (e.g., a pointer, an index, a handle, a key, an identifier, a hyperlink, and/or the like).
The terms “configuration”, “policy”, “ruleset”, and/or “operational parameters”, at least in some examples refer to a machine-readable information object that contains instructions, conditions, parameters, criteria, data, metadata, and/or other information that is/are relevant to a component, device, system, network, service producer, service consumer, and/or other element/entity.
The term “data set” or “dataset” at least in some examples refers to a collection of data; a “data set” or “dataset” may be formed or arranged in any type of data structure. In some examples, one or more characteristics can define or influence the structure and/or properties of a dataset such as the number and types of attributes and/or variables, and various statistical measures (e.g., standard deviation, kurtosis, and/or the like). The term “data structure” at least in some examples refers to a data organization, management, and/or storage format. Additionally or alternatively, the term “data structure” at least in some examples refers to a collection of data values, the relationships among those data values, and/or the functions, operations, tasks, and the like, that can be applied to the data. Examples of data structures include primitives (e.g., Boolean, character, floating-point numbers, fixed-point numbers, integers, reference or pointers, enumerated type, and/or the like), composites (e.g., arrays, records, strings, union, tagged union, and/or the like), abstract data types (e.g., data container, list, tuple, associative array, map, dictionary, set (or dataset), multiset or bag, stack, queue, graph (e.g., tree, heap, and the like), and/or the like), routing table, symbol table, quad-edge, blockchain, purely-functional data structures (e.g., stack, queue, (multi)set, random access list, hash consing, zipper data structure, and/or the like).
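Several of the data structures enumerated above (stack, queue, heap, associative array/map, set) can be sketched with Python's built-in and standard-library types; this is an illustrative sketch of those abstract data types, not an implementation from the present disclosure.

```python
from collections import deque
import heapq

# Stack: last in, first out.
stack = []
stack.append(1)
stack.append(2)
top = stack.pop()          # -> 2

# Queue: first in, first out.
queue = deque([1, 2])
front = queue.popleft()    # -> 1

# Heap: smallest element retrieved first.
heap = [5, 1, 3]
heapq.heapify(heap)
smallest = heapq.heappop(heap)  # -> 1

# Associative array / map / dictionary: key-to-value lookup.
table = {"a": 1}

# Set: unordered collection in which duplicates collapse.
bag = {1, 2, 2, 3}
```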
The term “association” at least in some examples refers to a model of relationships between Managed Objects. Associations can be implemented in several ways, such as: (1) name bindings, (2) reference attributes, and (3) association objects. The term “Information Object Class” or “IOC” at least in some examples refers to a representation of the management aspect of a network resource. Additionally or alternatively, the term “Information Object Class” or “IOC” at least in some examples refers to a description of the information that can be passed/used in management interfaces. In some examples, IOC representations are technology-agnostic software objects. Additionally or alternatively, an IOC has attributes that represent the various properties of the class of objects. Additionally or alternatively, an IOC can support operations providing network management services invocable on demand for that class of objects. Additionally or alternatively, an IOC may support notifications that report event occurrences relevant for that class of objects. In some examples, an IOC is modelled using the stereotype "Class" in the UML meta-model.
The term “Managed Object” or “MO” at least in some examples refers to an instance of a Managed Object Class (MOC) representing the management aspects of a network resource. Its representation is a technology specific software object. In some examples, an MO is called an “MO instance” or “MOI”. Additionally or alternatively, the term “Managed Object Class” or “MOC” at least in some examples refers to a class of technology specific software objects. In some examples, an MOC is the same as an IOC except that the former is defined in technology specific terms and the latter is defined in technology agnostic terms. MOCs are used/defined in SS level specifications. In some examples, IOCs are used/defined in IS level specifications.
The term “Management Information Base” or “MIB” at least in some examples refers to an instance of an NRM and has some values on the defined attributes and associations specific for that instance. In some examples, an MIB includes a name space (describing the MO containment hierarchy in the MIB through Distinguished Names), a number of MOs with their attributes, and a number of associations between the MOs.
The term “name space” at least in some examples refers to a collection of names. In some examples, a name space is restricted to a hierarchical containment structure, including its simplest form - the one-level, flat name space. In some examples, all MOs in an MIB are included in the corresponding name space and the MIB/name space shall only support a strict hierarchical containment structure (with one root object). An MO that contains another is said to be the superior (parent); the contained MO is referred to as the subordinate (child). The parent of all MOs in a single name space is called a Local Root. The ultimate parent of all MOs of all managed systems is called the Global Root.
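The strict hierarchical containment structure described above can be sketched with nested mappings; the Relative Distinguished Name (RDN) strings below are hypothetical and chosen only to illustrate how Distinguished Names identify MOs under a single root:

```python
# Illustrative sketch (not taken from any standard NRM): an MO containment
# hierarchy modeled as nested dicts keyed by hypothetical RDN strings.
mib = {
    "SubNetwork=SN1": {                  # local root of this name space
        "ManagedElement=ME1": {
            "VnfInstance=VNF1": {},      # subordinate (child) MOs
            "VnfInstance=VNF2": {},
        },
    },
}

def distinguished_names(tree, prefix=""):
    """Yield the full Distinguished Name of every MO, parent before child."""
    for rdn, children in tree.items():
        dn = f"{prefix},{rdn}" if prefix else rdn
        yield dn
        yield from distinguished_names(children, dn)

dns = list(distinguished_names(mib))
```

Each MO's Distinguished Name is the concatenation of RDNs from the local root down to that MO, which is how the name space describes the containment hierarchy.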
The term “network resource” at least in some examples refers to a discrete entity represented by an IOC for the purpose of network and service management. In some examples, a network resource may represent intelligence, information, hardware and/or software of a telecommunication network. The term “Network Resource Model” or “NRM” at least in some examples refers to a collection of IOCs, inclusive of their associations, attributes and operations, representing a set of network resources under management.
The term “self-organizing network” or “SON” at least in some examples refers to a type of network architecture or system that is designed to automate the planning, configuration, optimization, and/or healing processes of a wireless network with little or no direct human intervention (see e.g., 3GPP TS 32.500, 3GPP TS 32.522, 3GPP TS 32.541, 3GPP TS 32.551, 3GPP TS 28.310, 3GPP TS 28.313, 3GPP TS 28.627, and 3GPP TS 28.628).
The term “performance indicator” at least in some examples refers to performance data aggregated over a group of NFs that is derived from performance measurements collected at the NFs that belong to the group. In some examples, performance indicators are derived, collected or aggregated according to an aggregation method identified in a performance indicator definition.
The term “artificial intelligence” or “AI” at least in some examples refers to any intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Additionally or alternatively, the term “artificial intelligence” or “AI” at least in some examples refers to the study of “intelligent agents” and/or any device that perceives its environment and takes actions that maximize its chance of successfully achieving a goal.
The terms “artificial neural network”, “neural network”, or “NN” at least in some examples refer to an ML technique comprising a collection of connected artificial neurons or nodes that (loosely) model neurons in a biological brain that can transmit signals to other artificial neurons or nodes, where connections (or edges) between the artificial neurons or nodes are (loosely) modeled on synapses of a biological brain. The artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. The artificial neurons can be aggregated or grouped into one or more layers where different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times. NNs are usually used for supervised learning, but can be used for unsupervised learning as well.
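The signal flow just described, weighted connections, per-neuron aggregation against a bias, and layer-by-layer transformation from input to output, can be sketched in plain Python; the weight and bias values below are arbitrary stand-ins for what training would produce:

```python
import math

# Arbitrary (untrained) weights and biases for a tiny network with
# 2 inputs, 2 hidden neurons, and 1 output neuron.
w_hidden = [[0.5, -0.6], [0.3, 0.8]]   # one row of input weights per hidden neuron
b_hidden = [0.0, 0.1]
w_out = [1.2, -0.4]
b_out = 0.05

def sigmoid(z):
    """Squashing activation: maps the aggregated signal into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Each hidden neuron aggregates its weighted input signals plus a bias,
    # then applies the activation; the output layer repeats the same pattern.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * hi for w, hi in zip(w_out, h)) + b_out)

y = forward([1.0, 0.0])   # one forward pass: input layer -> hidden -> output
```

During learning, the weights and biases would be adjusted to strengthen or weaken each connection, per the definition above.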
Examples of NNs include deep NN, feed forward NN (FFN), deep FFN (DFF), convolutional NN (CNN), deep CNN (DCN), deconvolutional NN (DNN), a deep belief NN, a perceptron NN, recurrent NN (RNN) (e.g., including Long Short Term Memory (LSTM) algorithm, gated recurrent unit (GRU), echo state network (ESN), and the like), spiking NN (SNN), deep stacking network (DSN), Markov chain, generative adversarial network (GAN), transformers, stochastic NNs (e.g., Bayesian Network (BN), Bayesian belief network (BBN), a Bayesian NN (BNN), Deep BNN (DBNN), Dynamic BN (DBN), probabilistic graphical model (PGM), Boltzmann machine, restricted Boltzmann machine (RBM), Hopfield network or Hopfield NN, convolutional deep belief network (CDBN), and the like), Linear Dynamical System (LDS), Switching LDS (SLDS), Optical NNs (ONNs), an NN for reinforcement learning (RL) and/or deep RL (DRL), attention and/or self-attention mechanisms, and/or the like.
The term “mathematical model” at least in some examples refers to a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs including governing equations, assumptions, and constraints. The term “statistical model” at least in some examples refers to a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data and/or similar data from a population; in some examples, a “statistical model” represents a data-generating process.
The term “machine learning” or “ML” at least in some examples refers to the use of computer systems to optimize a performance criterion using example (training) data and/or past experience. ML involves using algorithms to perform specific task(s) without using explicit instructions to perform the specific task(s), and/or relying on patterns, predictions, and/or inferences. ML uses statistics to build ML model(s) (also referred to as “models”) in order to make predictions or decisions based on sample data (e.g., training data).
The term “machine learning model” or “ML model” at least in some examples refers to an application, program, process, algorithm, and/or function that is capable of making predictions, inferences, or decisions based on an input data set and/or is capable of detecting patterns based on an input data set. Additionally or alternatively, the term “machine learning model” or “ML model” at least in some examples refers to a mathematical algorithm that can be "trained" by data (or otherwise learn from data) and/or human expert input as examples to replicate a decision an expert would make when provided that same information. In some examples, a “machine learning model” or “ML model” is trained on training data to detect patterns and/or make predictions, inferences, and/or decisions. In some examples, a “machine learning model” or “ML model” is based on a mathematical and/or statistical model. For purposes of the present disclosure, the terms “ML model”, “AI model”, “AI/ML model”, and the like may be used interchangeably.
The term “machine learning algorithm” or “ML algorithm” at least in some examples refers to an application, program, process, algorithm, and/or function that builds or estimates an ML model based on sample data or training data. Additionally or alternatively, the term “machine learning algorithm” or “ML algorithm” at least in some examples refers to a program, process, algorithm, and/or function that learns from experience with respect to some task(s) and some performance measure(s)/metric(s), and an ML model is an object or data structure created after an ML algorithm is trained with training data. For purposes of the present disclosure, the terms “ML algorithm”, “AI algorithm”, “AI/ML algorithm”, and the like may be used interchangeably. Additionally, although the term “ML algorithm” may refer to different concepts than the term “ML model,” these terms may be used interchangeably for the purposes of the present disclosure.
The term “machine learning application” or “ML application” at least in some examples refers to an application, program, process, algorithm, and/or function that contains some AI/ML model(s) and application-level descriptions. Additionally or alternatively, the term “machine learning application” or “ML application” at least in some examples refers to a complete and deployable application and/or package that includes at least one ML model and/or other data capable of achieving a certain function and/or performing a set of actions or tasks in an operational environment. For purposes of the present disclosure, the terms “ML application”, “AI application”, “AI/ML application”, and the like may be used interchangeably.
The term “machine learning entity” or “ML entity” at least in some examples refers to an entity that is either an ML model or contains an ML model and ML model-related metadata that can be managed as a single composite entity. In some examples, metadata may include, for example, the applicable runtime context for the ML model. The term “AI decision entity”, “machine learning decision entity”, or “ML decision entity” at least in some examples refers to an entity that applies a non-AI and/or non-ML based logic for making decisions that can be managed as a single composite entity.
The term “machine learning training”, “ML training”, or “MLT” at least in some examples refers to capabilities and associated end-to-end (e2e) processes to enable an ML training function to perform ML model training (e.g., as defined herein). In some examples, ML training capabilities include interaction with other parties/entities to collect and/or format the data required for ML model training. The term “machine learning model training” or “ML model training” at least in some examples refers to capabilities of an ML training function to take data, run the data through an ML model, derive associated loss, optimization, and/or objective/goal, and adjust the parameterization of the ML model based on the computed loss, optimization, and/or objective/goal.
The term “machine learning training function”, “ML training function”, or “MLT function” at least in some examples refers to a function with MLT capabilities.
The term “AI/ML inference function” or “ML inference function” at least in some examples refers to a function (or set of functions) that employs an ML model and/or AI decision entity to conduct inference. Additionally or alternatively, the term “AI/ML inference function” or “ML inference function” at least in some examples refers to an inference framework used to run a compiled model in the inference host. In some examples, an “AI/ML inference function” or “ML inference function” may also be referred to as a “model inference engine”, “ML inference engine”, or “inference engine”.
The terms “model parameter” and/or “parameter” in the context of ML, at least in some examples refer to values, characteristics, and/or properties that are learnt during training. Additionally or alternatively, “model parameter” and/or “parameter” in the context of ML, at least in some examples refer to a configuration variable that is internal to the model and whose value can be estimated from the given data. Model parameters are usually required by a model when making predictions, and their values define the skill of the model on a particular problem. Examples of such model parameters/parameters include weights (e.g., in an ANN); constraints; support vectors in a support vector machine (SVM); coefficients in a linear regression and/or logistic regression; word frequency, sentence length, noun or verb distribution per sentence, the number of specific character n-grams per word, lexical diversity, and the like, for natural language processing (NLP) and/or natural language understanding (NLU); and/or the like. The term “hyperparameter” at least in some examples refers to characteristics, properties, and/or parameters for an ML process that cannot be learnt during a training process. Hyperparameters are usually set before training takes place, and may be used in processes to help estimate model parameters.
Examples of hyperparameters include model size (e.g., in terms of memory space, bytes, number of layers, and the like); training data shuffling (e.g., whether to do so and by how much); number of evaluation instances, iterations, epochs (e.g., a number of iterations or passes over the training data), or episodes; number of passes over training data; regularization; learning rate (e.g., the speed at which the algorithm reaches (converges to) optimal weights); learning rate decay (or weight decay); momentum; number of hidden layers; size of individual hidden layers; weight initialization scheme; dropout and gradient clipping thresholds; the C value and sigma value for SVMs; the k in k-nearest neighbors; number of branches in a decision tree; number of clusters in a clustering algorithm; vector size; word vector size for NLP and NLU; and/or the like.
The term “objective function” at least in some examples refers to a function to be maximized or minimized for a specific optimization problem. In some cases, an objective function is defined by its decision variables and an objective. The objective is the value, target, or goal to be optimized, such as maximizing profit or minimizing usage of a particular resource. The specific objective function chosen depends on the specific problem to be solved and the objectives to be optimized. Constraints may also be defined to restrict the values the decision variables can assume thereby influencing the objective value (output) that can be achieved. During an optimization process, an objective function’s decision variables are often changed or manipulated within the bounds of the constraints to improve the objective function’s values. In general, the difficulty in solving an objective function increases as the number of decision variables included in that objective function increases. The term “decision variable” refers to a variable that represents a decision to be made.
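The relationship between decision variables, constraints, and the objective value can be illustrated with a small sketch; the products, coefficients, and capacity limits below are invented for the example:

```python
# Illustrative optimization: choose units of two hypothetical products
# (the decision variables a and b) to maximize profit, subject to a shared
# capacity constraint and a per-product limit. All numbers are invented.
best = None
for a in range(0, 11):            # decision variable a
    for b in range(0, 11):        # decision variable b
        if a + b <= 10 and a <= 6:        # constraints restrict feasible values
            profit = 40 * a + 30 * b      # objective function value (to maximize)
            if best is None or profit > best[0]:
                best = (profit, a, b)
```

Here the decision variables are manipulated within the bounds of the constraints to improve the objective value, exactly the process described above; with more decision variables, exhaustive enumeration like this quickly becomes intractable, which reflects the stated difficulty of larger objective functions.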
The term “optimization” at least in some examples refers to an act, process, or methodology of making something (e.g., a design, system, or decision) as fully perfect, functional, or effective as possible. Optimization usually includes mathematical procedures such as finding the maximum or minimum of a function. The term “optimal” at least in some examples refers to a most desirable or satisfactory end, outcome, or output. The term “optimum” at least in some examples refers to an amount or degree of something that is most favorable to some end. The term “optima” at least in some examples refers to a condition, degree, amount, or compromise that produces a best possible result. Additionally or alternatively, the term “optima” at least in some examples refers to a most favorable or advantageous outcome or result.
The term “precision” at least in some examples refers to the closeness of two or more measurements to each other. The term “precision” may also be referred to as “positive predictive value”. The term “accuracy” at least in some examples refers to the closeness of one or more measurements to a specific value. The term “quantile” at least in some examples refers to a cut point(s) dividing a range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. The term “quantile function” at least in some examples refers to a function that is associated with a probability distribution of a random variable, and specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability. The term “quantile function” may also be referred to as a percentile function, percent-point function, or inverse cumulative distribution function. The term “recall” at least in some examples refers to the fraction of relevant instances that were retrieved, or the number of true positive predictions or inferences divided by the number of true positives plus false negative predictions or inferences. The term “recall” may also be referred to as “sensitivity”.
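The precision and recall definitions above can be computed directly from a classifier's predictions; the binary labels below are toy values chosen for the example:

```python
# Toy true labels vs. predicted labels for a binary classifier.
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0, 1, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)   # positive predictive value
recall = tp / (tp + fn)      # sensitivity: fraction of relevant instances retrieved
```

With these values, 4 of the 6 positive predictions are correct (precision 2/3), and 4 of the 5 actual positives are retrieved (recall 0.8).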
The terms “regression algorithm” and/or “regression analysis” in the context of ML at least in some examples refers to a set of statistical processes for estimating the relationships between a dependent variable (often referred to as the “outcome variable”) and one or more independent variables (often referred to as “predictors”, “covariates”, or “features”). Examples of regression algorithms/models include logistic regression, linear regression, gradient descent (GD), stochastic GD (SGD), and the like.
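A minimal sketch of linear regression fitted by gradient descent, one of the regression algorithms listed above, follows; the toy data, learning rate, and epoch count are chosen arbitrarily for illustration:

```python
# Toy data generated from the rule y = 2x + 1: one feature (x),
# one outcome variable (y).
data = [(float(x), 2.0 * x + 1.0) for x in range(10)]

learning_rate = 0.01   # hyperparameter: step size of each update
epochs = 2000          # hyperparameter: number of passes over the data
w, b = 0.0, 0.0        # model parameters, estimated from the data

for _ in range(epochs):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y            # prediction error on this example
        grad_w += 2 * err * x / len(data)  # gradient of mean squared error w.r.t. w
        grad_b += 2 * err / len(data)      # gradient of mean squared error w.r.t. b
    w -= learning_rate * grad_w          # descend the gradient
    b -= learning_rate * grad_b
```

The learnt coefficients converge toward the generating relationship (w near 2, b near 1), illustrating both the regression definition here and the model parameter/hyperparameter distinction above.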
The term “reinforcement learning” or “RL” at least in some examples refers to a goal-oriented learning technique based on interaction with an environment. In RL, an agent aims to optimize a long-term objective by interacting with the environment based on a trial and error process. Examples of RL algorithms include Markov decision process, Markov chain, Q-learning, multi-armed bandit learning, temporal difference learning, and deep RL. The term “reward function”, in the context of RL, at least in some examples refers to a function that outputs a reward value based on one or more reward variables; the reward value provides feedback for an RL policy so that an RL agent can learn a desirable behavior. The term “reward shaping”, in the context of RL, at least in some examples refers to adjusting or altering a reward function to output a positive reward for desirable behavior and a negative reward for undesirable behavior.
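Tabular Q-learning, one of the RL algorithms named above, can be sketched on a toy environment; the corridor layout, rewards, and learning constants below are invented for the example:

```python
import random

random.seed(0)

# Toy environment: a 5-state corridor. The agent starts in state 0 and the
# reward function outputs +1 only on reaching state 4. Actions: 0=left, 1=right.
N_STATES, ACTIONS = 5, (0, 1)
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-value table: state x action
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for _ in range(500):                        # episodes of trial-and-error interaction
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[s][act])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Temporal-difference update toward the discounted long-term return.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
```

After training, the greedy policy at every non-terminal state is "move right", showing how the agent optimizes a long-term objective purely from reward feedback.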
The term “supervised learning” at least in some examples refers to an ML technique that aims to learn a function or generate an ML model that produces an output given a labeled data set. Supervised learning algorithms build models from a set of data that contains both the inputs and the desired outputs. For example, supervised learning involves learning a function or model that maps an input to an output based on example input-output pairs or some other form of labeled training data including a set of training examples. Each input-output pair includes an input object (e.g., a vector) and a desired output object or value (referred to as a “supervisory signal”). Supervised learning can be grouped into classification algorithms, regression algorithms, and instance-based algorithms.
The term “tuning” or “tune” at least in some examples refers to a process of adjusting model parameters or hyperparameters of an ML model in order to improve its performance. Additionally or alternatively, the term “tuning” or “tune” at least in some examples refers to optimizing an ML model’s model parameters and/or hyperparameters.
The term “unsupervised learning” at least in some examples refers to an ML technique that aims to learn a function to describe a hidden structure from unlabeled data and/or builds/generates models from a set of data that contains only inputs and no desired output labels. Examples of unsupervised learning approaches/methods include K-means clustering, hierarchical clustering, mixture models, density-based spatial clustering of applications with noise (DBSCAN), ordering points to identify the clustering structure (OPTICS), anomaly detection methods (e.g., local outlier factor, isolation forest, and/or the like), expectation-maximization algorithm (EM), method of moments, topic modeling, and blind signal separation techniques (e.g., principal component analysis (PCA), independent component analysis, non-negative matrix factorization, singular value decomposition). In some examples, unsupervised training methods include backpropagation, Hopfield learning rule, Boltzmann learning rule, contrastive divergence, wake sleep, variational inference, maximum likelihood, maximum a posteriori, Gibbs sampling, backpropagating reconstruction errors, and hidden state reparameterizations. The term “semi-supervised learning” at least in some examples refers to ML algorithms that develop ML models from incomplete training data, where a portion of the sample input does not include labels.
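K-means clustering, the first approach listed above, can be sketched in a few lines; the two one-dimensional point groups and the initial centroids are invented for the example, and no labels are used:

```python
# Two well-separated 1-D groups of points; k-means recovers the grouping
# from the inputs alone, with no output labels. Values are illustrative.
points = [1.0, 1.2, 0.8, 0.9, 1.1, 8.0, 8.2, 7.9, 8.1, 7.8]
centroids = [0.0, 10.0]   # arbitrary initial guesses for k = 2 clusters

for _ in range(10):       # Lloyd's iterations: assign points, then update centroids
    clusters = [[], []]
    for p in points:
        nearest = min((0, 1), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    centroids = [sum(c) / len(c) for c in clusters]
```

The centroids settle on the means of the two hidden groups (near 1.0 and 8.0), the "hidden structure" the definition refers to.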
Aspects of the inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is in fact disclosed. Thus, although specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown. This disclosure is intended to cover any and all adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.

Claims

1. A method of operating a model training function, the method comprising: receiving, from one or more performance assurance management service producers (MnS-Ps), virtualized network function (VNF) measurement data related to one or more VNF instances and virtualized network entity (VNE) measurement data related to a VNE; training a machine learning (ML) model to predict, based on the VNF measurement data and the VNE measurement data, VNF energy consumption data (ECD) for respective VNF instances of the one or more VNF instances; and deploying the ML model to a model inference function to generate predictions of VNF ECD.
2. The method of claim 1, wherein the method includes: sending, to respective performance assurance MnS-Ps, a request to create a performance management job to collect the VNF measurement data from the one or more VNF instances and the VNE measurement data from the VNE.
3. The method of claims 1-2, wherein the VNF measurement data includes virtual resource usage data (VRUD) for individual VNF instances of the one or more VNF instances.
4. The method of claim 3, wherein the VRUD includes, for the respective VNF instances, virtual compute usage data, virtual memory usage data, and virtual disk usage data.
5. The method of claim 4, wherein the VRUD includes, for the respective VNF instances, virtual network usage data.
6. The method of claims 1-5, wherein the VNE measurement data includes VNE ECD collected from one or more power, energy, environmental (PEE) sensors.
7. The method of claim 6, wherein the VNE ECD is generated by mapping energy consumption metrics received from the one or more PEE sensors to a managed element representing the VNE.
8. The method of claims 1-7, wherein the VNF measurement data and VNE measurement data are collected at a same interval.
9. The method of claim 8, wherein the VNF measurement data and VNE measurement data are time synchronized.
10. The method of claims 6-9, wherein the training includes: training the ML model using the VRUD of the respective VNF instances as data features and the VNE ECD as data labels to compute model parameters of the ML model.
11. A method of operating a model inference function, the method comprising: receiving, from a machine learning (ML) model training function, an ML model trained to predict virtualized network function (VNF) energy consumption data (ECD); receiving, from one or more performance assurance management service producers (MnS-Ps), VNF measurement data related to one or more VNF instances and measurement data related to the VNF ECD; and generating, using the trained ML model, predicted VNF ECD for respective VNF instances of the one or more VNF instances based on the VNF measurement data.
12. The method of claim 11, wherein the method includes: sending, to respective performance assurance MnS-Ps, a request to create a performance management job to collect the VNF measurement data from the one or more VNF instances.
13. The method of claims 11-12, wherein the VNF measurement data includes virtual resource usage data (VRUD) for individual VNF instances of the one or more VNF instances.
14. The method of claim 13, wherein the VRUD includes, for the respective VNF instances, virtual compute usage data, virtual memory usage data, and virtual disk usage data.
15. The method of claim 14, wherein the VRUD includes, for the respective VNF instances, virtual network usage data.
16. The method of claims 11-15, wherein a key performance indicator (KPI) is generated for the predicted VNF ECD.
17. The method of claim 16, wherein the KPI is a measure of energy consumption of the respective VNF instances.
18. The method of claims 1-17, wherein the predicted VNF ECD is expressed in kilowatt-hours.
19. The method of claims 1-18, wherein the model inference function is operated by a management data analytic function (MDAF).
20. The method of claims 1-19, wherein the model training function is a machine learning training MnS-P and the model inference function is a machine learning training management service consumer (MnS-C).
21. The method of claims 1-19, wherein the model training function is a machine learning training function (MLTF) contained by a Network Data Analytics Function (NWDAF).
22. One or more computer readable media comprising instructions, wherein execution of the instructions by processor circuitry is to cause the processor circuitry to perform the method of claims 1-21.
23. A computer program comprising the instructions of claim 22.
24. An electromagnetic signal carrying the instructions of claim 22.
25. An apparatus comprising means for performing the method of claims 1-21.
PCT/US2023/077505 2022-10-28 2023-10-23 Artificial intelligence/machine learning (ai/ml) models for determining energy consumption in virtual network function instances WO2024091862A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263420471P 2022-10-28 2022-10-28
US63/420,471 2022-10-28

Publications (1)

Publication Number Publication Date
WO2024091862A1 true WO2024091862A1 (en) 2024-05-02

Family

ID=90831823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/077505 WO2024091862A1 (en) 2022-10-28 2023-10-23 Artificial intelligence/machine learning (ai/ml) models for determining energy consumption in virtual network function instances

Country Status (1)

Country Link
WO (1) WO2024091862A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160116966A1 (en) * 2008-11-20 2016-04-28 International Business Machines Corporation Method and apparatus for power-efficiency management in a virtualized cluster system
US20180349202A1 (en) * 2017-05-30 2018-12-06 Hewlett Packard Enterprise Development Lp Virtual Network Function Resource Allocation
US20210105228A1 (en) * 2019-10-04 2021-04-08 Samsung Electronics Co., Ltd. Intelligent cloud platform to host resource efficient edge network function
US20220158897A1 (en) * 2019-06-10 2022-05-19 Apple Inc. End-to-end radio access network (ran) deployment in open ran (o-ran)
WO2022177455A1 (en) * 2021-02-22 2022-08-25 Instituto De Telecomunicações Method and system for optimizing resource and traffic management of a computer execution environment in a vran

