WO2024092132A1 - Artificial intelligence and machine learning entity loading in cellular networks - Google Patents


Info

Publication number
WO2024092132A1
Authority
WO
WIPO (PCT)
Prior art keywords
entity
moi
loading
request
data
Application number
PCT/US2023/077924
Other languages
English (en)
Inventor
Yizhi Yao
Joey Chou
Original Assignee
Intel Corporation
Application filed by Intel Corporation
Publication of WO2024092132A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/02: Standardisation; Integration
    • H04L41/0233: Object-oriented techniques, for representation of network management data, e.g. common object request broker architecture [CORBA]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/02: Arrangements for optimising operational condition

Definitions

  • This disclosure generally relates to systems and methods for wireless communications and, more particularly, to artificial intelligence/machine learning (AI/ML) entity loading.
  • AI/ML Artificial intelligence and machine learning play a pivotal role in various aspects of fifth generation (5G) networks and/or later releases, encompassing 5G Core (5GC), Next-Generation Radio Access Network (NG-RAN), and network management systems.
  • FIG. 1 depicts an illustrative schematic diagram for AI/ML entity loading, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 2 depicts an illustrative schematic diagram for AI/ML entity loading, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 3 depicts an illustrative schematic diagram for AI/ML entity loading, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 4 illustrates a flow diagram of a process for an illustrative AI/ML entity loading system, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 5 illustrates an example network architecture, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 6 schematically illustrates a wireless network, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 7 illustrates components of a computing device, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 8 illustrates a network 800 in accordance with various embodiments.
  • FIG. 9 illustrates a simplified block diagram of artificial intelligence (AI)-assisted communication between a user equipment (UE) and a radio access network (RAN), in accordance with various embodiments.
  • the ML entity is either an ML model or an entity that contains an ML model and its related metadata.
  • Example embodiments of the present disclosure relate to systems, methods, and devices for artificial intelligence/machine learning (AI/ML) entity loading and performance indicator selection for machine learning model training.
  • Some embodiments herein are directed to AI/ML entity loading for 5GS.
  • the name of the Management Service (MnS) and its terms are not significant; they may be named differently in alternate embodiments.
  • an alternative term may be used for “loading”, e.g., transfer, distribution, deployment, etc.
  • Some embodiments described herein are directed to performance indicator selection for ML model training.
  • the name of the IOC, attribute, or information element is not significant and may be named differently in alternate embodiments.
  • FIG. 1 depicts an illustrative schematic diagram for AI/ML entity loading, in accordance with one or more example embodiments of the present disclosure.
  • In FIG. 1, there is shown an AI/ML operational workflow.
  • the workflow involves three main phases: the training phase, the deployment phase, and the inference phase.
  • the ML entity needs to be loaded (or transferred) to the inference function, so that the ML entity can be activated in the inference function to conduct inference.
  • During the ML model training phase (including training, validation, and testing), the performance of the ML model needs to be evaluated.
  • the related performance indicators need to be collected and analyzed. The consumer needs to know what kind of ML model the ML training function can train, and what performance indicators are supported for each kind of model.
  • FIG. 2 depicts an illustrative schematic diagram for AI/ML entity loading, in accordance with one or more example embodiments of the present disclosure.
  • ML entity loading refers to the process of making an ML entity available in the operational environments, where it could start adding value by conducting inference (e.g., prediction) in the inference function. After the trained ML entity meets the performance criteria per the ML entity testing, the ML entity could be loaded in target inference function(s) in the 3GPP system, e.g., via a software installation, file transfer, or a configuration management procedure and subsequently activated.
  • the ML entity loading may be requested by the consumer, or initiated by the producer based on the loading policy (e.g., the threshold of the testing performance of the ML entity, the threshold of the inference performance of the existing ML entity, etc.) provided by the consumer.
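The policy-driven trigger described above can be sketched as follows. This is a minimal illustration, not 3GPP-defined behavior; the class name, field names, and decision rule (load when the new entity tests well enough or the deployed entity has degraded) are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class MLLoadingPolicy:
    """Hypothetical consumer-provided loading policy (names illustrative)."""
    ml_entity_id: str
    testing_perf_threshold: float      # minimum testing score of the new ML entity
    inference_perf_threshold: float    # floor for the currently deployed entity
    target_inference_functions: list   # e.g., DNs of target inference functions

def should_load(policy: MLLoadingPolicy,
                testing_score: float,
                current_inference_score: float) -> bool:
    """Producer-side check: load when the new entity tests well enough,
    or when the deployed entity's inference performance has degraded."""
    return (testing_score >= policy.testing_perf_threshold
            or current_inference_score < policy.inference_perf_threshold)
```

With such a policy in place, the producer can act without an explicit loading request from the consumer, which matches the consumer-provided-policy variant described above.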
  • the data fed to the ML entity may change to the level where it is different from the data used in the initial prior training of the respective ML entity.
  • the ML entity therein may need to be retrained and reloaded.
  • a use case may be for ML entity loading management. This use case is applicable to the case where the ML training function and inference function are not co-located.
  • the ML entity needs to be loaded by the ML entity loading MnS producer to the target inference function(s) per the request from the MnS consumer, or initiated by the producer based on the loading policy provided by the consumer. It should be noted that in some embodiments, an ML entity loading MnS producer may be inside or outside the inference function.
  • the MnS consumer needs to be notified about the ML entity loading or to retrieve the loading information of the ML entity.
  • the general information used to describe a loaded ML entity may include:
  • Resource information, which describes the static parameters of the ML entity (e.g., aIMLEntityVersion, aIMLEntityId, trainingContext).
  • Management information, which describes the information model that is used for ML entity lifecycle management (e.g., activation flag, status, creation time, last update time).
  • Capability information, which describes the capability information (e.g., inference type, performance metrics).
  • an AI/ML entity loading system may address the following potential requirements:
  • REQ-MODEL DPL-CON-1: The ML entity loading MnS producer should have a capability allowing the consumer to request and retrieve loading information of an ML entity.
  • REQ-MODEL DPL-CON-2: The ML entity loading MnS producer should have a capability to notify the consumer about the loading information (process) of an ML entity.
  • REQ-MODEL DPL-CON-3: The ML entity loading MnS producer should have a capability allowing the consumer to request the loading of an ML entity to the inference function(s).
  • the ML entity loading MnS producer should have a capability allowing the consumer to provide the loading policy for an ML entity.
  • an AI/ML entity loading system may facilitate an NRM-based solution.
  • This solution uses the instances of the following IOCs for interaction between the ML loading MnS producer and consumer to support the ML entity loading, where the ML loading MnS producer could be located inside or outside the inference function:
  • the IOC representing the ML entity loading request, for example named as MLLoadingRequest.
  • This IOC is created by the ML entity loading MnS consumer on the producer, and it contains the following attributes: identifier of the ML entity to be loaded; identifier (e.g., DN) of target inference functions where the ML entity is to be loaded.
  • the IOC representing the ML entity loading policy, for example named as MLLoadingPolicy.
  • This IOC is created by the ML entity loading MnS consumer on the producer, so that the producer can load the ML entity according to the policy without an explicit loading request from the consumer, and it contains the following attributes: identifier or inference type of the ML entity to be loaded; trigger of ML entity loading, including the threshold of the testing performance of the ML entity and/or the threshold of the inference performance of the existing ML entity in the target inference function(s); identifier (e.g., DN) of target inference functions where the ML entity is loaded to.
  • the IOC representing the ML entity loading process, for example named as MLLoadingProcess.
  • This IOC is created by the ML entity loading MnS producer and reported to the consumer, and it contains the following attributes: identifier of the ML entity being loaded; associated ML entity loading request (e.g., its DN); loading progress; control of the loading process, such as cancel, suspend, and resume.
  • the IOC representing the ML entity loaded in the inference function, for example by extension of the existing IOC (AIMLEntity) representing the ML entity, or by a new IOC.
  • This IOC is created by the ML loading MnS producer and reported to the consumer, and it contains the following attributes:
  • associated trained ML entity (e.g., DN of the MOI representing the trained ML entity);
  • status (such as activated, de-activated, etc.) of the loaded ML entity.
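The IOCs listed above can be sketched as simple data structures. This is an illustrative model only: the actual IOCs are defined in 3GPP NRM templates, and the Python class and field names below are assumptions that loosely mirror the attribute lists above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class LoadingControl(Enum):
    """Control actions the consumer may apply to the loading process."""
    CANCEL = "cancel"
    SUSPEND = "suspend"
    RESUME = "resume"

@dataclass
class MLLoadingRequest:
    """Created by the MnS consumer on the producer."""
    ml_entity_id: str
    target_inference_function_dns: List[str]   # DNs of target inference functions

@dataclass
class MLLoadingProcess:
    """Created by the producer and reported to the consumer."""
    ml_entity_id: str
    associated_request_dn: str                 # DN of the loading request MOI
    progress_percent: int = 0
    control: Optional[LoadingControl] = None

@dataclass
class LoadedMLEntity:
    """E.g., an extension of the AIMLEntity IOC representing the ML entity."""
    ml_entity_id: str
    trained_entity_dn: str                     # DN of the trained ML entity MOI
    status: str = "de-activated"               # activated / de-activated
```

In this sketch a loading run would create one `MLLoadingProcess` per accepted `MLLoadingRequest` and, on completion, one `LoadedMLEntity` per target inference function.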
  • In FIG. 2, there is shown an example of ML entity loading related NRMs.
  • the examples of IOCs and the relations between the IOCs are depicted in FIG. 2.
  • the name of the IOCs and attributes are not defined in this disclosure.
  • an AI/ML entity loading system may include a service producer supported by one or more processors, and it may be configured to undertake several actions. Firstly, the AI/ML entity loading system may receive a request from a service consumer to create a first managed object instance (MOI) for AI/ML entity loading. Subsequently, it may respond to the consumer to indicate whether the MOI creation request is accepted. Following this, the AI/ML entity loading system may notify the consumer about the creation of the first MOI and proceed to prepare for the AI/ML entity loading. It may also create a second MOI for the AI/ML entity loading process and notify the consumer accordingly. Once this is done, the AI/ML entity loading system may initiate the AI/ML entity loading and keep the consumer informed about the progress of the AI/ML loading process by modifying the second MOI.
  • the AI/ML entity loading system may encompass various methods related to the aforementioned actions.
  • the method may involve the first MOI representing the AI/ML entity loading request.
  • the first MOI may represent the AI/ML entity loading policy.
  • the first MOI may contain essential information, such as the identifier of the AI/ML entity to be loaded and the identifier (e.g., DN) of target inference functions where the AI/ML entity is loaded to.
  • the first MOI may also encompass additional details, such as the identifier or inference type of the AI/ML entity to be loaded, the threshold of the testing performance of the AI/ML entity for triggering the AI/ML entity loading, the identifier (e.g., DN) of target inference functions where the AI/ML entity is loaded to, and the threshold of the inference performance of the existing AI/ML entity in the target inference function(s) for triggering the AI/ML entity loading.
  • the second MOI may contain information related to the AI/ML entity being loaded, the associated AI/ML entity loading request, loading progress, and control of the loading process, including actions like canceling, suspending, and resuming.
  • a third MOI may be created to represent the loaded AI/ML entity.
  • This third MOI, as per one or more embodiments, may be created either by the same MnS producer as mentioned in one of the embodiments or by a different MnS producer.
  • the third MOI, in accordance with specific embodiments, may contain information such as the identifier of the loaded AI/ML entity, the associated trained AI/ML entity, the associated AI/ML entity loading process, and the status (such as activated, de-activated, etc.) of the loaded AI/ML entity.
  • the service producer may be further configured to receive a request from a service consumer to modify the second MOI to control the AI/ML loading process. It may respond to the consumer to indicate whether the MOI modification request is accepted and notify the consumer about the modification of the second MOI. Subsequently, the AI/ML entity loading system may control the AI/ML loading process accordingly based on the modification made to the MOI.
  • This modification, in certain embodiments, may involve changing attributes for canceling, suspending, resuming, or terminating the AI/ML loading process.
  • the creation of any MOI and the modification of the second MOI may be notified to the consumer through a notification process, ensuring transparency and effective communication throughout the AI/ML entity loading process.
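The producer-side interaction described above (creating MOIs on request, notifying the consumer, and controlling the loading process through MOI modification) can be sketched as follows. The class, method, and notification names are illustrative assumptions, not standardized provisioning operations.

```python
class LoadingMnSProducer:
    """Illustrative sketch of the MnS producer side; a real producer would
    expose standardized provisioning operations and notifications."""

    def __init__(self):
        self.mois = {}            # MOI id -> attribute dict
        self.notifications = []   # stand-in for notifications to the consumer

    def create_moi(self, moi_id: str, attributes: dict) -> bool:
        """Accept an MOI creation request and notify the consumer."""
        self.mois[moi_id] = dict(attributes)
        self.notifications.append(("created", moi_id))
        return True               # creation request accepted

    def modify_moi(self, moi_id: str, attributes: dict) -> bool:
        """Accept an MOI modification (e.g., progress update or a
        cancel/suspend/resume control attribute) and notify the consumer."""
        if moi_id not in self.mois:
            return False          # modification request rejected
        self.mois[moi_id].update(attributes)
        self.notifications.append(("modified", moi_id))
        return True
```

In this sketch, the consumer creates the request MOI via `create_moi`, the producer tracks the loading process as another MOI, and the consumer controls the process (cancel, suspend, resume) by calling `modify_moi` on it.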
  • FIG. 3 depicts an illustrative schematic diagram for AI/ML entity loading, in accordance with one or more example embodiments of the present disclosure.
  • an AI/ML entity loading system may facilitate performance indicator selection for ML model training.
  • the ML training function may support training for one or more kinds of ML models, and may support evaluating each kind of ML model by one or more performance indicators.
  • the MnS consumer may prefer to use some performance indicator(s) over the others to evaluate one kind of ML model.
  • the MnS producer for ML training needs to provide the name(s) of supported performance indicator(s) for the MnS consumer to query and select for ML model performance evaluation.
  • the MnS consumer may also need to provide the performance requirements of the ML model using the selected performance indicators.
  • the MnS producer uses the selected performance indicators for ML model training, and reports the ML training result with the corresponding performance score when the training is finished.
  • an AI/ML entity loading system may facilitate requirements for the management service (MnS).
  • REQ-MODEL PERF-TRAIN-1: the MnS producer for ML model training should have a capability to allow the authorized consumer to get the capabilities about what kinds of ML models the training function is able to train.
  • REQ-MODEL PERF-TRAIN-2: the MnS producer for ML model training should have a capability to allow the authorized consumer to query what performance indicators are supported by the ML training function for each kind of ML model.
  • REQ-MODEL PERF-TRAIN-3: the MnS producer for ML model training should have a capability to allow the authorized consumer to select the performance indicators from those supported by the ML training function for reporting the training performance for each kind of ML model.
  • REQ-MODEL PERF-TRAIN-4: the MnS producer for ML model training should have a capability to allow the authorized consumer to provide the performance requirements for the ML model training using the performance indicators selected from those supported by the ML training function.
  • an AI/ML entity loading system may facilitate solutions for performance indicator selection for ML model training. This solution uses the instances of the following IOCs for interaction between the MnS producer and consumer to support the performance indicator selection for ML model training:
  • the IOC representing the ML training capability, for example named as MLTrainingCapability, contained by MLTrainingFunction.
  • This IOC is created by the MnS producer and contains the following attributes: inference type of the ML model that the ML training function is able to train; supported performance indicator(s) (also called performance metrics).
  • the IOC MLTrainingRequest with the existing performanceRequirements attribute is semantically extended to indicate the MnS consumer selected performance indicator/metric.
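The selection mechanism above can be sketched as follows. The `ModelPerformance` shape loosely mirrors the data type named in the text, while the supported-indicator table and the function name are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelPerformance:
    """Loose mirror of the ModelPerformance data type referenced above."""
    performance_metric: str                    # consumer-selected indicator name
    performance_score: Optional[float] = None  # target or achieved score

# Illustrative capability table: which indicators the training function
# supports per kind (inference type) of ML model.
SUPPORTED_INDICATORS = {
    "regression": ["MSE", "MAE", "R2"],
    "classification": ["accuracy", "precision", "recall", "F1"],
}

def select_indicators(inference_type: str,
                      wanted: List[str]) -> List[ModelPerformance]:
    """Keep only the indicators the training function actually supports,
    as a consumer would after querying the MLTrainingCapability MOI."""
    supported = SUPPORTED_INDICATORS.get(inference_type, [])
    return [ModelPerformance(m) for m in wanted if m in supported]
```

The resulting list would populate the `performanceRequirements` attribute of the training request, with `performance_metric` carrying the selection.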
  • In FIG. 3, there is shown an example of performance indicator selection related NRMs. That is, the examples of IOCs and the relations between the IOCs are depicted in FIG. 3.
  • an AI/ML entity loading system may include a service producer supported by one or more processors, configured to create a managed object instance (MOI) representing the ML training capability and send a notification to a consumer about the creation of the MOI.
  • the AI/ML entity loading system may further involve the service producer receiving a request from a service consumer to obtain the attributes of the said MOI and responding to the consumer with the attributes of the said MOI. Additionally, in one or more embodiments, the AI/ML entity loading system may have the service producer being further configured to modify the attributes of the said MOI and send a notification to a consumer about the modification of the said MOI.
  • the AI/ML entity loading system may encompass the service producer being further configured to delete the said MOI and send a notification to a consumer about the deletion of the said MOI.
  • attributes of the said MOI may include the inference type of the ML model that the ML training function is able to train, or the supported performance indicators (also called performance metrics).
  • the said MOI may be contained by the MOI representing the ML training function.
  • the AI/ML entity loading system may entail the service producer responding to the consumer with the results of the request.
  • the request to select the performance indicator(s) may be received in the MOI representing the ML training request, and it may be indicated by the performanceMetric element of the ModelPerformance data type for the performanceRequirements attribute.
  • the request to select the performance indicator(s) may be included in the creation or modification of the MOI representing the ML training request.
  • the AI/ML entity loading system may have the service producer further configured to train the ML model per the request using the selected performance indicator and send the ML training report to the consumer with the corresponding performance score of the selected performance indicator(s).
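A toy illustration of the reporting step above: after training, the producer evaluates each consumer-selected indicator and reports one performance score per indicator. The metric computations and names here are illustrative only, not taken from the specification.

```python
from typing import Dict, List

def score(metric: str, y_true: List[int], y_pred: List[int]) -> float:
    """Toy evaluation of two common indicators (illustrative only)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    if metric == "accuracy":
        return correct / len(y_true)
    if metric == "precision":
        return tp / (tp + fp) if (tp + fp) else 0.0
    raise ValueError(f"unsupported indicator: {metric}")

def training_report(selected: List[str],
                    y_true: List[int],
                    y_pred: List[int]) -> Dict[str, float]:
    """ML training report: one performance score per selected indicator."""
    return {m: score(m, y_true, y_pred) for m in selected}
```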
  • the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, of FIGs. 5-7, or some other figure herein may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof. One such process is depicted in FIG. 4.
  • the process may include, at 402, receiving a request from a service consumer to create a first managed object instance (MOI) for artificial intelligence (AI)/machine learning (ML) entity loading.
  • the process further includes, at 404, sending a response to the service consumer to indicate whether the MOI creation request is accepted.
  • the process further includes, at 406, notifying the service consumer about the creation of the first MOI.
  • the process further includes, at 408, preparing for the AI/ML entity loading.
  • the process further includes, at 410, creating a second MOI for the AI/ML entity loading process
  • the process further includes, at 412, notifying the service consumer about the creation of the second MOI;
  • the process further includes, at 414, initiating the AI/ML entity loading;
  • the process further includes, at 416, modifying the second MOI to keep the service consumer informed about the progress of the AI/ML loading process.
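The steps 402-416 above can be sketched end to end as follows; the MOI identifiers and the event-list stand-in for notifications are assumptions made for the example.

```python
from typing import List, Tuple

def run_loading_flow(request_attrs: dict) -> List[Tuple]:
    """Sketch of the FIG. 4 flow (steps 402-416); MOI names and the
    notification mechanism are illustrative assumptions."""
    events = []
    mois = {}
    # 402/404: receive the creation request for the first MOI and accept it
    mois["loading-request"] = dict(request_attrs)
    events.append(("response", "accepted"))
    # 406: notify the consumer about the creation of the first MOI
    events.append(("notify-created", "loading-request"))
    # 408: prepare for the AI/ML entity loading (placeholder step)
    # 410/412: create the second MOI for the loading process and notify
    mois["loading-process"] = {"progress": 0}
    events.append(("notify-created", "loading-process"))
    # 414/416: initiate loading; report progress by modifying the second MOI
    for progress in (50, 100):
        mois["loading-process"]["progress"] = progress
        events.append(("notify-modified", "loading-process", progress))
    return events
```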
  • the device may encompass various aspects related to AI/ML entity loading.
  • the first MOI within the device may pertain to the AI/ML entity loading request. Additionally, the first MOI may also relate to the AI/ML entity loading policy. Within the first MOI, one can find an identifier for the AI/ML entity to be loaded and another identifier specifying target inference functions for the AI/ML entity's destination.
  • the second MOI may contain information associated with the loaded AI/ML entity. This information encompasses details about the AI/ML entity being loaded, the corresponding AI/ML entity loading request, the loading progress, or even control over the AI/ML entity loading process.
  • a third MOI may be created within the device to represent the loaded AI/ML entity.
  • within this third MOI, one can expect to find an identifier for the loaded AI/ML entity, an associated trained AI/ML entity, details regarding the AI/ML entity loading process, and the status of the loaded AI/ML entity.
  • attributes related to canceling, suspending, resuming, or terminating the AI/ML entity loading process may be adjusted.
  • the device can extend its capabilities to create a third managed object instance (MOI) that represents the ML training capability. It can send notifications to consumers regarding the creation of this third MOI. Inside the third MOI, one may find details about the inference type of the ML model that the ML training function trains, as well as information about supported performance metrics.
  • At least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below.
  • the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below.
  • circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
  • FIGs. 5-7 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
  • FIG. 5 illustrates an example network architecture 500 according to various embodiments.
  • the network 500 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems.
  • the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
  • the network 500 includes a UE 502, which is any mobile or non-mobile computing device designed to communicate with a RAN 504 via an over-the-air connection.
  • the UE 502 is communicatively coupled with the RAN 504 by a Uu interface, which may be applicable to both LTE and NR systems.
  • Examples of the UE 502 include, but are not limited to, a smartphone, tablet computer, wearable computer, desktop computer, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, and the like.
  • the network 500 may include a plurality of UEs 502 coupled directly with one another via a D2D, ProSe, PC5, and/or sidelink (SL) interface.
  • UEs 502 may be M2M/D2D/MTC/IoT devices and/or vehicular systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
  • the UE 502 may perform blind decoding attempts of SL channels/links according to the various embodiments herein.
  • the UE 502 may additionally communicate with an AP 506 via an over-the-air (OTA) connection.
  • the AP 506 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 504.
  • the connection between the UE 502 and the AP 506 may be consistent with any IEEE 802.11 protocol.
  • the UE 502, RAN 504, and AP 506 may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP).
  • Cellular-WLAN aggregation may involve the UE 502 being configured by the RAN 504 to utilize both cellular radio resources and WLAN resources.
  • the RAN 504 includes one or more access network nodes (ANs) 508.
  • the ANs 508 terminate air-interface(s) for the UE 502 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols. In this manner, the AN 508 enables data/voice connectivity between CN 520 and the UE 502.
  • the ANs 508 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof.
  • an AN 508 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, etc.
  • One example implementation is a “CU/DU split” architecture where the ANs 508 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 v16.1.0 (2020-03)).
  • the one or more RUs may be individual RSUs.
  • the CU/DU split may include an ng-eNB-CU and one or more ng- eNB-DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively.
  • the ANs 508 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
  • the plurality of ANs may be coupled with one another via an X2 interface (if the RAN 504 is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) 510) or an Xn interface (if the RAN 504 is a NG-RAN 514).
  • the X2/Xn interfaces which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
  • the ANs of the RAN 504 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 502 with an air interface for network access.
  • the UE 502 may be simultaneously connected with a plurality of cells provided by the same or different ANs 508 of the RAN 504.
  • the UE 502 and RAN 504 may use carrier aggregation to allow the UE 502 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell.
  • a first AN 508 may be a master node that provides an MCG and a second AN 508 may be a secondary node that provides an SCG.
  • the first/second ANs 508 may be any combination of eNB, gNB, ng-eNB, etc.
  • the RAN 504 may provide the air interface over a licensed spectrum or an unlicensed spectrum.
  • the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells.
  • Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
  • the UE 502 or AN 508 may be or act as a roadside unit (RSU), which may refer to any transportation infrastructure entity used for V2X communications.
  • RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE.
  • An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like.
  • an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs.
  • the RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic.
  • the RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services.
  • the components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
  • the RAN 504 may be an E-UTRAN 510 with one or more eNBs 512.
  • the E-UTRAN 510 provides an LTE air interface (Uu) with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc.
  • the LTE air interface may rely on CSI- RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE.
  • the LTE air interface may operating on sub-6 GHz bands.
  • the RAN 504 may be a next generation (NG)-RAN 514 with one or more gNBs 516 and/or one or more ng-eNBs 518.
  • the gNB 516 connects with 5G-enabled UEs 502 using a 5G NR interface.
  • the gNB 516 connects with a 5GC 540 through an NG interface, which includes an N2 interface or an N3 interface.
  • the ng-eNB 518 also connects with the 5GC 540 through an NG interface, but may connect with a UE 502 via the Uu interface.
  • the gNB 516 and the ng-eNB 518 may connect with each other over an Xn interface.
  • the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 514 and a UPF 548 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 514 and an AMF 544 (e.g., N2 interface).
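The NG interface split described above can be illustrated with a small sketch (not 3GPP-defined code): a dispatcher routes a message to the control-plane leg (NG-C/N2, toward the AMF 544) or the user-plane leg (NG-U/N3, toward the UPF 548). The message-type names are hypothetical placeholders.

```python
# Hypothetical message-type sets; real NGAP/GTP-U message taxonomies differ.
NG_C_MESSAGES = {"INITIAL_CONTEXT_SETUP", "HANDOVER_REQUEST", "PAGING"}  # signaling
NG_U_MESSAGES = {"GTP_U_PDU"}                                            # traffic data

def ng_interface_leg(message_type: str) -> str:
    """Return which NG leg carries a message: 'N2' (to the AMF) or 'N3' (to the UPF)."""
    if message_type in NG_C_MESSAGES:
        return "N2"  # NG-C: signaling between the NG-RAN node and the AMF
    if message_type in NG_U_MESSAGES:
        return "N3"  # NG-U: traffic data between the NG-RAN node and the UPF
    raise ValueError(f"unknown message type: {message_type}")
```

The point of the sketch is only that the same NG interface fans out into two reference points with different endpoints.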
  • NG-U NG user plane
  • NG-C NG control plane
  • the NG-RAN 514 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data.
  • the 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface.
  • the 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking.
  • the 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz.
  • the 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
  • the 5G-NR air interface may utilize BWPs for various purposes.
  • BWP can be used for dynamic adaptation of the SCS.
  • the UE 502 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 502, the SCS of the transmission is changed as well.
  • Another use case example of BWP is related to power saving.
  • multiple BWPs can be configured for the UE 502 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios.
  • a BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 502 and in some cases at the gNB 516.
  • a BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
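The BWP trade-off above can be sketched as follows. This is an illustrative model, not 3GPP-defined information elements: a UE is configured with several BWPs (each with its own SCS and PRB count), and the narrowest BWP that covers the offered load is indicated, saving power under light traffic. The field names and PRB figures are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BwpConfig:
    bwp_id: int
    scs_khz: int   # subcarrier spacing of this BWP
    num_prbs: int  # frequency resources available in this BWP

def select_bwp(configured: list[BwpConfig], traffic_load_prbs: int) -> BwpConfig:
    """Pick the narrowest configured BWP that still covers the offered load."""
    for bwp in sorted(configured, key=lambda b: b.num_prbs):
        if bwp.num_prbs >= traffic_load_prbs:
            return bwp
    return max(configured, key=lambda b: b.num_prbs)  # saturate at the widest BWP

# Hypothetical configuration: one narrow power-saving BWP, two wider ones.
bwps = [BwpConfig(0, 15, 24), BwpConfig(1, 30, 106), BwpConfig(2, 30, 273)]
```

Note that because each `BwpConfig` carries its own SCS, switching BWPs in this model also changes the SCS, matching the dynamic-adaptation use case described above.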
  • the RAN 504 is communicatively coupled to CN 520 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 502).
  • the components of the CN 520 may be implemented in one physical node or separate physical nodes.
  • NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 520 onto physical compute/storage resources in servers, switches, etc.
  • a logical instantiation of the CN 520 may be referred to as a network slice, and a logical instantiation of a portion of the CN 520 may be referred to as a network sub-slice.
  • the CN 520 may be an LTE CN 522 (also referred to as an Evolved Packet Core (EPC) 522).
  • the EPC 522 may include MME 524, SGW 526, SGSN 528, HSS 530, PGW 532, and PCRF 534 coupled with one another over interfaces (or “reference points”) as shown.
  • the NFs in the EPC 522 are briefly introduced as follows.
  • the MME 524 implements mobility management functions to track a current location of the UE 502 to facilitate paging, bearer activation/ deactivation, handovers, gateway selection, authentication, etc.
  • the SGW 526 terminates an S1 interface toward the RAN 510 and routes data packets between the RAN 510 and the EPC 522.
  • the SGW 526 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
  • the SGSN 528 tracks a location of the UE 502 and performs security functions and access control.
  • the SGSN 528 also performs inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 524; MME 524 selection for handovers; etc.
  • the S3 reference point between the MME 524 and the SGSN 528 enables user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
  • the HSS 530 includes a database for network users, including subscription-related information to support the network entities’ handling of communication sessions.
  • the HSS 530 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.
  • An S6a reference point between the HSS 530 and the MME 524 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the EPC 522.
  • the PGW 532 may terminate an SGi interface toward a data network (DN) 536 that may include an application (app)/content server 538.
  • the PGW 532 routes data packets between the EPC 522 and the data network 536.
  • the PGW 532 is communicatively coupled with the SGW 526 by an S5 reference point to facilitate user plane tunneling and tunnel management.
  • the PGW 532 may further include a node for policy enforcement and charging data collection (e.g., PCEF).
  • the SGi reference point may communicatively couple the PGW 532 with the same or different data network 536.
  • the PGW 532 may be communicatively coupled with a PCRF 534 via a Gx reference point.
  • the PCRF 534 is the policy and charging control element of the EPC 522.
  • the PCRF 534 is communicatively coupled to the app/content server 538 to determine appropriate QoS and charging parameters for service flows.
  • the PCRF 534 also provisions associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
  • the CN 520 may be a 5GC 540 including an AUSF 542, AMF 544, SMF 546, UPF 548, NSSF 550, NEF 552, NRF 554, PCF 556, UDM 558, and AF 560 coupled with one another over various interfaces as shown.
  • the NFs in the 5GC 540 are briefly introduced as follows.
  • the AUSF 542 stores data for authentication of UE 502 and handles authentication-related functionality.
  • the AUSF 542 may facilitate a common authentication framework for various access types.
  • the AMF 544 allows other functions of the 5GC 540 to communicate with the UE 502 and the RAN 504 and to subscribe to notifications about mobility events with respect to the UE 502.
  • the AMF 544 is also responsible for registration management (e.g., for registering UE 502), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization.
  • the AMF 544 provides transport for SM messages between the UE 502 and the SMF 546, and acts as a transparent proxy for routing SM messages.
  • AMF 544 also provides transport for SMS messages between UE 502 and an SMSF.
  • AMF 544 interacts with the AUSF 542 and the UE 502 to perform various security anchor and context management functions.
  • AMF 544 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 504 and the AMF 544.
  • the AMF 544 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
  • AMF 544 also supports NAS signaling with the UE 502 over an N3IWF interface.
  • the N3IWF provides access to untrusted entities.
  • N3IWF may be a termination point for the N2 interface between the (R)AN 504 and the AMF 544 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 514 and the UPF 548 for the user plane.
  • the N3IWF handles N2 signalling from the SMF 546 and the AMF 544 for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPsec and N3 tunnelling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2.
  • N3IWF may also relay UL and DL control-plane NAS signalling between the UE 502 and AMF 544 via an N1 reference point between the UE 502 and the AMF 544, and relay uplink and downlink user-plane packets between the UE 502 and UPF 548.
  • the N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 502.
  • the AMF 544 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 544 and an N17 reference point between the AMF 544 and a 5G-EIR (not shown by FIG. 5).
  • the SMF 546 is responsible for SM (e.g., session establishment, tunnel management between UPF 548 and AN 508); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 548 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 544 over N2 to AN 508; and determining SSC mode of a session.
  • SM refers to management of a PDU session
  • a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 502 and the DN 536.
  • the UPF 548 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 536, and a branching point to support multihomed PDU session.
  • the UPF 548 also performs packet routing and forwarding, performs packet inspection, enforces the user plane part of policy rules, lawfully intercepts packets (UP collection), performs traffic usage reporting, performs QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs uplink traffic verification (e.g., SDF-to-QoS flow mapping), performs transport level packet marking in the uplink and downlink, and performs downlink packet buffering and downlink data notification triggering.
  • UPF 548 may include an uplink classifier to support routing traffic flows to a data network.
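The uplink classifier above can be sketched as a longest-match-style rule table that steers uplink flows to a data network by destination prefix. This is an illustrative model only; the rule table, prefixes, and DN names are assumptions, not a standardized UPF configuration.

```python
import ipaddress

# Hypothetical classifier rules: local/edge traffic breaks out early,
# everything else takes the default route. Order matters (first match wins).
UL_CLASSIFIER_RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), "edge-dn"),     # local/edge data network
    (ipaddress.ip_network("0.0.0.0/0"), "internet-dn"),  # default data network
]

def classify_uplink(dst_ip: str) -> str:
    """Return the data network an uplink packet's destination maps to."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, dn in UL_CLASSIFIER_RULES:
        if addr in prefix:
            return dn
    raise LookupError(dst_ip)
```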
  • the NSSF 550 selects a set of network slice instances serving the UE 502.
  • the NSSF 550 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed.
  • the NSSF 550 also determines an AMF set to be used to serve the UE 502, or a list of candidate AMFs 544 based on a suitable configuration and possibly by querying the NRF 554.
  • the selection of a set of network slice instances for the UE 502 may be triggered by the AMF 544 with which the UE 502 is registered by interacting with the NSSF 550; this may lead to a change of AMF 544.
  • the NSSF 550 interacts with the AMF 544 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
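The NSSF behavior described above can be sketched as two set operations: intersect the requested NSSAI with the subscribed S-NSSAIs to obtain the allowed NSSAI, then select candidate AMFs whose supported slice set covers it. The S-NSSAI labels and the AMF capability table below are illustrative assumptions, not standardized values.

```python
def allowed_nssai(requested: set[str], subscribed: set[str]) -> set[str]:
    """Allowed NSSAI: the requested slices that the subscription permits."""
    return requested & subscribed

def candidate_amfs(allowed: set[str], amf_support: dict[str, set[str]]) -> list[str]:
    """AMFs (by name) whose supported slice set covers the allowed NSSAI."""
    return sorted(name for name, slices in amf_support.items() if allowed <= slices)
```

In practice the NSSF would also consult the NRF 554 for candidate AMFs, as noted above; the sketch stops at the set logic.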
  • the NEF 552 securely exposes services and capabilities provided by 3GPP NFs for third parties, internal exposure/re-exposure, AFs 560, edge computing or fog computing systems (e.g., edge compute nodes), etc.
  • the NEF 552 may authenticate, authorize, or throttle the AFs.
  • NEF 552 may also translate information exchanged with the AF 560 and information exchanged with internal network functions. For example, the NEF 552 may translate between an AF-Service-Identifier and an internal 5GC information.
  • NEF 552 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 552 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 552 to other NFs and AFs, or used for other purposes such as analytics.
  • the NRF 554 supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances. NRF 554 also maintains information of available NF instances and their supported services. The NRF 554 also supports service discovery functions, wherein the NRF 554 receives NF Discovery Request from NF instance or an SCP (not shown), and provides information of the discovered NF instances to the NF instance or SCP.
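The NRF registration/discovery cycle above can be sketched as a registry keyed by NF instance: instances register their supported services, and a discovery request returns the instances offering the requested service. The instance and service names are hypothetical.

```python
class Nrf:
    """Toy NF repository: register NF instances, answer NF Discovery Requests."""

    def __init__(self) -> None:
        self._registry: dict[str, set[str]] = {}  # NF instance -> supported services

    def register(self, nf_instance: str, services: set[str]) -> None:
        """Record an available NF instance and the services it supports."""
        self._registry[nf_instance] = set(services)

    def discover(self, service: str) -> list[str]:
        """Handle a discovery request: instances providing the given service."""
        return sorted(nf for nf, svcs in self._registry.items() if service in svcs)
```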
  • the PCF 556 provides policy rules to control plane functions to enforce them, and may also support unified policy framework to govern network behavior.
  • the PCF 556 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 558.
  • the PCF 556 exhibits an Npcf service-based interface.
  • the UDM 558 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of UE 502. For example, subscription data may be communicated via an N8 reference point between the UDM 558 and the AMF 544.
  • the UDM 558 may include two parts, an application front end and a UDR.
  • the UDR may store subscription data and policy data for the UDM 558 and the PCF 556, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 502) for the NEF 552.
  • the Nudr service-based interface may be exhibited by the UDR to allow the UDM 558, PCF 556, and NEF 552 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR.
  • the UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions.
  • the UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management.
  • the UDM 558 may exhibit the Nudm service-based interface.
  • AF 560 provides application influence on traffic routing, provides access to NEF 552, and interacts with the policy framework for policy control.
  • the AF 560 may influence UPF 548 (re)selection and traffic routing. Based on operator deployment, when AF 560 is considered to be a trusted entity, the network operator may permit AF 560 to interact directly with relevant NFs. Additionally, the AF 560 may be used for edge computing implementations.
  • the 5GC 540 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 502 is attached to the network. This may reduce latency and load on the network.
  • the 5GC 540 may select a UPF 548 close to the UE 502 and execute traffic steering from the UPF 548 to DN 536 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 560, which allows the AF 560 to influence UPF (re)selection and traffic routing.
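The location-based UPF selection above can be sketched as a nearest-neighbor choice: given the UE's position and the positions of candidate UPFs, pick the closest one so traffic can break out to a nearby DN over N6. The coordinates and UPF names are illustrative assumptions; a real selection would also weigh subscription data and AF input.

```python
import math

def select_upf(ue_location: tuple[float, float],
               upfs: dict[str, tuple[float, float]]) -> str:
    """Return the name of the candidate UPF closest to the UE's location."""
    return min(upfs, key=lambda name: math.dist(ue_location, upfs[name]))
```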
  • the data network (DN) 536 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server 538.
  • the DN 536 may be an operator external public, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services.
  • the app server 538 can be coupled to an IMS via an S-CSCF or the I-CSCF.
  • the DN 536 may represent one or more local area DNs (LADNs), which are DNs 536 (or DN names (DNNs)) that is/are accessible by a UE 502 in one or more specific areas. Outside of these specific areas, the UE 502 is not able to access the LADN/DN 536.
  • LADNs local area DNs
  • DNNs DN names
  • the DN 536 may be an Edge DN 536, which is a (local) Data Network that supports the architecture for enabling edge applications.
  • the app server 538 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s).
  • the app/content server 538 provides an edge hosting environment that provides support required for Edge Application Server's execution.
  • the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic.
  • the edge compute nodes may be included in, or co-located with one or more RANs 510, 514.
  • the edge compute nodes can provide a connection between the RAN 514 and UPF 548 in the 5GC 540.
  • the edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 514 and UPF 548.
  • the interfaces of the 5GC 540 include reference points and service-based interfaces.
  • the reference points include: N1 (between the UE 502 and the AMF 544), N2 (between RAN 514 and AMF 544), N3 (between RAN 514 and UPF 548), N4 (between the SMF 546 and UPF 548), N5 (between PCF 556 and AF 560), N6 (between UPF 548 and DN 536), N7 (between SMF 546 and PCF 556), N8 (between UDM 558 and AMF 544), N9 (between two UPFs 548), N10 (between the UDM 558 and the SMF 546).
  • N11 (between the AMF 544 and the SMF 546)
  • N12 (between AUSF 542 and AMF 544)
  • N13 (between AUSF 542 and UDM 558)
  • N14 (between two AMFs 544; not shown)
  • N15 (between PCF 556 and AMF 544 in case of a non-roaming scenario, or between the PCF 556 in a visited network and AMF 544 in case of a roaming scenario)
  • N16 (between two SMFs 546; not shown)
  • N22 (between AMF 544 and NSSF 550).
  • Other reference point representations not shown in FIG. 5 can also be used.
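The reference points enumerated above can be collected into a lookup table as a reading aid. The endpoint pairs follow the list in this description; the table is not exhaustive or normative (it omits, e.g., N17 and roaming variants).

```python
# Reference point -> (endpoint A, endpoint B), per the list above.
REFERENCE_POINTS = {
    "N1": ("UE", "AMF"),   "N2": ("RAN", "AMF"),  "N3": ("RAN", "UPF"),
    "N4": ("SMF", "UPF"),  "N5": ("PCF", "AF"),   "N6": ("UPF", "DN"),
    "N7": ("SMF", "PCF"),  "N8": ("UDM", "AMF"),  "N9": ("UPF", "UPF"),
    "N10": ("UDM", "SMF"), "N11": ("AMF", "SMF"), "N12": ("AUSF", "AMF"),
    "N13": ("AUSF", "UDM"), "N14": ("AMF", "AMF"), "N15": ("PCF", "AMF"),
    "N16": ("SMF", "SMF"), "N22": ("AMF", "NSSF"),
}

def endpoints(ref_point: str) -> tuple[str, str]:
    """Look up the two endpoints of a named reference point."""
    return REFERENCE_POINTS[ref_point]
```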
  • the service-based representation of FIG. 5 represents NFs within the control plane that enable other authorized NFs to access their services.
  • the service-based interfaces include: Namf (SBI exhibited by AMF 544), Nsmf (SBI exhibited by SMF 546), Nnef (SBI exhibited by NEF 552), Npcf (SBI exhibited by PCF 556), Nudm (SBI exhibited by the UDM 558), Naf (SBI exhibited by AF 560), Nnrf (SBI exhibited by NRF 554), Nnssf (SBI exhibited by NSSF 550), Nausf (SBI exhibited by AUSF 542).
  • Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) may also be used.
  • the NEF 552 can provide an interface to edge compute nodes 536x, which can be used to process wireless connections with the RAN 514.
  • the system 500 may include an SMSF, which is responsible for SMS subscription checking and verification, and relaying SM messages to/from the UE 502 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router.
  • the SMSF may also interact with AMF 544 and UDM 558 for a notification procedure that the UE 502 is available for SMS transfer (e.g., set a UE not reachable flag, and notify UDM 558 when UE 502 is available for SMS).
  • the 5GS may also include an SCP (or individual instances of the SCP) that supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s), communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), PCF(s) with access to subscription data stored in the UDR based on UE's SUPI, SUCI or GPSI (see e.g., 3GPP TS 23.501 section 6.3).
  • SCP or individual instances of the SCP
  • indirect communication see e.g., 3GPP TS 23.501 section 7.1.1
  • delegated discovery see e.g.
  • Load balancing, monitoring, overload control functionality provided by the SCP may be implementation specific.
  • the SCP may be deployed in a distributed manner. More than one SCP can be present in the communication path between various NF Services.
  • the SCP, although not an NF instance, can also be deployed in a distributed, redundant, and scalable manner.
  • FIG. 6 schematically illustrates a wireless network 600 in accordance with various embodiments.
  • the wireless network 600 may include a UE 602 in wireless communication with an AN 604.
  • the UE 602 and AN 604 may be similar to, and substantially interchangeable with, like-named components described with respect to FIG. 5.
  • the UE 602 may be communicatively coupled with the AN 604 via connection 606.
  • the connection 606 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
  • the UE 602 may include a host platform 608 coupled with a modem platform 610.
  • the host platform 608 may include application processing circuitry 612, which may be coupled with protocol processing circuitry 614 of the modem platform 610.
  • the application processing circuitry 612 may run various applications for the UE 602 that source/sink application data.
  • the application processing circuitry 612 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
  • the protocol processing circuitry 614 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 606.
  • the layer operations implemented by the protocol processing circuitry 614 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
  • the modem platform 610 may further include digital baseband circuitry 616 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 614 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ acknowledgement (ACK) functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
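One of the PHY operations listed above, scrambling/descrambling, can be sketched compactly: the data bits are XORed with a pseudo-random bit sequence, and applying the same sequence again recovers the original bits. The LCG-based sequence below is a deliberately simplified stand-in, not the Gold sequence actually used by LTE/NR scramblers.

```python
def prbs(seed: int, length: int) -> list[int]:
    """Deterministic pseudo-random bit sequence from a simple LCG (illustrative)."""
    bits, state = [], seed
    for _ in range(length):
        state = (1103515245 * state + 12345) % (1 << 31)
        bits.append((state >> 16) & 1)
    return bits

def scramble(bits: list[int], seed: int) -> list[int]:
    """XOR with the PRBS; scrambling and descrambling are the same operation."""
    return [b ^ p for b, p in zip(bits, prbs(seed, len(bits)))]
```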
  • the modem platform 610 may further include transmit circuitry 618, receive circuitry 620, RF circuitry 622, and RF front end (RFFE) 624, which may include or connect to one or more antenna panels 626.
  • the transmit circuitry 618 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.
  • the receive circuitry 620 may include an analog-to-digital converter, mixer, IF components, etc.
  • the RF circuitry 622 may include a low-noise amplifier, a power amplifier, power tracking components, etc.
  • RFFE 624 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc.
  • transmit/receive components may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc.
  • the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
  • the protocol processing circuitry 614 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
  • a UE 602 reception may be established by and via the antenna panels 626, RFFE 624, RF circuitry 622, receive circuitry 620, digital baseband circuitry 616, and protocol processing circuitry 614.
  • the antenna panels 626 may receive a transmission from the AN 604 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 626.
  • a UE 602 transmission may be established by and via the protocol processing circuitry 614, digital baseband circuitry 616, transmit circuitry 618, RF circuitry 622, RFFE 624, and antenna panels 626.
  • the transmit components of the UE 602 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 626.
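The spatial filter above can be sketched as a uniform linear array applying per-element phase weights so the emitted signals add coherently in a chosen direction. This is a toy model; the array geometry, half-wavelength spacing, and the gain metric are illustrative assumptions.

```python
import cmath
import math

def steering_weights(n_elements: int, angle_rad: float, spacing_wl: float = 0.5):
    """Conjugate phase weights steering an n-element linear array toward angle_rad."""
    phase = 2 * math.pi * spacing_wl * math.sin(angle_rad)
    return [cmath.exp(-1j * phase * k) for k in range(n_elements)]

def array_gain(weights, angle_rad: float, spacing_wl: float = 0.5) -> float:
    """Per-element magnitude of the summed element responses in a direction."""
    phase = 2 * math.pi * spacing_wl * math.sin(angle_rad)
    resp = sum(w * cmath.exp(1j * phase * k) for k, w in enumerate(weights))
    return abs(resp) / len(weights)
```

Steering an 8-element array toward 30 degrees gives unit gain in that direction (all element responses add coherently) and reduced gain elsewhere, which is the beam-forming effect the spatial filter produces.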
  • the AN 604 may include a host platform 628 coupled with a modem platform 630.
  • the host platform 628 may include application processing circuitry 632 coupled with protocol processing circuitry 634 of the modem platform 630.
  • the modem platform may further include digital baseband circuitry 636, transmit circuitry 638, receive circuitry 640, RF circuitry 642, RFFE circuitry 644, and antenna panels 646.
  • the components of the AN 604 may be similar to and substantially interchangeable with like-named components of the UE 602.
  • the components of the AN 604 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
  • FIG. 7 illustrates components of a computing device 700 according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • FIG. 7 shows a diagrammatic representation of hardware resources 701 including one or more processors (or processor cores) 710, one or more memory/storage devices 720, and one or more communication resources 730, each of which may be communicatively coupled via a bus 740 or other interface circuitry.
  • a hypervisor 702 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 701.
  • the processors 710 include, for example, processor 712 and processor 714.
  • the processors 710 include circuitry such as, but not limited to one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface circuit, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as secure digital/multi-media card (SD/MMC) or similar interfaces, mobile industry processor interface (MIPI) interfaces and Joint Test Access Group (JTAG) test access ports.
  • the processors 710 may be, for example, a central processing unit (CPU), reduced instruction set computing (RISC) processors, Acorn RISC Machine (ARM) processors, complex instruction set computing (CISC) processors, graphics processing units (GPUs), one or more Digital Signal Processors (DSPs) such as a baseband processor, Application-Specific Integrated Circuits (ASICs), a Field-Programmable Gate Array (FPGA), a radio-frequency integrated circuit (RFIC), one or more microprocessors or controllers, another processor (including those discussed herein), or any suitable combination thereof.
  • CPU central processing unit
  • RISC reduced instruction set computing
  • ARM Acorn RISC Machine
  • CISC complex instruction set computing
  • GPUs graphics processing units
  • DSPs Digital Signal Processors
  • ASICs Application-Specific Integrated Circuits
  • FPGA Field-Programmable Gate Array
  • RFIC radio-frequency integrated circuit
  • the processor circuitry 710 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices (e.g., FPGA, complex programmable logic devices (CPLDs), etc.), or the like.
  • the memory/storage devices 720 may include main memory, disk storage, or any suitable combination thereof.
  • the memory/storage devices 720 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®.
  • the memory/storage devices 720 may also comprise persistent storage devices, which may be temporal and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth.
  • the communication resources 730 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 704 or one or more databases 706 or other network elements via a network 708.
  • the communication resources 730 may include wired communication components (e.g., for coupling via USB, Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, WiFi® components, and other communication components.
  • Network connectivity may be provided to/from the computing device 700 via the communication resources 730 using a physical connection, which may be electrical (e.g., a “copper interconnect”) or optical.
  • the physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.).
  • the communication resources 730 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols.
  • Instructions 750 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 710 to perform any one or more of the methodologies discussed herein.
  • the instructions 750 may reside, completely or partially, within at least one of the processors 710 (e.g., within the processor’s cache memory), the memory/storage devices 720, or any suitable combination thereof.
  • any portion of the instructions 750 may be transferred to the hardware resources 701 from any combination of the peripheral devices 704 or the databases 706.
  • the memory of processors 710, the memory/storage devices 720, the peripheral devices 704, and the databases 706 are examples of computer-readable and machine-readable media.
  • FIG. 8 illustrates a network 800 in accordance with various embodiments.
  • the network 800 may operate in a manner consistent with 3GPP technical specifications or technical reports for 6G systems.
  • the network 800 may operate concurrently with network 500.
  • the network 800 may share one or more frequency or bandwidth resources with network 500.
  • UE 802 may be configured to operate in both network 800 and network 500.
  • Such configuration may be based on a UE including circuitry configured for communication with frequency and bandwidth resources of both networks 500 and 800.
  • several elements of network 800 may share one or more characteristics with elements of network 500. For the sake of brevity and clarity, such elements may not be repeated in the description of network 800.
  • the network 800 may include a UE 802, which may include any mobile or non-mobile computing device designed to communicate with a RAN 808 via an over-the-air connection.
  • the UE 802 may be similar to, for example, UE 502.
  • the UE 802 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
  • the network 800 may include a plurality of UEs coupled directly with one another via a sidelink interface.
  • the UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
  • the UE 802 may be communicatively coupled with an AP such as AP 506 as described with respect to FIG. 5.
  • the RAN 808 may include one or more ANs such as AN 508 as described with respect to FIG. 5.
  • the RAN 808 and/or the AN of the RAN 808 may be referred to as a base station (BS), a RAN node, or using some other term or name.
  • the UE 802 and the RAN 808 may be configured to communicate via an air interface that may be referred to as a sixth generation (6G) air interface.
  • the 6G air interface may include one or more features such as communication in a terahertz (THz) or sub-THz bandwidth, or joint communication and sensing.
  • joint communication and sensing may refer to a system that allows for wireless communication as well as radar-based sensing via various types of multiplexing.
  • THz or sub-THz bandwidths may refer to communication in the 80 GHz and above frequency ranges. Such frequency ranges may additionally or alternatively be referred to as “millimeter wave” or “mmWave” frequency ranges.
  • the RAN 808 may allow for communication between the UE 802 and a 6G core network (CN) 810. Specifically, the RAN 808 may facilitate the transmission and reception of data between the UE 802 and the 6G CN 810.
  • the 6G CN 810 may include various functions such as NSSF 550, NEF 552, NRF 554, PCF 556, UDM 558, AF 560, SMF 546, and AUSF 542.
  • the 6G CN 810 may additionally include UPF 548 and DN 536 as shown in FIG. 8.
  • the RAN 808 may include various additional functions that are in addition to, or alternative to, functions of a legacy cellular network such as a 4G or 5G network.
  • Two such functions may include a Compute Control Function (Comp CF) 824 and a Compute Service Function (Comp SF) 836.
  • the Comp CF 824 and the Comp SF 836 may be parts or functions of the Computing Service Plane.
  • Comp CF 824 may be a control plane function that provides functionalities such as management of the Comp SF 836, computing task context generation and management (e.g., create, read, modify, delete), interaction with the underlying computing infrastructure for computing resource management, etc.
  • Comp SF 836 may be a user plane function that serves as the gateway to interface computing service users (such as UE 802) and computing nodes behind a Comp SF instance. Some functionalities of the Comp SF 836 may include: parse computing service data received from users to compute tasks executable by computing nodes; hold service mesh ingress gateway or service API gateway; service and charging policies enforcement; performance monitoring and telemetry collection, etc.
  • a Comp SF 836 instance may serve as the user plane gateway for a cluster of computing nodes.
  • a Comp CF 824 instance may control one or more Comp SF 836 instances.
  • Two other such functions may include a Communication Control Function (Comm CF) 828 and a Communication Service Function (Comm SF) 838, which may be parts of the Communication Service Plane.
  • the Comm CF 828 may be the control plane function for managing the Comm SF 838, communication sessions creation/configuration/releasing, and managing communication session context.
  • the Comm SF 838 may be a user plane function for data transport.
  • Comm CF 828 and Comm SF 838 may be considered as upgrades of SMF 546 and UPF 548, which were described with respect to a 5G system in FIG. 5.
  • the upgrades provided by the Comm CF 828 and the Comm SF 838 may enable service-aware transport. For legacy (e.g., 4G or 5G) data transport, SMF 546 and UPF 548 may still be used.
  • Data CF 822 may be a control plane function that provides functionalities such as Data SF 832 management, data service creation/configuration/releasing, data service context management, etc.
  • Data SF 832 may be a user plane function and serve as the gateway between data service users (such as UE 802 and the various functions of the 6G CN 810) and data service endpoints behind the gateway. Specific functionalities may include: parse data service user data and forward to corresponding data service endpoints, generate charging data, report data service status.
  • SOCF 820 may discover, orchestrate and chain up communication/computing/data services provided by functions in the network.
  • SOCF 820 may interact with one or more of Comp CF 824, Comm CF 828, and Data CF 822 to identify Comp SF 836, Comm SF 838, and Data SF 832 instances, configure service resources, and generate the service chain, which could contain multiple Comp SF 836, Comm SF 838, and Data SF 832 instances and their associated computing endpoints. Workload processing and data movement may then be conducted within the generated service chain.
  • the SOCF 820 may also be responsible for maintaining, updating, and releasing a created service chain.
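The SOCF behavior described above — identifying Comp SF, Comm SF, and Data SF instances and chaining them into a service chain — can be sketched as follows. This is an illustrative model only; the instance names, registry shape, and the trivial "first candidate" selection policy are assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceFunctionInstance:
    """One Comp SF / Comm SF / Data SF instance eligible for chaining."""
    kind: str          # "CompSF", "CommSF", or "DataSF"
    instance_id: str
    endpoint: str      # associated computing or data endpoint

@dataclass
class ServiceChain:
    chain_id: str
    instances: list = field(default_factory=list)

def orchestrate_chain(chain_id, requested_kinds, registry):
    """Sketch of SOCF orchestration: pick one registered instance per
    requested service kind and link them into an ordered chain."""
    chain = ServiceChain(chain_id)
    for kind in requested_kinds:
        candidates = [i for i in registry if i.kind == kind]
        if not candidates:
            raise LookupError(f"no {kind} instance registered")
        chain.instances.append(candidates[0])  # trivial selection policy
    return chain

registry = [
    ServiceFunctionInstance("CompSF", "comp-836-a", "node-1"),
    ServiceFunctionInstance("CommSF", "comm-838-a", "gw-1"),
    ServiceFunctionInstance("DataSF", "data-832-a", "store-1"),
]
chain = orchestrate_chain("chain-1", ["CommSF", "CompSF", "DataSF"], registry)
print([i.instance_id for i in chain.instances])
```

In this sketch, workload processing and data movement would then be conducted along the ordered `instances` list; maintaining, updating, and releasing the chain would amount to mutating or discarding the `ServiceChain` object.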
  • the network 800 may include a service registration function (SRF), which may act as a registry for services provided by functions in the network, while NRF 554 may act as the registry for network functions.
  • the network 800 may include an evolved service communication proxy (eSCP), an evolution of the service communication proxy (SCP); eSCP-C 812 and eSCP-U 834 may serve as the control plane service communication proxy and user plane service communication proxy, respectively.
  • SICF 826 may control and configure eSCP instances in terms of service traffic routing policies, access rules, load balancing configurations, performance monitoring, etc.
  • the AMF 844 may be similar to AMF 544, but with additional functionality. Specifically, the AMF 844 may include potential functional repartition, such as moving the message forwarding functionality from the AMF 844 to the RAN 808.
  • the network 800 may include a service orchestration exposure function (SOEF) 818.
  • the SOEF may be configured to expose service orchestration and chaining services to external users such as applications.
  • the UE 802 may include an additional function that is referred to as a computing client service function (comp CSF) 804.
  • the comp CSF 804 may have both the control plane functionalities and user plane functionalities, and may interact with corresponding network side functions such as SOCF 820, Comp CF 824, Comp SF 836, Data CF 822, and/or Data SF 832 for service discovery, request/response, compute task workload exchange, etc.
  • the Comp CSF 804 may also work with network side functions to decide on whether a computing task should be run on the UE 802, the RAN 808, and/or an element of the 6G CN 810.
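The placement decision attributed to the Comp CSF above — deciding whether a computing task should run on the UE, the RAN, or the core network — could look like the following. The inputs (compute budgets, latency bounds) and thresholds are purely illustrative assumptions; the disclosure does not specify a policy.

```python
def choose_execution_site(task_flops, ue_budget_flops, ran_budget_flops,
                          latency_ms, latency_budget_ms):
    """Illustrative placement policy: prefer running on the UE when it has
    capacity, fall back to the RAN when the RAN has capacity and the air
    interface latency fits the budget, else offload to the core network."""
    if task_flops <= ue_budget_flops:
        return "UE"
    if task_flops <= ran_budget_flops and latency_ms <= latency_budget_ms:
        return "RAN"
    return "CN"

# a small task fits on the UE; a medium one goes to the RAN; a huge one to the CN
print(choose_execution_site(1e6, 1e7, 1e9, 5, 10))   # UE
print(choose_execution_site(1e8, 1e7, 1e9, 5, 10))   # RAN
print(choose_execution_site(1e12, 1e7, 1e9, 5, 10))  # CN
```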
  • the UE 802 and/or the Comp CSF 804 may include a service mesh proxy 806.
  • the service mesh proxy 806 may act as a proxy for service-to-service communication in the user plane. Capabilities of the service mesh proxy 806 may include one or more of addressing, security, load balancing, etc.
  • FIG. 9 illustrates a simplified block diagram of artificial intelligence (AI)-assisted communication between a UE 905 and a RAN 910, in accordance with various embodiments. More specifically, as described in further detail below, AI/machine learning (ML) models may be used or leveraged to facilitate over-the-air communication between UE 905 and RAN 910.
  • One or both of the UE 905 and the RAN 910 may operate in a manner consistent with 3GPP technical specifications or technical reports for 6G systems.
  • the wireless cellular communication between the UE 905 and the RAN 910 may be part of, or operate concurrently with, networks 800, 500, and/or some other network described herein.
  • the UE 905 may be similar to, and share one or more features with, UE 802, UE 502, and/or some other UE described herein.
  • the UE 905 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
  • the RAN 910 may be similar to, and share one or more features with, RAN 514, RAN 808, and/or some other RAN described herein.
  • the AI-related elements of UE 905 may be similar to the AI-related elements of RAN 910.
  • description of the various elements will be provided from the point of view of the UE 905; however, it will be understood that such discussion or description will apply to equally named/numbered elements of RAN 910, unless explicitly stated otherwise.
  • the UE 905 may include various elements or functions that are related to AI/ML. Such elements may be implemented as hardware, software, firmware, and/or some combination thereof. In embodiments, one or more of the elements may be implemented as part of the same hardware (e.g., chip or multi-processor chip), software (e.g., a computing program), or firmware as another element.
  • the data repository 915 may be responsible for data collection and storage. Specifically, the data repository 915 may collect and store RAN configuration parameters, measurement data, key performance indicators (KPIs), model performance metrics, etc., for model training, update, and inference. More generally, collected data is stored into the repository. Stored data can be discovered and extracted by other elements from the data repository 915. For example, as may be seen, the inference data selection/filter element 950 may retrieve data from the data repository 915.
  • the UE 905 may be configured to discover and request data from the data repository 915 in the RAN 910, and vice versa. More generally, the data repository 915 of the UE 905 may be communicatively coupled with the data repository 915 of the RAN 910 such that the respective data repositories of the UE and the RAN may share collected data with one another.
  • the training data selection/filter functional block 920 may be configured to generate training, validation, and testing datasets for model training. Training data may be extracted from the data repository 915. Data may be selected/filtered based on the specific AI/ML model to be trained. Data may optionally be transformed/augmented/pre-processed (e.g., normalized) before being loaded into datasets. The training data selection/filter functional block 920 may label data in datasets for supervised learning. The produced datasets may then be fed into the model training functional block 925. As noted above, another such element may be the model training functional block 925. This functional block may be responsible for training and updating (re-training) AI/ML models.
  • the selected model may be trained using the fed-in datasets (including training, validation, testing) from the training data selection/filtering functional block.
  • the model training functional block 925 may produce trained and tested AI/ML models which are ready for deployment.
  • the produced trained and tested models can be stored in a model repository 935.
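The select/filter, pre-process, and split steps performed by functional block 920 before datasets reach the model training functional block 925 can be sketched as below. The repository record format, KPI name, normalization choice, and split ratios are illustrative assumptions, not details from the disclosure.

```python
def select_and_filter(repository, wanted_kpi):
    """Select records relevant to the AI/ML model being trained."""
    return [r for r in repository if r["kpi"] == wanted_kpi]

def normalize(records):
    """Optional pre-processing step: min-max normalize the values."""
    values = [r["value"] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(r["value"] - lo) / span for r in records]

def split(dataset, train=0.6, val=0.2):
    """Split one dataset into training, validation, and testing subsets."""
    n = len(dataset)
    a, b = int(n * train), int(n * (train + val))
    return dataset[:a], dataset[a:b], dataset[b:]

# toy stand-in for the data repository 915
repository = [{"kpi": "throughput", "value": v} for v in (10, 20, 30, 40, 50)]
data = normalize(select_and_filter(repository, "throughput"))
train_set, val_set, test_set = split(data)
print(len(train_set), len(val_set), len(test_set))  # 3 1 1
```

The three resulting datasets correspond to the training, validation, and testing inputs the text says are fed into the model training functional block.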
  • the model repository 935 may be responsible for AI/ML models’ (both trained and untrained) storage and exposure. Trained/updated model(s) may be stored into the model repository 935. Model and model parameters may be discovered and requested by other functional blocks (e.g., the training data selection/filter functional block 920 and/or the model training functional block 925).
  • the UE 905 may discover and request AI/ML models from the model repository 935 of the RAN 910.
  • the RAN 910 may be able to discover and/or request AI/ML models from the model repository 935 of the UE 905.
  • the RAN 910 may configure models and/or model parameters in the model repository 935 of the UE 905.
  • the model management functional block 940 may be responsible for management of the AI/ML model produced by the model training functional block 925. Such management functions may include deployment of a trained model, monitoring model performance, etc. In model deployment, the model management functional block 940 may allocate and schedule hardware and/or software resources for inference, based on received trained and tested models. As used herein, “inference” refers to the process of using trained AI/ML model(s) to generate data analytics, actions, policies, etc. based on input inference data. In performance monitoring, based on wireless performance KPIs and model performance metrics, the model management functional block 940 may decide to terminate the running model, start model re-training, select another model, etc. In embodiments, the model management functional block 940 of the RAN 910 may be able to configure model management policies in the UE 905 as shown.
  • the inference data selection/filter functional block 950 may be responsible for generating datasets for model inference at the inference functional block 945, as described below. Specifically, inference data may be extracted from the data repository 915. The inference data selection/filter functional block 950 may select and/or filter the data based on the deployed AI/ML model. Data may be transformed/augmented/pre-processed following the same transformation/augmentation/pre-processing as those in training data selection/filtering as described with respect to functional block 920. The produced inference dataset may be fed into the inference functional block 945.
  • the inference functional block 945 may be responsible for executing inference as described above. Specifically, the inference functional block 945 may consume the inference dataset provided by the inference data selection/filtering functional block 950, and generate one or more outcomes. Such outcomes may be or include data analytics, actions, policies, etc. The outcome(s) may be provided to the performance measurement functional block 930.
  • the performance measurement functional block 930 may be configured to measure model performance metrics (e.g., accuracy, model bias, run-time latency, etc.) of deployed and executing models based on the inference outcome(s) for monitoring purposes.
  • Model performance data may be stored in the data repository 915.
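The interaction between the inference functional block 945 and the performance measurement functional block 930 described above can be sketched as follows. The model, dataset, and accuracy metric here are toy assumptions used only to show the data flow from inference outcomes to a stored performance metric.

```python
def run_inference(model, inference_dataset):
    """Illustrative inference step: apply a deployed model to the dataset
    produced by the inference data selection/filter block."""
    return [model(x) for x in inference_dataset]

def measure_performance(outcomes, ground_truth):
    """Simple accuracy metric of the kind the performance measurement
    block might compute and store back into the data repository."""
    correct = sum(1 for o, t in zip(outcomes, ground_truth) if o == t)
    return correct / len(ground_truth)

model = lambda x: x > 0.5           # stand-in for a deployed AI/ML entity
dataset = [0.2, 0.7, 0.9, 0.4]      # stand-in inference dataset
truth = [False, True, True, True]
outcomes = run_inference(model, dataset)
print(measure_performance(outcomes, truth))  # 0.75
```

A metric like this, together with wireless KPIs, is the kind of input on which the model management functional block 940 could base decisions such as re-training or model replacement.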
  • At least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below.
  • the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below.
  • circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
  • Example 1 may include an apparatus comprising processing circuitry configured to: receive a request from a service consumer to create a first managed object instance (MOI) for artificial intelligence (AI)/machine learning (ML) entity loading; send a response to the service consumer to indicate whether the MOI creation request may be accepted; notify the service consumer about the creation of the first MOI; prepare for the AI/ML entity loading; create a second MOI for the AI/ML entity loading process; notify the service consumer about the creation of the second MOI; initiate the AI/ML entity loading; and modify the second MOI to keep the service consumer informed about the progress of the AI/ML loading process; and a memory to store the request.
  • Example 2 may include the apparatus of example 1 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading request.
  • Example 3 may include the apparatus of example 1 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading policy.
  • Example 4 may include the apparatus of example 1 and/or some other example herein, wherein the first MOI contains an identifier of the AI/ML entity to be loaded and an identifier of target inference functions where the AI/ML entity may be loaded to.
  • Example 5 may include the apparatus of example 1 and/or some other example herein, wherein the second MOI contains information related to an AI/ML entity being loaded, an associated AI/ML entity loading request, loading progress, or control of the AI/ML entity loading process.
  • Example 6 may include the apparatus of example 1 and/or some other example herein, wherein a third MOI may be created to represent the loaded AI/ML entity.
  • Example 7 may include the apparatus of example 6 and/or some other example herein, wherein the third MOI contains an identifier of the loaded AI/ML entity, an associated trained AI/ML entity, an associated AI/ML entity loading process, and a status of the loaded AI/ML entity.
  • Example 8 may include the apparatus of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to: receive a request from a second service consumer to modify the second MOI to control the AI/ML entity loading process; respond to the second service consumer to indicate whether the request to modify the second MOI may be accepted; notify the second service consumer about the modification of the second MOI; and control the AI/ML entity loading process based on the modification of the second MOI.
  • Example 9 may include the apparatus of example 8 and/or some other example herein, wherein the modification of the second MOI involves changing attributes for canceling, suspending, resuming, or terminating the AI/ML entity loading process.
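The control semantics of Examples 8 and 9 — a consumer modifying the loading-process MOI to cancel, suspend, resume, or terminate loading — can be sketched as a small state machine. The state and operation names are illustrative assumptions, not attribute values defined by the disclosure.

```python
# allowed transitions of the (hypothetical) loading-process state attribute
ALLOWED = {
    "RUNNING":   {"suspend": "SUSPENDED", "cancel": "CANCELLED",
                  "terminate": "TERMINATED"},
    "SUSPENDED": {"resume": "RUNNING", "cancel": "CANCELLED"},
}

def modify_process(state, requested_operation):
    """Return (accepted, new_state) for a modify-MOI control request.
    A request that is not valid in the current state is rejected,
    mirroring the accept/reject response sent to the consumer."""
    transitions = ALLOWED.get(state, {})
    if requested_operation not in transitions:
        return False, state          # modification rejected
    return True, transitions[requested_operation]

print(modify_process("RUNNING", "suspend"))   # (True, 'SUSPENDED')
print(modify_process("SUSPENDED", "resume"))  # (True, 'RUNNING')
print(modify_process("CANCELLED", "resume"))  # (False, 'CANCELLED')
```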
  • Example 10 may include the apparatus of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to: create a third managed object instance (MOI) representing the ML training capability; and send a notification to a consumer about the creation of the third MOI.
  • Example 11 may include the apparatus of example 10 and/or some other example herein, wherein the third MOI contains at least one of an inference type of the ML model that the ML training function trains or supported performance metrics.
  • Example 12 may include a computer-readable medium storing computer-executable instructions which when executed by one or more processors result in performing operations comprising: receiving a request from a service consumer to create a first managed object instance (MOI) for artificial intelligence (AI)/machine learning (ML) entity loading; sending a response to the service consumer to indicate whether the MOI creation request may be accepted; notifying the service consumer about the creation of the first MOI; preparing for the AI/ML entity loading; creating a second MOI for the AI/ML entity loading process; notifying the service consumer about the creation of the second MOI; initiating the AI/ML entity loading; and modifying the second MOI to keep the service consumer informed about the progress of the AI/ML loading process.
  • Example 13 may include the computer-readable medium of example 12 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading request.
  • Example 14 may include the computer-readable medium of example 12 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading policy.
  • Example 15 may include the computer-readable medium of example 12 and/or some other example herein, wherein the first MOI contains an identifier of the AI/ML entity to be loaded and an identifier of target inference functions where the AI/ML entity may be loaded to.
  • Example 16 may include the computer-readable medium of example 12 and/or some other example herein, wherein the second MOI contains information related to an AI/ML entity being loaded, an associated AI/ML entity loading request, loading progress, or control of the AI/ML entity loading process.
  • Example 17 may include the computer-readable medium of example 12 and/or some other example herein, wherein a third MOI may be created to represent the loaded AI/ML entity.
  • Example 18 may include the computer-readable medium of example 17 and/or some other example herein, wherein the third MOI contains an identifier of the loaded AI/ML entity, an associated trained AI/ML entity, an associated AI/ML entity loading process, and a status of the loaded AI/ML entity.
  • Example 19 may include the computer-readable medium of example 12 and/or some other example herein, wherein the operations further comprise: receiving a request from a second service consumer to modify the second MOI to control the AI/ML entity loading process; sending a response to the second service consumer to indicate whether the request to modify the second MOI may be accepted; notifying the second service consumer about the modification of the second MOI; and controlling the AI/ML entity loading process based on the modification of the second MOI.
  • Example 20 may include the computer-readable medium of example 19 and/or some other example herein, wherein the modification of the second MOI involves changing attributes for canceling, suspending, resuming, or terminating the AI/ML entity loading process.
  • Example 21 may include the computer-readable medium of example 12 and/or some other example herein, wherein the operations further comprise: creating a third managed object instance (MOI) representing the ML training capability; and sending a notification to a consumer about the creation of the third MOI.
  • Example 22 may include the computer-readable medium of example 21 and/or some other example herein, wherein the third MOI contains at least one of an inference type of the ML model that the ML training function trains or supported performance metrics.
  • Example 23 may include a method comprising: receiving a request from a service consumer to create a first managed object instance (MOI) for artificial intelligence (AI)/machine learning (ML) entity loading; sending a response to the service consumer to indicate whether the MOI creation request may be accepted; notifying the service consumer about the creation of the first MOI; preparing for the AI/ML entity loading; creating a second MOI for the AI/ML entity loading process; notifying the service consumer about the creation of the second MOI; initiating the AI/ML entity loading; and modifying the second MOI to keep the service consumer informed about the progress of the AI/ML loading process.
  • Example 24 may include the method of example 23 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading request.
  • Example 25 may include the method of example 23 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading policy.
  • Example 26 may include the method of example 23 and/or some other example herein, wherein the first MOI contains an identifier of the AI/ML entity to be loaded and an identifier of target inference functions where the AI/ML entity may be loaded to.
  • Example 27 may include the method of example 23 and/or some other example herein, wherein the second MOI contains information related to an AI/ML entity being loaded, an associated AI/ML entity loading request, loading progress, or control of the AI/ML entity loading process.
  • Example 28 may include the method of example 23 and/or some other example herein, wherein a third MOI may be created to represent the loaded AI/ML entity.
  • Example 29 may include the method of example 28 and/or some other example herein, wherein the third MOI contains an identifier of the loaded AI/ML entity, an associated trained AI/ML entity, an associated AI/ML entity loading process, and a status of the loaded AI/ML entity.
  • Example 30 may include the method of example 23 and/or some other example herein, further comprising: receiving a request from a second service consumer to modify the second MOI to control the AI/ML entity loading process; sending a response to the second service consumer to indicate whether the request to modify the second MOI may be accepted; notifying the second service consumer about the modification of the second MOI; and controlling the AI/ML entity loading process based on the modification of the second MOI.
  • Example 31 may include the method of example 30 and/or some other example herein, wherein the modification of the second MOI involves changing attributes for canceling, suspending, resuming, or terminating the AI/ML entity loading process.
  • Example 32 may include the method of example 23 and/or some other example herein, further comprising: creating a third managed object instance (MOI) representing the ML training capability; and sending a notification to a consumer about the creation of the third MOI.
  • Example 33 may include the method of example 32 and/or some other example herein, wherein the third MOI contains at least one of an inference type of the ML model that the ML training function trains or supported performance metrics.
  • Example 34 may include an apparatus comprising means for: receiving a request from a service consumer to create a first managed object instance (MOI) for artificial intelligence (AI)/machine learning (ML) entity loading; sending a response to the service consumer to indicate whether the MOI creation request may be accepted; notifying the service consumer about the creation of the first MOI; preparing for the AI/ML entity loading; creating a second MOI for the AI/ML entity loading process; notifying the service consumer about the creation of the second MOI; initiating the AI/ML entity loading; and modifying the second MOI to keep the service consumer informed about the progress of the AI/ML loading process.
  • Example 35 may include the apparatus of example 34 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading request.
  • Example 36 may include the apparatus of example 34 and/or some other example herein, wherein the first MOI represents the AI/ML entity loading policy.
  • Example 37 may include the apparatus of example 34 and/or some other example herein, wherein the first MOI contains an identifier of the AI/ML entity to be loaded and an identifier of target inference functions where the AI/ML entity may be loaded to.
  • Example 38 may include the apparatus of example 34 and/or some other example herein, wherein the second MOI contains information related to an AI/ML entity being loaded, an associated AI/ML entity loading request, loading progress, or control of the AI/ML entity loading process.
  • Example 39 may include the apparatus of example 34 and/or some other example herein, wherein a third MOI may be created to represent the loaded AI/ML entity.
  • Example 40 may include the apparatus of example 39 and/or some other example herein, wherein the third MOI contains an identifier of the loaded AI/ML entity, an associated trained AI/ML entity, an associated AI/ML entity loading process, and a status of the loaded AI/ML entity.
  • Example 41 may include the apparatus of example 34 and/or some other example herein, further comprising: receiving a request from a second service consumer to modify the second MOI to control the AI/ML entity loading process; sending a response to the second service consumer to indicate whether the request to modify the second MOI may be accepted; notifying the second service consumer about the modification of the second MOI; and controlling the AI/ML entity loading process based on the modification of the second MOI.
  • Example 42 may include the apparatus of example 41 and/or some other example herein, wherein the modification of the second MOI involves changing attributes for canceling, suspending, resuming, or terminating the AI/ML entity loading process.
  • Example 43 may include the apparatus of example 34 and/or some other example herein, further comprising: creating a third managed object instance (MOI) representing the ML training capability; and sending a notification to a consumer about the creation of the third MOI.
  • Example 44 may include the apparatus of example 43 and/or some other example herein, wherein the third MOI contains at least one of an inference type of the ML model that the ML training function trains or supported performance metrics.
  • Example 45 may include an apparatus comprising means for performing any of the methods of examples 1-44.
  • Example 46 may include a network node comprising a communication interface and processing circuitry connected thereto and configured to perform the methods of examples 1-44.
  • Example 47 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-44, or any other method or process described herein.
  • Example 48 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-44, or any other method or process described herein.
  • Example 49 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-44, or any other method or process described herein.
  • Example 50 may include a method, technique, or process as described in or related to any of examples 1-44, or portions or parts thereof.
  • Example 51 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-44, or portions thereof.
  • Example 52 may include a signal as described in or related to any of examples 1-44, or portions or parts thereof.
  • Example 53 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-44, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 54 may include a signal encoded with data as described in or related to any of examples 1-44, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 55 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-44, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 56 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-44, or portions thereof.
  • Example 57 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-44, or portions thereof.
  • Example 58 may include a signal in a wireless network as shown and described herein.
  • Example 59 may include a method of communicating in a wireless network as shown and described herein.
  • Example 60 may include a system for providing wireless communication as shown and described herein.
  • Example 61 may include a device for providing wireless communication as shown and described herein.
  • An example implementation is an edge computing system, including respective edge processing devices and nodes to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is a client endpoint node, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system operable as an edge mesh, as an edge mesh with side car loading, or with mesh-to-mesh communications, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to ETSI MEC specifications, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system adapted for mobile wireless communications, including configurations according to 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
  • Another example implementation is a computing system adapted for network communications, including configurations according to O-RAN capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
  • the phrase “A and/or B” means (A), (B), or (A and B).
  • the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
  • the description may use the phrases “in an embodiment,” or “in some embodiments,” which may each refer to one or more of the same or different embodiments.
  • the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure are synonymous.
  • Coupled may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
  • directly coupled may mean that two or more elements are in direct contact with one another.
  • communicatively coupled may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
  • circuitry refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality.
  • the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality.
  • the term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
  • the term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data.
  • Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information.
  • the term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes.
  • Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like.
  • the one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators.
  • memory and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including RAM, MRAM, PRAM, DRAM, and/or SDRAM, core memory, ROM, magnetic disk storage mediums, optical storage mediums, flash memory devices, or other machine-readable mediums for storing data.
  • computer-readable medium may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
  • interface circuitry refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices.
  • interface circuitry may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
  • user equipment refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network.
  • the term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc.
  • user equipment or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
  • network element refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services.
  • network element may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.
  • computer system refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
  • appliance refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource.
  • a “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.
  • element refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof.
  • device refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity.
  • entity refers to a distinct component of an architecture or device, or information transferred as a payload.
  • controller refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.
  • cloud computing refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users.
  • Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like).
  • computing resource or simply “resource” refers to any physical or virtual component, or usage of such components, of limited availability within a computer system or network.
  • Examples of computing resources include usage/access to, for a period of time, servers, processor(s), storage equipment, memory devices, memory areas, networks, electrical power, input/output (peripheral) devices, mechanical devices, network connections (e.g., channels/links, ports, network sockets, etc.), operating systems, virtual machines (VMs), software/applications, computer files, and/or the like.
  • a “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s).
  • a “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc.
  • the term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network.
  • system resources may refer to any kind of shared entities to provide services, and may include computing and/or network resources.
  • System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
  • cloud service provider or CSP indicates an organization which operates typically large-scale “cloud” resources comprised of centralized, regional, and edge data centers (e.g., as used in the context of the public cloud).
  • a CSP may also be referred to as a Cloud Service Operator (CSO).
  • References to “cloud computing” generally refer to computing resources and services offered by a CSP or a CSO, at remote locations with at least some increased latency, distance, or constraints relative to edge computing.
  • data center refers to a purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, suitable heating, cooling, ventilation, security, fire suppression, and power delivery systems.
  • the term may also refer to a compute and data storage node in some contexts.
  • a data center may vary in scale between a centralized or cloud data center (e.g., largest), regional data center, and edge data center (e.g., smallest).
  • edge computing refers to the implementation, coordination, and use of computing and resources at locations closer to the “edge” or collection of “edges” of a network. Deploying computing resources at the network’s edge may reduce application and network latency, reduce network backhaul traffic and associated energy consumption, improve service capabilities, improve compliance with security or data privacy requirements (especially as compared to conventional cloud computing), and improve total cost of ownership.
  • edge compute node refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network.
  • references to a “node” used herein are generally interchangeable with a “device”, “component”, and “sub-system”; however, references to an “edge computing system” or “edge computing network” generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, and which is organized to accomplish or offer some aspect of services or resources in an edge computing setting.
  • the term “Edge Computing” refers to a concept, as described in [6], that enables operator and 3rd party services to be hosted close to the UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network.
  • the term “Edge Computing Service Provider” refers to a mobile network operator or a 3rd party service provider offering Edge Computing service.
  • the term “Edge Data Network” refers to a local Data Network (DN) that supports the architecture for enabling edge applications.
  • the term “Edge Hosting Environment” refers to an environment providing support required for Edge Application Server's execution.
  • the term “Application Server” refers to application software resident in the cloud performing the server function.
  • IoT (Internet of Things) devices are usually low-power devices without heavy compute or storage capabilities.
  • “Edge IoT devices” may be any kind of IoT devices deployed at a network’s edge.
  • cluster refers to a set or grouping of entities as part of an edge computing system (or systems), in the form of physical entities (e.g., different computing systems, networks or network groups), logical entities (e.g., applications, functions, security constructs, containers), and the like.
  • a “cluster” is also referred to as a “group” or a “domain”.
  • the membership of a cluster may be modified or affected based on conditions or functions, including from dynamic or property-based membership, from network or system management scenarios, or from various example techniques discussed below which may add, modify, or remove an entity in a cluster.
  • Clusters may also include or be associated with multiple layers, levels, or properties, including variations in security features and results based on such layers, levels, or properties.
  • the term “application” may refer to a complete and deployable package, environment to achieve a certain function in an operational environment.
  • AI/ML application or the like may be an application that contains some AI/ML models and application-level descriptions.
  • machine learning or “ML” refers to the use of computer systems implementing algorithms and/or statistical models to perform specific task(s) without using explicit instructions, but instead relying on patterns and inferences.
  • ML algorithms build or estimate mathematical model(s) (referred to as “ML models” or the like) based on sample data (referred to as “training data,” “model training information,” or the like) in order to make predictions or decisions without being explicitly programmed to perform such tasks.
  • an ML algorithm is a computer program that learns from experience with respect to some task and some performance measure.
  • an ML model may be any object or data structure created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets.
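The relationship between training data, the resulting ML model object, and later prediction can be illustrated with a deliberately tiny sketch: a 1-nearest-neighbour classifier, where the "model" produced by training is simply a data structure (the retained labelled samples) later used to label new points. This example is illustrative only and does not correspond to any particular model described in this disclosure.

```python
# Minimal illustration: an "ML model" as a data structure produced by training
# (here, a 1-nearest-neighbour classifier) and later used for prediction.

def train(samples):
    """'Training' for 1-NN is just retaining the labelled samples."""
    return list(samples)  # the returned list is the "ML model"

def predict(model, x):
    """Label a new point by its nearest training sample (squared Euclidean distance)."""
    nearest = min(model, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
    return nearest[1]

# training data: (input features, label) pairs
model = train([((0.0, 0.0), "low"), ((1.0, 1.0), "high")])
```

After training, the same `model` object can be handed to an inference host and queried with new datasets, e.g. `predict(model, (0.1, 0.2))` returns `"low"`.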
  • ML algorithm refers to different concepts than the term “ML model,” these terms as discussed herein may be used interchangeably for the purposes of the present disclosure.
  • machine learning model may also refer to ML methods and concepts used by an ML-assisted solution.
  • An “ML-assisted solution”’ is a solution that addresses a specific use case using ML algorithms during operation.
  • ML models include supervised learning (e.g., linear regression, k-nearest neighbor (KNN), decision tree algorithms, support vector machines, Bayesian algorithms, ensemble algorithms, etc.), unsupervised learning (e.g., K-means clustering, principal component analysis (PCA), etc.), reinforcement learning (e.g., Q-learning, multi-armed bandit learning, deep RL, etc.), neural networks, and the like.
  • An “ML pipeline” is a set of functionalities, functions, or functional entities specific for an ML-assisted solution; an ML pipeline may include one or several data sources in a data pipeline, a model training pipeline, a model evaluation pipeline, and an actor.
  • the “actor” is an entity that hosts an ML-assisted solution using the output of ML model inference.
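The pipeline just described — a data source feeding a model training pipeline, a model evaluation pipeline, and an actor consuming the inference output — can be sketched end-to-end. The function names and the trivial "model" (a single scale factor) are illustrative assumptions, not part of this disclosure.

```python
# Sketch of the ML pipeline described above: a data source feeds a model
# training step, an evaluation step scores the trained model, and an actor
# turns the inference output into an action.

def data_source():
    # paired (input, expected-output) records
    return [(1, 2), (2, 4), (3, 6)]

def train_model(data):
    # "train" a single scale factor by averaging output/input ratios
    return sum(y / x for x, y in data) / len(data)

def evaluate(model, data):
    # mean absolute error of the trained model on the data
    return sum(abs(model * x - y) for x, y in data) / len(data)

def actor(inference):
    # the actor decides on an action from the inference output
    return "scale-up" if inference > 5 else "hold"

data = data_source()           # data pipeline
model = train_model(data)      # model training pipeline
error = evaluate(model, data)  # model evaluation pipeline
action = actor(model * 3)      # inference output drives the actor
```

Here the actor's decision (`action`) is the "action" performed as a result of the ML-assisted solution's output.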
  • ML training host refers to an entity, such as a network function, that hosts the training of the model.
  • ML inference host refers to an entity, such as a network function, that hosts the model during inference mode (which includes both the model execution as well as any online learning, if applicable).
  • the ML-host informs the actor about the output of the ML algorithm, and the actor takes a decision for an action (an “action” is performed by an actor as a result of the output of an ML assisted solution).
  • model inference information refers to information used as an input to the ML model for determining inference(s); the data used to train an ML model and the data used to determine inferences may overlap, however, “training data” and “inference data” refer to different concepts.
  • instantiate refers to the creation of an instance.
  • An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
  • information element refers to a structural element containing one or more fields.
  • field refers to individual contents of an information element, or a data element that contains content.
  • a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in blockchain implementations, and/or the like.
  • An “information object,” as used herein, refers to a collection of structured data and/or any representation of information, and may include, for example electronic documents (or “documents”), database objects, data structures, files, audio data, video data, raw data, archive files, application packages, and/or any other like representation of information.
  • electronic document or “document” may refer to a data structure, computer file, or resource used to record data, and includes various file types and/or data formats such as word processing documents, spreadsheets, slide presentations, multimedia items, webpage and/or source code documents, and/or the like.
  • the information objects may include markup and/or source code documents such as HTML, XML, JSON, Apex®, CSS, JSP, MessagePack™, Apache® Thrift™, ASN.1, Google® Protocol Buffers (protobuf), or some other document(s)/format(s) such as those discussed herein.
  • An information object may have both a logical and a physical structure. Physically, an information object comprises one or more units called entities. An entity is a unit of storage that contains content and is identified by a name. An entity may refer to other entities to cause their inclusion in the information object. An information object begins in a document entity, which is also referred to as a root element (or “root”). Logically, an information object comprises one or more declarations, elements, comments, character references, and processing instructions, all of which are indicated in the information object (e.g., using markup).
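The logical structure just described can be seen concretely with Python's standard-library XML parser: the document entity is the root element, and nested elements, attributes, and content hang off it. The XML snippet itself is made up for illustration.

```python
# Parsing a small information object: the document (root) entity contains a
# nested element whose character content is a content item.
import xml.etree.ElementTree as ET

doc = '<catalog version="1"><item id="a1">content item</item></catalog>'
root = ET.fromstring(doc)     # the document entity / root element

item = root.find("item")      # a nested element referred to by the root
```

Here `root.tag` is `"catalog"`, `root.attrib` holds the root's attributes, and `item.text` is the element's content.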
  • data item refers to an atomic state of a particular object with at least one specific property at a certain point in time.
  • Such an object is usually identified by an object name or object identifier, and properties of such an object are usually defined as database objects (e.g., fields, records, etc.), object instances, or data elements (e.g., mark-up language elements/tags, etc.).
  • data item may refer to data elements and/or content items, although these terms may refer to different concepts.
  • data element or “element” as used herein refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary.
  • a data element is a logical component of an information object (e.g., electronic document) that may begin with a start tag (e.g., “<element>”) and end with a matching end tag (e.g., “</element>”), or only has an empty element tag (e.g., “<element/>”). Any characters between the start tag and end tag, if any, are the element’s content (referred to herein as “content items” or the like).
  • the content of an entity may include one or more content items, each of which has an associated datatype representation.
  • a content item may include, for example, attribute values, character values, URIs, qualified names (qnames), parameters, and the like.
  • a qname is a fully qualified name of an element, attribute, or identifier in an information object.
  • a qname associates a URI of a namespace with a local name of an element, attribute, or identifier in that namespace. To make this association, the qname assigns a prefix to the local name that corresponds to its namespace.
  • the qname comprises a URI of the namespace, the prefix, and the local name. Namespaces are used to provide uniquely named elements and attributes in information objects.
  • An element may contain other elements, referred to as child elements, e.g., “<element1><element2>content item</element2></element1>”.
  • An “attribute” may refer to a markup construct including a name-value pair that exists within a start tag or empty element tag. Attributes contain data related to its element and/or control the element’s behavior.
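The qname mechanics above can be demonstrated with Python's standard-library XML parser, which resolves a prefixed name against its namespace URI at parse time (into `{uri}local` "Clark notation"). The namespace URI, prefix, and element names below are invented for illustration.

```python
# A qname binds a namespace URI to a local name via a prefix. After parsing,
# the prefixed name m:entity is addressed by its namespace URI + local name.
import xml.etree.ElementTree as ET

doc = ('<m:root xmlns:m="http://example.com/mgmt">'
       '<m:entity status="active"/></m:root>')
root = ET.fromstring(doc)

# resolve the prefix "m" to its namespace URI and look up the local name:
entity = root.find("{http://example.com/mgmt}entity")
```

The `status="active"` name-value pair inside the start tag is an attribute of the `entity` element, available as `entity.attrib["status"]`.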
  • resource refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like.
  • channel refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream.
  • channel may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated.
  • link refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
  • radio technology refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer.
  • radio access technology or “RAT” refers to the technology used for the underlying physical connection to a radio based communication network.
  • the term “communication protocol” refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like.
  • wireless communications protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G).
  • GSM Global System for Mobile Communications
  • GPRS General Packet Radio Service
  • EDGE Enhanced Data Rates for GSM Evolution
  • 3GPP Third Generation Partnership Project
  • 3GPP Third Generation Partnership Project
  • 5G Fifth Generation
  • NR Universal Mobile Telecommunications System
  • UMTS Universal Mobile Telecommunications System
  • FOMA Freedom of Multimedia Access
  • LTE Long Term Evolution
  • LTE-A LTE-Advanced
  • CDMA 2000 Code Division Multiple Access 2000
  • CDPD Cellular Digital Packet Data
  • Mobitex
  • CSD Circuit Switched Data
  • HSCSD High-Speed Circuit-Switched Data
  • W-CDMA Wideband Code Division Multiple Access
  • HSPA High Speed Packet Access
  • HSPA+ HSPA Plus
  • TD-CDMA Time Division-Code Division Multiple Access
  • TD-SCDMA Time Division-Synchronous Code Division Multiple Access
  • LTE LAA (Licensed-Assisted Access)
  • MuLTEfire
  • UTRA UMTS Terrestrial Radio Access
  • E-UTRA Evolved UTRA
  • EV-DO Evolution-Data Optimized or Evolution-Data Only
  • AMPS Advanced Mobile Phone System
  • D-AMPS Digital AMPS
  • TACS/ETACS Total Access Communication System/Extended Total Access Communication System
  • PTT Push-to-talk
  • MTS Mobile Telephone System
  • IMTS Improved Mobile Telephone System
  • AMTS Advanced Mobile Telephone System
  • UMA Unlicensed Mobile Access
  • GAN 3GPP Generic Access Network
  • BLE Bluetooth Low Energy
  • IEEE 802.15.4-based protocols (e.g., IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, 802.11a, etc.), WiFi-direct, ANT/ANT+, ZigBee, Z-Wave.
  • mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.).
  • V2X communication technologies including 3GPP C-V2X
  • DSRC Dedicated Short Range Communications
  • ITS Intelligent Transport Systems
  • any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the European Telecommunications Standards Institute (ETSI), among others.
  • ITU International Telecommunication Union
  • ETSI European Telecommunications Standards Institute
  • the term “access network” refers to any network, using any combination of radio technologies, RATs, and/or communication protocols, used to connect user devices and service providers.
  • an “access network” is an IEEE 802 local area network (LAN) or metropolitan area network (MAN) between terminals and access routers connecting to provider services.
  • the term “access router” refers to a router that terminates a medium access control (MAC) service from terminals and forwards user traffic to information servers according to Internet Protocol (IP) addresses.
  • MAC medium access control
  • SMTC refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.
  • SSB refers to a synchronization signal/Physical Broadcast Channel (SS/PBCH) block, which includes a Primary Synchronization Signal (PSS), a Secondary Synchronization Signal (SSS), and a PBCH.
  • PSS Primary Synchronization Signal
  • SSS Secondary Synchronization Signal
  • PBCH Physical Broadcast Channel
  • a “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
  • Primary SCG Cell refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
  • Secondary Cell refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.
  • Secondary Cell Group refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.
  • Serving Cell refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
  • the term “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA.
  • Special Cell refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
  • A1 policy refers to a type of declarative policy, expressed using formal statements, that enables the non-RT RIC function in the SMO to guide the near-RT RIC function, and hence the RAN, towards better fulfilment of the RAN intent.
  • A1 Enrichment Information refers to information utilized by the near-RT RIC that is collected or derived at the SMO/non-RT RIC either from non-network data sources or from network functions themselves.
  • A1-Policy Based Traffic Steering Process Mode refers to an operational mode in which the Near-RT RIC is configured through A1 Policy to use Traffic Steering Actions to ensure a more specific notion of network performance (for example, applying to smaller groups of E2 Nodes and UEs in the RAN) than that which it ensures in the Background Traffic Steering.
  • Background Traffic Steering Processing Mode refers to an operational mode in which the Near-RT RIC is configured through O1 to use Traffic Steering Actions to ensure a general background network performance which applies broadly across E2 Nodes and UEs in the RAN.
  • Baseline RAN Behavior refers to the default RAN behavior as configured at the E2 Nodes by the SMO.
  • E2 refers to an interface connecting the Near-RT RIC and one or more O-CU-CPs, one or more O-CU-UPs, one or more O-DUs, and one or more O-eNBs.
  • E2 Node refers to a logical node terminating E2 interface.
  • O-RAN nodes terminating the E2 interface are, for NR access: O-CU-CP, O-CU-UP, and O-DU; and for E-UTRA access: O-eNB.
  • Intents, in the context of O-RAN systems/implementations, refers to declarative policy used to steer or guide the behavior of RAN functions, allowing the RAN function to calculate the optimal result to achieve a stated objective.
  • non-RT RIC refers to a logical function that enables non-real-time control and optimization of RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in Near-RT RIC.
  • Near-RT RIC or “O-RAN near-real-time RAN Intelligent Controller” refers to a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained (e.g., UE basis, Cell basis) data collection and actions over the E2 interface.
  • O-RAN Central Unit or “O-CU” refers to a logical node hosting RRC, SDAP and PDCP protocols.
  • O-RAN Central Unit - Control Plane or “O-CU-CP” refers to a logical node hosting the RRC and the control plane part of the PDCP protocol.
  • O-RAN Central Unit - User Plane or “O-CU-UP” refers to a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.
  • O-RAN Distributed Unit or “O-DU” refers to a logical node hosting RLC/MAC/High-PHY layers based on a lower layer functional split.
  • O-RAN eNB or “O-eNB” refers to an eNB or ng-eNB that supports E2 interface.
  • O-RAN Radio Unit or “O-RU” refers to a logical node hosting the Low-PHY layer and RF processing based on a lower layer functional split. This is similar to 3GPP’s “TRP” or “RRH” but more specific in including the Low-PHY layer (FFT/iFFT, PRACH extraction).
  • the term “O1” refers to an interface between orchestration & management entities (Orchestration/NMS) and O-RAN managed elements, for operation and management, by which FCAPS management, software management, file management and other similar functions shall be achieved.
  • RAN UE Group refers to an aggregation of UEs whose grouping is set in the E2 nodes through E2 procedures, also based on the scope of A1 policies. These groups can then be the target of E2 CONTROL or POLICY messages.
  • Traffic Steering Action refers to the use of a mechanism to alter RAN behavior. Such actions include E2 procedures such as CONTROL and POLICY.
  • Traffic Steering Inner Loop refers to the part of the Traffic Steering processing, triggered by the arrival of periodic TS related KPM (Key Performance Measurement) from E2 Node, which includes UE grouping, setting additional data collection from the RAN, as well as selection and execution of one or more optimization actions to enforce Traffic Steering policies.
  • KPM Key Performance Measurement
  • Traffic Steering Outer Loop refers to the part of the Traffic Steering processing, triggered by the near-RT RIC setting up or updating the Traffic Steering aware resource optimization procedure based on information from A1 Policy setup or update, A1 Enrichment Information (EI), and/or the outcome of Near-RT RIC evaluation, which includes the initial configuration (preconditions), injection of related A1 policies, and triggering conditions for TS changes.
  • EI A1 Enrichment Information
  • Traffic Steering Processing Mode refers to an operational mode in which either the RAN or the Near-RT RIC is configured to ensure a particular network performance. This performance includes such aspects as cell load and throughput, and can apply differently to different E2 nodes and UEs. Throughout this process, Traffic Steering Actions are used to fulfill the requirements of this configuration.
  • Traffic Steering Target refers to the intended performance result that is desired from the network, which is configured to the Near-RT RIC over O1.
  • any of the disclosed embodiments and example implementations can be embodied in the form of various types of hardware, software, firmware, middleware, or combinations thereof, including in the form of control logic, and using such hardware or software in a modular or integrated manner.
  • any of the software components or functions described herein can be implemented as software, program code, script, instructions, etc., operable to be executed by processor circuitry.
  • These components, functions, programs, etc. can be developed using any suitable computer language such as, for example, Python, PyTorch, NumPy, Ruby, Ruby on Rails, Scala, Smalltalk, Java™, C++, C#, “C”, Kotlin, Swift, Rust, Go (or “Golang”), ECMAScript, JavaScript, TypeScript, JScript, ActionScript, Server-Side JavaScript (SSJS), PHP, Perl, Lua, Torch/Lua with Just-In-Time compiler (LuaJIT), Accelerated Mobile Pages Script (AMPscript), VBScript, JavaServer Pages (JSP), Active Server Pages (ASP), Node.js, ASP.NET, JAMscript, Hypertext Markup Language (HTML), Extensible HTML (XHTML), Extensible Markup Language (XML), XML User Interface Language (XUL), Scalable Vector Graphics (SVG), RESTful API Modeling Language (RAML), wiki markup or Wikitext, Wireless Markup Language (WML), JavaScript Object Notation (JSON), Apache® MessagePack™, Cascading Stylesheets (CSS), Extensible Stylesheet Language (XSL), Mustache template language, Handlebars template language, Guide Template Language (GTL), Apache® Thrift, Abstract Syntax Notation One (ASN.1), Google® Protocol Buffers (protobuf), Bitcoin Script, EVM® bytecode, Solidity™, Vyper (Python derived), Bamboo, Lisp Like Language (LLL), Simplicity provided by Blockstream™, Rholang, Michelson, Counterfactual, Plasma.
  • the software code can be stored as computer- or processor-executable instructions or commands on a physical non-transitory computer-readable medium.
  • suitable media include RAM, ROM, magnetic media such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like, or any combination of such storage or transmission devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure relates to systems, methods, and devices associated with AI/ML entity loading. A device may receive a request from a service consumer to create a first managed object instance (MOI) for artificial intelligence (AI)/machine learning (ML) entity loading. The device may send a response to the service consumer indicating whether the MOI creation request is accepted. The device may notify the service consumer of the creation of the first MOI. The device may prepare the AI/ML entity loading. The device may create a second MOI for the AI/ML entity loading process. The device may notify the service consumer of the creation of the second MOI. The device may initiate the AI/ML entity loading. The device may modify the second MOI to keep the service consumer informed of the progress of the AI/ML loading process.
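The abstract's request/process flow can be sketched as a minimal producer model. This is an illustrative assumption only: the class and notification names used here (AIMLEntityLoadingRequest, AIMLEntityLoadingProcess, notifyMOICreation, notifyMOIAttributeValueChanges) are hypothetical stand-ins patterned on 3GPP provisioning-style operations, not the IOC names defined by the claims.

```python
from dataclasses import dataclass, field

@dataclass
class MOI:
    """A managed object instance: a class name plus attribute values."""
    class_name: str
    attributes: dict = field(default_factory=dict)

class LoadingProducer:
    """Hypothetical producer acting out the abstract's steps."""

    def __init__(self):
        self.mois = []           # all MOIs created by this producer
        self.notifications = []  # (consumer, notification type, payload)

    def create_loading_request(self, consumer, entity_id):
        # Steps 1-3: create the first MOI (the loading request), respond
        # with an acceptance indication, and notify the consumer.
        request_moi = MOI("AIMLEntityLoadingRequest", {"entityId": entity_id})
        self.mois.append(request_moi)
        self.notifications.append((consumer, "notifyMOICreation", request_moi.class_name))
        return {"status": "accepted", "moi": request_moi}

    def start_loading(self, consumer, request_moi):
        # Steps 4-7: prepare the loading, create the second (process) MOI,
        # notify the consumer, and initiate the AI/ML entity loading.
        process_moi = MOI("AIMLEntityLoadingProcess",
                          {"requestRef": request_moi.attributes["entityId"],
                           "progress": 0})
        self.mois.append(process_moi)
        self.notifications.append((consumer, "notifyMOICreation", process_moi.class_name))
        return process_moi

    def report_progress(self, consumer, process_moi, percent):
        # Step 8: modify the second MOI so the consumer can track progress.
        process_moi.attributes["progress"] = percent
        self.notifications.append((consumer, "notifyMOIAttributeValueChanges", percent))
```

A consumer-side interaction under these assumptions would be: request creation of the first MOI, start loading on acceptance, then observe attribute-change notifications as the producer updates the process MOI's progress attribute.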
PCT/US2023/077924 2022-10-27 2023-10-26 Artificial intelligence and machine learning entity loading in cellular networks WO2024092132A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263420015P 2022-10-27 2022-10-27
US63/420,015 2022-10-27
US202263421447P 2022-11-01 2022-11-01
US63/421,447 2022-11-01

Publications (1)

Publication Number Publication Date
WO2024092132A1 true WO2024092132A1 (fr) 2024-05-02

Family

ID=90832084

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/077924 WO2024092132A1 (fr) 2022-10-27 2023-10-26 Artificial intelligence and machine learning entity loading in cellular networks

Country Status (1)

Country Link
WO (1) WO2024092132A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210021494A1 (en) * 2019-10-03 2021-01-21 Intel Corporation Management data analytics
WO2022221495A1 (fr) * 2021-04-15 2022-10-20 Intel Corporation Machine learning support for management services and management data analytics services

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210021494A1 (en) * 2019-10-03 2021-01-21 Intel Corporation Management data analytics
WO2022221495A1 (fr) * 2021-04-15 2022-10-20 Intel Corporation Machine learning support for management services and management data analytics services

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Management and orchestration; Artificial Intelligence / Machine Learning (AI/ML) management (Release 17)", 3GPP TS 28.105, no. V17.1.1, 27 September 2022 (2022-09-27), pages 1 - 34, XP052211279 *
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Management and orchestration; Generic management services; (Release 17)", 3GPP TS 28.532, no. V17.2.1, 26 September 2022 (2022-09-26), pages 1 - 236, XP052211195 *
INTEL, NEC: "pCR 28.908 Add possible solution for AI-ML entity deployment", 3GPP TSG-SA5 MEETING #145E, S5-225523, 5 August 2022 (2022-08-05), XP052258858 *

Similar Documents

Publication Publication Date Title
WO2022221260A1 Support for O-cloud lifecycle management services
WO2022261028A1 Data functions and procedures in a non-real-time radio access network intelligent controller
WO2022125296A1 Mechanisms for enabling in-network computing services
WO2022240850A1 Time domain restriction for channel state information reference signal configuration
WO2022087474A1 Intra-user equipment prioritization for handling overlap of uplink control channels and uplink data
EP4233419A1 Resource allocation for broadcast/multicast service in new radio
US20240155393A1 (en) Measurement reporting efficiency enhancement
WO2022087489A1 Downlink control information (DCI) based beam indication for new radio (NR)
WO2022221495A1 Machine learning support for management services and management data analytics services
WO2024092132A1 Artificial intelligence and machine learning entity loading in cellular networks
WO2024081642A1 Pipelined processing services in next-generation cellular networks
WO2024091970A1 Performance evaluation for artificial intelligence/machine learning inference
WO2024076852A1 Data collection coordination function and network data analytics function framework for sensing services in next-generation cellular networks
WO2024026515A1 Artificial intelligence and machine learning entity testing
WO2024015747A1 Session management function selection in cellular networks supporting a non-access stratum distributed between a device and network functions
WO2023122037A1 Location measurements and data supporting management data analytics (MDA) for coverage problem analysis
WO2023049345A1 Load balancing optimization (LBO) for 5G systems
WO2022232038A1 Performance measurements for a unified data repository (UDR)
WO2024097783A1 Authorizing federated learning groups to access network data analytics functions in a 5G core
WO2024020519A1 Systems and methods for service sharing of an unstructured data storage function
WO2023014745A1 Performance measurements for a network exposure function
WO2023069750A1 Good cell quality criteria
WO2024097726A1 Resource allocation for frequency-domain spectrum shaping with spectrum extension
WO2023055852A1 Performance measurements for policy authorization and event exposure for network exposure functions
WO2024039950A2 Constrained application protocol for compute services in cellular networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23883757

Country of ref document: EP

Kind code of ref document: A1