WO2023173075A1 - Training updates for network data analytics functions (nwdafs) - Google Patents


Info

Publication number
WO2023173075A1
Authority
WO
WIPO (PCT)
Prior art keywords
nwdaf, model, mtlf, network, data
Application number
PCT/US2023/064122
Other languages
French (fr)
Inventor
Meghashree Dattatri Kedalagudde
Thomas Luetzenkirchen
Abhijeet Kolekar
Original Assignee
Intel Corporation
Application filed by Intel Corporation filed Critical Intel Corporation
Publication of WO2023173075A1 publication Critical patent/WO2023173075A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5058Service discovery by the service manager

Definitions

  • Various embodiments generally may relate to the field of wireless communications. For example, some embodiments may relate to registration and discovery of NWDAF model training logical function (MTLF) instances supporting distributed learning. Some embodiments may relate to NWDAF MTLF interoperability support.
  • Figure 1 illustrates an example of registration with federated learning aggregation capability included in a network function (NF) profile, in accordance with various embodiments.
  • Figure 2 illustrates an example of registration with federated learning participation capability included in a NF profile, in accordance with various embodiments.
  • Figures 3a and 3b (collectively, Figure 3) illustrate an example of federated learning to enable cooperation of multiple NWDAF MTLF instances to train a machine learning (ML) model, in accordance with various embodiments.
  • FIG. 4 illustrates an example of a process flow wherein ML model filter information includes a ML model file serialization format, in accordance with various embodiments.
  • Figure 5 illustrates an example of a NF profile registration of a NWDAF containing a MTLF, wherein the NF profile includes a new attribute for a ML model file, in accordance with various embodiments.
  • Figures 6A and 6B illustrate an example of trained ML model retrieval using an analytical data repository function (ADRF), in accordance with various embodiments.
  • Figures 7A and 7B (collectively, Figure 7) illustrate an example of trained ML model storage in an ADRF, in accordance with various embodiments.
  • Figure 8 illustrates an example of trained ML model file serialization format conversion, in accordance with various embodiments.
  • Figure 9 schematically illustrates a wireless network in accordance with various embodiments.
  • Figure 10 schematically illustrates components of a wireless network in accordance with various embodiments.
  • Figure 11 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • Figure 12 schematically illustrates an alternative example wireless network in accordance with various embodiments.
  • Figure 13 illustrates a simplified block diagram of artificial intelligence (AI)-assisted communication between a UE and a RAN, in accordance with various embodiments.
  • FIG. 14 depicts an example process, in accordance with various embodiments herein.
  • FIG. 15 depicts an alternative example process, in accordance with various embodiments herein.
  • the Third Generation Partnership Project (3GPP) Release-18 (Rel-18) specifications may relate to one or more of the following goals:
  • a NWDAF containing one or both of an analytic logical function (AnLF) and a MTLF is supported.
  • a network function (NF) NF profile registration of one or more of the NWDAF, AnLF, and MTLF with a network repository function (NRF) is supported.
  • NWDAFs containing respective MTLFs may not be allowed to coordinate with one another. Only an NWDAF containing an AnLF may be allowed to discover a NWDAF containing a MTLF and request the ML models from the NWDAF containing the MTLF instance.
  • the NWDAF containing an AnLF may select, from a list of candidate NWDAFs containing MTLF instance(s), an NWDAF containing a MTLF that is pre-configured in the NWDAF containing an AnLF to obtain trained ML Model(s).
  • NWDAF and supporting network functions such as the data collection coordination function (DCCF) and the ADRF may allow for data collection to generate analytics data as requested by a NWDAF service consumer.
  • a scenario where a NWDAF containing a MTLF collects all the raw data from distributed data sources in different areas - especially UE level network data - for training ML models may be undesirable.
  • Federated Machine Learning mechanisms may allow application endpoints supporting ML training to train a shared ML model while keeping the raw data local on each endpoint, which in turn may address user data privacy concerns wherever applicable.
  • aspects of various embodiments herein may include one or more of the following: allowing a NWDAF containing a MTLF to support Federated Learning aggregation/participation capability; registration and discovery of a NWDAF containing a MTLF supporting Federated Learning aggregation/participation capability; and coordination of multiple NWDAFs, including selection of the participant NWDAF instances in the Federated Learning group and decision of the role for each participant NWDAF.
  • a NWDAF containing a MTLF that can support Federated Learning aggregation capability may register its NF profile with an NRF with the following included in its NF profile: Federated Learning Aggregation capability for ML model(s).
  • An example of such registration is depicted in Figure 1.
  • the NWDAF containing a MTLF with Federated Learning Aggregation capability may be the network function responsible for one or more of the following (note, this list is intended to be illustrative rather than limiting. In some embodiments, the NWDAF may be responsible for one or more additional or alternative tasks or functions):
  • a NWDAF containing a MTLF that can support Federated Learning participation capability may register its NF profile with a NRF with the following included in its NF profile: Federated Learning participation capability for ML model(s).
  • An example of such registration is depicted in Figure 2.
  • the NWDAF containing a MTLF with Federated Learning participation capability may be responsible for one or more of the following (note, this list is intended to be illustrative rather than limiting. In some embodiments, the NWDAF may be responsible for one or more additional or alternative tasks or functions):
  • a NWDAF containing a MTLF with Federated Learning Aggregation capability may be in the role of the service consumer with the NRF.
  • a service consumer may send a Nnrf_NFDiscovery_Request to the NRF with the following additional input(s): Federated Learning participation capability for ML model(s).
  • the NRF may then return one or more instances of a NWDAF containing a MTLF with Federated Learning participation capability for ML model(s) to the NF consumer (e.g., the service consumer), and each instance of the returned NWDAF(s) may include ML Model Filter Information for the available trained ML models.
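The registration and discovery flow above can be sketched as follows. This is a hypothetical illustration only: the in-memory registry, the profile fields (`fl_aggregation`, `fl_participation`), and the instance names are assumptions for illustration, not the normative NF profile encoding defined by 3GPP.

```python
from dataclasses import dataclass, field

@dataclass
class NFProfile:
    instance_id: str
    nf_type: str = "NWDAF"
    fl_aggregation: bool = False    # Federated Learning aggregation capability
    fl_participation: bool = False  # Federated Learning participation capability
    # Analytics ID -> metadata for available trained ML models
    ml_model_filter_info: dict = field(default_factory=dict)

class NRF:
    def __init__(self):
        self._registry: dict[str, NFProfile] = {}

    def nf_register(self, profile: NFProfile) -> None:
        """Nnrf_NFManagement_NFRegister: store the NF profile."""
        self._registry[profile.instance_id] = profile

    def nf_discovery(self, *, fl_participation: bool = False) -> list[NFProfile]:
        """Nnrf_NFDiscovery_Request with Federated Learning participation
        capability as an additional input: return matching instances, each
        carrying its ML Model Filter Information."""
        return [p for p in self._registry.values()
                if not fl_participation or p.fl_participation]

nrf = NRF()
nrf.nf_register(NFProfile("mtlf-1", fl_aggregation=True))
nrf.nf_register(NFProfile("mtlf-2", fl_participation=True,
                          ml_model_filter_info={"analytics-7": {"format": "ONNX"}}))
# An aggregation-capable MTLF, acting as service consumer, discovers participants.
matches = nrf.nf_discovery(fl_participation=True)
```

In this sketch only `mtlf-2` is returned, together with its ML Model Filter Information, mirroring the filtering behavior the NRF is described as performing.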
  • Part 3 Example of new Nnwdaf_MLModelTraining service operations to support notification of updates to trained ML models, as a result of federated learning, to the service consumer.
  • the Nnwdaf_MLModelTrainingUpdate service may be provided by an NWDAF containing a MTLF and consumed by an NWDAF containing an AnLF.
  • the Nnwdaf_MLModel_DistributedTraining service may be provided by an NWDAF containing a MTLF and consumed by an NWDAF containing a MTLF.
  • An example Nnwdaf_MLModelTrainingUpdate_Subscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • the consumer NF subscribes to ML model training update with a NWDAF containing a MTLF.
  • the input for the Nnwdaf_MLModelTrainingUpdate_Subscribe may be Analytics ID(s) for which the training update is requested, Notification Target Address, Subscription Correlation ID (in the case of modification of the ML model subscription), and Expiry Time.
  • the output of the Nnwdaf_MLModelTrainingUpdate_Subscribe operation may include the Subscription Correlation ID and Expiry Time.
  • An example Nnwdaf_MLModelTrainingUpdate_Unsubscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • the consumer NF may unsubscribe from ML model training update with a NWDAF containing a MTLF.
  • the input may include the Subscription Correlation ID, and the output includes the service operation result.
  • An example Nnwdaf_MLModelTrainingUpdate_Notify service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • a NWDAF containing a MTLF notifies the ML model information to the consumer NF (e.g., a NWDAF containing an AnLF) that has subscribed to the specific NWDAF service.
  • the input may include Analytics ID(s) for which the updated trained ML model is available and/or the address of the updated trained ML model file (the trained ML model is updated as a result of the global ML model update after Federated Learning for the model is completed).
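The subscribe/unsubscribe/notify operations above can be sketched as a small publish/subscribe service. The class, method, and payload field names here are illustrative assumptions that mirror the described inputs and outputs; they are not a normative API.

```python
import itertools

class MLModelTrainingUpdateService:
    """Producer side (NWDAF containing a MTLF) of the training-update service."""
    _corr = itertools.count(1)

    def __init__(self):
        # Subscription Correlation ID -> (Analytics IDs, notification callback)
        self._subs = {}

    def subscribe(self, analytics_ids, notify_cb, expiry_time=None):
        corr_id = f"sub-{next(self._corr)}"
        self._subs[corr_id] = (set(analytics_ids), notify_cb)
        return {"subscription_correlation_id": corr_id, "expiry_time": expiry_time}

    def unsubscribe(self, corr_id):
        return {"result": "SUCCESS" if self._subs.pop(corr_id, None) else "NOT_FOUND"}

    def notify_update(self, analytics_id, model_file_address):
        # Invoked once the global ML model update from federated learning completes.
        for analytics_ids, cb in self._subs.values():
            if analytics_id in analytics_ids:
                cb({"analytics_id": analytics_id,
                    "ml_model_file_address": model_file_address})

received = []
svc = MLModelTrainingUpdateService()
sub = svc.subscribe(["analytics-7"], received.append)
svc.notify_update("analytics-7", "http://mtlf.example/models/7/v2")
svc.unsubscribe(sub["subscription_correlation_id"])
```

The consumer (e.g., an NWDAF containing an AnLF) receives the Analytics ID and the updated trained ML model file address in the notification, matching the inputs described above.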
  • An example Nnwdaf_MLModel_DistributedTraining_Request service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • a NWDAF containing a MTLF (with Federated Learning aggregation capability) in the role of service consumer sends a request to another NWDAF containing MTLF (with Federated Learning participation capability).
  • the input may include one or more of: Analytics ID for which Federated Learning is required, ML model (global ML model) file address, ML model reporting time limit.
  • the ML model reporting time limit is the time within which the local trained ML model needs to be reported back to the service consumer.
  • An example Nnwdaf_MLModel_DistributedTraining_Response service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • a NWDAF containing a MTLF (with Federated Learning participation capability) in the role of service producer sends a response to a NWDAF containing a MTLF (with Federated Learning aggregation capability) that includes the result of the operation. If the result of the operation is successful, then the response may include the ML model (local ML model) file address and/or validity period. If the result is not successful, the response may include an error code.
  • An example Nnwdaf_MLModel_DistributedTraining_Subscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • the NWDAF containing a MTLF with Federated Learning participation capability in the role of service producer sends a response to the NWDAF containing a MTLF (with Federated Learning aggregation capability) with the result as success.
  • the NWDAF containing MTLF with Federated Learning participation capability in the role of service consumer subscribes for ML model (global ML model as a result of aggregation from the result of other NWDAF containing MTLF with Federated Learning participation capability) with NWDAF containing MTLF (with Federated Learning aggregation capability).
  • the input includes Analytics ID, Notification Target Address (+ Notification Correlation ID).
  • the output includes the Subscription Correlation ID when the subscription is accepted.
  • An example Nnwdaf_MLModel_DistributedTraining_Notify service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • the NWDAF containing a MTLF with Federated Learning aggregation capability may send a notification to the NWDAF containing a MTLF (with Federated Learning participation capability) that includes one or more of a Notification Correlation ID, ML model (global ML model) file address, and/or validity period.
  • An example Nnwdaf_MLModel_DistributedTraining_Unsubscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
  • a NWDAF containing a MTLF with Federated Learning participation capability may unsubscribe from a NWDAF containing a MTLF (with Federated Learning aggregation capability).
  • the input may include the subscription Correlation ID.
  • the output may include the result of the operation.
  • Part 4 Support of Distributed Learning (Federated Learning) to enable cooperation of multiple NWDAF containing MTLF instances to train an ML model in 3GPP network
  • the NWDAF containing a MTLF with Federated Learning aggregation capability decides that the federated learning task for a given ML model (required to generate an Analytics ID) needs to be initiated.
  • the decision to initiate federated learning for a given ML model can be based on factors such as ML model accuracy.
  • the NWDAF containing a MTLF with Federated Learning aggregation capability discovers the NWDAF instances containing a MTLF (with Federated Learning participation capability) via the NRF (as described in Part 2, above).
  • the NWDAF containing a MTLF with Federated Learning aggregation capability decides on the list of NWDAFs containing a MTLF (with Federated Learning participation capability) to participate in a given iteration of federated learning. How the NWDAF MTLF with Federated Learning aggregation capability selects this list is within the scope of the NWDAF application logic.
  • the NWDAF containing a MTLF with Federated Learning aggregation capability sends Nnwdaf_MLModel_DistributedTraining_Request with the following parameters: Analytics ID(s), ML model (global) file address, ML model reporting time limit.
  • the NWDAF containing a MTLF with Federated Learning participation capability sends Nnwdaf_MLModel_DistributedTraining_Response with the result of the operation. If the result of the operation is successful, then the response includes the ML model (local ML model) file address and validity period. If the result is not successful, the response includes an error code and elements 6 and 7 are skipped.
  • the NWDAF MTLF with Federated Learning Aggregation capability completes the global update for the ML model (the global ML model resulting from aggregation of the results of the other NWDAFs containing a MTLF with Federated Learning participation capability).
  • the NWDAF containing a MTLF with Federated Learning aggregation capability sends a notification to the NWDAF containing a MTLF (with Federated Learning participation capability), which includes the Notification Correlation ID, ML model (global ML model) file address, and validity period.
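The cooperation between the aggregator and the participants can be illustrated with a toy federated-averaging round. The local training rule below is a stand-in (a simple nudge toward the local data mean), chosen only to show that raw data stays local on each endpoint while only model weights are exchanged and averaged into the global update.

```python
def local_training(global_weights, local_data):
    # Stand-in for local training at a participation-capable MTLF:
    # move each weight halfway toward the local data mean.
    local_mean = sum(local_data) / len(local_data)
    return [w + 0.5 * (local_mean - w) for w in global_weights]

def federated_round(global_weights, participants):
    # Aggregator sends the global model to each participant; each returns a
    # locally trained model (only weights, never raw data).
    local_models = [local_training(global_weights, data) for data in participants]
    # Global update: element-wise average of the reported local models.
    return [sum(ws) / len(ws) for ws in zip(*local_models)]

global_model = [0.0, 0.0]
participants = [[1.0, 3.0], [5.0, 7.0]]  # raw data remains local on each endpoint
global_model = federated_round(global_model, participants)
```

In a real deployment the exchange would happen via the Nnwdaf_MLModel_DistributedTraining request/response and notification operations, with ML model files referenced by address rather than passed in memory.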
  • ADRF should store types of data other than historical data and analytics (e.g. ML models, analytics context) for network analytics.
  • Enabling ML model sharing between NWDAFs (containing AnLF/MTLF) from different vendors.
  • Allowing an ML model file attribute to be included in the NF profile of the NWDAF containing a MTLF, indicating a list of the supported ML model file serialization formats when registering with the NRF.
  • during discovery of a NWDAF containing a MTLF, the NWDAF containing an AnLF includes the ML model file attribute supported by the NWDAF containing the AnLF, which allows the NRF to return only NWDAF MTLF instances that support at least one matching file serialization format for an ML model.
  • PART 1, PART 2, and PART 3 may be applicable between NWDAFs belonging to the same vendor or different vendors.
  • Example Solution 1: ML model filter information includes the ML model file serialization format. NWDAF containing MTLF registration with NRF:
  • a NWDAF containing a MTLF sends Nnrf_NFManagement_NFRegister to NRF to inform the NRF of its NF profile.
  • it includes supported ML model file serialization formats for the trained ML model(s) in the ML model Filter information.
  • Examples of ML model file serialization formats include the ONNX, H5, and Protobuf formats.
  • the ML model file serialization format(s) included in the ML model Filter information indicate the supported ML model file serialization format(s) for the trained ML model(s) available at the NWDAF containing the MTLF for consumption by the service consumer.
  • the consumer of the services provided by NWDAF containing MTLF may be an NWDAF containing AnLF or NWDAF containing MTLF.
  • the consumer NF may belong to the same vendor as NWDAF containing MTLF or to a different vendor.
  • the NWDAF containing AnLF invokes a Nnrf_NfDiscovery_Request to an appropriately configured NRF.
  • it includes the ML model file serialization format(s) supported for the trained ML model(s) in the ML model filter information.
  • the NRF determines a set of NWDAF containing MTLF instance(s) matching at least one of the ML model file serialization formats supported in Nnrf_NFDiscovery_Request and internal policies of the NRF and sends the NF profile(s) (including ML model file serialization format(s)) of the determined NWDAF containing MTLF instances in the Discovery Response.
  • Subscription Correlation ID in the case of modification of the ML model subscription
  • ML Model Filter Information to indicate the conditions for which ML model for the analytics is requested
  • Target of ML Model Reporting to indicate the object(s) for which the ML model is requested (e.g., specific UEs, a group of UE(s), or any UE (e.g., all UEs))
  • ML Model Reporting Information including e.g. ML Model Target Period
  • NWDAF containing an AnLF subscribes to a NWDAF containing a MTLF using the Nnwdaf_MLModelProvision_Subscribe service operation, including the requested ML model file serialization format as input.
  • NWDAF containing MTLF notifies the ML model information (address (e.g. URL or FQDN) of Model file) to the NWDAF containing AnLF only if the ML model format as requested in the input of Nnwdaf_MLModelProvision_Subscribe is a match.
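The matching rule of Example Solution 1 (return only MTLF instances supporting at least one requested serialization format) amounts to a set intersection, sketched below. The instance data and function name are hypothetical.

```python
# Hypothetical NRF-side view of registered MTLF instances and the
# serialization formats each declared in its ML model filter information.
mtlf_instances = {
    "mtlf-1": {"formats": {"ONNX", "H5"}},
    "mtlf-2": {"formats": {"Protobuf"}},
}

def discover(requested_formats):
    """Return instances supporting at least one matching ML model file
    serialization format, as the NRF is described to do in the
    Discovery Response."""
    return sorted(name for name, prof in mtlf_instances.items()
                  if prof["formats"] & set(requested_formats))

result = discover(["ONNX"])
```

An AnLF requesting only ONNX would be given `mtlf-1`; requesting both Protobuf and H5 would match both instances, since one shared format suffices.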
  • Example Solution 2: The NF profile registration of the NWDAF containing a MTLF includes a new attribute for the ML model file.
  • NWDAF containing MTLF registration with NRF
  • a NWDAF containing a MTLF may send Nnrf_NFManagement_NFRegister to NRF to inform the NRF of its NF profile.
  • it includes ML model file specific information for the trained ML model(s) as a new attribute as shown below.
  • Examples of ML model file serialization formats include the ONNX, H5, and Protobuf formats.
  • the ML model file specific information attribute includes the supported ML model file serialization formats for the trained ML model(s) available at the NWDAF containing the MTLF for consumption by the service consumer.
  • the consumer of the services provided by NWDAF containing MTLF may be an NWDAF containing AnLF or NWDAF containing MTLF.
  • the consumer NF may belong to the same vendor as NWDAF containing MTLF or to a different vendor.
  • NWDAF containing MTLF discovery via the NRF
  • the NWDAF containing AnLF invokes a Nnrf_NFDiscovery_Request to an appropriately configured NRF.
  • it includes the ML model file serialization formats supported for the trained ML model(s).
  • the NRF determines a set of NWDAF containing MTLF instance(s) matching at least one of the ML model file serialization formats supported in the Nnrf_NFDiscovery_Request and the internal policies of the NRF, and sends the NF profile(s) of the determined NWDAF containing MTLF instances.
  • Nnwdaf_MLModelProvision services with the ML model file serialization format as defined in solution 1 of this part, described above, may be applicable for solution 2 as well.
  • ML model file and associated ML model file attributes stored in the ADRF (Analytics and Data Repository Function) and supported new service operations by ADRF.
  • the Analytics and Data Repository Function defined in 3GPP Rel-17 may enable a consumer to store and retrieve data and analytics.
  • Embodiments herein may extend the functionality of an ADRF to enable a NWDAF containing MTLF to store and retrieve trained ML model(s).
  • An example service defined for ADRF to support storage and retrieval of trained ML model(s) may include one or more of the following:
  • Nadrf_MLModelManagement service: This service enables the consumer NWDAF containing a MTLF to store, retrieve, and remove ML model(s) from an ADRF.
  • Nadrf_MLModelManagement service operations:
    a. Nadrf_MLModelManagement_StorageRequest service operation
    b. Nadrf_MLModelManagement_RetrievalRequest
    c. Nadrf_MLModelManagement_RetrievalTrainingUpdateSubscribe
    d. Nadrf_MLModelManagement_RetrievalTrainingUpdateUnsubscribe
    e. Nadrf_MLModelManagement_RetrievalTrainingUpdateNotify
    f. Nadrf_MLModelManagement_Delete
    g. Nadrf_MLModelManagement_FormatConversionRequest
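The storage, retrieval, and delete operations of the Nadrf_MLModelManagement service can be sketched as follows. The in-memory store, method names, and response field names are assumptions for illustration, not a normative definition of the service.

```python
class ADRF:
    """Minimal sketch of ML-model storage in an ADRF."""

    def __init__(self):
        # Analytics ID -> ML model file specific information
        self._models = {}

    def storage_request(self, analytics_id, file_address, serialization_format):
        """Nadrf_MLModelManagement_StorageRequest: store model file info."""
        self._models[analytics_id] = {
            "trained_ml_model_file_address": file_address,
            "ml_model_file_serialization_format": serialization_format,
        }
        return {"result": "SUCCESS"}

    def retrieval_request(self, analytics_id):
        """Nadrf_MLModelManagement_RetrievalRequest: fetch model file info."""
        info = self._models.get(analytics_id)
        if info is None:
            return {"result": "NOT_FOUND"}
        return {"result": "SUCCESS", "ml_model_file_information": info}

    def delete(self, analytics_id):
        """Nadrf_MLModelManagement_Delete: remove the stored model."""
        removed = self._models.pop(analytics_id, None)
        return {"result": "SUCCESS" if removed else "NOT_FOUND"}

adrf = ADRF()
adrf.storage_request("analytics-7", "http://mtlf.example/models/7", "ONNX")
fetched = adrf.retrieval_request("analytics-7")
```

The retrieval response carries the trained ML model file address and serialization format per Analytics ID, matching the ML Model File Information parameters described in the flow below.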
  • an example of the above-described service may include or relate to one or more of the following elements, which are described with respect to Figures 6a and 6b (collectively, Figure 6).
  • the NWDAF containing the AnLF sends a Nadrf_MLModelManagement_RetrievalRequest which includes Analytics ID(s), ML Model Filter Info, ML model file specific information, and Target NF (NWDAF MTLF) to subscribe for Notifications.
  • the ADRF, based on internal application logic, determines if the ML model file for the Analytics ID(s) requested is already stored. If the ML model file for the Analytics ID(s) requested is not stored in the ADRF, then elements 3a, 4a, 5a, 6a are performed. If the ML model file for the Analytics ID(s) requested is stored in the ADRF, then elements 3a, 4a, 5a, 6a are skipped.
  • the ADRF sends Nnwdaf_MLModelProvision_Subscribe with the input parameters defined in TS 23.502 and the additional input parameter ML model file specific Information (ML model file serialization format).
  • the NWDAF containing MTLF sends a Nnwdaf_MLModelProvision_Notify with the following parameters: Analytics ID, Trained ML model file address, Notification Correlation ID.
  • the ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with the input parameters defined in TS 23.502 and the additional input parameters Analytics ID(s), ML model file specific Information (ML model file serialization format).
  • the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with the following parameters: Analytics ID, Trained ML model file address, Notification Correlation ID.
  • the ADRF sends a response back to the NWDAF containing the AnLF using Nadrf_MLModelManagement_RetrievalRequest Response with the following parameters: ML Model File Information (Trained ML model file address, ML model file serialization format, Trained ML Model ID per Analytics ID).
  • the NWDAF containing the AnLF subscribes to the ADRF using the Nadrf_MLModelManagement_RetrievalTrainingUpdateSubscribe service operation.
  • the ADRF sends a notification to the NWDAF containing the AnLF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Notify service operation containing the following parameters: ML Model File Information (Trained ML model file address, ML model file serialization format, Trained ML Model ID per Analytics ID).
  • NWDAF containing AnLF determines that the ML model training update is no longer required.
  • the ADRF determines if any of the NWDAF AnLF consumer(s) have a subscription for ML model training update per Analytics ID. If no consumer has a subscription for ML model training update per Analytics ID, the ADRF removes the ML model file and the ML model file specific information and proceeds to element 9. If the ADRF determines that NWDAF AnLF consumer(s) still have a subscription for ML model training update per Analytics ID, then element 9 is skipped.
  • the ADRF sends Nnwdaf_MLModelTrainingUpdate_Unsubscribe to the NWDAF containing the MTLF with the Subscription Correlation ID.
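The cleanup rule above amounts to reference counting on per-Analytics-ID subscriptions: the stored ML model file is removed, and the upstream training-update subscription released, only when the last AnLF consumer unsubscribes. A minimal sketch with hypothetical identifiers:

```python
# Hypothetical ADRF-side bookkeeping: which AnLF consumers still subscribe
# to training updates for each Analytics ID, and which model files are stored.
subscriptions = {"analytics-7": {"anlf-1", "anlf-2"}}
stored_models = {"analytics-7": "http://adrf.example/models/7"}

def consumer_unsubscribe(analytics_id, consumer_id):
    """An AnLF consumer no longer requires training updates."""
    subs = subscriptions.get(analytics_id, set())
    subs.discard(consumer_id)
    if not subs:
        # No remaining consumer: remove the model file and the subscription
        # state; the ADRF would also unsubscribe from the MTLF here.
        subscriptions.pop(analytics_id, None)
        stored_models.pop(analytics_id, None)
        return "MODEL_REMOVED"
    return "SUBSCRIPTIONS_REMAIN"
```

While at least one consumer remains subscribed, the model file and the ADRF's own subscription toward the MTLF are kept alive; only the final unsubscribe triggers removal.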
  • Another example process is depicted in Figures 7a and 7b (collectively, Figure 7).
  • the example process of Figure 7 relates to trained ML model storage in an ADRF, and is described below:
  • if trained ML model storage is triggered by the NWDAF containing the AnLF, then elements 1 and 2 are performed as follows.
  • the NWDAF containing AnLF sends Nnwdaf_MLModelInfo_Request with the following input parameters: Analytics ID(s), ML model file specific information (ML model file serialization format), Notification endpoint address (ADRF), to the NWDAF containing MTLF.
  • the NWDAF containing MTLF sends Nnwdaf_MLModelInfo_Response with the parameters Analytics ID(s), Trained ML model file address.
  • if trained ML model storage is triggered by the ADRF, then elements 1a and 2a are performed as follows:
  • the ADRF sends Nnwdaf_MLModelProvision_Subscribe with the following input parameters: ML model file specific Information (ML model file serialization format).
  • the NWDAF containing MTLF sends Nnwdaf_MLModelProvision_Notify with the following input parameters: Analytics ID, Trained ML model file address, Notification Correlation ID.
  • the NWDAF containing MTLF sends Nadrf_MLModelManagement_StorageRequest with input parameters Analytics ID(s), Trained ML model file address, ML model file specific information (ML model file serialization format).
  • the ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with input parameters Analytics ID(s), ML model file specific Information (ML model file serialization format).
  • PART 3 ML model file and associated ML model file attributes stored in the newly defined ML Model Storage Function (MLMS) and new service operations supported by the MLMS.
  • ADRF as described with respect to Figures 6 and 7 may be replaced by a ML Model Storage Function (MLMS).
  • the Nmlms_MLModelManagement service may be supported by the MLMS.
  • the MLMS may enable the consumer NWDAF containing MTLF to store, retrieve, and remove ML model(s) from the MLMS.
  • Nmlms_MLModelManagement service operations may include one or more of the following examples:
    a. Nmlms_MLModelManagement_StorageRequest service operation
    b. Nmlms_MLModelManagement_RetrievalRequest
    c. Nmlms_MLModelManagement_RetrievalTrainingUpdateSubscribe
    d. Nmlms_MLModelManagement_RetrievalTrainingUpdateUnsubscribe
    e. Nmlms_MLModelManagement_RetrievalTrainingUpdateNotify
    f. Nmlms_MLModelManagement_Delete
  • the NF consumers may or shall utilize the NRF to discover MLMS instance(s) unless MLMS information is available by other means e.g. locally configured on NF consumers.
  • the MLMS selection function in NF consumers selects an MLMS instance based on the available MLMS instances.
  • Single-Network Slice Selection Assistance Information (S-NSSAI) may be considered by the NF consumer for MLMS selection.
  • Example Solution 3: Support conversion from one ML model file serialization format to another ML model file serialization format
  • a new service defined for the ADRF to support conversion from one ML model file serialization format to another ML model file serialization format may be, include, or relate to Nadrf_MLModelManagement_FormatConversionRequest/Response.
  • As depicted in Figure 8, in case a NWDAF containing an AnLF prefers a ML model file serialization format not supported at the NWDAF containing the MTLF, it may request the ADRF to perform ML model file serialization format conversion as follows (note, the process of Figure 8 is intended as one example process, and other embodiments may differ):
  • the NWDAF containing AnLF sends Nadrf_MLModelManagement_FormatConversionRequest with the following input parameters: ML model file specific information (Trained ML model file address, ML model file serialization format), target ML model file serialization format.
  • the ADRF sends a response back to the NWDAF containing the AnLF using Nadrf_MLModelManagement_FormatConversionRequest Response with the following parameters: ML Model File Information (Trained ML model file address, ML model file serialization format).
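The format-conversion operation can be sketched as a dispatcher keyed on (source, target) format pairs. Real conversion (e.g., H5 to ONNX) requires ML-framework tooling; the converters below are stubs that only rewrite the file-name suffix, to show the request/response shape rather than perform actual conversion.

```python
# Hypothetical converter table: (source format, target format) -> converter.
# The stubs only rewrite the address suffix; a real ADRF/MLMS would invoke
# actual model-conversion tooling here.
CONVERTERS = {
    ("H5", "ONNX"): lambda addr: addr.rsplit(".", 1)[0] + ".onnx",
    ("ONNX", "Protobuf"): lambda addr: addr.rsplit(".", 1)[0] + ".pb",
}

def format_conversion_request(file_address, source_format, target_format):
    """Sketch of Nadrf_MLModelManagement_FormatConversionRequest handling."""
    convert = CONVERTERS.get((source_format, target_format))
    if convert is None:
        return {"result": "CONVERSION_NOT_SUPPORTED"}
    return {"result": "SUCCESS",
            "trained_ml_model_file_address": convert(file_address),
            "ml_model_file_serialization_format": target_format}

resp = format_conversion_request("http://adrf.example/models/7.h5", "H5", "ONNX")
```

The response mirrors the described parameters: the converted trained ML model file address and the resulting serialization format, or an error when the requested conversion pair is unsupported.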
  • the ADRF in Figure 8 may be replaced by a ML Model Storage Function (MLMS)
  • FIGS. 9-13 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
  • Figure 9 illustrates a network 900 in accordance with various embodiments.
  • the network 900 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems.
  • the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
  • the network 900 may include a UE 902, which may include any mobile or non-mobile computing device designed to communicate with a RAN 904 via an over-the-air connection.
  • the UE 902 may be communicatively coupled with the RAN 904 by a Uu interface.
  • the UE 902 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
  • the network 900 may include a plurality of UEs coupled directly with one another via a sidelink interface.
  • the UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
  • the UE 902 may additionally communicate with an AP 906 via an over-the-air connection.
  • the AP 906 may manage a WLAN connection, which may serve to offload some/all network traffic from the RAN 904.
  • the connection between the UE 902 and the AP 906 may be consistent with any IEEE 802.11 protocol, wherein the AP 906 could be a wireless fidelity (Wi-Fi®) router.
  • the UE 902, RAN 904, and AP 906 may utilize cellular-WLAN aggregation (for example, LWA/LWIP). Cellular-WLAN aggregation may involve the UE 902 being configured by the RAN 904 to utilize both cellular radio resources and WLAN resources.
  • the RAN 904 may include one or more access nodes, for example, AN 908.
  • AN 908 may terminate air-interface protocols for the UE 902 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and L1 protocols. In this manner, the AN 908 may enable data/voice connectivity between CN 920 and the UE 902.
  • the AN 908 may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network, which may be referred to as a CRAN or virtual baseband unit pool.
  • the AN 908 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, TRP, etc.
  • the AN 908 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.
  • the ANs of the RAN 904 may be coupled with one another via an X2 interface (if the RAN 904 is an LTE RAN) or an Xn interface (if the RAN 904 is a 5G RAN).
  • the X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
  • the ANs of the RAN 904 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 902 with an air interface for network access.
  • the UE 902 may be simultaneously connected with a plurality of cells provided by the same or different ANs of the RAN 904.
  • the UE 902 and RAN 904 may use carrier aggregation to allow the UE 902 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell.
  • a first AN may be a master node that provides an MCG and a second AN may be a secondary node that provides an SCG.
  • the first/second ANs may be any combination of eNB, gNB, ng-eNB, etc.
  • the RAN 904 may provide the air interface over a licensed spectrum or an unlicensed spectrum.
  • the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells.
  • Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
  • the UE 902 or AN 908 may be or act as a RSU, which may refer to any transportation infrastructure entity used for V2X communications.
  • An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE.
  • An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like.
  • an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs.
  • the RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic.
  • the RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services.
  • the components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
  • the RAN 904 may be an LTE RAN 910 with eNBs, for example, eNB 912.
  • the LTE RAN 910 may provide an LTE air interface with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc.
  • the LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE.
  • the LTE air interface may operate on sub-6 GHz bands.
  • the RAN 904 may be an NG-RAN 914 with gNBs, for example, gNB 916, or ng-eNBs, for example, ng-eNB 918.
  • the gNB 916 may connect with 5G-enabled UEs using a 5G NR interface.
  • the gNB 916 may connect with a 5G core through an NG interface, which may include an N2 interface or an N3 interface.
  • the ng-eNB 918 may also connect with the 5G core through an NG interface, but may connect with a UE via an LTE air interface.
  • the gNB 916 and the ng-eNB 918 may connect with each other over an Xn interface.
  • the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 914 and a UPF 948 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 914 and an AMF 944 (e.g., N2 interface).
  • the NG-RAN 914 may provide a 5G-NR air interface with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data.
  • the 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface.
  • the 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking.
  • the 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz.
  • the 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
  • the 5G-NR air interface may utilize BWPs for various purposes.
  • BWP can be used for dynamic adaptation of the SCS.
  • the UE 902 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 902, the SCS of the transmission is changed as well.
  • Another use case example of BWP is related to power saving.
  • multiple BWPs can be configured for the UE 902 with different amounts of frequency resources (for example, PRBs) to support data transmission under different traffic loading scenarios.
  • a BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 902 and in some cases at the gNB 916.
  • a BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
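The power-saving trade-off in the last three bullets can be illustrated with a toy selection heuristic: pick the narrow BWP under light traffic, and the wide BWP under heavy traffic. The threshold value and dictionary fields are assumptions for this sketch; real BWP switching is signaled by the network, not computed locally like this.

```python
def select_bwp(bwp_configs, buffered_bytes, threshold=10_000):
    """Pick a narrow BWP for light traffic (saving power at the UE, and in
    some cases the gNB) and a wide BWP for heavy traffic.

    bwp_configs: list of dicts with a "num_prbs" field (illustrative schema).
    """
    # Order candidate BWPs from narrowest to widest by PRB count.
    by_prbs = sorted(bwp_configs, key=lambda b: b["num_prbs"])
    return by_prbs[0] if buffered_bytes <= threshold else by_prbs[-1]
```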
  • the RAN 904 is communicatively coupled to CN 920 that includes network elements to provide various functions to support data and telecommunications services to customers/subscribers (for example, users of UE 902).
  • the components of the CN 920 may be implemented in one physical node or separate physical nodes.
  • NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 920 onto physical compute/storage resources in servers, switches, etc.
  • a logical instantiation of the CN 920 may be referred to as a network slice, and a logical instantiation of a portion of the CN 920 may be referred to as a network sub-slice.
  • the CN 920 may be an LTE CN 922, which may also be referred to as an EPC.
  • the LTE CN 922 may include MME 924, SGW 926, SGSN 928, HSS 930, PGW 932, and PCRF 934 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the LTE CN 922 may be briefly introduced as follows.
  • the MME 924 may implement mobility management functions to track a current location of the UE 902 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
  • the SGW 926 may terminate an S1 interface toward the RAN and route data packets between the RAN and the LTE CN 922.
  • the SGW 926 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
  • the SGSN 928 may track a location of the UE 902 and perform security functions and access control. In addition, the SGSN 928 may perform inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 924; MME selection for handovers; etc.
  • the S3 reference point between the MME 924 and the SGSN 928 may enable user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
  • the HSS 930 may include a database for network users, including subscription-related information to support the network entities’ handling of communication sessions.
  • the HSS 930 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.
  • An S6a reference point between the HSS 930 and the MME 924 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the LTE CN 922.
  • the PGW 932 may terminate an SGi interface toward a data network (DN) 936 that may include an application/content server 938.
  • the PGW 932 may route data packets between the LTE CN 922 and the data network 936.
  • the PGW 932 may be coupled with the SGW 926 by an S5 reference point to facilitate user plane tunneling and tunnel management.
  • the PGW 932 may further include a node for policy enforcement and charging data collection (for example, PCEF).
  • the SGi reference point between the PGW 932 and the data network 936 may be an operator external public or private PDN, or an intra-operator packet data network, for example, for provision of IMS services.
  • the PGW 932 may be coupled with a PCRF 934 via a Gx reference point.
  • the PCRF 934 is the policy and charging control element of the LTE CN 922.
  • the PCRF 934 may be communicatively coupled to the app/content server 938 to determine appropriate QoS and charging parameters for service flows.
  • the PCRF 934 may provision associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
  • the CN 920 may be a 5GC 940.
  • the 5GC 940 may include an AUSF 942, AMF 944, SMF 946, UPF 948, NSSF 950, NEF 952, NRF 954, PCF 956, UDM 958, and AF 960 coupled with one another over interfaces (or “reference points”) as shown.
  • Functions of the elements of the 5GC 940 may be briefly introduced as follows.
  • the AUSF 942 may store data for authentication of UE 902 and handle authentication- related functionality.
  • the AUSF 942 may facilitate a common authentication framework for various access types.
  • the AUSF 942 may exhibit an Nausf service-based interface.
  • the AMF 944 may allow other functions of the 5GC 940 to communicate with the UE 902 and the RAN 904 and to subscribe to notifications about mobility events with respect to the UE 902.
  • the AMF 944 may be responsible for registration management (for example, for registering UE 902), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization.
  • the AMF 944 may provide transport for SM messages between the UE 902 and the SMF 946, and act as a transparent proxy for routing SM messages.
  • AMF 944 may also provide transport for SMS messages between UE 902 and an SMSF.
  • AMF 944 may interact with the AUSF 942 and the UE 902 to perform various security anchor and context management functions.
  • AMF 944 may be a termination point of a RAN CP interface, which may include or be an N2 reference point between the RAN 904 and the AMF 944; and the AMF 944 may be a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection.
  • AMF 944 may also support NAS signaling with the UE 902 over an N3IWF interface.
  • the SMF 946 may be responsible for SM (for example, session establishment, tunnel management between UPF 948 and AN 908); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 948 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 944 over N2 to AN 908; and determining SSC mode of a session.
  • SM may refer to management of a PDU session, and a PDU session or “session” may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 902 and the data network 936.
  • the UPF 948 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 936, and a branching point to support multi-homed PDU session.
  • the UPF 948 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF- to-QoS flow mapping), transport level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering.
  • UPF 948 may include an uplink classifier to support routing traffic flows to a data network.
  • the NSSF 950 may select a set of network slice instances serving the UE 902.
  • the NSSF 950 may also determine allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed.
  • the NSSF 950 may also determine the AMF set to be used to serve the UE 902, or a list of candidate AMFs based on a suitable configuration and possibly by querying the NRF 954.
  • the selection of a set of network slice instances for the UE 902 may be triggered by the AMF 944 with which the UE 902 is registered by interacting with the NSSF 950, which may lead to a change of AMF.
  • the NSSF 950 may interact with the AMF 944 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown). Additionally, the NSSF 950 may exhibit an Nnssf service-based interface.
  • the NEF 952 may securely expose services and capabilities provided by 3GPP network functions for third party, internal exposure/re-exposure, AFs (e.g., AF 960), edge computing or fog computing systems, etc.
  • the NEF 952 may authenticate, authorize, or throttle the AFs.
  • NEF 952 may also translate information exchanged with the AF 960 and information exchanged with internal network functions. For example, the NEF 952 may translate between an AF-Service-Identifier and internal 5GC information.
  • NEF 952 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 952 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 952 to other NFs and AFs, or used for other purposes such as analytics. Additionally, the NEF 952 may exhibit an Nnef service-based interface.
  • the NRF 954 may support service discovery functions, receive NF discovery requests from NF instances, and provide the information of the discovered NF instances to the NF instances. NRF 954 also maintains information of available NF instances and their supported services. As used herein, the terms “instantiate,” “instantiation,” and the like may refer to the creation of an instance, and an “instance” may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. Additionally, the NRF 954 may exhibit the Nnrf service-based interface.
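The NRF's registration/discovery role described above can be sketched as a simple in-memory registry mapping NF instances to the services they support. This is a toy illustration only; the class, method names, and example service strings are assumptions, not the actual Nnrf API.

```python
class NrfRegistry:
    """Toy sketch of NRF-style service discovery: NF instances register the
    services they support, and NF consumers query by service name."""

    def __init__(self):
        self._instances = {}  # instance_id -> set of supported service names

    def register(self, instance_id, services):
        # An NF instance registers (or re-registers) its supported services.
        self._instances[instance_id] = set(services)

    def discover(self, service_name):
        # Return the IDs of all registered instances supporting the service.
        return [iid for iid, svcs in self._instances.items() if service_name in svcs]
```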
  • the PCF 956 may provide policy rules to control plane functions to enforce them, and may also support a unified policy framework to govern network behavior.
  • the PCF 956 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 958.
  • the PCF 956 may exhibit an Npcf service-based interface.
  • the UDM 958 may handle subscription-related information to support the network entities’ handling of communication sessions, and may store subscription data of UE 902. For example, subscription data may be communicated via an N8 reference point between the UDM 958 and the AMF 944.
  • the UDM 958 may include two parts, an application front end and a UDR.
  • the UDR may store subscription data and policy data for the UDM 958 and the PCF 956, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 902) for the NEF 952.
  • the Nudr service-based interface may be exhibited by the UDR to allow the UDM 958, PCF 956, and NEF 952 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR.
  • the UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions.
  • the UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management.
  • the UDM 958 may exhibit the Nudm service-based interface.
  • the AF 960 may provide application influence on traffic routing, provide access to NEF, and interact with the policy framework for policy control.
  • the 5GC 940 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 902 is attached to the network. This may reduce latency and load on the network.
  • the 5GC 940 may select a UPF 948 close to the UE 902 and execute traffic steering from the UPF 948 to data network 936 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 960. In this way, the AF 960 may influence UPF (re)selection and traffic routing.
  • the network operator may permit AF 960 to interact directly with relevant NFs. Additionally, the AF 960 may exhibit an Naf service-based interface.
  • the data network 936 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application/content server 938.
  • FIG 10 schematically illustrates a wireless network 1000 in accordance with various embodiments.
  • the wireless network 1000 may include a UE 1002 in wireless communication with an AN 1004.
  • the UE 1002 and AN 1004 may be similar to, and substantially interchangeable with, like-named components described elsewhere herein.
  • the UE 1002 may be communicatively coupled with the AN 1004 via connection 1006.
  • the connection 1006 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6 GHz frequencies.
  • the UE 1002 may include a host platform 1008 coupled with a modem platform 1010.
  • the host platform 1008 may include application processing circuitry 1012, which may be coupled with protocol processing circuitry 1014 of the modem platform 1010.
  • the application processing circuitry 1012 may run various applications for the UE 1002 that source/sink application data.
  • the application processing circuitry 1012 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
  • the protocol processing circuitry 1014 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 1006.
  • the layer operations implemented by the protocol processing circuitry 1014 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
  • the modem platform 1010 may further include digital baseband circuitry 1016 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1014 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
  • the modem platform 1010 may further include transmit circuitry 1018, receive circuitry 1020, RF circuitry 1022, and RF front end (RFFE) 1024, which may include or connect to one or more antenna panels 1026.
  • the transmit circuitry 1018 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.
  • the receive circuitry 1020 may include an analog-to-digital converter, mixer, IF components, etc.
  • the RF circuitry 1022 may include a low-noise amplifier, a power amplifier, power tracking components, etc.
  • RFFE 1024 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc.
  • transmit/receive components may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc.
  • the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
  • the protocol processing circuitry 1014 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
  • a UE reception may be established by and via the antenna panels 1026, RFFE 1024, RF circuitry 1022, receive circuitry 1020, digital baseband circuitry 1016, and protocol processing circuitry 1014.
  • the antenna panels 1026 may receive a transmission from the AN 1004 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1026.
  • a UE transmission may be established by and via the protocol processing circuitry 1014, digital baseband circuitry 1016, transmit circuitry 1018, RF circuitry 1022, RFFE 1024, and antenna panels 1026.
  • the transmit components of the UE 1002 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1026.
  • the AN 1004 may include a host platform 1028 coupled with a modem platform 1030.
  • the host platform 1028 may include application processing circuitry 1032 coupled with protocol processing circuitry 1034 of the modem platform 1030.
  • the modem platform may further include digital baseband circuitry 1036, transmit circuitry 1038, receive circuitry 1040, RF circuitry 1042, RFFE circuitry 1044, and antenna panels 1046.
  • the components of the AN 1004 may be similar to and substantially interchangeable with like- named components of the UE 1002.
  • the components of the AN 1004 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
  • Figure 11 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • Figure 11 shows a diagrammatic representation of hardware resources 1100 including one or more processors (or processor cores) 1110, one or more memory/storage devices 1120, and one or more communication resources 1130, each of which may be communicatively coupled via a bus 1140 or other interface circuitry.
  • a hypervisor 1102 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1100.
  • the processors 1110 may include, for example, a processor 1112 and a processor 1114.
  • the processors 1110 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radiofrequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
  • the memory/storage devices 1120 may include main memory, disk storage, or any suitable combination thereof.
  • the memory/storage devices 1120 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc.
  • the communication resources 1130 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1104 or one or more databases 1106 or other network elements via a network 1108.
  • the communication resources 1130 may include wired communication components (e.g., for coupling via USB, Ethernet, etc.), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.
  • Instructions 1150 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 1110 to perform any one or more of the methodologies discussed herein.
  • the instructions 1150 may reside, completely or partially, within at least one of the processors 1110 (e.g., within the processor’s cache memory), the memory/storage devices 1120, or any suitable combination thereof.
  • any portion of the instructions 1150 may be transferred to the hardware resources 1100 from any combination of the peripheral devices 1104 or the databases 1106. Accordingly, the memory of processors 1110, the memory/storage devices 1120, the peripheral devices 1104, and the databases 1106 are examples of computer-readable and machine-readable media.
  • Figure 12 illustrates a network 1200 in accordance with various embodiments.
  • the network 1200 may operate in a manner consistent with 3GPP technical specifications or technical reports for 6G systems.
  • the network 1200 may operate concurrently with network 900.
  • the network 1200 may share one or more frequency or bandwidth resources with network 900.
  • a UE, e.g., UE 1202, may be configured to operate in both network 1200 and network 900.
  • Such configuration may be based on a UE including circuitry configured for communication with frequency and bandwidth resources of both networks 900 and 1200.
  • several elements of network 1200 may share one or more characteristics with elements of network 900. For the sake of brevity and clarity, such elements may not be repeated in the description of network 1200.
  • the network 1200 may include a UE 1202, which may include any mobile or non-mobile computing device designed to communicate with a RAN 1208 via an over-the-air connection.
  • the UE 1202 may be similar to, for example, UE 902.
  • the UE 1202 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
  • the network 1200 may include a plurality of UEs coupled directly with one another via a sidelink interface.
  • the UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
  • the UE 1202 may be communicatively coupled with an AP such as AP 906 as described with respect to Figure 9.
  • the RAN 1208 may include one or more ANs such as AN 908 as described with respect to Figure 9.
  • the RAN 1208 and/or the AN of the RAN 1208 may be referred to as a base station (BS), a RAN node, or using some other term or name.
  • the UE 1202 and the RAN 1208 may be configured to communicate via an air interface that may be referred to as a sixth generation (6G) air interface.
  • the 6G air interface may include one or more features such as communication in a terahertz (THz) or sub-THz bandwidth, or joint communication and sensing.
  • joint communication and sensing may refer to a system that allows for wireless communication as well as radar-based sensing via various types of multiplexing.
  • THz or sub-THz bandwidths may refer to communication in the 80 GHz and above frequency ranges. Such frequency ranges may additionally or alternatively be referred to as “millimeter wave” or “mmWave” frequency ranges.
  • the RAN 1208 may allow for communication between the UE 1202 and a 6G core network (CN) 1210. Specifically, the RAN 1208 may facilitate the transmission and reception of data between the UE 1202 and the 6G CN 1210.
  • the 6G CN 1210 may include various functions such as NSSF 950, NEF 952, NRF 954, PCF 956, UDM 958, AF 960, SMF 946, and AUSF 942.
  • the 6G CN 1210 may additionally include UPF 948 and DN 936 as shown in Figure 12.
  • the RAN 1208 may include various additional functions that are in addition to, or alternative to, functions of a legacy cellular network such as a 4G or 5G network.
  • Two such functions may include a Compute Control Function (Comp CF) 1224 and a Compute Service Function (Comp SF) 1236.
  • the Comp CF 1224 and the Comp SF 1236 may be parts or functions of the Computing Service Plane.
  • Comp CF 1224 may be a control plane function that provides functionalities such as management of the Comp SF 1236, computing task context generation and management (e.g., create, read, modify, delete), interaction with the underlying computing infrastructure for computing resource management, etc.
  • Comp SF 1236 may be a user plane function that serves as the gateway interfacing computing service users (such as UE 1202) with the computing nodes behind a Comp SF instance. Some functionalities of the Comp SF 1236 may include: parsing computing service data received from users into compute tasks executable by computing nodes; hosting a service mesh ingress gateway or service API gateway; enforcing service and charging policies; performance monitoring and telemetry collection; etc. In some embodiments, a Comp SF 1236 instance may serve as the user plane gateway for a cluster of computing nodes. A Comp CF 1224 instance may control one or more Comp SF 1236 instances.
  • Two other such functions may include a Communication Control Function (Comm CF) 1228 and a Communication Service Function (Comm SF) 1238, which may be parts of the Communication Service Plane.
  • the Comm CF 1228 may be the control plane function for managing the Comm SF 1238, communication sessions creation/configuration/releasing, and managing communication session context.
  • the Comm SF 1238 may be a user plane function for data transport.
  • Comm CF 1228 and Comm SF 1238 may be considered as upgrades of SMF 946 and UPF 948, which were described with respect to a 5G system in Figure 9.
  • the upgrades provided by the Comm CF 1228 and the Comm SF 1238 may enable service-aware transport. For legacy (e.g., 4G or 5G) data transport, SMF 946 and UPF 948 may still be used.
  • Data CF 1222 may be a control plane function that provides functionalities such as Data SF 1232 management, data service creation/configuration/releasing, data service context management, etc.
  • Data SF 1232 may be a user plane function that serves as the gateway between data service users (such as UE 1202 and the various functions of the 6G CN 1210) and the data service endpoints behind the gateway. Specific functionalities may include: parsing data service user data and forwarding it to corresponding data service endpoints, generating charging data, and reporting data service status.
  • the service orchestration and chaining function (SOCF) 1220 may discover, orchestrate, and chain up communication/computing/data services provided by functions in the network.
  • SOCF 1220 may interact with one or more of Comp CF 1224, Comm CF 1228, and Data CF 1222 to identify Comp SF 1236, Comm SF 1238, and Data SF 1232 instances, configure service resources, and generate the service chain, which could contain multiple Comp SF 1236, Comm SF 1238, and Data SF 1232 instances and their associated computing endpoints. Workload processing and data movement may then be conducted within the generated service chain.
  • the SOCF 1220 may also be responsible for maintaining, updating, and releasing a created service chain.
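The chaining behavior described above can be illustrated with a minimal sketch. All names here are hypothetical stand-ins for the SOCF's interactions with Comp SF, Comm SF, and Data SF instances; this is not a 3GPP-defined API.

```python
# Hypothetical sketch of SOCF-style service chaining: the SOCF identifies
# service-function instances and links them into an ordered chain, which it
# is also responsible for maintaining and releasing. All names illustrative.

def build_service_chain(comp_sfs, comm_sfs, data_sfs):
    """Interleave transport, compute, and data instances into one chain."""
    chain = []
    for comp, comm, data in zip(comp_sfs, comm_sfs, data_sfs):
        chain.extend([comm, comp, data])  # transport -> compute -> data hop
    return chain

def release_chain(chain):
    """The SOCF is also responsible for releasing a created chain."""
    chain.clear()
    return chain
```

Workload processing and data movement would then be conducted within the generated chain, in the order the instances were linked.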
  • SRF 1214 may act as a registry for system services provided in the user plane such as services provided by service endpoints behind Comp SF 1236 and Data SF 1232 gateways and services provided by the UE 1202.
  • the SRF 1214 may be considered a counterpart of NRF 954, which may act as the registry for network functions.
  • SICF 1226 may control and configure eSCP instances in terms of service traffic routing policies, access rules, load balancing configurations, performance monitoring, etc.
  • the AMF 1244 may be similar to the AMF 944, but with additional functionality. Specifically, the AMF 1244 may include potential functional repartition, such as moving the message forwarding functionality from the AMF 1244 to the RAN 1208.
  • the network 1200 may also include a service orchestration exposure function (SOEF).
  • the SOEF may be configured to expose service orchestration and chaining services to external users such as applications.
  • the UE 1202 may include an additional function that is referred to as a computing client service function (comp CSF) 1204.
  • the comp CSF 1204 may have both the control plane functionalities and user plane functionalities, and may interact with corresponding network side functions such as SOCF 1220, Comp CF 1224, Comp SF 1236, Data CF 1222, and/or Data SF 1232 for service discovery, request/response, compute task workload exchange, etc.
  • the Comp CSF 1204 may also work with network side functions to decide on whether a computing task should be run on the UE 1202, the RAN 1208, and/or an element of the 6G CN 1210.
  • the UE 1202 and/or the Comp CSF 1204 may include a service mesh proxy 1206.
  • the service mesh proxy 1206 may act as a proxy for service-to-service communication in the user plane. Capabilities of the service mesh proxy 1206 may include one or more of addressing, security, load balancing, etc.
  • Figure 13 illustrates a simplified block diagram of artificial intelligence (AI)-assisted communication between a UE 1305 and a RAN 1310, in accordance with various embodiments. More specifically, as described in further detail below, AI/machine learning (ML) models may be used or leveraged to facilitate over-the-air communication between UE 1305 and RAN 1310.
  • One or both of the UE 1305 and the RAN 1310 may operate in a manner consistent with 3GPP technical specifications or technical reports for 6G systems.
  • the wireless cellular communication between the UE 1305 and the RAN 1310 may be part of, or operate concurrently with, networks 1200, 900, and/or some other network described herein.
  • the UE 1305 may be similar to, and share one or more features with, UE 1202, UE 902, and/or some other UE described herein.
  • the UE 1305 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
  • the RAN 1310 may be similar to, and share one or more features with, RAN 914, RAN 1208, and/or some other RAN described herein.
  • the AI-related elements of UE 1305 may be similar to the AI-related elements of RAN 1310.
  • description of the various elements will be provided from the point of view of the UE 1305; however, it will be understood that such discussion or description applies equally to the similarly named/numbered elements of the RAN 1310, unless explicitly stated otherwise.
  • the UE 1305 may include various elements or functions that are related to AI/ML. Such elements may be implemented as hardware, software, firmware, and/or some combination thereof. In embodiments, one or more of the elements may be implemented as part of the same hardware (e.g., chip or multi-processor chip), software (e.g., a computing program), or firmware as another element.
  • the data repository 1315 may be responsible for data collection and storage. Specifically, the data repository 1315 may collect and store RAN configuration parameters, measurement data, key performance indicators (KPIs), model performance metrics, etc., for model training, update, and inference. More generally, collected data is stored into the repository. Stored data can be discovered and extracted by other elements from the data repository 1315. For example, as may be seen, the inference data selection/filter element 1350 may retrieve data from the data repository 1315.
  • the UE 1305 may be configured to discover and request data from the data repository 1315 in the RAN 1310, and vice versa. More generally, the data repository 1315 of the UE 1305 may be communicatively coupled with the data repository 1315 of the RAN 1310 such that the respective data repositories of the UE and the RAN may share collected data with one another.
  • the training data selection/filter functional block 1320 may be configured to generate training, validation, and testing datasets for model training. Training data may be extracted from the data repository 1315. Data may be selected/filtered based on the specific AI/ML model to be trained. Data may optionally be transformed/augmented/pre-processed (e.g., normalized) before being loaded into datasets. The training data selection/filter functional block 1320 may label data in datasets for supervised learning. The produced datasets may then be fed into the model training functional block 1325.
  • the model training functional block 1325 may be responsible for training and updating (re-training) AI/ML models.
  • the selected model may be trained using the fed-in datasets (including training, validation, testing) from the training data selection/filtering functional block.
  • the model training functional block 1325 may produce trained and tested AI/ML models which are ready for deployment.
  • the produced trained and tested models can be stored in a model repository 1335.
  • the model repository 1335 may be responsible for AI/ML models’ (both trained and untrained) storage and exposure. Trained/updated model(s) may be stored into the model repository 1335. Model and model parameters may be discovered and requested by other functional blocks (e.g., the training data selection/filter functional block 1320 and/or the model training functional block 1325).
  • the UE 1305 may discover and request AI/ML models from the model repository 1335 of the RAN 1310.
  • the RAN 1310 may be able to discover and/or request AI/ML models from the model repository 1335 of the UE 1305.
  • the RAN 1310 may configure models and/or model parameters in the model repository 1335 of the UE 1305.
  • the model management functional block 1340 may be responsible for management of the AI/ML model produced by the model training functional block 1325. Such management functions may include deployment of a trained model, monitoring model performance, etc. In model deployment, the model management functional block 1340 may allocate and schedule hardware and/or software resources for inference, based on received trained and tested models. As used herein, “inference” refers to the process of using trained AI/ML model(s) to generate data analytics, actions, policies, etc. based on input inference data. In performance monitoring, based on wireless performance KPIs and model performance metrics, the model management functional block 1340 may decide to terminate the running model, start model re-training, select another model, etc. In embodiments, the model management functional block 1340 of the RAN 1310 may be able to configure model management policies in the UE 1305 as shown.
  • the inference data selection/filter functional block 1350 may be responsible for generating datasets for model inference at the inference functional block 1345, as described below. Specifically, inference data may be extracted from the data repository 1315. The inference data selection/filter functional block 1350 may select and/or filter the data based on the deployed AI/ML model. Data may be transformed/augmented/pre-processed following the same transformation/augmentation/pre-processing as those in training data selection/filtering as described with respect to functional block 1320. The produced inference dataset may be fed into the inference functional block 1345.
  • the inference functional block 1345 may be responsible for executing inference as described above. Specifically, the inference functional block 1345 may consume the inference dataset provided by the inference data selection/filtering functional block 1350, and generate one or more outcomes. Such outcomes may be or include data analytics, actions, policies, etc. The outcome(s) may be provided to the performance measurement functional block 1330.
  • the performance measurement functional block 1330 may be configured to measure model performance metrics (e.g., accuracy, model bias, run-time latency, etc.) of deployed and executing models based on the inference outcome(s) for monitoring purposes.
  • Model performance data may be stored in the data repository 1315.
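The functional blocks of Figure 13 described above can be sketched end to end as a toy pipeline. The block names follow the description; the internals (a mean predictor standing in for a real AI/ML model, simple normalization, mean-absolute-error monitoring) are illustrative assumptions only.

```python
# Minimal sketch of the Figure-13 functional blocks: data repository,
# training data selection/filter, model training, inference, and
# performance measurement. The "model" is a toy mean predictor.

class DataRepository:
    def __init__(self):
        self.records = []              # measurement data, KPIs, etc.

    def store(self, record):
        self.records.append(record)

    def extract(self, key):
        return [r[key] for r in self.records if key in r]

def select_training_data(repo, key):
    """Training data selection/filter: pull one feature and normalize it."""
    data = repo.extract(key)
    peak = max(data) if data else 1.0
    return [x / peak for x in data]    # simple pre-processing

def train_model(dataset):
    """Model training: here, just learn the mean of the dataset."""
    return sum(dataset) / len(dataset)

def infer(model, _inference_input=None):
    """Inference: produce an outcome from the deployed model."""
    return model

def measure_performance(model, dataset):
    """Performance measurement: mean absolute error of the toy model."""
    return sum(abs(x - model) for x in dataset) / len(dataset)
```

The performance value produced this way is what the model management functional block 1340 would consult when deciding to re-train, terminate, or replace a running model.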
  • the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, of Figures 9-13, or some other figure herein may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof.
  • One such process is depicted in Figure 14.
  • the process of Figure 14 may include or relate to a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF).
  • the process may include identifying, at 1401, that a federated learning task for a machine learning (ML) model is to be initiated; identifying, at 1402, a second NWDAF with a MTLF; identifying, at 1403 from the second NWDAF, an indication of an updated local version of the ML model; updating, at 1404 based on the updated local version of the ML model, a global version of the ML model; and transmitting, at 1405 to the second NWDAF, an indication of the updated global version of the ML model.
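One round of the aggregator-side process (steps 1401-1405) can be sketched with federated averaging over weight vectors. The participant callback interface is an assumption for illustration; the numbered comments map to the steps above.

```python
# Sketch of one federated-learning round at the aggregating NWDAF/MTLF,
# using element-wise federated averaging. Participant interface assumed.

def fed_avg(local_models):
    """Aggregate local model updates by element-wise averaging (1404)."""
    n = len(local_models)
    return [sum(ws) / n for ws in zip(*local_models)]

def federated_round(global_model, participants):
    # 1402/1403: each identified participant NWDAF returns an updated
    # local version of the ML model
    local_models = [p(global_model) for p in participants]
    # 1404: update the global version based on the local versions
    new_global = fed_avg(local_models)
    # 1405: the updated global model would then be sent to the participants
    return new_global
```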
  • the process of Figure 15 may include or relate to a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF).
  • the process may include updating, at 1501, a local version of a machine learning (ML) model; transmitting, at 1502 to a second NWDAF, an indication of an updated local version of the ML model; and identifying, at 1503 from the second NWDAF based on the updated local version of the ML model, an indication of an updated global version of the ML model.
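The participant-side steps (1501-1503) can be sketched in the same toy setting: update a local model from locally collected data, report it to the aggregator, and receive the updated global model. The single scalar gradient step is purely illustrative.

```python
# Sketch of the participant-side process of Figure 15. The local update is
# one gradient step of a scalar mean-squared-error model; all illustrative.

def local_update(global_model, local_data, lr=0.5):
    """1501: update the local version of the ML model from local data."""
    w = global_model
    grad = sum(2 * (w - x) for x in local_data) / len(local_data)
    return w - lr * grad

def participant_round(global_model, local_data, send_to_aggregator):
    updated = local_update(global_model, local_data)
    # 1502: transmit the updated local version to the aggregating NWDAF;
    # 1503: identify the updated global version returned in response.
    new_global = send_to_aggregator(updated)
    return new_global
```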
  • At least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below.
  • the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below.
  • circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
  • Example 1 may include a method in which an NWDAF containing MTLF that can support a Federated Learning aggregation capability registers its NF profile with the NRF.
  • Example 2 may include the method of example 1 or some other example herein, where the NWDAF containing MTLF with Federated Learning aggregation capability is the network function responsible for: sending the global ML model to other NWDAF(s) containing MTLF to perform local model updates based on their locally collected data; receiving the locally updated ML models from the other NWDAF(s) containing MTLF; aggregating all the local ML model updates and updating its global ML model; and sending the updated global ML model to the other NWDAFs containing MTLF that participated in the Federated Learning ML training iteration.
  • Example 3 may include a method in which an NWDAF containing MTLF that can support a Federated Learning participation capability registers its NF profile with the NRF.
  • Example 4 may include the method of example 3 or some other example herein, where the NWDAF containing MTLF with Federated Learning participation capability is the network function responsible for: data collection per Analytics ID for ML model training performed locally in the NWDAF containing MTLF with Federated Learning participation capability; sending the locally updated ML model to the NWDAF containing MTLF with Federated Learning aggregation capability; and receiving the updated global ML model from the NWDAF containing MTLF with Federated Learning aggregation capability.
  • Example 5 may include the method of examples 1 and 3 or some other example herein, where, for the NWDAF containing MTLF with Federated Learning aggregation capability to discover NWDAF(s) containing MTLF with Federated Learning participation capability using the NRF, the service consumer sends an Nnrf_NFDiscovery_Request to the NRF requesting Federated Learning participation capability for the ML model(s).
  • Example 6 may include the method of example 5 or some other example herein, where the NRF returns one or more instances of NWDAF containing MTLF with Federated Learning participation capability for ML model(s) to the NF consumer.
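The registration and discovery exchange of examples 1, 3, 5, and 6 can be sketched as follows. The profile fields and matching rule are assumptions modelled on the text, not the normative Nnrf API.

```python
# Sketch of NF-profile registration and NRF-based discovery of NWDAF/MTLF
# instances by Federated Learning capability and Analytics ID. Illustrative.

class Nrf:
    def __init__(self):
        self.profiles = []

    def nf_register(self, instance_id, fl_capability, analytics_ids):
        """Example 1/3: an NWDAF containing MTLF registers its NF profile."""
        self.profiles.append({
            "instance_id": instance_id,
            "fl_capability": fl_capability,   # "aggregation" / "participation"
            "analytics_ids": set(analytics_ids),
        })

    def nf_discovery_request(self, fl_capability, analytics_id):
        """Example 5/6: return matching instances to the NF consumer."""
        return [p["instance_id"] for p in self.profiles
                if p["fl_capability"] == fl_capability
                and analytics_id in p["analytics_ids"]]
```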
  • Example 7 may include the method of examples 1, 3, and 6 or some other example herein, where the Nnwdaf_MLModelTrainingUpdate service is provided by an NWDAF containing MTLF and consumed by an NWDAF containing AnLF.
  • Example 8 may include the method of examples 1, 3, and 6 or some other example herein, where Nnwdaf_MLModel_DistributedTraining service is provided by an NWDAF containing MTLF and consumed by an NWDAF containing MTLF.
  • Example 9 may include the method of example 7 or some other example herein, where, for the Nnwdaf_MLModelTrainingUpdate_Subscribe service operation, the consumer NF (NWDAF containing AnLF) subscribes to ML model training updates with the NWDAF containing MTLF.
  • the input for Nnwdaf_MLModelTrainingUpdate_Subscribe is: Analytics ID(s) for which the training update is requested, Notification Target Address, Subscription Correlation ID (in the case of modification of the ML model subscription), and Expiry Time.
  • the output of the Nnwdaf_MLModelTrainingUpdate_Subscribe operation includes the Subscription Correlation ID and Expiry Time.
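The subscribe/notify/unsubscribe operations of examples 9-11 can be sketched as a small service object. The message fields mirror the listed inputs and outputs; the storage and ID scheme are illustrative assumptions.

```python
# Sketch of Nnwdaf_MLModelTrainingUpdate_{Subscribe,Notify,Unsubscribe}
# at an NWDAF containing MTLF. Field names follow the text; internals assumed.

import itertools

class MtlfTrainingUpdateService:
    _ids = itertools.count(1)

    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, analytics_ids, notification_target, expiry_time,
                  subscription_correlation_id=None):
        # A correlation ID in the request indicates modification of an
        # existing ML model subscription; otherwise a new one is created.
        sub_id = subscription_correlation_id or next(self._ids)
        self.subscriptions[sub_id] = {
            "analytics_ids": analytics_ids,
            "target": notification_target,
            "expiry": expiry_time,
        }
        return {"subscription_correlation_id": sub_id, "expiry": expiry_time}

    def notify(self, analytics_id, model_file_address):
        """Build notifications for consumers subscribed to this Analytics ID."""
        return [(sub["target"], analytics_id, model_file_address)
                for sub in self.subscriptions.values()
                if analytics_id in sub["analytics_ids"]]

    def unsubscribe(self, subscription_correlation_id):
        """Return the service operation result (True on success)."""
        return self.subscriptions.pop(subscription_correlation_id, None) is not None
```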
  • Example 10 may include the method of example 7 or some other example herein, where, for the Nnwdaf_MLModelTrainingUpdate_Unsubscribe service operation, the consumer NF (NWDAF containing AnLF) unsubscribes from ML model training updates with the NWDAF containing MTLF.
  • the input includes Subscription Correlation ID and output includes the service operation result.
  • Example 11 may include the method of example 7 or some other example herein, where for Nnwdaf_MLModelTrainingUpdate_Notify service operation an NWDAF containing MTLF notifies the ML model information to the consumer NF (NWDAF containing AnLF) which has subscribed to the specific NWDAF service.
  • the input includes the Analytics ID for which the updated trained ML model is available and the address of the updated trained ML model file.
  • Example 12 may include the method of example 8 or some other example herein, where, for the Nnwdaf_MLModel_DistributedTraining_Request service operation, an NWDAF containing MTLF (with Federated Learning aggregation capability) in the role of service consumer sends a request to another NWDAF containing MTLF (with Federated Learning participation capability).
  • the required input includes the Analytics ID for which Federated Learning is required, the ML model (global ML model) file address, and the ML model reporting time limit.
  • the ML model reporting time limit is the time within which the local trained ML model needs to be reported back to the service consumer.
  • Example 13 may include the method of example 8 or some other example herein, where, for the Nnwdaf_MLModel_DistributedTraining_Response service operation, an NWDAF containing MTLF (with Federated Learning participation capability) in the role of service producer sends a response to the NWDAF containing MTLF (with Federated Learning aggregation capability) which includes the result of the operation. If the result of the operation is successful, then the response includes the ML model (local ML model) file address and validity period. If the result is not successful, the response includes an error code.
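The request/response exchange of examples 12 and 13, including the reporting time limit, can be sketched as plain message builders. Field names and the deadline check are illustrative assumptions based on the listed parameters.

```python
# Sketch of the Nnwdaf_MLModel_DistributedTraining_Request/_Response
# exchange between aggregation-capable and participation-capable MTLFs.

def distributed_training_request(analytics_id, global_model_address,
                                 reporting_time_limit):
    return {
        "analytics_id": analytics_id,
        "ml_model_file_address": global_model_address,   # global ML model
        "reporting_time_limit": reporting_time_limit,    # seconds to report
    }

def distributed_training_response(request, local_training_seconds,
                                  local_model_address, validity_period):
    # A participant that cannot report its locally trained model within the
    # limit returns an error code instead of a model file address.
    if local_training_seconds <= request["reporting_time_limit"]:
        return {"result": "success",
                "ml_model_file_address": local_model_address,
                "validity_period": validity_period}
    return {"result": "failure", "error_code": "REPORTING_TIME_LIMIT_EXCEEDED"}
```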
  • Example 14 may include the method of example 8 or some other example herein, where, for the Nnwdaf_MLModel_DistributedTraining_Subscribe service operation, if the NWDAF containing MTLF with Federated Learning participation capability, in the role of service producer, sends a response with the result as success to the NWDAF containing MTLF (with Federated Learning aggregation capability), then the NWDAF containing MTLF with Federated Learning participation capability, in the role of service consumer, subscribes for the ML model (the global ML model resulting from aggregation of the results of the other NWDAFs containing MTLF with Federated Learning participation capability) with the NWDAF containing MTLF (with Federated Learning aggregation capability).
  • the input includes Analytics ID, Notification Target Address (+ Notification Correlation ID).
  • the output includes subscription Correlation ID when the subscription is accepted.
  • Example 15 may include the method of example 14 or some other example herein, where, for the Nnwdaf_MLModel_DistributedTraining_Notify service operation, when the NWDAF containing MTLF with Federated Learning participation capability has subscribed for the ML model (the global ML model resulting from aggregation of the results of the other NWDAFs containing MTLF with Federated Learning participation capability), the NWDAF containing MTLF (with Federated Learning aggregation capability) sends a notification to the NWDAF containing MTLF (with Federated Learning participation capability) which includes the Notification Correlation ID, the ML model (global ML model) file address, and the validity period.
  • Example 16 may include the method of example 8 or some other example herein, where, for the Nnwdaf_MLModel_DistributedTraining_Unsubscribe service operation, the NWDAF containing MTLF with Federated Learning participation capability unsubscribes from the NWDAF containing MTLF (with Federated Learning aggregation capability).
  • the input includes the subscription Correlation ID.
  • the output includes the result of the operation.
  • Example 17 may include the method of example 1 or some other example herein, where the NWDAF containing MTLF (with Federated Learning aggregation capability) decides that the federated learning task for a given ML model (required to generate an Analytics ID) needs to be initiated based on ML model accuracy.
  • Example A1 includes a method in which an NWDAF containing MTLF registers its NF profile with the NRF, where the NF profile parameter includes the supported ML model file serialization formats for the trained ML models in the ML model filter information.
  • Example A2 includes a method of example A1 or some other example herein, where the ML model file serialization format(s) included in the ML model filter information indicate the supported ML model file serialization format(s) for the trained ML model(s) available at the NWDAF containing MTLF for consumption by the service consumer.
  • Example A2a includes a method of example A1 or some other example herein, where the modelfileformatList in the ML model filter information can be provided per Analytics ID.
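An NF profile carrying the ML model filter information of examples A1-A2a can be sketched as a nested structure keyed per Analytics ID. The field spellings follow the text where given (`modelfileformatList`); the format strings are illustrative.

```python
# Sketch of an NF profile with supported ML model file serialization
# formats in the ML model filter information, per Analytics ID.

def make_nf_profile(instance_id, model_file_formats_per_analytics_id):
    return {
        "instance_id": instance_id,
        "ml_model_filter_information": {
            "modelfileformatList": model_file_formats_per_analytics_id,
        },
    }

def supported_formats(profile, analytics_id):
    """Look up the serialization formats advertised for one Analytics ID."""
    formats = profile["ml_model_filter_information"]["modelfileformatList"]
    return formats.get(analytics_id, [])
```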
  • Example A3 includes a method of example A2 or some other example herein, where the consumer of the services provided by the NWDAF containing MTLF may be an NWDAF containing AnLF or an NWDAF containing MTLF.
  • Example A4 includes a method of example A1, A2, A3, or some other example herein, where the consumer NF may belong to the same vendor as the NWDAF containing MTLF or to a different vendor.
  • Example A5 includes a method of example A3 or some other example herein, where the NWDAF containing AnLF invokes an Nnrf_NFDiscovery_Request to an appropriately configured NRF that includes the ML model file serialization format(s) supported for the trained ML model(s) in the ML model filter information.
  • Example A6 includes a method of example A5 or some other example herein, where the NRF determines a set of NWDAF containing MTLF instance(s) matching at least one of the ML model file serialization formats supported in the Nnrf_NFDiscovery_Request and the internal policies of the NRF, and sends the NF profile(s) (including ML model file serialization format(s)) of the determined NWDAF containing MTLF instances in the Discovery Response.
  • Example A7 includes a method of example A5, A6, or some other example herein, where the NWDAF containing AnLF subscribes to the NWDAF containing MTLF using the Nnwdaf_MLModelProvision_Subscribe service operation, including the requested ML model file serialization format as input; the NWDAF containing MTLF notifies the ML model information (address (e.g., URL or FQDN) of the model file) to the NWDAF containing AnLF only if the ML model format requested in the input of Nnwdaf_MLModelProvision_Subscribe is a match.
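The format-matched discovery and provisioning of examples A5-A7 can be sketched in two small functions: the NRF filters candidate MTLF instances by supported serialization format, and the MTLF notifies the AnLF only when the requested format matches. All structures here are illustrative assumptions.

```python
# Sketch of format-matched discovery (NRF side) and provisioning notify
# (MTLF side) for ML model file serialization formats. Illustrative only.

def discover_by_format(nf_profiles, requested_format):
    """NRF side: keep profiles supporting at least the requested format."""
    return [p for p in nf_profiles if requested_format in p["formats"]]

def provision_notify(mtlf_profile, requested_format, model_file_address):
    """MTLF side: notify the model file address only on a format match."""
    if requested_format in mtlf_profile["formats"]:
        return {"ml_model_file_address": model_file_address,
                "format": requested_format}
    return None    # no notification if the requested format is unsupported
```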
  • Example A8 includes a method where the NWDAF containing MTLF sends Nnrf_NFManagement_NFRegister to NRF to inform the NRF of its NF profile and it includes ML model file specific information for the trained ML model(s) as a new attribute.
  • Example A9 includes a method of example A8, A3, A4, or some other example herein, where the ML model file specific information attribute includes the supported ML model file serialization formats for the trained ML model(s) available at the NWDAF containing MTLF for consumption by the service consumer.
  • Example A10 includes a method where the functionality of ADRF is extended to enable a NWDAF containing MTLF to store and retrieve trained ML model(s) to and from ADRF respectively.
  • Example A11 includes a method of example A10 or some other example herein, where a new service is supported by the ADRF, e.g., an Nadrf_MLModelManagement service that enables the consumer NWDAF containing MTLF to store, retrieve, and remove ML model(s) from an ADRF.
  • Example A12 includes a method of example A11 or some other example herein, where the NWDAF containing AnLF sends an Nadrf_MLModelManagement_RetrievalRequest which includes Analytics ID(s), ML Model Filter Info, ML model file specific information, and Target NF (NWDAF MTLF) to subscribe for notifications.
  • Example A13 includes a method of example A12 or some other example herein, where, if the ML model file for the requested Analytics ID(s) is not stored in the ADRF, then the elements in examples A14, A15, and A16 are performed.
  • Example A14 includes a method of example A13 or some other example herein, where the ADRF sends Nnwdaf_MLModelProvision_Subscribe with input parameter ML model file specific information (ML model file serialization format).
  • Example A15 includes a method of example A14 or some other example herein, where the ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with input parameters Analytics ID(s) and ML model file specific information (ML model file serialization format).
  • Example A16 includes a method of example A15 or some other example herein, where, when the ML model for which the ADRF has subscribed for ML model training updates has been updated, the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with the following parameters: Analytics ID, trained ML model file address, and Notification Correlation ID.
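The ADRF retrieval flow of examples A12-A16 can be sketched as a small class: serve a stored model directly, or subscribe with the MTLF on a miss and store the model once notified. The method names paraphrase the service operations; the internals are assumptions.

```python
# Sketch of the ADRF side of the retrieval flow: retrieval request,
# on-miss subscription to the MTLF, and handling of training-update
# notifications. Illustrative only.

class Adrf:
    def __init__(self, mtlf_subscribe):
        self.models = {}                 # analytics_id -> model file address
        self.mtlf_subscribe = mtlf_subscribe

    def retrieval_request(self, analytics_id):
        if analytics_id in self.models:
            return {"status": "stored", "address": self.models[analytics_id]}
        # Model not stored: subscribe with the NWDAF containing MTLF for
        # provisioning and training updates (examples A14/A15).
        self.mtlf_subscribe(analytics_id)
        return {"status": "pending"}

    def training_update_notify(self, analytics_id, model_file_address):
        """Handle Nnwdaf_MLModelTrainingUpdate_Notify from the MTLF (A16)."""
        self.models[analytics_id] = model_file_address
```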
  • Example A17 includes a method of example A12 or some other example herein, where the ADRF sends a response back to the NWDAF containing AnLF using an Nadrf_MLModelManagement_RetrievalRequest response with the following parameter: ML Model File Information (trained ML model file address, ML model file serialization format, and Trained ML Model ID per Analytics ID).
  • Example A18 includes a method of example A17 or some other example herein, where the NWDAF containing AnLF subscribes to the ADRF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Subscribe service operation containing input parameter Trained ML Model ID per Analytics ID.
  • Example A19 includes a method of example A18 or some other example herein, where the ADRF sends a notification to the NWDAF containing AnLF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Notify service operation containing the following parameters: ML Model File Information (Trained ML model file address, ML model file serialization format, Trained ML Model ID per Analytics ID).
  • Example A20 includes a method of example A19 or some other example herein, where the NWDAF containing AnLF determines whether the ML model training update is still required and, if not, the NWDAF containing AnLF sends Nadrf_MLModelManagement_RetrievalTrainingUpdate_Unsubscribe with Subscription Correlation ID as input parameter.
  • Example A21 includes a method of example A20 or some other example herein, where the ADRF determines whether any of the NWDAF AnLF consumer(s) have a subscription for ML model training updates per Analytics ID. If no consumer has a subscription for ML model training updates per Analytics ID, the ADRF removes the ML model file and ML model file specific information, and the elements described in example A22 are performed.
  • Example A22 includes a method of example A21 or some other example herein, where the ADRF sends Nnwdaf_MLModelTrainingUpdate_Unsubscribe to the NWDAF containing MTLF with Subscription Correlation ID as input parameter.
  • Example A23 includes a method of example A10 or some other example herein, where the NWDAF containing AnLF may trigger trained ML model storage in ADRF.
  • Example A24 includes a method of example A23 or some other example herein, where the NWDAF containing AnLF sends Nnwdaf_MLModelInfo_Request with the following input parameters: Analytics ID(s), ML model file specific information (ML model file serialization format), and Notification end point address (ADRF) to the NWDAF containing MTLF.
  • Example A25 includes a method of example A10 or some other example herein, where the ADRF may send the request to store the trained ML model in ADRF.
  • Example A26 includes a method of example A25 or some other example herein, where the ADRF sends Nnwdaf_MLModelProvision_Subscribe with the following input parameters: ML model file specific Information (ML model file serialization format).
  • Example A27 includes a method of example A25, A26, or some other example herein, where the NWDAF containing MTLF sends Nadrf_MLModelManagement_StorageRequest with input parameters Analytics ID(s), Trained ML model file address, and ML model file specific information (ML model file serialization format).
  • Example A28 includes a method of example A27 or some other example herein, where the ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with input parameters Analytics ID(s), ML model file specific Information (ML model file serialization format).
  • Example A29 includes a method of example A28 or some other example herein, wherein, when the ML model for which the ADRF has subscribed for ML model training updates has been updated, the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with input parameters Analytics ID, Trained ML model file address, and Notification Correlation ID.
  • Example A30 includes a method where a new network function is defined in the 5G core network, e.g., a ML Model Storage Function (MLMS), to enable a NWDAF containing MTLF to store and retrieve trained ML model(s) to and from the MLMS, respectively.
  • Example A31 includes a method of example A30 or some other example herein, where a new service is supported by MLMS e.g., Nmlms_MLModelManagement service that enables the consumer NWDAF containing MTLF to store, retrieve, and remove ML model(s) from an MLMS.
  • Example A32 includes a method of example A30, A31, or some other example herein, where the NF consumers (NWDAF containing MTLF and/or AnLF) shall utilize the NRF to discover MLMS instance(s) unless MLMS information is available by other means, e.g., locally configured on the NF consumers. The MLMS selection function in the NF consumers selects an MLMS instance from the available MLMS instances.
  • Example A33 includes a method of example A30, A31, A32, or some other example herein, where the S-NSSAI is used as a factor by the NF consumer for MLMS selection.
  • Example A34 includes a method of example A30, A31, or some other example herein, where the ADRF in examples A12 to A29 is replaced by the MLMS, such that the functionality of the ADRF defined in examples A12 to A29 is applicable to the MLMS.
  • Example A40 includes a method where the functionality of ADRF is extended to enable a NWDAF containing MTLF to request ADRF to perform conversion from one ML model file serialization format to another ML model file serialization format.
  • Example A41 includes a method of example A40 or some other example herein, where the NWDAF containing AnLF sends a Nadrf_MLModelManagement_FormatConversionRequest which includes ML model file specific information (Trained ML model file address, ML model file serialization format), and a target ML model file serialization format.
  • Example A42 includes a method of example A41 or some other example herein, where the ADRF performs conversion of the given ML model file serialization format to the requested target ML model file serialization format.
  • Example A43 includes a method of example A42 or some other example herein, where the ADRF sends a Nadrf_MLModelManagement_FormatConversionRequest Response and includes ML Model File Information (Trained ML model file address, ML model file serialization format).
  • Example A50 includes a method where a ML Model Storage Function (MLMS) functionality is introduced to enable a NWDAF containing MTLF to request MLMS to perform conversion from one ML model file serialization format to another ML model file serialization format.
  • Example A51 includes a method of example A50 or some other example herein, where the NWDAF containing AnLF sends a Nmlms_MLModelManagement_FormatConversionRequest which includes ML model file specific information (Trained ML model file address, ML model file serialization format), and a target ML model file serialization format.
  • Example A52 includes a method of example A51 or some other example herein, where the MLMS performs conversion of the given ML model file serialization format to the requested target ML model file serialization format.
  • Example A53 includes a method of example A52 or some other example herein, where the MLMS sends a Nmlms_MLModelManagement_FormatConversionRequest Response and includes ML Model File Information (Trained ML model file address, ML model file serialization format).
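The format-conversion exchange of examples A40-A53 can be sketched as follows. This is an illustrative sketch only, not text from any specification: the class names, the set of supported formats, and the file-address convention are assumptions chosen for readability.

```python
# Illustrative sketch of the Nadrf_MLModelManagement_FormatConversionRequest
# exchange (examples A40-A43). All names below are assumptions for this sketch.

from dataclasses import dataclass


@dataclass
class MLModelFileInfo:
    """ML Model File Information carried in the request and response."""
    file_address: str          # Trained ML model file address
    serialization_format: str  # ML model file serialization format


class ADRF:
    """Toy ADRF that converts a stored model between serialization formats."""

    SUPPORTED_FORMATS = {"onnx", "protobuf", "pickle"}  # assumed set, for illustration

    def format_conversion_request(self, info: MLModelFileInfo,
                                  target_format: str) -> MLModelFileInfo:
        # Example A42: the ADRF converts the given serialization format to the
        # requested target format. A real ADRF would transcode the model file;
        # this sketch only returns the new ML Model File Information (example A43).
        if target_format not in self.SUPPORTED_FORMATS:
            raise ValueError(f"unsupported target format: {target_format}")
        return MLModelFileInfo(
            file_address=f"{info.file_address}.{target_format}",
            serialization_format=target_format,
        )


adrf = ADRF()
response = adrf.format_conversion_request(
    MLModelFileInfo("https://adrf.example/models/mobility", "pickle"),
    target_format="onnx",
)
```

The same shape applies when the converter is an MLMS rather than an ADRF (examples A50-A53); only the service prefix of the operation name changes.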
  • Example Bl relates to a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF), wherein the method comprises identifying that a federated learning task for a machine learning (ML) model is to be initiated; identifying a second NWDAF with a MTLF; identifying, from the second NWDAF, an indication of an updated local version of the ML model; updating, based on the updated local version of the ML model, a global version of the ML model; and transmitting, to the second NWDAF, an indication of the updated global version of the ML model.
  • Example B2 relates to the method of example Bl, and/or some other example herein, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
  • Example B3 relates to the method of any of examples B1-B2, and/or some other example herein, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
  • Example B4 relates to the method of any of examples B1-B3, and/or some other example herein, wherein identification of the second NWDAF is based on: transmission of a Nnrf_Discovery_Request message to a network repository function (NRF); and receipt, from the NRF based on the transmitted Nnrf_Discovery_Request message, of a Nnrf_Discovery_Response message that includes an indication of the second NWDAF.
  • Example B5 relates to the method of any of examples B1-B4, and/or some other example herein, wherein the indication of the updated local version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Response message.
  • Example B6 relates to the method of example B5, and/or some other example herein, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to transmission, from the first NWDAF to the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
  • Example B7 relates to the method of any of examples B1-B6, and/or some other example herein, wherein the indication of the updated global version of the ML model is transmitted in a Nnwdaf_MLModel_DistributedTraining_Notify message.
  • Example B8 includes a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF), wherein the method comprises: updating a local version of a machine learning (ML) model; transmitting, to a second NWDAF, an indication of an updated local version of the ML model; and identifying, from the second NWDAF based on the updated local version of the ML model, an indication of an updated global version of the ML model.
  • Example B9 includes the method of example B8, and/or some other example herein, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
  • Example B10 includes the method of any of examples B8-B9, and/or some other example herein, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
  • Example B11 includes the method of any of examples B8-B10, and/or some other example herein, wherein the indication of the updated local version of the ML model is transmitted to the second NWDAF in a Nnwdaf_MLModel_DistributedTraining_Response message.
  • Example B12 includes the method of example B11, and/or some other example herein, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to receipt, from the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
  • Example B13 includes the method of any of examples B8-B12, and/or some other example herein, wherein the indication of the updated global version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Notify message.
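The distributed-training flow of examples B1-B13 can be sketched as a single federated round. This is a hedged illustration: plain federated averaging is assumed as the aggregation rule, and every function name below is invented for this sketch; the comments map each step to the corresponding example numbers.

```python
# Hedged sketch of one federated-learning round between an aggregating
# NWDAF/MTLF (examples B1-B7) and participating NWDAFs/MTLFs (examples B8-B13).

from typing import Callable, List

Model = List[float]  # a model is represented here as a flat weight vector


def distributed_training_request(global_model: Model,
                                 participant: Callable[[Model], Model]) -> Model:
    # Aggregator -> participant: Nnwdaf_MLModel_DistributedTraining_Request.
    # The participant updates its local version of the ML model and returns it
    # in the corresponding ..._Response message (examples B5-B6, B11-B12).
    return participant(global_model)


def federated_round(global_model: Model,
                    participants: List[Callable[[Model], Model]]) -> Model:
    # Collect the updated local versions from every participant.
    local_models = [distributed_training_request(global_model, p)
                    for p in participants]
    # Update the global version by averaging; a full implementation would then
    # notify participants via Nnwdaf_MLModel_DistributedTraining_Notify (B7, B13).
    n = len(local_models)
    return [sum(weights) / n for weights in zip(*local_models)]


# Two toy participants that each nudge the model toward their own local optimum.
participant_a = lambda model: [w + 1.0 for w in model]
participant_b = lambda model: [w - 0.5 for w in model]
updated_global = federated_round([0.0, 0.0], [participant_a, participant_b])
# updated_global == [0.25, 0.25]
```

The raw training data never leaves either participant in this exchange; only model weights cross the Nnwdaf service interface, which is the privacy property motivating the federated approach.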
  • Example Z01 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-B13, or any other method or process described herein.
  • Example Z02 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-B13, or any other method or process described herein.
  • Example Z03 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-B13, or any other method or process described herein.
  • Example Z04 may include a method, technique, or process as described in or related to any of examples 1-B13, or portions or parts thereof.
  • Example Z05 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-B13, or portions thereof.
  • Example Z06 may include a signal as described in or related to any of examples 1-B13, or portions or parts thereof.
  • Example Z07 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-B13, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example Z08 may include a signal encoded with data as described in or related to any of examples 1-B13, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example Z09 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-B13, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example Z10 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-B13, or portions thereof.
  • Example Z11 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-B13, or portions thereof.
  • Example Z12 may include a signal in a wireless network as shown and described herein.
  • Example Z13 may include a method of communicating in a wireless network as shown and described herein.
  • Example Z14 may include a system for providing wireless communication as shown and described herein.
  • Example Z15 may include a device for providing wireless communication as shown and described herein.
  • (Abbreviations: the original document includes here a multi-column glossary of 3GPP abbreviations — e.g., BPSK Binary Phase Shift Keying, CDMA Code-Division Multiple Access, CSI-RS CSI Reference Signal, FPGA Field-Programmable Gate Array, GGSN Gateway GPRS Support Node, MIMO Multiple Input Multiple Output, NR New Radio, PDSCH Physical Downlink Shared Channel, UDP User Datagram Protocol, XML eXtensible Markup Language. The column layout was garbled during extraction and the full list is not reliably recoverable.)
  • circuitry refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality.
  • the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality.
  • the term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
  • processor circuitry refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data.
  • Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information.
  • processor circuitry may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes.
  • Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like.
  • the one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators.
  • application circuitry and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
  • interface circuitry refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices.
  • interface circuitry may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
  • user equipment refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network.
  • the term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc.
  • the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
  • network element refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services.
  • network element may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.
  • computer system refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
  • appliance refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource.
  • a “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.
  • resource refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like.
  • a “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s).
  • a “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc.
  • network resource or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network.
  • system resources may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
  • channel refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream.
  • channel may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated.
  • link refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
  • instantiate refers to the creation of an instance.
  • An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
  • Coupled may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
  • directly coupled may mean that two or more elements are in direct contact with one another.
  • communicatively coupled may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
  • information element refers to a structural element containing one or more fields.
  • field refers to individual contents of an information element, or a data element that contains content.
  • SMTC refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.
  • SSB refers to an SS/PBCH block.
  • a “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
  • Primary SCG Cell refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
  • Secondary Cell refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.
  • Secondary Cell Group refers to the subset of serving cells comprising the
  • PSCell and zero or more secondary cells for a UE configured with DC.
  • the term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
  • serving cell refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA/DC.
  • Special Cell refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.

Abstract

Various embodiments herein provide techniques related to federated learning of a machine learning (ML) model. Specifically, embodiments relate to communication and support between a network data analytics function (NWDAF) with a model training logical function (MTLF) that is configured for federated learning aggregation, and one or more other NWDAF(s) with MTLF(s). Other embodiments may be described and/or claimed.

Description

TRAINING UPDATES FOR NETWORK DATA ANALYTICS FUNCTIONS (NWDAFS)
CROSS REFERENCE TO RELATED APPLICATION
The present application claims priority to U.S. Provisional Patent Application No. 63/319,103, which was filed March 11, 2022; and to U.S. Provisional Patent Application No. 63/320,592, which was filed March 16, 2022.
FIELD
Various embodiments generally may relate to the field of wireless communications. For example, some embodiments may relate to registration and discovery of NWDAF model training logical function (MTLF) instances supporting distributed learning. Some embodiments may relate to NWDAF MTLF interoperability support.
BACKGROUND
Various embodiments generally may relate to the field of wireless communications.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.
Figure 1 illustrates an example of registration with federated learning aggregation capability included in a network function (NF) profile, in accordance with various embodiments.
Figure 2 illustrates an example of registration with federated learning participation capability included in a NF profile, in accordance with various embodiments.
Figures 3a and 3b (collectively, Figure 3) illustrate an example of federated learning to enable cooperation of multiple NWDAF MTLF instances to train a machine learning (ML) model, in accordance with various embodiments.
Figure 4 illustrates an example of a process flow wherein ML model filter information includes a ML model file serialization format, in accordance with various embodiments.
Figure 5 illustrates an example of a NF profile registration of a NWDAF containing a MTLF, wherein the NF profile includes a new attribute for a ML model file, in accordance with various embodiments.
Figures 6A and 6B (collectively, Figure 6) illustrate an example of trained ML model retrieval using an analytical data repository function (ADRF), in accordance with various embodiments.
Figures 7A and 7B (collectively, Figure 7) illustrate an example of trained ML model storage in an ADRF, in accordance with various embodiments.
Figure 8 illustrates an example of trained ML model file serialization format conversion, in accordance with various embodiments.
Figure 9 schematically illustrates a wireless network in accordance with various embodiments.
Figure 10 schematically illustrates components of a wireless network in accordance with various embodiments.
Figure 11 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
Figure 12 schematically illustrates an alternative example wireless network in accordance with various embodiments.
Figure 13 illustrates a simplified block diagram of artificial intelligence (AI)-assisted communication between a UE and a RAN, in accordance with various embodiments.
Figure 14 depicts an example process, in accordance with various embodiments herein.
Figure 15 depicts an alternative example process, in accordance with various embodiments herein.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the purposes of the present document, the phrases “A or B” and “A/B” mean (A), (B), or (A and B).
Registration and Discovery of NWDAF MTLF Instances Supporting Distributed Learning
The third generation partnership project (3GPP) release-18 (Rel-18) specifications may relate to one or more of the following goals:
Whether and how to enhance the architecture to support federated learning in the fifth generation (5G) core network (5GC);
How to improve the correctness of NWDAF analytics.
In the third generation partnership project (3GPP) release-17 (Rel-17) specifications, a NWDAF containing one or both of an analytics logical function (AnLF) and a MTLF is supported. Further, network function (NF) profile registration of one or more of the NWDAF, AnLF, and MTLF with a network repository function (NRF) is supported. However, the Rel-17 specifications may place multiple restrictions on a NWDAF containing a MTLF instance. Such restrictions may include or relate to one or more of the following:
- NWDAFs containing respective MTLFs may not be allowed to coordinate with one another. Only an NWDAF containing an AnLF may be allowed to discover a NWDAF containing a MTLF and request ML models from the NWDAF containing the MTLF instance.
The NWDAF containing an AnLF may select, from a list of candidate NWDAFs containing MTLF instance(s), an NWDAF containing a MTLF that is pre-configured in the NWDAF containing an AnLF to obtain trained ML Model(s).
In addition, the NWDAF and supporting network functions such as the data collection coordination function (DCCF) and the analytics and data repository function (ADRF) may allow for data collection to generate analytics data as requested by a NWDAF service consumer. Given user data privacy and security concerns, a scenario where a NWDAF containing a MTLF collects all the raw data from distributed data sources in different areas - especially UE level network data - for training ML models may be undesirable. Given this, Federated Machine Learning mechanisms may allow application endpoints supporting ML training to train a shared ML model while keeping the raw data local on each endpoint, which in turn may address user data privacy concerns where applicable.
Aspects of various embodiments herein may include one or more of the following: allowing a NWDAF containing a MTLF to support Federated Learning aggregation/participation capability; registration and discovery of a NWDAF containing a MTLF supporting Federated Learning aggregation/participation capability; and coordination of multiple NWDAFs, including selection of the participant NWDAF instances in the Federated Learning group, which entity performs the selection, and the decision of the role for each participant NWDAF.
Various embodiments herein are described further below.
Part 1: NWDAF containing MTLF that can support Federated Learning Registration with NRF:
In one embodiment, a NWDAF containing a MTLF that can support Federated Learning aggregation capability may register its NF profile with an NRF with the following included in its NF profile: Federated Learning Aggregation capability for ML model(s). An example of such registration is depicted in Figure 1.
Generally, the NWDAF containing a MTLF with Federated Learning Aggregation capability may be the network function responsible for one or more of the following (note, this list is intended to be illustrative rather than limiting. In some embodiments, the NWDAF may be responsible for one or more additional or alternative tasks or functions):
Sending the global ML model to other NWDAF(s) containing MTLF(s) to perform local model update based on its local data collected.
Receiving a local updated ML model from other NWDAF(s) containing MTLF(s), aggregating all of the local ML model updates, and updating its global ML model.
Sending the updated global ML model to one or more other NWDAFs that contain MTLFs that participated in the Federated Learning ML training iteration.
In another embodiment, a NWDAF containing a MTLF that can support Federated Learning participation capability may register its NF profile with a NRF with the following included in its NF profile: Federated Learning participation capability for ML model(s). An example of such registration is depicted in Figure 2.
The NWDAF containing a MTLF with Federated Learning participation capability may be responsible for one or more of the following (note, this list is intended to be illustrative rather than limiting. In some embodiments, the NWDAF may be responsible for one or more additional or alternative tasks or functions):
Data collection per Analytics ID for ML model training performed locally in the NWDAF containing MTLF with Federated Learning participation capability.
Sending a local updated ML model to a NWDAF containing MTLF with Federated Learning Aggregation capability.
Receiving a global updated ML model from a NWDAF containing a MTLF with Federated Learning Aggregation capability.
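The two registrations above (Federated Learning aggregation capability and Federated Learning participation capability in the NF profile) can be sketched as follows. This is a non-normative illustration: the field names (`flAggregationCapability`, `flParticipationCapability`, `mlAnalyticsList`) and the profile layout are assumptions for the example, not the 3GPP-defined encoding.

```python
def build_mtlf_profile(nf_instance_id, analytics_ids,
                       aggregation=False, participation=False):
    """Sketch NF profile for an NWDAF containing a MTLF that advertises
    Federated Learning capability per ML model / Analytics ID.
    Field names are illustrative, not the normative encoding."""
    return {
        "nfInstanceId": nf_instance_id,
        "nfType": "NWDAF",
        "nwdafInfo": {
            "mlAnalyticsList": [
                {
                    "analyticsId": aid,
                    # Hypothetical capability flags carried in the NF profile
                    "flAggregationCapability": aggregation,
                    "flParticipationCapability": participation,
                }
                for aid in analytics_ids
            ]
        },
    }

# An aggregator registers FL aggregation capability for an ML model, while
# a participant registers FL participation capability for the same model.
aggregator = build_mtlf_profile("mtlf-agg-1", ["nf-load"], aggregation=True)
participant = build_mtlf_profile("mtlf-part-1", ["nf-load"], participation=True)
```

In either case the profile would be sent to the NRF via Nnrf_NFManagement_NFRegister, with the capability advertised per ML model as shown.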
Part 2: Discovery of NWDAF containing MTLF that can support Federated Learning: A NWDAF containing a MTLF with Federated Learning Aggregation capability may be in the role of the service consumer with the NRF.
For a NWDAF containing a MTLF with Federated Learning Aggregation capability to discover a NWDAF containing a MTLF with Federated Learning participation capability using the NRF, a service consumer may send a Nnrf_NFDiscovery_Request to the NRF with the following additional input(s): Federated Learning participation capability for ML model(s).
The NRF may then return one or more instances of a NWDAF containing a MTLF with Federated Learning participation capability for ML model(s) to the NF consumer (e.g., the service consumer), and each instance of the returned NWDAF(s) may include ML Model Filter Information for the available trained ML models.
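The NRF-side matching for such a Nnrf_NFDiscovery_Request might be sketched as below. The profile layout and flag names are illustrative assumptions, mirroring a hypothetical registration format rather than the normative NRF data model.

```python
def discover_fl_participants(nf_profiles, analytics_id):
    """Sketch of NRF matching: return the NWDAF MTLF instances that
    advertise Federated Learning participation capability for the
    requested Analytics ID (field names are illustrative)."""
    matches = []
    for profile in nf_profiles:
        for entry in profile.get("nwdafInfo", {}).get("mlAnalyticsList", []):
            if (entry.get("analyticsId") == analytics_id
                    and entry.get("flParticipationCapability")):
                matches.append(profile["nfInstanceId"])
                break
    return matches

registered = [
    {"nfInstanceId": "mtlf-1", "nwdafInfo": {"mlAnalyticsList": [
        {"analyticsId": "nf-load", "flParticipationCapability": True}]}},
    {"nfInstanceId": "mtlf-2", "nwdafInfo": {"mlAnalyticsList": [
        {"analyticsId": "nf-load", "flParticipationCapability": False}]}},
]
# Only mtlf-1 would be returned to the service consumer for "nf-load".
```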
Example of NWDAF Capability included in NWDAFInfo in the NWDAF NF profile
Table 1: Example of NWDAF Capability included in NWDAFInfo in the NWDAF NF profile
Part 3: Example of new Nnwdaf_MLModelTraining service operations to support notification of updates to trained ML models, as a result of federated learning, to the service consumer
Table 2: Example of New NWDAF service operations
The Nnwdaf_MLModelTrainingUpdate service may be provided by an NWDAF containing a MTLF and consumed by an NWDAF containing an AnLF.
The Nnwdaf_MLModel_DistributedTraining service may be provided by an NWDAF containing a MTLF and consumed by an NWDAF containing a MTLF.
An example Nnwdaf_MLModelTrainingUpdate_Subscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
The consumer NF (e.g., a NWDAF containing an AnLF) subscribes to ML model training updates with a NWDAF containing a MTLF. The input for Nnwdaf_MLModelTrainingUpdate_Subscribe may be the Analytics ID(s) for which the training update is requested, Notification Target Address, Subscription Correlation ID (in the case of modification of the ML model subscription), and Expiry Time. The output of the Nnwdaf_MLModelTrainingUpdate_Subscribe operation may include the Subscription Correlation ID and Expiry Time.
An example Nnwdaf_MLModelTrainingUpdate_Unsubscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
The consumer NF (e.g., a NWDAF containing an AnLF) may unsubscribe from ML model training updates with a NWDAF containing a MTLF. The input may include the Subscription Correlation ID, and the output includes the service operation result.
An example Nnwdaf_MLModelTrainingUpdate_Notify service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
A NWDAF containing a MTLF notifies the ML model information to the consumer NF (e.g., a NWDAF containing an AnLF) that has subscribed to the specific NWDAF service. The input may include the Analytics ID(s) for which the updated trained ML model is available and/or the address of the updated trained ML model file (updated because the global ML model update after Federated Learning for the model is completed).
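The Subscribe/Unsubscribe/Notify bookkeeping described above can be sketched on the MTLF side as a small service class. The class name, payload fields, and Subscription Correlation ID format are illustrative assumptions, not the normative service definition.

```python
class TrainingUpdateService:
    """Sketch of Nnwdaf_MLModelTrainingUpdate handling at the MTLF:
    consumers subscribe per Analytics ID and are notified when the
    global ML model is updated (names are illustrative)."""

    def __init__(self):
        self._subs = {}     # Subscription Correlation ID -> (analytics ids, target)
        self._counter = 0

    def subscribe(self, analytics_ids, notification_target_address):
        self._counter += 1
        sub_id = f"sub-{self._counter}"
        self._subs[sub_id] = (set(analytics_ids), notification_target_address)
        return sub_id   # Subscription Correlation ID returned to the consumer

    def unsubscribe(self, sub_id):
        # Returns the service operation result: True if the subscription existed.
        return self._subs.pop(sub_id, None) is not None

    def notify(self, analytics_id, model_file_address):
        """Notifications that would be sent once the global ML model for
        analytics_id has been updated by federated learning."""
        return [
            {"target": target,
             "analyticsId": analytics_id,
             "mlModelFileAddress": model_file_address}
            for ids, target in self._subs.values()
            if analytics_id in ids
        ]
```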
An example Nnwdaf_MLModel_DistributedTraining_Request service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
A NWDAF containing a MTLF (with Federated Learning aggregation capability) in the role of service consumer sends a request to another NWDAF containing a MTLF (with Federated Learning participation capability). The input may include one or more of: the Analytics ID for which Federated Learning is required, the ML model (global ML model) file address, and the ML model reporting time limit. The ML model reporting time limit is the time within which the locally trained ML model needs to be reported back to the service consumer.
An example Nnwdaf_MLModel_DistributedTraining_Response service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
A NWDAF containing a MTLF (with Federated Learning participation capability) in the role of service producer sends a response to a NWDAF containing a MTLF (with Federated Learning aggregation capability) that includes the result of the operation. If the result of the operation is successful, then the response may include the ML model (local ML model) file address and/or validity period. If the result is not successful, the response may include an error code.
An example Nnwdaf_MLModel_DistributedTraining_Subscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
[Condition]: If the NWDAF containing a MTLF with Federated Learning participation capability, in the role of service producer, sends a response to the NWDAF containing a MTLF (with Federated Learning aggregation capability) with the result as success, then the NWDAF containing a MTLF with Federated Learning participation capability, now in the role of service consumer, subscribes for the ML model (the global ML model resulting from aggregation of the results of other NWDAFs containing MTLFs with Federated Learning participation capability) with the NWDAF containing a MTLF (with Federated Learning aggregation capability). The input includes the Analytics ID and Notification Target Address (+ Notification Correlation ID). The output includes the Subscription Correlation ID when the subscription is accepted.
An example Nnwdaf_MLModel_DistributedTraining_Notify service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
[Condition]: When the NWDAF containing a MTLF with Federated Learning participation capability has subscribed for the ML model (the global ML model resulting from aggregation of the results of other NWDAFs containing MTLFs with Federated Learning participation capability), the NWDAF containing a MTLF (with Federated Learning aggregation capability) may send a notification to the NWDAF containing a MTLF (with Federated Learning participation capability) that includes one or more of a Notification Correlation ID, the ML model (global ML model) file address, and/or a validity period.
An example Nnwdaf_MLModel_DistributedTraining_Unsubscribe service operation may include one or more of the following elements. It will be noted that the described service operation is intended as an example of such an operation, and the operation may include more, fewer, or different elements in other embodiments:
A NWDAF containing a MTLF with Federated Learning participation capability may unsubscribe from a NWDAF containing a MTLF (with Federated Learning aggregation capability). The input may include the Subscription Correlation ID. The output may include the result of the operation.
Part 4: Support of Distributed Learning (Federated Learning) to enable cooperation of multiple NWDAF containing MTLF instances to train an ML model in a 3GPP network
The discussion of this part may be made with reference to elements of Figures 3a and 3b (collectively, Figure 3).
1. NWDAF containing MTLF (Federated Learning aggregation capability) decides that the federated learning task for a given ML model (required to generate an Analytics ID) needs to be initiated. For example, the decision to initiate federated learning for a given ML model can be based on factors such as ML model accuracy.
2. NWDAF containing MTLF (Federated Learning aggregation capability) discovers the NWDAF instances containing MTLF (Federated Learning participation capability) via the NRF (as described in Part 2, above).
3. NWDAF containing MTLF (Federated Learning aggregation capability) decides on the list of NWDAF containing MTLF (Federated Learning participation capability) to participate in a given iteration of federated learning. How the NWDAF MTLF with Federated Learning aggregation capability selects the list of NWDAF MTLF with participation capabilities is in scope of the NWDAF application logic.
4. NWDAF containing MTLF (Federated Learning aggregation capability) sends Nnwdaf_MLModel_DistributedTraining_Request with the following parameters: Analytics ID(s), ML model (global) file address, ML model reporting time limit.
5. NWDAF containing MTLF (Federated Learning participation capability) sends Nnwdaf_MLModel_DistributedTraining_Response with the following parameters: result of the operation. If the result of the operation is successful, then the response includes the ML model (local ML model) file address and validity period. If the result is not successful, the response includes an error code and elements 6 and 7 are skipped.
6. If element 5 was successful, then NWDAF containing MTLF (Federated Learning participation capability) subscribes for ML model (global ML model as a result of aggregation from the result of other NWDAF containing MTLF with Federated Learning participation capability) with NWDAF containing MTLF (with Federated Learning aggregation capability) with the following parameters: Analytics ID, Notification Target Address (+ Notification Correlation ID)
7. If the NWDAF MTLF with Federated Learning Aggregation capability completes the global update for the ML model (global ML model as a result of aggregation from the result of other NWDAF containing MTLF with Federated Learning participation capability), the NWDAF containing MTLF (with Federated Learning aggregation capability) sends a notification to NWDAF containing MTLF (with Federated Learning participation capability) which includes the Notification Correlation ID, ML model (global ML model) file address, validity period.
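The global update in element 7 can be sketched as simple federated averaging over the local model updates returned by participants in element 5. Note that 3GPP does not mandate a particular aggregation algorithm; a FedAvg-style average, shown here on plain parameter lists, is one common and illustrative choice.

```python
def federated_average(local_updates, sample_counts=None):
    """FedAvg-style aggregation sketch: parameter-wise (optionally
    sample-weighted) average of the participants' local ML model updates."""
    if sample_counts is None:
        sample_counts = [1] * len(local_updates)
    total = sum(sample_counts)
    dim = len(local_updates[0])
    return [
        sum(update[i] * count
            for update, count in zip(local_updates, sample_counts)) / total
        for i in range(dim)
    ]

# Three participants report local updates for a two-parameter model; the
# aggregator averages them into the new global model (element 7).
new_global = federated_average([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
# → [3.0, 4.0]
```

The new global model would then be distributed back to the participants via Nnwdaf_MLModel_DistributedTraining_Notify for the next iteration.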
NWDAF MTLF Interoperability Support for Registration and Discovery
It has been identified that it may be desirable to perform one or more of the following goals:
Study whether and how to enhance trained ML Model sharing for different vendors. Whether and how the ADRF should store types of data other than historical data and analytics (e.g. ML models, analytics context) for network analytics.
In 3GPP Rel-17, the ML model sharing and services defined were restricted such that ML models cannot be shared between NWDAFs from different vendors, i.e., sharing of models or model metadata is limited to single vendor environments.
Various embodiments herein provide solutions related to one or more of the following examples:
Enabling ML model sharing between NWDAFs (containing AnLF/MTLF) from different vendors. o Allowing an ML model file attribute, included in the NF profile of the NWDAF containing a MTLF, to indicate a list of the supported ML model file serialization formats when registering with the NRF. o The NWDAF containing an AnLF, during discovery of the NWDAF containing a MTLF, includes the ML model file attribute(s) supported by the NWDAF containing the AnLF, which causes the NRF to return only NWDAF MTLF instances that support at least one matching file serialization format for an ML model.
Support for ML model file and associated ML model file attributes to be stored in the ADRF (Analytics and Data Repository Function) and supported new service operations by ADRF.
Support for conversion from one ML model file serialization format to another ML model file serialization format.
PART 1: Enable ML model sharing between NWDAFs (containing AnLF/MTLF) from different vendors - discovery and selection of a NWDAF containing a MTLF by an NWDAF containing an AnLF, where the two NWDAFs belong to different vendors
[Note: one or more of the solutions described below in PART 1, PART 2, and PART 3 may be applicable between NWDAFs belonging to the same vendor or different vendors.]
Example Solution 1: ML model filter information includes the ML model file serialization format
NWDAF containing MTLF registration with NRF:
As may be shown in Figure 4, a NWDAF containing a MTLF sends Nnrf_NFManagement_NFRegister to the NRF to inform the NRF of its NF profile. In addition to the NF profile parameters defined in TS 23.502, it includes the supported ML model file serialization formats for the trained ML model(s) in the ML model Filter information. Some examples of ML model file serialization formats are the ONNX format, H5 format, and Protobuf format. The ML model file serialization format(s) included in the ML model Filter information indicate the supported ML model file serialization format(s) for the trained ML model(s) available at the NWDAF containing the MTLF for consumption by the service consumer. The consumer of the services provided by the NWDAF containing the MTLF may be an NWDAF containing an AnLF or an NWDAF containing a MTLF. The consumer NF may belong to the same vendor as the NWDAF containing the MTLF or to a different vendor.
Table 3: Example of MlAnalyticsInfo data structure (ML model filter information)
NWDAF containing MTLF discovery via the NRF:
The NWDAF containing AnLF invokes a Nnrf_NFDiscovery_Request to an appropriately configured NRF. In addition to the parameters defined in TS 23.501, it includes the ML model file serialization format(s) supported for the trained ML model(s) in the ML model filter information. The NRF determines a set of NWDAF containing MTLF instance(s) matching at least one of the ML model file serialization formats supported in the Nnrf_NFDiscovery_Request and the internal policies of the NRF, and sends the NF profile(s) (including ML model file serialization format(s)) of the determined NWDAF containing MTLF instances in the Discovery Response.
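The NRF matching step above amounts to a set intersection on serialization formats. The sketch below is illustrative only; the profile field name `mlModelFileFormats` is an assumption for the example, not the normative attribute name.

```python
def filter_by_serialization_format(mtlf_profiles, consumer_formats):
    """Sketch of NRF matching: keep MTLF instances whose ML model filter
    information advertises at least one serialization format that the
    requesting NWDAF AnLF supports."""
    wanted = set(consumer_formats)
    return [
        p["nfInstanceId"] for p in mtlf_profiles
        if wanted & set(p.get("mlModelFileFormats", []))
    ]

profiles = [
    {"nfInstanceId": "mtlf-1", "mlModelFileFormats": ["ONNX", "H5"]},
    {"nfInstanceId": "mtlf-2", "mlModelFileFormats": ["Protobuf"]},
]
# An AnLF that only consumes ONNX models discovers mtlf-1 but not mtlf-2.
```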
Example of Nnwdaf_MLModelProvision services with the ML model format:
Service operation name: Nnwdaf_MLModelProvision_Subscribe.
Description: Subscribes to NWDAF ML model provision with specific parameters.
Inputs required: (set of) Analytics ID(s), Notification Target Address (+ Notification Correlation ID), ML model file serialization format requested.
Inputs, Optional: Subscription Correlation ID (in the case of modification of the ML model subscription), ML Model Filter Information to indicate the conditions for which ML model for the analytics is requested, and Target of ML Model Reporting to indicate the object(s) for which ML model is requested (e.g. specific UEs, a group of UE(s) or any UE (e.g. all UEs)), ML Model Reporting Information (including e.g. ML Model Target Period), Expiry time.
Outputs Required: When the subscription is accepted: Subscription Correlation ID (required for management of this subscription), Expiry time (required if the subscription can be expired based on the operator's policy).
Outputs, Optional: None.
When a NWDAF containing an AnLF subscribes to a NWDAF containing a MTLF using the Nnwdaf_MLModelProvision_Subscribe service operation, including the requested ML model file serialization format as input, the NWDAF containing the MTLF notifies the ML model information (address (e.g. URL or FQDN) of the model file) to the NWDAF containing the AnLF only if the ML model format requested in the input of Nnwdaf_MLModelProvision_Subscribe is a match.
Example Solution 2: The NF profile registration of NWDAF containing MTLF includes a new attribute for ML model file.
NWDAF containing MTLF registration with NRF:
As may be seen in Figure 5, a NWDAF containing a MTLF may send Nnrf_NFManagement_NFRegister to the NRF to inform the NRF of its NF profile. In addition to the NF profile parameters defined in TS 23.502, it includes ML model file specific information for the trained ML model(s) as a new attribute as shown below. Some examples of ML model file serialization formats are the ONNX format, H5 format, and Protobuf format. The ML model file specific information attribute includes the supported ML model file serialization formats for the trained ML model(s) available at the NWDAF containing the MTLF for consumption by the service consumer. The consumer of the services provided by the NWDAF containing the MTLF may be an NWDAF containing an AnLF or an NWDAF containing a MTLF. The consumer NF may belong to the same vendor as the NWDAF containing the MTLF or to a different vendor.
Table 4: NwdafInfo data type
NWDAF containing MTLF discovery via the NRF:
The NWDAF containing AnLF invokes a Nnrf_NFDiscovery_Request to an appropriately configured NRF. In addition to the parameters defined in TS 23.501, it includes the ML model file serialization formats supported for the trained ML model(s). The NRF determines a set of NWDAF containing MTLF instance(s) matching at least one of the ML model file serialization formats supported in the Nnrf_NFDiscovery_Request and the internal policies of the NRF, and sends the NF profile(s) of the determined NWDAF containing MTLF instances. Note: the Nnwdaf_MLModelProvision services with the ML model file serialization format, as defined in solution 1 of this part, described above, may be applicable for solution 2 as well.
PART 2: ML model file and associated ML model file attributes stored in the ADRF (Analytics and Data Repository Function) and supported new service operations by the ADRF
The Analytics and Data Repository Function defined in 3GPP Rel-17 may enable a consumer to store and retrieve data and analytics. Embodiments herein may extend the functionality of an ADRF to enable a NWDAF containing a MTLF to store and retrieve trained ML model(s).
An example service defined for ADRF to support storage and retrieval of trained ML model(s) may include one or more of the following:
Nadrf_MLModelManagement service: This service enables the consumer NWDAF containing a MTLF to store, retrieve, and remove ML model(s) from an ADRF.
Nadrf_MLModelManagement service operations: a. Nadrf_MLModelManagement_StorageRequest service operation b. Nadrf_MLModelManagement_RetrievalRequest c. Nadrf_MLModelManagement_RetrievalTrainingUpdateSubscribe d. Nadrf_MLModelManagement_RetrievalTrainingUpdateUnsubscribe e. Nadrf_MLModelManagement_RetrievalTrainingUpdateNotify f. Nadrf_MLModelManagement_Delete g. Nadrf_MLModelManagement_FormatConversionRequest
Generally, an example of the above-described service may include or relate to one or more of the following elements, which are described with respect to Figures 6a and 6b (collectively, Figure 6).
1. The NWDAF containing AnLF sends a Nadrf_MLModelManagement_RetrievalRequest which includes the Analytics ID(s), ML Model Filter Info, ML model file specific information, and Target NF (NWDAF MTLF) to subscribe for notifications.
2. The ADRF, based on internal application logic, determines if the ML model file for the Analytics ID(s) requested is already stored. If the ML model file for the Analytics ID(s) requested is not stored in the ADRF, then elements 3a, 4a, 5a, and 6a are performed. If the ML model file for the Analytics ID(s) requested is stored in the ADRF, then elements 3a, 4a, 5a, and 6a are skipped.
3a. The ADRF sends Nnwdaf_MLModelProvision_Subscribe with the input parameters defined in TS 23.502 and the additional input parameter ML model file specific information (ML model file serialization format).
4a. The NWDAF containing MTLF sends a Nnwdaf_MLModelProvision_Notify with the following parameters: Analytics ID, trained ML model file address, Notification Correlation ID.
5a. The ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with the input parameters defined in TS 23.502 and the additional input parameters Analytics ID(s) and ML model file specific information (ML model file serialization format).
6a. When the ML model for which the ADRF has subscribed for ML model training updates has been updated, the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with the following parameters: Analytics ID, trained ML model file address, Notification Correlation ID.
3. The ADRF sends a response back to the NWDAF containing AnLF using Nadrf_MLModelManagement_RetrievalRequest_Response with the following parameters: ML Model File Information (trained ML model file address, ML model file serialization format, trained ML Model ID per Analytics ID).
4. The NWDAF containing AnLF subscribes to the ADRF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Subscribe service operation containing the input parameter trained ML Model ID per Analytics ID.
5. The ADRF sends a notification to the NWDAF containing AnLF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Notify service operation containing the following parameters: ML Model File Information (trained ML model file address, ML model file serialization format, trained ML Model ID per Analytics ID).
6. Based on internal application logic, NWDAF containing AnLF determines that the ML model training update is no longer required.
7. The NWDAF containing AnLF sends Nadrf_MLModelManagement_RetrievalTrainingUpdate_Unsubscribe with the Subscription Correlation ID as input parameter.
8. The ADRF determines if any of the NWDAF AnLF consumer(s) have a subscription for ML model training updates per Analytics ID. If no consumer has a subscription for ML model training updates per Analytics ID, the ADRF removes the ML model file and ML model file specific information and proceeds to element 9. If the ADRF determines that NWDAF AnLF consumer(s) still have a subscription for ML model training updates per Analytics ID, then element 9 is skipped.
9. The ADRF sends Nnwdaf_MLModelTrainingUpdate_Unsubscribe to the NWDAF containing MTLF with the Subscription Correlation ID as input parameter.
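The ADRF-side decisions in the flow above (serve a cached model in element 2, fetch and cache it from the MTLF otherwise, and release the upstream training-update subscription once the last consumer is gone in elements 8 and 9) can be sketched as follows. Class and method names are illustrative stand-ins, not the normative Nadrf service API.

```python
class AdrfModelStore:
    """Sketch of ADRF decision logic for trained ML model retrieval and
    cleanup (element numbers refer to the example flow above)."""

    def __init__(self, fetch_from_mtlf):
        self._fetch = fetch_from_mtlf   # callable: analytics_id -> file address
        self._models = {}               # analytics_id -> stored model file address
        self._consumers = {}            # analytics_id -> set of consumer ids

    def retrieve(self, analytics_id, consumer_id):
        if analytics_id not in self._models:
            # Not stored yet (element 2): obtain the trained model from the
            # NWDAF containing MTLF (elements 3a-6a) and cache it.
            self._models[analytics_id] = self._fetch(analytics_id)
        self._consumers.setdefault(analytics_id, set()).add(consumer_id)
        return self._models[analytics_id]

    def unsubscribe(self, analytics_id, consumer_id):
        """Return True when the upstream MTLF subscription should be
        released (element 9), i.e. no consumer remains subscribed."""
        remaining = self._consumers.get(analytics_id, set())
        remaining.discard(consumer_id)
        if not remaining:
            # Element 8: last consumer gone, drop the stored model file.
            self._models.pop(analytics_id, None)
            return True
        return False
```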
Another example process is depicted in Figures 7a and 7b (collectively, Figure 7). The example process of Figure 7 relates to trained ML model storage in an ADRF, and is described below:
Initially, if trained ML model storage is triggered by NWDAF containing AnLF then elements 1 and 2 are performed as follows.
1. The NWDAF containing AnLF sends Nnwdaf_MLModelInfo_Request with the following input parameters: Analytics ID(s), ML model file specific information (ML model file serialization format), and Notification endpoint address (ADRF), to the NWDAF containing MTLF.
2. The NWDAF containing MTLF sends Nnwdaf_MLModelInfo_Response with the following parameters: Analytics ID(s), trained ML model file address.
If trained model storage is triggered by the ADRF, then elements 1a and 2a are performed as follows.
1a. The ADRF sends Nnwdaf_MLModelProvision_Subscribe with the following input parameter: ML model file specific information (ML model file serialization format).
2a. The NWDAF containing MTLF sends Nnwdaf_MLModelProvision_Notify with the following input parameters: Analytics ID, trained ML model file address, Notification Correlation ID.
3. The NWDAF containing MTLF sends Nadrf_MLModelManagement_StorageRequest with the input parameters Analytics ID(s), trained ML model file address, and ML model file specific information (ML model file serialization format).
4. The ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with the input parameters Analytics ID(s) and ML model file specific information (ML model file serialization format).
5. When the ML model for which the ADRF has subscribed for ML model training updates has been updated, the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with the input parameters Analytics ID, trained ML model file address, Notification Correlation ID.
PART 3: ML model file and associated ML model file attributes stored in the newly defined ML Model Storage Function (MLMS) and supported new service operations by the MLMS
Generally, the call flows as described in PART 2 of this section, above, are applicable with one or more of the following example differences:
The ADRF as described with respect to Figures 6 and 7 may be replaced by a ML Model Storage Function (MLMS). The Nmlms_MLModelManagement service may be supported by the MLMS. The MLMS may enable the consumer NWDAF containing a MTLF to store, retrieve, and remove ML model(s) from the MLMS.
Nmlms_MLModelManagement service operations may include one or more of the following examples: a. Nmlms_MLModelManagement_StorageRequest service operation b. Nmlms_MLModelManagement_RetrievalRequest c. Nmlms_MLModelManagement_RetrievalTrainingUpdateSubscribe d. Nmlms_MLModelManagement_RetrievalTrainingUpdateUnsubscribe e. Nmlms_MLModelManagement_RetrievalTrainingUpdateNotify f. Nmlms_MLModelManagement_Delete
MLMS discovery and selection:
The NF consumers (NWDAF containing MTLF and/or AnLF) may utilize the NRF to discover MLMS instance(s) unless MLMS information is available by other means, e.g. locally configured on the NF consumers. The MLMS selection function in the NF consumers selects an MLMS instance from the available MLMS instances. Single-Network Slice Selection Assistance Information (S-NSSAI) may be considered by the NF consumer for MLMS selection.
Example Solution 3: Support conversion from one ML model file serialization format to another ML model file serialization format
A new service defined for the ADRF to support conversion from one ML model file serialization format to another ML model file serialization format may be, include, or relate to Nadrf_MLModelManagement_FormatConversionRequest/Response. As depicted in Figure 8, in case a NWDAF containing an AnLF prefers an ML model file serialization format not supported at the NWDAF containing the MTLF, it may request the ADRF to perform ML model file serialization format conversion as follows (note, the process of Figure 8 is intended as one example process, and other embodiments may differ):
1. The NWDAF containing AnLF sends Nadrf_MLModelManagement_FormatConversionRequest with the following input parameters: ML model file specific information (trained ML model file address, ML model file serialization format), target ML model file serialization format.
2. The ADRF sends a response back to the NWDAF containing AnLF using Nadrf_MLModelManagement_FormatConversionRequest_Response with the following parameters: ML Model File Information (trained ML model file address, ML model file serialization format).
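The request/response handling for such a conversion service can be sketched as a lookup over supported (source, target) format pairs. The converter registry below is an illustrative stand-in for real conversion tooling (e.g. framework-specific export/import utilities), and the result fields are hypothetical names, not the normative payload.

```python
def handle_format_conversion(model_file, source_format, target_format, converters):
    """Sketch of format-conversion request handling: look up a converter
    for the (source, target) pair, or report an error when the requested
    conversion is unsupported."""
    if source_format == target_format:
        # Nothing to convert; return the model file as-is.
        return {"result": "SUCCESS", "mlModelFileAddress": model_file,
                "mlModelFileFormat": target_format}
    convert = converters.get((source_format, target_format))
    if convert is None:
        return {"result": "ERROR", "cause": "CONVERSION_NOT_SUPPORTED"}
    return {"result": "SUCCESS", "mlModelFileAddress": convert(model_file),
            "mlModelFileFormat": target_format}

# A registry with a single supported conversion, H5 -> ONNX (illustrative).
registry = {("H5", "ONNX"): lambda addr: addr.replace(".h5", ".onnx")}
```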
As an alternative option, the call flows as described in Figure 8 may be applicable with one or more of the following example differences:
The ADRF in Figure 8 may be replaced by a ML Model Storage Function (MLMS). A new service operation Nmlms_MLModelManagement_FormatConversion_Request/Response is used.
SYSTEMS AND IMPLEMENTATIONS
Figures 9-13 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
Figure 9 illustrates a network 900 in accordance with various embodiments. The network 900 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
The network 900 may include a UE 902, which may include any mobile or non-mobile computing device designed to communicate with a RAN 904 via an over-the-air connection. The UE 902 may be communicatively coupled with the RAN 904 by a Uu interface. The UE 902 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
In some embodiments, the network 900 may include a plurality of UEs coupled directly with one another via a sidelink interface. The UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
In some embodiments, the UE 902 may additionally communicate with an AP 906 via an over-the-air connection. The AP 906 may manage a WLAN connection, which may serve to offload some/all network traffic from the RAN 904. The connection between the UE 902 and the AP 906 may be consistent with any IEEE 802.11 protocol, wherein the AP 906 could be a wireless fidelity (Wi-Fi®) router. In some embodiments, the UE 902, RAN 904, and AP 906 may utilize cellular-WLAN aggregation (for example, LWA/LWIP). Cellular-WLAN aggregation may involve the UE 902 being configured by the RAN 904 to utilize both cellular radio resources and WLAN resources.
The RAN 904 may include one or more access nodes, for example, AN 908. AN 908 may terminate air-interface protocols for the UE 902 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and L1 protocols. In this manner, the AN 908 may enable data/voice connectivity between CN 920 and the UE 902. In some embodiments, the AN 908 may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network, which may be referred to as a CRAN or virtual baseband unit pool. The AN 908 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, TRP, etc. The AN 908 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.
In embodiments in which the RAN 904 includes a plurality of ANs, they may be coupled with one another via an X2 interface (if the RAN 904 is an LTE RAN) or an Xn interface (if the RAN 904 is a 5G RAN). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
The ANs of the RAN 904 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 902 with an air interface for network access. The UE 902 may be simultaneously connected with a plurality of cells provided by the same or different ANs of the RAN 904. For example, the UE 902 and RAN 904 may use carrier aggregation to allow the UE 902 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell. In dual connectivity scenarios, a first AN may be a master node that provides an MCG and a second AN may be a secondary node that provides an SCG. The first/second ANs may be any combination of eNB, gNB, ng-eNB, etc.
The RAN 904 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
In V2X scenarios the UE 902 or AN 908 may be or act as a RSU, which may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE. An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services. The components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
In some embodiments, the RAN 904 may be an LTE RAN 910 with eNBs, for example, eNB 912. The LTE RAN 910 may provide an LTE air interface with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.
In some embodiments, the RAN 904 may be an NG-RAN 914 with gNBs, for example, gNB 916, or ng-eNBs, for example, ng-eNB 918. The gNB 916 may connect with 5G-enabled UEs using a 5G NR interface. The gNB 916 may connect with a 5G core through an NG interface, which may include an N2 interface or an N3 interface. The ng-eNB 918 may also connect with the 5G core through an NG interface, but may connect with a UE via an LTE air interface. The gNB 916 and the ng-eNB 918 may connect with each other over an Xn interface.
In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 914 and a UPF 948 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 914 and an AMF 944 (e.g., N2 interface).
The NG-RAN 914 may provide a 5G-NR air interface with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
In some embodiments, the 5G-NR air interface may utilize BWPs for various purposes. For example, BWP can be used for dynamic adaptation of the SCS. For example, the UE 902 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 902, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE 902 with different amounts of frequency resources (for example, PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 902 and in some cases at the gNB 916. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
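The power-saving use of BWPs described above may be sketched, purely for illustration, as a selection among configured bandwidth parts based on traffic load. The data shapes and selection policy are illustrative assumptions, not a normative BWP-switching procedure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BwpConfig:
    bwp_id: int
    num_prbs: int  # amount of frequency resources (PRBs) in this BWP
    scs_khz: int   # subcarrier spacing configured for this BWP

def select_bwp(configs: List[BwpConfig], traffic_load_prbs: int) -> BwpConfig:
    """Pick the narrowest configured BWP that still covers the current
    traffic load, favoring power saving at the UE (and possibly the gNB)."""
    candidates = [c for c in configs if c.num_prbs >= traffic_load_prbs]
    if not candidates:
        # Fall back to the widest BWP when the load exceeds every option.
        return max(configs, key=lambda c: c.num_prbs)
    return min(candidates, key=lambda c: c.num_prbs)
```

Under this sketch, a small traffic load maps to a narrow BWP for power saving, while a heavy load maps to a wide BWP.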
The RAN 904 is communicatively coupled to CN 920 that includes network elements to provide various functions to support data and telecommunications services to customers/subscribers (for example, users of UE 902). The components of the CN 920 may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 920 onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN 920 may be referred to as a network slice, and a logical instantiation of a portion of the CN 920 may be referred to as a network sub-slice.
In some embodiments, the CN 920 may be an LTE CN 922, which may also be referred to as an EPC. The LTE CN 922 may include MME 924, SGW 926, SGSN 928, HSS 930, PGW 932, and PCRF 934 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the LTE CN 922 may be briefly introduced as follows.
The MME 924 may implement mobility management functions to track a current location of the UE 902 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
The SGW 926 may terminate an S1 interface toward the RAN and route data packets between the RAN and the LTE CN 922. The SGW 926 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
The SGSN 928 may track a location of the UE 902 and perform security functions and access control. In addition, the SGSN 928 may perform inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 924; MME selection for handovers; etc. The S3 reference point between the MME 924 and the SGSN 928 may enable user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
The HSS 930 may include a database for network users, including subscription-related information to support the network entities’ handling of communication sessions. The HSS 930 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. An S6a reference point between the HSS 930 and the MME 924 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the LTE CN 922.
The PGW 932 may terminate an SGi interface toward a data network (DN) 936 that may include an application/content server 938. The PGW 932 may route data packets between the LTE CN 922 and the data network 936. The PGW 932 may be coupled with the SGW 926 by an S5 reference point to facilitate user plane tunneling and tunnel management. The PGW 932 may further include a node for policy enforcement and charging data collection (for example, PCEF). Additionally, the SGi reference point between the PGW 932 and the data network 936 may be an operator external public or private PDN, or an intra-operator packet data network, for example, for provision of IMS services. The PGW 932 may be coupled with a PCRF 934 via a Gx reference point.
The PCRF 934 is the policy and charging control element of the LTE CN 922. The PCRF 934 may be communicatively coupled to the app/content server 938 to determine appropriate QoS and charging parameters for service flows. The PCRF 934 may provision associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
In some embodiments, the CN 920 may be a 5GC 940. The 5GC 940 may include an AUSF 942, AMF 944, SMF 946, UPF 948, NSSF 950, NEF 952, NRF 954, PCF 956, UDM 958, and AF 960 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the 5GC 940 may be briefly introduced as follows.
The AUSF 942 may store data for authentication of UE 902 and handle authentication-related functionality. The AUSF 942 may facilitate a common authentication framework for various access types. In addition to communicating with other elements of the 5GC 940 over reference points as shown, the AUSF 942 may exhibit an Nausf service-based interface.
The AMF 944 may allow other functions of the 5GC 940 to communicate with the UE 902 and the RAN 904 and to subscribe to notifications about mobility events with respect to the UE 902. The AMF 944 may be responsible for registration management (for example, for registering UE 902), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 944 may provide transport for SM messages between the UE 902 and the SMF 946, and act as a transparent proxy for routing SM messages. AMF 944 may also provide transport for SMS messages between UE 902 and an SMSF. AMF 944 may interact with the AUSF 942 and the UE 902 to perform various security anchor and context management functions. Furthermore, AMF 944 may be a termination point of a RAN CP interface, which may include or be an N2 reference point between the RAN 904 and the AMF 944; and the AMF 944 may be a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection. AMF 944 may also support NAS signaling with the UE 902 over an N3IWF interface.
The SMF 946 may be responsible for SM (for example, session establishment, tunnel management between UPF 948 and AN 908); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 948 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 944 over N2 to AN 908; and determining SSC mode of a session. SM may refer to management of a PDU session, and a PDU session or “session” may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 902 and the data network 936.
The UPF 948 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 936, and a branching point to support multi-homed PDU sessions. The UPF 948 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF-to-QoS flow mapping), perform transport-level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering. UPF 948 may include an uplink classifier to support routing traffic flows to a data network.
The NSSF 950 may select a set of network slice instances serving the UE 902. The NSSF 950 may also determine allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 950 may also determine the AMF set to be used to serve the UE 902, or a list of candidate AMFs based on a suitable configuration and possibly by querying the NRF 954. The selection of a set of network slice instances for the UE 902 may be triggered by the AMF 944 with which the UE 902 is registered by interacting with the NSSF 950, which may lead to a change of AMF. The NSSF 950 may interact with the AMF 944 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown). Additionally, the NSSF 950 may exhibit an Nnssf service-based interface.
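The NSSF behavior described above may be illustrated, under assumed and simplified data shapes, as intersecting the requested S-NSSAIs with the subscription to form the allowed NSSAI and then choosing an AMF set that supports all allowed slices. This is a sketch of the described selection, not the normative Nnssf procedure.

```python
def select_slices(requested_snssais, subscribed_snssais, amf_sets):
    """Illustrative NSSF sketch.

    requested_snssais: S-NSSAIs requested by the UE (list of strings)
    subscribed_snssais: S-NSSAIs the UE is subscribed to
    amf_sets: mapping of AMF set name -> S-NSSAIs that set supports
    Returns (allowed NSSAI, chosen AMF set or None).
    """
    # Allowed NSSAI: requested slices that the subscription permits.
    allowed = [s for s in requested_snssais if s in subscribed_snssais]
    # Pick the first AMF set that supports every allowed slice.
    for amf_set, supported in amf_sets.items():
        if all(s in supported for s in allowed):
            return allowed, amf_set
    return allowed, None
```

In practice the NSSF may instead return a list of candidate AMFs, possibly after querying the NRF, as noted above.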
The NEF 952 may securely expose services and capabilities provided by 3GPP network functions for third party, internal exposure/re-exposure, AFs (e.g., AF 960), edge computing or fog computing systems, etc. In such embodiments, the NEF 952 may authenticate, authorize, or throttle the AFs. NEF 952 may also translate information exchanged with the AF 960 and information exchanged with internal network functions. For example, the NEF 952 may translate between an AF-Service-Identifier and internal 5GC information. NEF 952 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 952 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 952 to other NFs and AFs, or used for other purposes such as analytics. Additionally, the NEF 952 may exhibit an Nnef service-based interface.
The NRF 954 may support service discovery functions, receive NF discovery requests from NF instances, and provide the information of the discovered NF instances to the NF instances. NRF 954 also maintains information of available NF instances and their supported services. As used herein, the terms “instantiate,” “instantiation,” and the like may refer to the creation of an instance, and an “instance” may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. Additionally, the NRF 954 may exhibit the Nnrf service-based interface.
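For illustration, the service discovery role of the NRF described above may be sketched as a registry in which NF instances register their supported services and consumers discover instances by service name. The class and method names are illustrative assumptions, not the Nnrf API.

```python
class Nrf:
    """Minimal sketch of NRF service discovery: NF instances register
    supported services; consumers discover instances by service name."""

    def __init__(self):
        self._instances = {}  # NF instance ID -> set of service names

    def register(self, nf_instance_id, services):
        # Maintain information of an available NF instance and its services.
        self._instances[nf_instance_id] = set(services)

    def discover(self, service_name):
        # Answer a discovery request with all instances offering the service.
        return sorted(
            nf_id for nf_id, svcs in self._instances.items()
            if service_name in svcs
        )
```

A consumer NF would call `discover` with the service it needs and select among the returned instances.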
The PCF 956 may provide policy rules to control plane functions to enforce them, and may also support a unified policy framework to govern network behavior. The PCF 956 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 958. In addition to communicating with functions over reference points as shown, the PCF 956 may exhibit an Npcf service-based interface.
The UDM 958 may handle subscription-related information to support the network entities’ handling of communication sessions, and may store subscription data of UE 902. For example, subscription data may be communicated via an N8 reference point between the UDM 958 and the AMF 944. The UDM 958 may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 958 and the PCF 956, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 902) for the NEF 952. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 958, PCF 956, and NEF 952 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 958 may exhibit the Nudm service-based interface.
The AF 960 may provide application influence on traffic routing, provide access to NEF, and interact with the policy framework for policy control.
In some embodiments, the 5GC 940 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 902 is attached to the network. This may reduce latency and load on the network. To provide edge-computing implementations, the 5GC 940 may select a UPF 948 close to the UE 902 and execute traffic steering from the UPF 948 to data network 936 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 960. In this way, the AF 960 may influence UPF (re)selection and traffic routing. Based on operator deployment, when AF 960 is considered to be a trusted entity, the network operator may permit AF 960 to interact directly with relevant NFs. Additionally, the AF 960 may exhibit an Naf service-based interface.
The data network 936 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application/content server 938.
Figure 10 schematically illustrates a wireless network 1000 in accordance with various embodiments. The wireless network 1000 may include a UE 1002 in wireless communication with an AN 1004. The UE 1002 and AN 1004 may be similar to, and substantially interchangeable with, like-named components described elsewhere herein.
The UE 1002 may be communicatively coupled with the AN 1004 via connection 1006. The connection 1006 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6 GHz frequencies.
The UE 1002 may include a host platform 1008 coupled with a modem platform 1010. The host platform 1008 may include application processing circuitry 1012, which may be coupled with protocol processing circuitry 1014 of the modem platform 1010. The application processing circuitry 1012 may run various applications for the UE 1002 that source/sink application data. The application processing circuitry 1012 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
The protocol processing circuitry 1014 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 1006. The layer operations implemented by the protocol processing circuitry 1014 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
The modem platform 1010 may further include digital baseband circuitry 1016 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1014 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
The modem platform 1010 may further include transmit circuitry 1018, receive circuitry 1020, RF circuitry 1022, and RF front end (RFFE) 1024, which may include or connect to one or more antenna panels 1026. Briefly, the transmit circuitry 1018 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry 1020 may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry 1022 may include a low-noise amplifier, a power amplifier, power tracking components, etc.; RFFE 1024 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. The selection and arrangement of the components of the transmit circuitry 1018, receive circuitry 1020, RF circuitry 1022, RFFE 1024, and antenna panels 1026 (referred to generically as “transmit/receive components”) may be specific to the details of a particular implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
In some embodiments, the protocol processing circuitry 1014 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
A UE reception may be established by and via the antenna panels 1026, RFFE 1024, RF circuitry 1022, receive circuitry 1020, digital baseband circuitry 1016, and protocol processing circuitry 1014. In some embodiments, the antenna panels 1026 may receive a transmission from the AN 1004 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1026.
A UE transmission may be established by and via the protocol processing circuitry 1014, digital baseband circuitry 1016, transmit circuitry 1018, RF circuitry 1022, RFFE 1024, and antenna panels 1026. In some embodiments, the transmit components of the UE 1002 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1026.
Similar to the UE 1002, the AN 1004 may include a host platform 1028 coupled with a modem platform 1030. The host platform 1028 may include application processing circuitry 1032 coupled with protocol processing circuitry 1034 of the modem platform 1030. The modem platform may further include digital baseband circuitry 1036, transmit circuitry 1038, receive circuitry 1040, RF circuitry 1042, RFFE circuitry 1044, and antenna panels 1046. The components of the AN 1004 may be similar to and substantially interchangeable with like-named components of the UE 1002. In addition to performing data transmission/reception as described above, the components of the AN 1004 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
Figure 11 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, Figure 11 shows a diagrammatic representation of hardware resources 1100 including one or more processors (or processor cores) 1110, one or more memory/storage devices 1120, and one or more communication resources 1130, each of which may be communicatively coupled via a bus 1140 or other interface circuitry. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 1102 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1100.
The processors 1110 may include, for example, a processor 1112 and a processor 1114. The processors 1110 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radiofrequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
The memory/storage devices 1120 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 1120 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc.
The communication resources 1130 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1104 or one or more databases 1106 or other network elements via a network 1108. For example, the communication resources 1130 may include wired communication components (e.g., for coupling via USB, Ethernet, etc.), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.
Instructions 1150 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 1110 to perform any one or more of the methodologies discussed herein. The instructions 1150 may reside, completely or partially, within at least one of the processors 1110 (e.g., within the processor’s cache memory), the memory/storage devices 1120, or any suitable combination thereof. Furthermore, any portion of the instructions 1150 may be transferred to the hardware resources 1100 from any combination of the peripheral devices 1104 or the databases 1106. Accordingly, the memory of processors 1110, the memory/storage devices 1120, the peripheral devices 1104, and the databases 1106 are examples of computer-readable and machine-readable media.
Figure 12 illustrates a network 1200 in accordance with various embodiments. The network 1200 may operate in a manner consistent with 3GPP technical specifications or technical reports for 6G systems. In some embodiments, the network 1200 may operate concurrently with network 900. For example, in some embodiments, the network 1200 may share one or more frequency or bandwidth resources with network 900. As one specific example, a UE (e.g., UE 1202) may be configured to operate in both network 1200 and network 900. Such configuration may be based on a UE including circuitry configured for communication with frequency and bandwidth resources of both networks 900 and 1200. In general, several elements of network 1200 may share one or more characteristics with elements of network 900. For the sake of brevity and clarity, such elements may not be repeated in the description of network 1200.
The network 1200 may include a UE 1202, which may include any mobile or non-mobile computing device designed to communicate with a RAN 1208 via an over-the-air connection. The UE 1202 may be similar to, for example, UE 902. The UE 1202 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
Although not specifically shown in Figure 12, in some embodiments the network 1200 may include a plurality of UEs coupled directly with one another via a sidelink interface. The UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc. Similarly, although not specifically shown in Figure 12, the UE 1202 may be communicatively coupled with an AP such as AP 906 as described with respect to Figure 9. Additionally, although not specifically shown in Figure 12, in some embodiments the RAN 1208 may include one or more ANs such as AN 908 as described with respect to Figure 9. The RAN 1208 and/or the AN of the RAN 1208 may be referred to as a base station (BS), a RAN node, or using some other term or name.
The UE 1202 and the RAN 1208 may be configured to communicate via an air interface that may be referred to as a sixth generation (6G) air interface. The 6G air interface may include one or more features such as communication in a terahertz (THz) or sub-THz bandwidth, or joint communication and sensing. As used herein, the term “joint communication and sensing” may refer to a system that allows for wireless communication as well as radar-based sensing via various types of multiplexing. As used herein, THz or sub-THz bandwidths may refer to communication in the 80 GHz and above frequency ranges. Such frequency ranges may additionally or alternatively be referred to as “millimeter wave” or “mmWave” frequency ranges.
The RAN 1208 may allow for communication between the UE 1202 and a 6G core network (CN) 1210. Specifically, the RAN 1208 may facilitate the transmission and reception of data between the UE 1202 and the 6G CN 1210. The 6G CN 1210 may include various functions such as NSSF 950, NEF 952, NRF 954, PCF 956, UDM 958, AF 960, SMF 946, and AUSF 942. The 6G CN 1210 may additionally include UPF 948 and DN 936 as shown in Figure 12.
Additionally, the RAN 1208 may include various additional functions that are in addition to, or alternative to, functions of a legacy cellular network such as a 4G or 5G network. Two such functions may include a Compute Control Function (Comp CF) 1224 and a Compute Service Function (Comp SF) 1236. The Comp CF 1224 and the Comp SF 1236 may be parts or functions of the Computing Service Plane. Comp CF 1224 may be a control plane function that provides functionalities such as management of the Comp SF 1236, computing task context generation and management (e.g., create, read, modify, delete), interaction with the underlying computing infrastructure for computing resource management, etc. Comp SF 1236 may be a user plane function that serves as the gateway to interface computing service users (such as UE 1202) and computing nodes behind a Comp SF instance. Some functionalities of the Comp SF 1236 may include: parsing computing service data received from users into computing tasks executable by computing nodes; hosting a service mesh ingress gateway or service API gateway; enforcing service and charging policies; performance monitoring and telemetry collection; etc. In some embodiments, a Comp SF 1236 instance may serve as the user plane gateway for a cluster of computing nodes. A Comp CF 1224 instance may control one or more Comp SF 1236 instances.
Two other such functions may include a Communication Control Function (Comm CF) 1228 and a Communication Service Function (Comm SF) 1238, which may be parts of the Communication Service Plane. The Comm CF 1228 may be the control plane function for managing the Comm SF 1238, communication sessions creation/configuration/releasing, and managing communication session context. The Comm SF 1238 may be a user plane function for data transport. Comm CF 1228 and Comm SF 1238 may be considered as upgrades of SMF 946 and UPF 948, which were described with respect to a 5G system in Figure 9. The upgrades provided by the Comm CF 1228 and the Comm SF 1238 may enable service-aware transport. For legacy (e.g., 4G or 5G) data transport, SMF 946 and UPF 948 may still be used.
Two other such functions may include a Data Control Function (Data CF) 1222 and a Data Service Function (Data SF) 1232, which may be parts of the Data Service Plane. Data CF 1222 may be a control plane function that provides functionalities such as Data SF 1232 management, data service creation/configuration/releasing, data service context management, etc. Data SF 1232 may be a user plane function that serves as the gateway between data service users (such as UE 1202 and the various functions of the 6G CN 1210) and data service endpoints behind the gateway. Specific functionalities may include: parsing data service user data and forwarding it to corresponding data service endpoints, generating charging data, and reporting data service status.
Another such function may be the Service Orchestration and Chaining Function (SOCF) 1220, which may discover, orchestrate, and chain up communication/computing/data services provided by functions in the network. Upon receiving service requests from users, SOCF 1220 may interact with one or more of Comp CF 1224, Comm CF 1228, and Data CF 1222 to identify Comp SF 1236, Comm SF 1238, and Data SF 1232 instances, configure service resources, and generate the service chain, which could contain multiple Comp SF 1236, Comm SF 1238, and Data SF 1232 instances and their associated computing endpoints. Workload processing and data movement may then be conducted within the generated service chain. The SOCF 1220 may also be responsible for maintaining, updating, and releasing a created service chain.
Another such function may be the service registration function (SRF) 1214, which may act as a registry for system services provided in the user plane such as services provided by service endpoints behind Comp SF 1236 and Data SF 1232 gateways and services provided by the UE 1202. The SRF 1214 may be considered a counterpart of NRF 954, which may act as the registry for network functions.
Other such functions may include an evolved service communication proxy (eSCP) and service infrastructure control function (SICF) 1226, which may provide service communication infrastructure for control plane services and user plane services. The eSCP may be related to the service communication proxy (SCP) of 5G, with user plane service communication proxy capabilities being added. The eSCP is therefore expressed in two parts: eSCP-C 1212 and eSCP-U 1234, for control plane service communication proxy and user plane service communication proxy, respectively. The SICF 1226 may control and configure eSCP instances in terms of service traffic routing policies, access rules, load balancing configurations, performance monitoring, etc.
Another such function is the AMF 1244. The AMF 1244 may be similar to AMF 944, but with additional functionality. Specifically, the AMF 1244 may include potential functional repartition, such as moving the message forwarding functionality from the AMF 1244 to the RAN 1208.
Another such function is the service orchestration exposure function (SOEF) 1218. The SOEF may be configured to expose service orchestration and chaining services to external users such as applications.
The UE 1202 may include an additional function that is referred to as a computing client service function (comp CSF) 1204. The comp CSF 1204 may have both the control plane functionalities and user plane functionalities, and may interact with corresponding network side functions such as SOCF 1220, Comp CF 1224, Comp SF 1236, Data CF 1222, and/or Data SF 1232 for service discovery, request/response, compute task workload exchange, etc. The Comp CSF 1204 may also work with network side functions to decide on whether a computing task should be run on the UE 1202, the RAN 1208, and/or an element of the 6G CN 1210.
The UE 1202 and/or the Comp CSF 1204 may include a service mesh proxy 1206. The service mesh proxy 1206 may act as a proxy for service-to-service communication in the user plane. Capabilities of the service mesh proxy 1206 may include one or more of addressing, security, load balancing, etc.
Figure 13 illustrates a simplified block diagram of artificial intelligence (AI)-assisted communication between a UE 1305 and a RAN 1310, in accordance with various embodiments. More specifically, as described in further detail below, AI/machine learning (ML) models may be used or leveraged to facilitate over-the-air communication between UE 1305 and RAN 1310.
One or both of the UE 1305 and the RAN 1310 may operate in a manner consistent with 3GPP technical specifications or technical reports for 6G systems. In some embodiments, the wireless cellular communication between the UE 1305 and the RAN 1310 may be part of, or operate concurrently with, networks 1200, 900, and/or some other network described herein.
The UE 1305 may be similar to, and share one or more features with, UE 1202, UE 902, and/or some other UE described herein. The UE 1305 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc. The RAN 1310 may be similar to, and share one or more features with, RAN 914, RAN 1208, and/or some other RAN described herein.
As may be seen in Figure 13, the AI-related elements of UE 1305 may be similar to the AI-related elements of RAN 1310. For the sake of discussion herein, description of the various elements will be provided from the point of view of the UE 1305; however, it will be understood that such discussion or description will apply to equally named/numbered elements of RAN 1310, unless explicitly stated otherwise.
As previously noted, the UE 1305 may include various elements or functions that are related to AI/ML. Such elements may be implemented as hardware, software, firmware, and/or some combination thereof. In embodiments, one or more of the elements may be implemented as part of the same hardware (e.g., chip or multi-processor chip), software (e.g., a computing program), or firmware as another element.
One such element may be a data repository 1315. The data repository 1315 may be responsible for data collection and storage. Specifically, the data repository 1315 may collect and store RAN configuration parameters, measurement data, key performance indicators (KPIs), model performance metrics, etc., for model training, update, and inference. More generally, collected data is stored into the repository. Stored data can be discovered and extracted by other elements from the data repository 1315. For example, as may be seen, the inference data selection/filter element 1350 may retrieve data from the data repository 1315. In various embodiments, the UE 1305 may be configured to discover and request data from the data repository 1315 of the RAN 1310, and vice versa. More generally, the data repository 1315 of the UE 1305 may be communicatively coupled with the data repository 1315 of the RAN 1310 such that the respective data repositories of the UE and the RAN may share collected data with one another.
Another such element may be a training data selection/filtering functional block 1320. The training data selection/filter functional block 1320 may be configured to generate training, validation, and testing datasets for model training. Training data may be extracted from the data repository 1315. Data may be selected/filtered based on the specific AI/ML model to be trained. Data may optionally be transformed/augmented/pre-processed (e.g., normalized) before being loaded into datasets. The training data selection/filter functional block 1320 may label data in datasets for supervised learning. The produced datasets may then be fed into the model training functional block 1325.
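The selection, pre-processing, and splitting steps described above can be sketched as follows. This is a minimal illustration only; the record fields, min-max normalization, and 80/20 split are assumptions for the sake of example, not anything mandated by the description.

```python
import random

def prepare_datasets(records, feature_key, label_key, train_frac=0.8):
    """Select, normalize, and split raw repository records into
    training and validation datasets (illustrative sketch)."""
    # Select/filter: keep only records that carry the needed fields.
    rows = [r for r in records if feature_key in r and label_key in r]
    # Pre-process: min-max normalize the feature values.
    values = [r[feature_key] for r in rows]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    data = [((r[feature_key] - lo) / span, r[label_key]) for r in rows]
    # Split into training and validation datasets.
    random.Random(0).shuffle(data)
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]
```

A labeling step for supervised learning would attach `label_key` values in the same pass; here the labels are assumed to already be present in the collected records.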
As noted above, another such element may be the model training functional block 1325. This functional block may be responsible for training and updating (re-training) AI/ML models. The selected model may be trained using the fed-in datasets (including training, validation, and testing datasets) from the training data selection/filtering functional block. The model training functional block 1325 may produce trained and tested AI/ML models which are ready for deployment. The produced trained and tested models can be stored in a model repository 1335.
The model repository 1335 may be responsible for AI/ML models’ (both trained and untrained) storage and exposure. Trained/updated model(s) may be stored into the model repository 1335. Model and model parameters may be discovered and requested by other functional blocks (e.g., the training data selection/filter functional block 1320 and/or the model training functional block 1325). In some embodiments, the UE 1305 may discover and request AI/ML models from the model repository 1335 of the RAN 1310. Similarly, the RAN 1310 may be able to discover and/or request AI/ML models from the model repository 1335 of the UE 1305. In some embodiments, the RAN 1310 may configure models and/or model parameters in the model repository 1335 of the UE 1305.
Another such element may be a model management functional block 1340. The model management functional block 1340 may be responsible for management of the AI/ML model produced by the model training functional block 1325. Such management functions may include deployment of a trained model, monitoring model performance, etc. In model deployment, the model management functional block 1340 may allocate and schedule hardware and/or software resources for inference, based on received trained and tested models. As used herein, “inference” refers to the process of using trained AI/ML model(s) to generate data analytics, actions, policies, etc. based on input inference data. In performance monitoring, based on wireless performance KPIs and model performance metrics, the model management functional block 1340 may decide to terminate the running model, start model re-training, select another model, etc. In embodiments, the model management functional block 1340 of the RAN 1310 may be able to configure model management policies in the UE 1305 as shown.
Another such element may be an inference data selection/filtering functional block 1350. The inference data selection/filter functional block 1350 may be responsible for generating datasets for model inference at the inference functional block 1345, as described below. Specifically, inference data may be extracted from the data repository 1315. The inference data selection/filter functional block 1350 may select and/or filter the data based on the deployed AI/ML model. Data may be transformed/augmented/pre-processed following the same transformation/augmentation/pre-processing as those in training data selection/filtering as described with respect to functional block 1320. The produced inference dataset may be fed into the inference functional block 1345.
Another such element may be the inference functional block 1345. The inference functional block 1345 may be responsible for executing inference as described above. Specifically, the inference functional block 1345 may consume the inference dataset provided by the inference data selection/filtering functional block 1350, and generate one or more outcomes. Such outcomes may be or include data analytics, actions, policies, etc. The outcome(s) may be provided to the performance measurement functional block 1330.
The performance measurement functional block 1330 may be configured to measure model performance metrics (e.g., accuracy, model bias, run-time latency, etc.) of deployed and executing models based on the inference outcome(s) for monitoring purpose. Model performance data may be stored in the data repository 1315.
EXAMPLE PROCEDURES
In some embodiments, the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, of Figures 9-13, or some other figure herein, may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof. One such process is depicted in Figure 14. For example, the process of Figure 14 may include or relate to a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF). The process may include identifying, at 1401, that a federated learning task for a machine learning (ML) model is to be initiated; identifying, at 1402, a second NWDAF with a MTLF; identifying, at 1403 from the second NWDAF, an indication of an updated local version of the ML model; updating, at 1404 based on the updated local version of the ML model, a global version of the ML model; and transmitting, at 1405 to the second NWDAF, an indication of the updated global version of the ML model.
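The aggregator-side procedure of Figure 14 can be sketched at a high level as below. The function name, the representation of participants as callables, and the in-process "transport" are illustrative assumptions for exposition; they are not 3GPP-defined APIs or message formats.

```python
def run_federated_round(global_model, participants, aggregate):
    """One federated learning round from the point of view of the first
    NWDAF containing MTLF (the aggregator); illustrative sketch only.

    `participants` stand in for the second NWDAF(s) with MTLF: each is a
    callable that takes the global model and returns a locally updated
    version of the ML model. `aggregate` combines the local updates.
    """
    # Steps 1402-1403: identify each participant NWDAF containing MTLF
    # and obtain its updated local version of the ML model.
    local_updates = [train_locally(global_model)
                     for train_locally in participants]
    # Step 1404: update the global version of the ML model.
    new_global = aggregate(local_updates)
    # Step 1405: the caller transmits `new_global` back to participants.
    return new_global
```

For instance, with two participants that each shift a scalar "model" and a mean aggregation rule, `run_federated_round(10.0, [lambda m: m + 1.0, lambda m: m + 3.0], lambda u: sum(u) / len(u))` yields `12.0`.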
Another such process is depicted in Figure 15. The process of Figure 15 may include or relate to a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF). The process may include updating, at 1501, a local version of a machine learning (ML) model; transmitting, at 1502 to a second NWDAF, an indication of an updated local version of the ML model; and identifying, at 1503 from the second NWDAF based on the updated local version of the ML model, an indication of an updated global version of the ML model.
For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
EXAMPLES
Example 1 may include a method in which an NWDAF containing MTLF that supports a Federated Learning aggregation capability registers its NF profile with the NRF.
Example 2 may include the method of example 1 or some other example herein, where the NWDAF containing MTLF with Federated Learning aggregation capability is the network function responsible for sending the global ML model to other NWDAF containing MTLF(s) to perform local model updates based on their locally collected data, receiving the locally updated ML models from the other NWDAF containing MTLF(s), aggregating all the local ML model updates to update its global ML model, and sending the updated global ML model to the other NWDAF containing MTLF(s) that participated in the Federated Learning ML training iteration.
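Example 2 does not mandate a particular aggregation rule; one common realization of the "aggregates all the local ML model updates" step is federated averaging. The sketch below, including the weighting by each participant's local sample count, is an illustrative assumption rather than part of the example.

```python
def federated_average(local_models, sample_counts):
    """Combine per-participant parameter vectors into an updated global
    model by a weighted average (federated averaging sketch).

    `local_models` is a list of equal-length parameter lists, one per
    participant NWDAF containing MTLF; `sample_counts` gives each
    participant's local training-data size, used as its weight.
    """
    total = sum(sample_counts)
    averaged = [0.0] * len(local_models[0])
    for params, n in zip(local_models, sample_counts):
        weight = n / total
        for i, p in enumerate(params):
            averaged[i] += weight * p
    return averaged
```

With equal sample counts this reduces to a plain mean of the local parameter vectors.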
Example 3 may include a method in which an NWDAF containing MTLF that supports a Federated Learning participation capability registers its NF profile with the NRF.
Example 4 may include the method of example 3 or some other example herein, where the NWDAF containing MTLF with Federated Learning participation capability is the network function responsible for collecting data per Analytics ID for ML model training performed locally in the NWDAF containing MTLF with Federated Learning participation capability, sending the locally updated ML model to the NWDAF containing MTLF with Federated Learning aggregation capability, and receiving the updated global ML model from the NWDAF containing MTLF with Federated Learning aggregation capability.
Example 5 may include the method of examples 1 and 3 or some other example herein, where, for the NWDAF containing MTLF with Federated Learning aggregation capability to discover an NWDAF containing MTLF with Federated Learning participation capability using the NRF, the service consumer sends an Nnrf_NFDiscovery_Request to the NRF requesting Federated Learning participation capability for the ML model(s).
Example 6 may include the method of example 5 or some other example herein, where the NRF returns one or more instances of NWDAF containing MTLF with Federated Learning participation capability for ML model(s) to the NF consumer.
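The NRF's matching step in examples 5 and 6 can be sketched as a filter over registered NF profiles. The profile field names (`nf_type`, `fl_capability`, `analytics_ids`) are illustrative placeholders, not the actual Nnrf_NFDiscovery schema.

```python
def nnrf_nf_discovery(nf_profiles, analytics_id):
    """Return the NWDAF containing MTLF profiles that advertise
    Federated Learning participation capability for the requested
    Analytics ID (illustrative stand-in for the NRF's matching)."""
    return [p for p in nf_profiles
            if p.get("nf_type") == "NWDAF_MTLF"
            and p.get("fl_capability") == "PARTICIPANT"
            and analytics_id in p.get("analytics_ids", [])]
```

The returned list corresponds to the "one or more instances" the NRF includes in its response to the NF consumer.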
Example 7 may include the method of examples 1, 3, and 6 or some other example herein, where the Nnwdaf_MLModelTrainingUpdate service is provided by an NWDAF containing MTLF and consumed by an NWDAF containing AnLF.
Example 8 may include the method of examples 1, 3, and 6 or some other example herein, where Nnwdaf_MLModel_DistributedTraining service is provided by an NWDAF containing MTLF and consumed by an NWDAF containing MTLF.
Example 9 may include the method of example 7 or some other example herein, where for the Nnwdaf_MLModelTrainingUpdate_Subscribe service operation the consumer NF (NWDAF containing AnLF) subscribes to ML model training updates with the NWDAF containing MTLF. The input for the Nnwdaf_MLModelTrainingUpdate_Subscribe is the Analytics ID(s) for which the training update is requested, Notification Target Address, Subscription Correlation ID (in the case of modification of the ML model subscription), and Expiry Time. The output of the Nnwdaf_MLModelTrainingUpdate_Subscribe operation includes the Subscription Correlation ID and Expiry Time.
Example 10 may include the method of example 7 or some other example herein, where for the Nnwdaf_MLModelTrainingUpdate_Unsubscribe service operation the consumer NF (NWDAF containing AnLF) unsubscribes from ML model training updates with the NWDAF containing MTLF. The input includes the Subscription Correlation ID and the output includes the service operation result.
Example 11 may include the method of example 7 or some other example herein, where for the Nnwdaf_MLModelTrainingUpdate_Notify service operation an NWDAF containing MTLF notifies the ML model information to the consumer NF (NWDAF containing AnLF) which has subscribed to the specific NWDAF service. The input includes the Analytics ID for which the updated trained ML model is available and the address of the updated trained ML model file.
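The parameters exchanged by the subscribe and notify operations of examples 9 and 11 can be sketched as simple message structures. The dataclass and field names below are assumptions chosen to mirror the parameter lists above, not 3GPP-defined data types.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MLModelTrainingUpdateSubscribe:
    """Input of Nnwdaf_MLModelTrainingUpdate_Subscribe (example 9)."""
    analytics_ids: List[str]
    notification_target_address: str
    expiry_time: str
    # Present only when modifying an existing ML model subscription.
    subscription_correlation_id: Optional[str] = None

@dataclass
class MLModelTrainingUpdateNotify:
    """Input of Nnwdaf_MLModelTrainingUpdate_Notify (example 11)."""
    analytics_id: str
    model_file_address: str
```

The addresses used when instantiating these messages (notification target, model file location) would be deployment-specific.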
Example 12 may include the method of example 8 or some other example herein, where for the Nnwdaf_MLModel_DistributedTraining_Request service operation an NWDAF containing MTLF (with Federated Learning aggregation capability) in the role of service consumer sends a request to another NWDAF containing MTLF (with Federated Learning participation capability). The required input includes the Analytics ID for which Federated Learning is required, the ML model (global ML model) file address, and the ML model reporting time limit. The ML model reporting time limit is the time within which the locally trained ML model needs to be reported back to the service consumer.
Example 13 may include the method of example 8 or some other example herein, where for the Nnwdaf_MLModel_DistributedTraining_Response service operation an NWDAF containing MTLF (with Federated Learning participation capability) in the role of service producer sends a response to the NWDAF containing MTLF (with Federated Learning aggregation capability) which includes the result of the operation. If the result of the operation is successful, then the response includes the ML model (local ML model) file address and validity period. If the result is not successful, the response includes an error code.
Example 14 may include the method of example 8 or some other example herein, where for the Nnwdaf_MLModel_DistributedTraining_Subscribe service operation, if the NWDAF containing MTLF with Federated Learning participation capability in the role of service producer sends a response to the NWDAF containing MTLF (with Federated Learning aggregation capability) with the result as success, then the NWDAF containing MTLF with Federated Learning participation capability, in the role of service consumer, subscribes for the ML model (the global ML model resulting from aggregation of the results of the other NWDAF containing MTLF(s) with Federated Learning participation capability) with the NWDAF containing MTLF (with Federated Learning aggregation capability). The input includes the Analytics ID and Notification Target Address (+ Notification Correlation ID). The output includes the Subscription Correlation ID when the subscription is accepted.
Example 15 may include the method of example 14 or some other example herein, where for the Nnwdaf_MLModel_DistributedTraining_Notify service operation, when the NWDAF containing MTLF with Federated Learning participation capability has subscribed for the ML model (the global ML model resulting from aggregation of the results of the other NWDAF containing MTLF(s) with Federated Learning participation capability), the NWDAF containing MTLF (with Federated Learning aggregation capability) sends a notification to the NWDAF containing MTLF (with Federated Learning participation capability) which includes the Notification Correlation ID, ML model (global ML model) file address, and validity period.
Example 16 may include the method of example 8 or some other example herein, where for the Nnwdaf_MLModel_DistributedTraining_Unsubscribe service operation, the NWDAF containing MTLF with Federated Learning participation capability unsubscribes from the NWDAF containing MTLF (with Federated Learning aggregation capability). The input includes the Subscription Correlation ID. The output includes the result of the operation.
Example 17 may include the method of example 1 or some other example herein, where NWDAF containing MTLF (Federated Learning aggregation capability) decides that the federated learning task for a given ML model (required to generate an Analytics ID) needs to be initiated based on ML Model accuracy.
Example A1 includes a method in which an NWDAF containing MTLF registers its NF profile with the NRF, where the NF profile parameter includes the supported ML model file serialization formats for the trained ML models in the ML model filter information.
Example A2 includes a method of example A1 or some other example herein, where the ML model file serialization format(s) included in the ML model filter information indicate the supported ML model file serialization format(s) for the trained ML model(s) available at the NWDAF containing MTLF for consumption by the service consumer.
Example A2a includes a method of example A1 or some other example herein, where the modelfileformatList in the ML model filter information can be provided per Analytics ID.
Example A3 includes a method of example A2 or some other example herein, where the consumer of the services provided by the NWDAF containing MTLF may be an NWDAF containing AnLF or an NWDAF containing MTLF.
Example A4 includes a method of example A1, A2, A3, or some other example herein, where the consumer NF may belong to the same vendor as the NWDAF containing MTLF or to a different vendor.
Example A5 includes a method of example A3 or some other example herein, where the NWDAF containing AnLF invokes an Nnrf_NFDiscovery_Request to an appropriately configured NRF and includes the ML model file serialization format(s) supported for the trained ML model(s) in the ML model filter information.
Example A6 includes a method of example A5 or some other example herein, where the NRF determines a set of NWDAF containing MTLF instance(s) matching at least one of the ML model file serialization formats supported in the Nnrf_NFDiscovery_Request and the internal policies of the NRF, and sends the NF profile(s) (including ML model file serialization format(s)) of the determined NWDAF containing MTLF instances in the Discovery Response.
Example A7 includes a method of example A5, A6, or some other example herein, where the NWDAF containing AnLF subscribes to the NWDAF containing MTLF using the Nnwdaf_MLModelProvision_Subscribe service operation including the requested ML model file serialization format as input, and the NWDAF containing MTLF notifies the ML model information (address (e.g., URL or FQDN) of the model file) to the NWDAF containing AnLF only if the ML model format requested in the input of Nnwdaf_MLModelProvision_Subscribe is a match.
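The format check in examples A6 and A7 amounts to intersecting the serialization formats the consumer requests with those the MTLF advertises. A minimal sketch, where the profile field name and format strings are illustrative assumptions:

```python
def matches_requested_format(mtlf_profile, requested_formats):
    """Return True if the NWDAF containing MTLF advertises at least one
    of the ML model file serialization formats the consumer requested
    (illustrative sketch of the matching in examples A6/A7)."""
    supported = set(mtlf_profile.get("model_file_formats", []))
    return bool(supported.intersection(requested_formats))
```

The same predicate serves both the NRF's instance selection (example A6) and the MTLF's decision whether to notify the subscribing AnLF (example A7).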
Example A8 includes a method where the NWDAF containing MTLF sends Nnrf_NFManagement_NFRegister to NRF to inform the NRF of its NF profile and it includes ML model file specific information for the trained ML model(s) as a new attribute.
Example A9 includes a method of example A8, A3, A4, or some other example herein, where the ML model file specific information attribute includes the supported ML model file serialization formats for the trained ML model(s) available at the NWDAF containing MTLF for consumption by the service consumer.
Example A10 includes a method where the functionality of ADRF is extended to enable a NWDAF containing MTLF to store and retrieve trained ML model(s) to and from ADRF respectively.
Example A11 includes a method of example A10 or some other example herein, where a new service is supported by the ADRF, e.g., a Nadrf_MLModelManagement service that enables the consumer NWDAF containing MTLF to store, retrieve, and remove ML model(s) from an ADRF.
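The store/retrieve/remove semantics of the Nadrf_MLModelManagement service can be sketched as a keyed store. This in-memory class is purely illustrative; the keying by Analytics ID and model ID, and the storage of a file address rather than the model itself, are assumptions consistent with the parameter lists in the examples that follow.

```python
class AdrfModelStore:
    """Illustrative in-memory stand-in for the ADRF's ML model
    management operations (store / retrieve / remove)."""

    def __init__(self):
        # Maps (analytics_id, model_id) -> trained ML model file address.
        self._models = {}

    def store(self, analytics_id, model_id, file_address):
        self._models[(analytics_id, model_id)] = file_address

    def retrieve(self, analytics_id, model_id):
        return self._models.get((analytics_id, model_id))

    def remove(self, analytics_id, model_id):
        self._models.pop((analytics_id, model_id), None)
```

In a real deployment these operations would be exposed as service operations over the service-based interface rather than method calls.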
Example A12 includes a method of example A11 or some other example herein, where the NWDAF containing AnLF sends a Nadrf_MLModelManagement_RetrievalRequest which includes the Analytics ID(s), ML Model Filter Info, ML model file specific information, and Target NF (NWDAF containing MTLF) to subscribe for notifications.

Example A13 includes a method of example A12 or some other example herein, where if the ML model file for the requested Analytics ID(s) is not stored in the ADRF, then the elements in examples A14, A15, and A16 are performed.
Example A14 includes a method of example A13 or some other example herein, where the ADRF sends Nnwdaf_MLModelProvision_Subscribe with input parameters ML model file specific information (ML model file serialization format).
Example A15 includes a method of example A14 or some other example herein, where the ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with input parameters Analytics ID(s) and ML model file specific information (ML model file serialization format).
Example A16 includes a method of example A15 or some other example herein, where, when the ML model for which the ADRF has subscribed for ML model training updates has been updated, the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with the following parameters: Analytics ID, trained ML model file address, and Notification Correlation ID.
Example A17 includes a method of example A12 or some other example herein, where the ADRF sends a response back to the NWDAF containing AnLF using a Nadrf_MLModelManagement_RetrievalRequest response with the following parameters: ML Model File Information (trained ML model file address, ML model file serialization format, and Trained ML Model ID per Analytics ID).
Example A18 includes a method of example A17 or some other example herein, where the NWDAF containing AnLF subscribes to the ADRF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Subscribe service operation containing the input parameter Trained ML Model ID per Analytics ID.
Example A19 includes a method of example A18 or some other example herein, where the ADRF sends a notification to the NWDAF containing AnLF using the Nadrf_MLModelManagement_RetrievalTrainingUpdate_Notify service operation containing the following parameters: ML Model File Information (trained ML model file address, ML model file serialization format, and Trained ML Model ID per Analytics ID).
Example A20 includes a method of example A19 or some other example herein, where the NWDAF containing AnLF determines that the ML model training update is no longer required and, if so, the NWDAF containing AnLF sends Nadrf_MLModelManagement_RetrievalTrainingUpdate_Unsubscribe with the Subscription Correlation ID as the input parameter.
Example A21 includes a method of example A20 or some other example herein, where the ADRF determines whether any of the NWDAF containing AnLF consumer(s) have a subscription for ML model training updates per Analytics ID. If no consumer has a subscription for ML model training updates per Analytics ID, the ADRF removes the ML model file and ML model file specific information, and the elements described in example A22 are performed.
Example A22 includes a method of example A21 or some other example herein, where the ADRF sends Nnwdaf_MLModelTrainingUpdate_Unsubscribe to the NWDAF containing MTLF with the Subscription Correlation ID as the input parameter.
Example A23 includes a method of example A10 or some other example herein, where the NWDAF containing AnLF may trigger trained ML model storage in ADRF.
Example A24 includes a method of example A23 or some other example herein, where the NWDAF containing AnLF sends Nnwdaf_MLModelInfo_Request to the NWDAF containing MTLF with the following input parameters: Analytics ID(s), ML model file specific information (ML model file serialization format), and notification endpoint address (ADRF).
Example A25 includes a method of example A10 or some other example herein, where the NWDAF containing MTLF may send the request to store the trained ML model in the ADRF.
Example A26 includes a method of example A25 or some other example herein, where the ADRF sends Nnwdaf_MLModelProvision_Subscribe with the following input parameters: ML model file specific information (ML model file serialization format).
Example A27 includes a method of example A25, A26, or some other example herein, where the NWDAF containing MTLF sends Nadrf_MLModelManagement_StorageRequest with input parameters Analytics ID(s), trained ML model file address, and ML model file specific information (ML model file serialization format).
Example A28 includes a method of example A27 or some other example herein, where the ADRF sends Nnwdaf_MLModelTrainingUpdate_Subscribe with input parameters Analytics ID(s) and ML model file specific information (ML model file serialization format).
Example A29 includes a method of example A28, or some other example herein, when the ML model for which the ADRF has subscribed for ML model training update has been updated, the NWDAF containing MTLF sends Nnwdaf_MLModelTrainingUpdate_Notify with input parameters Analytics ID, Trained ML model file address, Notification Correlation ID.
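The subscribe/notify exchange of examples A28-A29 can be sketched as follows: the ADRF subscribes at the NWDAF containing MTLF for training updates, and when the model is retrained the MTLF delivers a notify carrying the new trained ML model file address. The dataclass shape and callback style are illustrative assumptions; only the message names come from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Mtlf:
    """Illustrative NWDAF-containing-MTLF state for examples A28-A29."""
    trained: dict = field(default_factory=dict)       # Analytics ID -> file address
    update_subs: list = field(default_factory=list)   # (Analytics ID, notify callback)

    def handle_training_update_subscribe(self, analytics_id, notify):
        # Nnwdaf_MLModelTrainingUpdate_Subscribe (example A28)
        self.update_subs.append((analytics_id, notify))

    def retrain(self, analytics_id, new_address):
        # On retraining, send Nnwdaf_MLModelTrainingUpdate_Notify (example A29)
        self.trained[analytics_id] = new_address
        for sub_id, notify in self.update_subs:
            if sub_id == analytics_id:
                notify({"analytics_id": analytics_id,
                        "model_file_address": new_address,
                        "notification_correlation_id": "corr-1"})
```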
Example A30 includes a method where a new network function, e.g., an ML Model Storage Function (MLMS), is defined in the 5G core network to enable a NWDAF containing MTLF to store trained ML model(s) in, and retrieve trained ML model(s) from, the MLMS.
Example A31 includes a method of example A30 or some other example herein, where a new service is supported by the MLMS, e.g., a Nmlms_MLModelManagement service that enables the consumer NWDAF containing MTLF to store, retrieve, and remove ML model(s) from an MLMS.

Example A32 includes a method of example A30, A31 or some other example herein, where the NF consumers (NWDAF containing MTLF and/or AnLF) shall utilize the NRF to discover MLMS instance(s) unless MLMS information is available by other means, e.g., locally configured on the NF consumers. The MLMS selection function in the NF consumers selects an MLMS instance based on the available MLMS instances.
Example A33 includes a method of example A30, A31, A32, or some other example herein, where the S-NSSAI is used as a factor by the NF consumer for MLMS selection.
Example A34 includes a method of example A30, A31 or some other example herein, where the ADRF in examples A12 to A29 is replaced by the MLMS, such that the functionality of the ADRF defined in examples A12 to A29 is applicable to the MLMS.
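The discovery and selection behavior of examples A32-A33 can be sketched as follows: the NF consumer obtains candidate MLMS instance records (here stubbed as a plain list standing in for an NRF discovery response) and selects one whose supported S-NSSAI list contains the requested S-NSSAI. The record fields are illustrative assumptions, not a normative NF profile.

```python
def select_mlms(instances, s_nssai):
    """Pick the first discovered MLMS instance serving the given S-NSSAI.

    `instances` stands in for the NF profiles returned by NRF discovery;
    each record is assumed to carry an instance ID and its S-NSSAI list.
    """
    for inst in instances:
        if s_nssai in inst.get("s_nssai_list", []):
            return inst["instance_id"]
    return None  # no instance serves this slice; fall back to other means
```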
Example A40 includes a method where the functionality of ADRF is extended to enable a NWDAF containing MTLF to request ADRF to perform conversion from one ML model file serialization format to another ML model file serialization format.
Example A41 includes a method of example A40 or some other example herein, where the NWDAF containing AnLF sends a Nadrf_MLModelManagement_FormatConversionRequest which includes ML model file specific information (trained ML model file address, ML model file serialization format) and a target ML model file serialization format.
Example A42 includes a method of example A41 or some other example herein, where the ADRF performs conversion of the given ML model file serialization format to the requested target ML model file serialization format.
Example A43 includes a method of example A42 or some other example herein, where the ADRF sends a Nadrf_MLModelManagement_FormatConversionRequest response and includes ML model file information (trained ML model file address, ML model file serialization format).
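The serialization-format conversion service of examples A40-A43 can be sketched as a request handler that maps a (source format, target format) pair to a converter and returns the converted file information. The converter registry, format names, and byte-level "conversion" are purely illustrative assumptions.

```python
# Hypothetical converter registry: (source format, target format) -> function.
CONVERTERS = {
    ("ONNX", "TensorFlow SavedModel"): lambda data: b"converted:" + data,
}

def format_conversion_request(model_file, source_format, target_format):
    """Illustrative handler for Nadrf_MLModelManagement_FormatConversionRequest."""
    if source_format == target_format:
        # Nothing to convert; echo the file information back.
        return {"model_file": model_file, "serialization_format": target_format}
    converter = CONVERTERS.get((source_format, target_format))
    if converter is None:
        raise ValueError(
            f"no converter from {source_format} to {target_format}")
    return {"model_file": converter(model_file),
            "serialization_format": target_format}
```

The same handler shape applies to the MLMS variant in examples A50-A53, with only the service prefix changing.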
Example A50 includes a method where an ML Model Storage Function (MLMS) functionality is introduced to enable a NWDAF containing MTLF to request the MLMS to perform conversion from one ML model file serialization format to another ML model file serialization format.
Example A51 includes a method of example A50 or some other example herein, where the NWDAF containing AnLF sends a Nmlms_MLModelManagement_FormatConversionRequest which includes ML model file specific information (trained ML model file address, ML model file serialization format) and a target ML model file serialization format.
Example A52 includes a method of example A51 or some other example herein, where the MLMS performs conversion of the given ML model file serialization format to the requested target ML model file serialization format.
Example A53 includes a method of example A52 or some other example herein, where the MLMS sends a Nmlms_MLModelManagement_FormatConversionRequest response and includes ML model file information (trained ML model file address, ML model file serialization format).
Example Bl relates to a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF), wherein the method comprises identifying that a federated learning task for a machine learning (ML) model is to be initiated; identifying a second NWDAF with a MTLF; identifying, from the second NWDAF, an indication of an updated local version of the ML model; updating, based on the updated local version of the ML model, a global version of the ML model; and transmitting, to the second NWDAF, an indication of the updated global version of the ML model.
Example B2 relates to the method of example Bl, and/or some other example herein, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
Example B3 relates to the method of any of examples B1-B2, and/or some other example herein, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
Example B4 relates to the method of any of examples B1-B3, and/or some other example herein, wherein identification of the second NWDAF is based on: transmission of a Nnrf_Discovery_Request message to a network repository function (NRF); and receipt, from the NRF based on the transmitted Nnrf_Discovery_Request message, of a Nnrf_Discovery_Response message that includes an indication of the second NWDAF.
Example B5 relates to the method of any of examples B1-B4, and/or some other example herein, wherein the indication of the updated local version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Response message.
Example B6 relates to the method of example B5, and/or some other example herein, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to transmission, from the first NWDAF to the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
Example B7 relates to the method of any of examples B1-B6, and/or some other example herein, wherein the indication of the updated global version of the ML model is transmitted in a Nnwdaf_MLModel_DistributedTraining_Notify message.

Example B8 includes a method to be performed by one or more electronic devices that implement a first network data analytics function (NWDAF) with a model training logical function (MTLF), wherein the method comprises: updating a local version of a machine learning (ML) model; transmitting, to a second NWDAF, an indication of an updated local version of the ML model; and identifying, from the second NWDAF based on the updated local version of the ML model, an indication of an updated global version of the ML model.
Example B9 includes the method of example B8, and/or some other example herein, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
Example B10 includes the method of any of examples B8-B9, and/or some other example herein, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
Example B11 includes the method of any of examples B8-B10, and/or some other example herein, wherein the indication of the updated local version of the ML model is transmitted to the second NWDAF in a Nnwdaf_MLModel_DistributedTraining_Response message.
Example B12 includes the method of example B11, and/or some other example herein, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to receipt, from the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
Example B13 includes the method of any of examples B8-B12, and/or some other example herein, wherein the indication of the updated global version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Notify message.
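The aggregation step implied by examples B1-B13 can be sketched as simple federated averaging: the aggregating NWDAF containing MTLF collects updated local model parameters from the participating NWDAFs and averages them into the updated global model. Representing model parameters as plain lists of floats is an assumption for illustration; the examples themselves do not fix a model representation.

```python
def aggregate_global_model(local_models):
    """Average the parameter vectors reported by federated learning participants.

    `local_models` is a list of equal-length parameter lists, one per
    participating NWDAF (MTLF); the return value is the updated global model.
    """
    if not local_models:
        raise ValueError("at least one local model update is required")
    n_params = len(local_models[0])
    return [sum(model[i] for model in local_models) / len(local_models)
            for i in range(n_params)]
```

In the message flow of examples B5-B7, the inputs would arrive in Nnwdaf_MLModel_DistributedTraining_Response messages and the averaged result would go back out in a Nnwdaf_MLModel_DistributedTraining_Notify message.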
Example Z01 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-B13, or any other method or process described herein.
Example Z02 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-B13, or any other method or process described herein.
Example Z03 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-B13, or any other method or process described herein.

Example Z04 may include a method, technique, or process as described in or related to any of examples 1-B13, or portions or parts thereof.
Example Z05 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-B13, or portions thereof.
Example Z06 may include a signal as described in or related to any of examples 1-B13, or portions or parts thereof.
Example Z07 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-B13, or portions or parts thereof, or otherwise described in the present disclosure.
Example Z08 may include a signal encoded with data as described in or related to any of examples 1-B13, or portions or parts thereof, or otherwise described in the present disclosure.
Example Z09 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-B13, or portions or parts thereof, or otherwise described in the present disclosure.
Example Z10 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-B13, or portions thereof.
Example Z11 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-B13, or portions thereof.
Example Z12 may include a signal in a wireless network as shown and described herein.
Example Z13 may include a method of communicating in a wireless network as shown and described herein.
Example Z14 may include a system for providing wireless communication as shown and described herein.
Example Z15 may include a device for providing wireless communication as shown and described herein.
Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.
Abbreviations
Unless used differently herein, terms, definitions, and abbreviations may be consistent with terms, definitions, and abbreviations defined in 3GPP TR 21.905 v16.0.0 (2019-06). For the purposes of the present document, the following abbreviations may apply to the examples and embodiments discussed herein.
3GPP Third Generation Partnership Project
4G Fourth Generation
5G Fifth Generation
5GC 5G Core network
AC Application Client
ACR Application Context Relocation
ACK Acknowledgement
ACID Application Client Identification
AF Application Function
AM Acknowledged Mode
AMBR Aggregate Maximum Bit Rate
AMF Access and Mobility Management Function
AN Access Network
ANR Automatic Neighbour Relation
AOA Angle of Arrival
AP Application Protocol, Antenna Port, Access Point
API Application Programming Interface
APN Access Point Name
ARP Allocation and Retention Priority
ARQ Automatic Repeat Request
AS Access Stratum
ASP Application Service Provider
ASN.1 Abstract Syntax Notation One
AUSF Authentication Server Function
AWGN Additive White Gaussian Noise
BAP Backhaul Adaptation Protocol
BCH Broadcast Channel
BER Bit Error Ratio
BFD Beam Failure Detection
BLER Block Error Rate
BPSK Binary Phase Shift Keying
BRAS Broadband Remote Access Server
BSS Business Support System
BS Base Station
BSR Buffer Status Report
BW Bandwidth
BWP Bandwidth Part
C-RNTI Cell Radio Network Temporary Identity, Cell RNTI
CA Carrier Aggregation, Certification Authority
CAPEX CAPital EXpenditure
CBRA Contention Based Random Access
CC Component Carrier, Country Code, Cryptographic Checksum
CCA Clear Channel Assessment
CCE Control Channel Element
CCCH Common Control Channel
CE Coverage Enhancement
CDN Content Delivery Network
CDMA Code-Division Multiple Access
CDR Charging Data Request
CDR Charging Data Response
CFRA Contention Free Random Access
CG Cell Group
CGF Charging Gateway Function
CHF Charging Function
CI Cell Identity
CID Cell-ID (e.g., positioning method)
CIM Common Information Model
CIR Carrier to Interference Ratio
CK Cipher Key
CM Connection Management, Conditional Mandatory
CMAS Commercial Mobile Alert Service
CMD Command
CMS Cloud Management System
CO Conditional Optional
CoMP Coordinated Multi-Point
CORESET Control Resource Set
COTS Commercial Off-The-Shelf
CP Control Plane, Cyclic Prefix, Connection Point
CPD Connection Point Descriptor
CPE Customer Premise Equipment
CPICH Common Pilot Channel
CQI Channel Quality Indicator
CPU CSI processing unit, Central Processing Unit
C/R Command/Response field bit
CRAN Cloud Radio Access Network, Cloud RAN
CRB Common Resource Block
CRC Cyclic Redundancy Check
CRI Channel-State Information Resource Indicator, CSI-RS Resource Indicator
CS Circuit Switched
CSCF call session control function
CSAR Cloud Service Archive
CSI Channel-State Information
CSI-IM CSI Interference Measurement
CSI-RS CSI Reference Signal
CSI-RSRP CSI reference signal received power
CSI-RSRQ CSI reference signal received quality
CSI-SINR CSI signal-to-noise and interference ratio
CSMA Carrier Sense Multiple Access
CSMA/CA CSMA with collision avoidance
CSS Common Search Space, Cell-specific Search Space
CTF Charging Trigger Function
CTS Clear-to-Send
CW Codeword
CWS Contention Window Size
D2D Device-to-Device
DC Dual Connectivity, Direct Current
DCI Downlink Control Information
DF Deployment Flavour
DL Downlink
DMTF Distributed Management Task Force
DPDK Data Plane Development Kit
DM-RS, DMRS Demodulation Reference Signal
DN Data network
DNN Data Network Name
DNAI Data Network Access Identifier
DRB Data Radio Bearer
DRS Discovery Reference Signal
DRX Discontinuous Reception
DSL Domain Specific Language, Digital Subscriber Line
DSLAM DSL Access Multiplexer
DwPTS Downlink Pilot Time Slot
E-LAN Ethernet Local Area Network
E2E End-to-End
EAS Edge Application Server
EASID Edge Application Server Identification
ECS Edge Configuration Server
ECSP Edge Computing Service Provider
EDN Edge Data Network
EEC Edge Enabler Client
EECID Edge Enabler Client Identification
EES Edge Enabler Server
EESID Edge Enabler Server Identification
EHE Edge Hosting Environment
EGMF Exposure Governance Management Function
EGPRS Enhanced GPRS
ECCA extended clear channel assessment, extended CCA
ECCE Enhanced Control Channel Element, Enhanced CCE
ED Energy Detection
EDGE Enhanced Datarates for GSM Evolution (GSM Evolution)
EIR Equipment Identity Register
eLAA enhanced Licensed Assisted Access, enhanced LAA
EM Element Manager
eMBB Enhanced Mobile Broadband
EMS Element Management System
eNB evolved NodeB, E-UTRAN Node B
EN-DC E-UTRA-NR Dual Connectivity
EPC Evolved Packet Core
EPDCCH enhanced PDCCH, enhanced Physical Downlink Control Channel
EPRE Energy per resource element
EPS Evolved Packet System
EREG enhanced REG, enhanced resource element groups
ETSI European Telecommunications Standards Institute
ETWS Earthquake and Tsunami Warning System
eUICC embedded UICC, embedded Universal Integrated Circuit Card
E-UTRA Evolved UTRA
E-UTRAN Evolved UTRAN
EV2X Enhanced V2X
F1AP F1 Application Protocol
F1-C F1 Control plane interface
F1-U F1 User plane interface
FACCH Fast Associated Control CHannel
FACCH/F Fast Associated Control Channel/Full rate
FACCH/H Fast Associated Control Channel/Half rate
FACH Forward Access Channel
FAUSCH Fast Uplink Signalling Channel
FB Functional Block
FBI Feedback Information
FCC Federal Communications Commission
FCCH Frequency Correction CHannel
FDD Frequency Division Duplex
FDM Frequency Division Multiplex
FDMA Frequency Division Multiple Access
FE Front End
FEC Forward Error Correction
FFS For Further Study
FFT Fast Fourier Transformation
feLAA further enhanced Licensed Assisted Access, further enhanced LAA
FN Frame Number
FPGA Field-Programmable Gate Array
FR Frequency Range
FQDN Fully Qualified Domain Name
G-RNTI GERAN Radio Network Temporary Identity
GERAN GSM EDGE RAN, GSM EDGE Radio Access Network
GGSN Gateway GPRS Support Node
GLONASS GLObal'naya NAvigatsionnaya Sputnikovaya Sistema (Engl.: Global Navigation Satellite System)
gNB Next Generation NodeB
gNB-CU gNB-centralized unit, Next Generation NodeB centralized unit
gNB-DU gNB-distributed unit, Next Generation NodeB distributed unit
GNSS Global Navigation Satellite System
GPRS General Packet Radio Service
GPSI Generic Public Subscription Identifier
GSM Global System for Mobile Communications, Groupe Special Mobile
GTP GPRS Tunneling Protocol
GTP-U GPRS Tunnelling Protocol for User Plane
GTS Go To Sleep Signal (related to WUS)
GUMMEI Globally Unique MME Identifier
GUTI Globally Unique Temporary UE Identity
HARQ Hybrid ARQ, Hybrid Automatic Repeat Request
HANDO Handover
HFN HyperFrame Number
HHO Hard Handover
HLR Home Location Register
HN Home Network
HO Handover
HPLMN Home Public Land Mobile Network
HSDPA High Speed Downlink Packet Access
HSN Hopping Sequence Number
HSPA High Speed Packet Access
HSS Home Subscriber Server
HSUPA High Speed Uplink Packet Access
HTTP Hyper Text Transfer Protocol
HTTPS Hyper Text Transfer Protocol Secure (https is http/1.1 over SSL, i.e. port 443)
I-Block Information Block
ICCID Integrated Circuit Card Identification
IAB Integrated Access and Backhaul
ICIC Inter-Cell Interference Coordination
ID Identity, identifier
IDFT Inverse Discrete Fourier Transform
IE Information element
IBE In-Band Emission
IEEE Institute of Electrical and Electronics Engineers
IEI Information Element Identifier
IEIDL Information Element Identifier Data Length
IETF Internet Engineering Task Force
IF Infrastructure
IIOT Industrial Internet of Things
IM Interference Measurement, Intermodulation, IP Multimedia
IMC IMS Credentials
IMEI International Mobile Equipment Identity
IMGI International mobile group identity
IMPI IP Multimedia Private Identity
IMPU IP Multimedia PUblic identity
IMS IP Multimedia Subsystem
IMSI International Mobile Subscriber Identity
IoT Internet of Things
IP Internet Protocol
IPsec IP Security, Internet Protocol Security
IP-CAN IP-Connectivity Access Network
IP-M IP Multicast
IPv4 Internet Protocol Version 4
IPv6 Internet Protocol Version 6
IR Infrared
IS In Sync
IRP Integration Reference Point
ISDN Integrated Services Digital Network
ISIM IM Services Identity Module
ISO International Organisation for Standardisation
ISP Internet Service Provider
IWF Interworking-Function
I-WLAN Interworking WLAN
K Constraint length of the convolutional code, USIM Individual key
kB Kilobyte (1000 bytes)
kbps kilo-bits per second
Kc Ciphering key
Ki Individual subscriber authentication key
KPI Key Performance Indicator
KQI Key Quality Indicator
KSI Key Set Identifier
ksps kilo-symbols per second
KVM Kernel Virtual Machine
L1 Layer 1 (physical layer)
L1-RSRP Layer 1 reference signal received power
L2 Layer 2 (data link layer)
L3 Layer 3 (network layer)
LAA Licensed Assisted Access
LAN Local Area Network
LADN Local Area Data Network
LBT Listen Before Talk
LCM LifeCycle Management
LCR Low Chip Rate
LCS Location Services
LCID Logical Channel ID
LI Layer Indicator
LLC Logical Link Control, Low Layer Compatibility
LMF Location Management Function
LOS Line of Sight
LPLMN Local PLMN
LPP LTE Positioning Protocol
LSB Least Significant Bit
LTE Long Term Evolution
LWA LTE-WLAN aggregation
LWIP LTE/WLAN Radio Level Integration with IPsec Tunnel
M2M Machine-to-Machine
MAC Medium Access Control (protocol layering context)
MAC Message authentication code (security/encryption context)
MAC-A MAC used for authentication and key agreement (TSG T WG3 context)
MAC-I MAC used for data integrity of signalling messages (TSG T WG3 context)
MANO Management and Orchestration
MBMS Multimedia Broadcast and Multicast Service
MBSFN Multimedia Broadcast multicast service Single Frequency Network
MCC Mobile Country Code
MCG Master Cell Group
MCOT Maximum Channel Occupancy Time
MCS Modulation and coding scheme
MDAF Management Data Analytics Function
MDAS Management Data Analytics Service
MDT Minimization of Drive Tests
ME Mobile Equipment
MeNB master eNB
MER Message Error Ratio
MGL Measurement Gap Length
MGRP Measurement Gap Repetition Period
MIB Master Information Block, Management Information Base
MIMO Multiple Input Multiple Output
MLC Mobile Location Centre
MM Mobility Management
MME Mobility Management Entity
MN Master Node
MNO Mobile Network Operator
MO Measurement Object, Mobile Originated
MPBCH MTC Physical Broadcast CHannel
MPDCCH MTC Physical Downlink Control CHannel
MPDSCH MTC Physical Downlink Shared CHannel
MPRACH MTC Physical Random Access CHannel
MPUSCH MTC Physical Uplink Shared Channel
MPLS MultiProtocol Label Switching
MS Mobile Station
MSB Most Significant Bit
MSC Mobile Switching Centre
MSI Minimum System Information, MCH Scheduling Information
MSID Mobile Station Identifier
MSIN Mobile Station Identification Number
MSISDN Mobile Subscriber ISDN Number
MT Mobile Terminated, Mobile Termination
MTC Machine-Type Communications
mMTC massive MTC, massive Machine-Type Communications
MU-MIMO Multi User MIMO
MWUS MTC wake-up signal, MTC WUS
NACK Negative Acknowledgement
NAI Network Access Identifier
NAS Non-Access Stratum, Non-Access Stratum layer
NCT Network Connectivity Topology
NC-JT Non-Coherent Joint Transmission
NEC Network Capability Exposure
NE-DC NR-E-UTRA Dual Connectivity
NEF Network Exposure Function
NF Network Function
NFP Network Forwarding Path
NFPD Network Forwarding Path Descriptor
NFV Network Functions Virtualization
NFVI NFV Infrastructure
NFVO NFV Orchestrator
NG Next Generation, Next Gen
NGEN-DC NG-RAN E-UTRA-NR Dual Connectivity
NM Network Manager
NMS Network Management System
N-PoP Network Point of Presence
NMIB, N-MIB Narrowband MIB
NPBCH Narrowband Physical Broadcast CHannel
NPDCCH Narrowband Physical Downlink Control CHannel
NPDSCH Narrowband Physical Downlink Shared CHannel
NPRACH Narrowband Physical Random Access CHannel
NPUSCH Narrowband Physical Uplink Shared CHannel
NPSS Narrowband Primary Synchronization Signal
NSSS Narrowband Secondary Synchronization Signal
NR New Radio, Neighbour Relation
NRF NF Repository Function
NRS Narrowband Reference Signal
NS Network Service
NSA Non-Standalone operation mode
NSD Network Service Descriptor
NSR Network Service Record
NSSAI Network Slice Selection Assistance Information
S-NSSAI Single-NSSAI
NSSF Network Slice Selection Function
NW Network
NWUS Narrowband wake-up signal, Narrowband WUS
NZP Non-Zero Power
O&M Operation and Maintenance
ODU2 Optical channel Data Unit - type 2
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OOB Out-of-band
OOS Out of Sync
OPEX OPerating EXpense
OSI Other System Information
OSS Operations Support System
OTA over-the-air
PAPR Peak-to-Average Power Ratio
PAR Peak to Average Ratio
PBCH Physical Broadcast Channel
PC Power Control, Personal Computer
PCC Primary Component Carrier, Primary CC
P-CSCF Proxy CSCF
PCell Primary Cell
PCI Physical Cell ID, Physical Cell Identity
PCEF Policy and Charging Enforcement Function
PCF Policy Control Function
PCRF Policy Control and Charging Rules Function
PDCP Packet Data Convergence Protocol, Packet Data Convergence Protocol layer
PDCCH Physical Downlink Control Channel
PDN Packet Data Network, Public Data Network
PDSCH Physical Downlink Shared Channel
PDU Protocol Data Unit
PEI Permanent Equipment Identifiers
PFD Packet Flow Description
P-GW PDN Gateway
PHICH Physical hybrid-ARQ indicator channel
PHY Physical layer
PLMN Public Land Mobile Network
PIN Personal Identification Number
PM Performance Measurement
PMI Precoding Matrix Indicator
PNF Physical Network Function
PNFD Physical Network Function Descriptor
PNFR Physical Network Function Record
POC PTT over Cellular
PP, PTP Point-to-Point
PPP Point-to-Point Protocol
PRACH Physical RACH
PRB Physical resource block
PRG Physical resource block group
ProSe Proximity Services, Proximity-Based Service
PRS Positioning Reference Signal
PRR Packet Reception Radio
PS Packet Services
PSBCH Physical Sidelink Broadcast Channel
PSDCH Physical Sidelink Downlink Channel
PSCCH Physical Sidelink Control Channel
PSSCH Physical Sidelink Shared Channel
PSFCH physical sidelink feedback channel
PSCell Primary SCell
PSS Primary Synchronization Signal
PSTN Public Switched Telephone Network
PT-RS Phase-tracking reference signal
PTT Push-to-Talk
PUCCH Physical Uplink Control Channel
PUSCH Physical Uplink Shared Channel
QAM Quadrature Amplitude Modulation
QCI QoS class of identifier
QCL Quasi co-location
QFI QoS Flow ID, QoS Flow Identifier
QoS Quality of Service
QPSK Quadrature (Quaternary) Phase Shift Keying
QZSS Quasi-Zenith Satellite System
RA-RNTI Random Access RNTI
RAB Radio Access Bearer, Random Access Burst
RACH Random Access Channel
RADIUS Remote Authentication Dial In User Service
RAN Radio Access Network
RAND RANDom number (used for authentication)
RAR Random Access Response
RAT Radio Access Technology
RAU Routing Area Update
RB Resource block, Radio Bearer
RBG Resource block group
REG Resource Element Group
Rel Release
REQ REQuest
RF Radio Frequency
RI Rank Indicator
RIV Resource indicator value
RL Radio Link
RLC Radio Link Control, Radio Link Control layer
RLC AM RLC Acknowledged Mode
RLC UM RLC Unacknowledged Mode
RLF Radio Link Failure
RLM Radio Link Monitoring
RLM-RS Reference Signal for RLM
RM Registration Management
RMC Reference Measurement Channel
RMSI Remaining MSI, Remaining Minimum System Information
RN Relay Node
RNC Radio Network Controller
RNL Radio Network Layer
RNTI Radio Network Temporary Identifier
ROHC RObust Header Compression
RRC Radio Resource Control, Radio Resource Control layer
RRM Radio Resource Management
RS Reference Signal
RSRP Reference Signal Received Power
RSRQ Reference Signal Received Quality
RSSI Received Signal Strength Indicator
RSU Road Side Unit
RSTD Reference Signal Time difference
RTP Real Time Protocol
RTS Ready-To-Send
RTT Round Trip Time
Rx Reception, Receiving, Receiver
S1AP S1 Application Protocol
S1-MME S1 for the control plane
S1-U S1 for the user plane
S-CSCF serving CSCF
S-GW Serving Gateway
S-RNTI SRNC Radio Network Temporary Identity
S-TMSI SAE Temporary Mobile Station Identifier
SA Standalone operation mode
SAE System Architecture Evolution
SAP Service Access Point
SAPD Service Access Point Descriptor
SAPI Service Access Point Identifier
SCC Secondary Component Carrier, Secondary CC
SCell Secondary Cell
SCEF Service Capability Exposure Function
SC-FDMA Single Carrier Frequency Division Multiple Access
SCG Secondary Cell Group
SCM Security Context Management
SCS Subcarrier Spacing
SCTP Stream Control Transmission Protocol
SDAP Service Data Adaptation Protocol, Service Data Adaptation Protocol layer
SDL Supplementary Downlink
SDNF Structured Data Storage Network Function
SDP Session Description Protocol
SDSF Structured Data Storage Function
SDT Small Data Transmission
SDU Service Data Unit
SEAF Security Anchor Function
SeNB secondary eNB
SEPP Security Edge Protection Proxy
SFI Slot format indication
SFTD Space-Frequency Time Diversity, SFN and frame timing difference
SFN System Frame Number
SgNB Secondary gNB
SGSN Serving GPRS Support Node
SI System Information
SI-RNTI System Information RNTI
SIB System Information Block
SIM Subscriber Identity Module
SIP Session Initiated Protocol
SiP System in Package
SL Sidelink
SLA Service Level Agreement
SM Session Management
SMF Session Management Function
SMS Short Message Service
SMSF SMS Function
SMTC SSB-based Measurement Timing Configuration
SN Secondary Node, Sequence Number
SoC System on Chip
SON Self-Organizing Network
SpCell Special Cell
SP-CSI-RNTI Semi-Persistent CSI RNTI
SPS Semi-Persistent Scheduling
SQN Sequence number
SR Scheduling Request
SRB Signalling Radio Bearer
SRS Sounding Reference Signal
SS Synchronization Signal
SSB Synchronization Signal Block
SSID Service Set Identifier
SS/PBCH Block
SSBRI SS/PBCH Block Resource Indicator, Synchronization Signal Block Resource Indicator
SSC Session and Service Continuity
SS-RSRP Synchronization Signal based Reference Signal Received Power
SS-RSRQ Synchronization Signal based Reference Signal Received Quality
SS-SINR Synchronization Signal based Signal to Noise and Interference Ratio
SSS Secondary Synchronization Signal
SSSG Search Space Set Group
SSSIF Search Space Set Indicator
SST Slice/Service Types
SU-MIMO Single User MIMO
SUL Supplementary Uplink
TA Timing Advance, Tracking Area
TAC Tracking Area Code
TAG Timing Advance Group
TAI Tracking Area Identity
TAU Tracking Area Update
TB Transport Block
TBS Transport Block Size
TBD To Be Defined
TCI Transmission Configuration Indicator
TCP Transmission Communication Protocol
TDD Time Division Duplex
TDM Time Division Multiplexing
TDMA Time Division Multiple Access
TE Terminal Equipment
TEID Tunnel End Point Identifier
TFT Traffic Flow Template
TMSI Temporary Mobile Subscriber Identity
TNL Transport Network Layer
TPC Transmit Power Control
TPMI Transmitted Precoding Matrix Indicator
TR Technical Report
TRP, TRxP Transmission Reception Point
TRS Tracking Reference Signal
TRx Transceiver
TS Technical Specifications, Technical Standard
TTI Transmission Time Interval
Tx Transmission, Transmitting, Transmitter
U-RNTI UTRAN Radio Network Temporary Identity
UART Universal Asynchronous Receiver and Transmitter
UCI Uplink Control Information
UE User Equipment
UDM Unified Data Management
UDP User Datagram Protocol
UDSF Unstructured Data Storage Network Function
UICC Universal Integrated Circuit Card
UL Uplink
UM Unacknowledged Mode
UML Unified Modelling Language
UMTS Universal Mobile Telecommunications System
UP User Plane
UPF User Plane Function
URI Uniform Resource Identifier
URL Uniform Resource Locator
URLLC Ultra-Reliable and Low Latency
USB Universal Serial Bus
USIM Universal Subscriber Identity Module
USS UE-specific search space
UTRA UMTS Terrestrial Radio Access
UTRAN Universal Terrestrial Radio Access Network
UwPTS Uplink Pilot Time Slot
V2I Vehicle-to-Infrastructure
V2P Vehicle-to-Pedestrian
V2V Vehicle-to-Vehicle
V2X Vehicle-to-everything
VIM Virtualized Infrastructure Manager
VL Virtual Link
VLAN Virtual LAN, Virtual Local Area Network
VM Virtual Machine
VNF Virtualized Network Function
VNFFG VNF Forwarding Graph
VNFFGD VNF Forwarding Graph Descriptor
VNFM VNF Manager
VoIP Voice-over-IP, Voice-over-Internet Protocol
VPLMN Visited Public Land Mobile Network
VPN Virtual Private Network
VRB Virtual Resource Block
WiMAX Worldwide Interoperability for Microwave Access
WLAN Wireless Local Area Network
WMAN Wireless Metropolitan Area Network
WPAN Wireless Personal Area Network
X2-C X2-Control plane
X2-U X2-User plane
XML eXtensible Markup Language
XRES EXpected user RESponse
XOR eXclusive OR
ZC Zadoff-Chu
ZP Zero Power
Terminology
For the purposes of the present document, the following terms and definitions are applicable to the examples and embodiments discussed herein.
The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.
The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.
The term “resource” as used herein refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
The terms “instantiate,” “instantiation,” and the like as used herein refers to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content.
The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.
The term “SSB” refers to an SS/PBCH block.
The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.
The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA/DC.
The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.

Claims

1. An electronic device comprising: one or more processors to implement a first network data analytics function (NWDAF) with a model training logical function (MTLF); and one or more non-transitory computer-readable media comprising instructions that, when executed by the one or more processors, are to cause the NWDAF with the MTLF to: identify that a federated learning task for a machine learning (ML) model is to be initiated; identify a second NWDAF with a MTLF; identify, from the second NWDAF, an indication of an updated local version of the ML model; update, based on the updated local version of the ML model, a global version of the ML model; and transmit, to the second NWDAF, an indication of the updated global version of the ML model.
2. The electronic device of claim 1, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
3. The electronic device of claim 1, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
4. The electronic device of any of claims 1-3, wherein identification of the second NWDAF is based on: transmission of a Nnrf Discovery Request message to a network repository function (NRF); and receipt, from the NRF based on the transmitted Nnrf Discovery Request message, of a Nnrf Discovery Response message that includes an indication of the second NWDAF.
5. The electronic device of any of claims 1-3, wherein the indication of the updated local version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Response message.
6. The electronic device of claim 5, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to transmission, from the first NWDAF to the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
7. The electronic device of any of claims 1-3, wherein the indication of the updated global version of the ML model is transmitted in a Nnwdaf_MLModel_DistributedTraining_Notify message.
8. One or more non-transitory computer-readable media (NTCRM) comprising instructions that, when executed by one or more processors, are to cause a first network data analytics function (NWDAF) with a model training logical function (MTLF) to: identify that a federated learning task for a machine learning (ML) model is to be initiated; identify a second NWDAF with a MTLF; identify, from the second NWDAF, an indication of an updated local version of the ML model; update, based on the updated local version of the ML model, a global version of the ML model; and transmit, to the second NWDAF, an indication of the updated global version of the ML model.
9. The one or more NTCRM of claim 8, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
10. The one or more NTCRM of claim 8, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
11. The one or more NTCRM of any of claims 8-10, wherein identification of the second NWDAF is based on: transmission of a Nnrf Discovery Request message to a network repository function (NRF); and receipt, from the NRF based on the transmitted Nnrf Discovery Request message, of a Nnrf Discovery Response message that includes an indication of the second NWDAF.
12. The one or more NTCRM of any of claims 8-10, wherein the indication of the updated local version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Response message.
13. The one or more NTCRM of claim 12, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to transmission, from the first NWDAF to the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
14. The one or more NTCRM of any of claims 8-10, wherein the indication of the updated global version of the ML model is transmitted in a Nnwdaf_MLModel_DistributedTraining_Notify message.
15. An electronic device comprising: one or more processors to implement a first network data analytics function (NWDAF) with a model training logical function (MTLF); and one or more non-transitory computer-readable media comprising instructions that, when executed by the one or more processors, are to cause the NWDAF with the MTLF to: update a local version of a machine learning (ML) model; transmit, to a second NWDAF, an indication of an updated local version of the ML model; and identify, from the second NWDAF based on the updated local version of the ML model, an indication of an updated global version of the ML model.
16. The electronic device of claim 15, wherein the second NWDAF is a NWDAF with a MTLF that is configured for federated learning aggregation.
17. The electronic device of claim 15, wherein the first NWDAF is a NWDAF with a MTLF that is configured for federated learning participation.
18. The electronic device of any of claims 15-17, wherein the indication of the updated local version of the ML model is transmitted to the second NWDAF in a Nnwdaf_MLModel_DistributedTraining_Response message.
19. The electronic device of claim 18, wherein the Nnwdaf_MLModel_DistributedTraining_Response message is responsive to receipt, from the second NWDAF, of a Nnwdaf_MLModel_DistributedTraining_Request message.
20. The electronic device of any of claims 15-17, wherein the indication of the updated global version of the ML model is received in a Nnwdaf_MLModel_DistributedTraining_Notify message.
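The exchange recited in the claims above is, in effect, a federated-averaging round: an aggregating NWDAF with MTLF distributes a global model, collects updated local versions from participating NWDAFs with MTLF, averages them into an updated global version, and notifies the participants. The following Python sketch is purely illustrative and is not part of the claimed subject matter; every class, method, and value here is hypothetical, and the comments mapping steps to Nnwdaf service operations indicate only a rough correspondence, not a 3GPP API.

```python
# Illustrative sketch of one federated-learning round between an aggregator
# NWDAF(MTLF) and participant NWDAF(MTLF)s, as described in the claims.
# All names are hypothetical; models are represented as flat weight lists.

from dataclasses import dataclass


@dataclass
class Participant:
    """A participant NWDAF with MTLF holding a local version of the ML model."""
    name: str
    weights: list

    def local_update(self, global_weights):
        # Stand-in for local training: move weights toward this node's data.
        self.weights = [(g + w) / 2 for g, w in zip(global_weights, self.weights)]
        # ~ returned in a Nnwdaf_MLModel_DistributedTraining_Response
        return self.weights


@dataclass
class Aggregator:
    """The NWDAF with MTLF configured for federated-learning aggregation."""
    global_weights: list

    def training_round(self, participants):
        # ~ Nnwdaf_MLModel_DistributedTraining_Request sent to each participant
        updates = [p.local_update(self.global_weights) for p in participants]
        # Federated averaging: combine local versions into the global version.
        n = len(updates)
        self.global_weights = [sum(col) / n for col in zip(*updates)]
        # ~ Nnwdaf_MLModel_DistributedTraining_Notify carries the new global model
        return self.global_weights


agg = Aggregator(global_weights=[0.0, 0.0])
parts = [Participant("nwdaf-1", [1.0, 2.0]), Participant("nwdaf-2", [3.0, 4.0])]
new_global = agg.training_round(parts)
print(new_global)  # [1.0, 1.5]
```

In this toy round, each participant's local update is the midpoint of its weights and the global weights, and the aggregator averages the two local versions into the updated global version it would then notify back to the participants.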
PCT/US2023/064122 2022-03-11 2023-03-10 Training updates for network data analytics functions (nwdafs) WO2023173075A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263319103P 2022-03-11 2022-03-11
US63/319,103 2022-03-11
US202263320592P 2022-03-16 2022-03-16
US63/320,592 2022-03-16

Publications (1)

Publication Number Publication Date
WO2023173075A1

Family

ID=87936085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/064122 WO2023173075A1 (en) 2022-03-11 2023-03-10 Training updates for network data analytics functions (nwdafs)

Country Status (1)

Country Link
WO (1) WO2023173075A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190222489A1 (en) * 2018-04-09 2019-07-18 Intel Corporation NETWORK DATA ANALYTICS FUNCTION (NWDAF) INFLUENCING FIFTH GENERATION (5G) QUALITY OF SERVICE (QoS) CONFIGURATION AND ADJUSTMENT
US20220046101A1 (en) * 2019-11-06 2022-02-10 Tencent Technology (Shenzhen) Company Limited Nwdaf network element selection method and apparatus, electronic device, and readable storage medium
KR20220021438A (en) * 2020-08-13 2022-02-22 한국전자통신연구원 Management method of machine learning model for network data analytics function device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Open Discussion about eNA work in R18", 3GPP Draft S2-2106313, 3rd Generation Partnership Project (3GPP), SA WG2, e-meeting, 16-27 August 2021, XP052054042 *
NOKIA, NOKIA SHANGHAI BELL: "Corrections to ML model provisioning and AnLF / MTLF split functionality", 3GPP Draft S2-2107250, 3rd Generation Partnership Project (3GPP), SA WG2, e-meeting, 18-22 October 2021, XP052061479 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23767737

Country of ref document: EP

Kind code of ref document: A1