EP4384947A1 - Systems and methods for optimizing the training of AI/ML models and algorithms - Google Patents

Systems and methods for optimizing the training of AI/ML models and algorithms

Info

Publication number
EP4384947A1
Authority
EP
European Patent Office
Prior art keywords
model
algorithm
network node
message
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP22765043.9A
Other languages
German (de)
English (en)
Inventor
Pablo SOLDATI
Luca LUNARDI
Johan Rune
Henrik RYDÉN
Angelo Centonza
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of EP4384947A1 publication Critical patent/EP4384947A1/fr


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/092 Reinforcement learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]

Definitions

  • the present disclosure relates to a wireless communication system and, more specifically, to training of Artificial Intelligence (AI) or Machine Learning (ML) models and algorithms in a wireless communication system.
  • a Study Item (SI) “Enhancement for Data Collection for NR and EN-DC” is defined in Third Generation Partnership Project (3GPP) RP-201620.
  • the study item aims to study the functional framework for Radio Access Network (RAN) intelligence enabled by further enhancement of data collection through use cases, examples, etc. and identify the potential standardization impacts on current Next Generation RAN (NG-RAN) nodes and interfaces.
  • the study focuses on AI/ML functionality and corresponding types of inputs/outputs.
  • The input/output and the location of the Model inference function should be studied case by case.
  • RAN3 should focus on the analysis of data needed at the Model training function from external functions, while the aspects of how the Model training function uses inputs to train a model are out of RAN3 scope.
  • Model training and Model inference functions should be able to request, if needed, specific information to be used to train or execute the AI/ML algorithm and to avoid reception of unnecessary information.
  • the nature of such information depends on the use case and on the algorithm.
  • the Model inference function should signal the outputs of the model only to nodes that have explicitly requested them (e.g. via subscription), or nodes that are subject to actions based on the output from model inference.
  • NG-RAN is prioritized; EN-DC is included in the scope.
  • a general framework and workflow for AI/ML optimization should be defined and captured in the TR. The generalized workflow should not prevent "thinking beyond" the workflow if the use case requires it.
  • Data Collection is a function that provides input data to Model training and Model inference functions.
  • Artificial Intelligence / Machine Learning (AI/ML) algorithm-specific pre-processing of data is not carried out in the Data Collection function.
  • input data may include measurements from User Equipments (UEs) or different network entities, performance feedback, and AI/ML model output.
  • Training Data: information needed for the AI/ML model training function.
  • Inference Data: information needed as an input for the Model inference function to provide a corresponding output.
  • Model Training is a function that performs the training of the ML model.
  • the Model training function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation of raw data), if required.
  • Model Inference is a function that provides AI/ML model inference output (e.g., predictions or decisions).
  • the Model inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation of raw data), if required.
  • Actor is a function that receives the output from the Model inference function and triggers or performs corresponding actions.
  • the Actor may trigger actions directed to other entities or to itself.
  • Feedback: information that may be needed to derive training or inference data or performance feedback.
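As a non-limiting illustration of how these functions interact (all class names and the toy mean-predictor "model" below are hypothetical and not part of the disclosure), the pipeline Data Collection → Model Training → Model Inference → Actor, with feedback returned to Data Collection, can be sketched as:

```python
from typing import Callable, List

# Illustrative sketch only: Data Collection feeds Model Training (training
# data) and Model Inference (inference data); the Actor consumes inference
# output and returns feedback to Data Collection.

class DataCollection:
    def __init__(self) -> None:
        self.samples: List[float] = []
        self.feedback: List[float] = []

    def add(self, sample: float) -> None:
        self.samples.append(sample)

    def training_data(self) -> List[float]:
        return list(self.samples)

    def inference_data(self) -> List[float]:
        return self.samples[-1:]  # latest sample only

class ModelTraining:
    def train(self, data: List[float]) -> Callable[[float], float]:
        mean = sum(data) / len(data)
        # Toy "model": always predict the mean of the training data.
        return lambda x: mean

class ModelInference:
    def __init__(self, model: Callable[[float], float]) -> None:
        self.model = model

    def infer(self, data: List[float]) -> float:
        return self.model(data[0])

class Actor:
    def act(self, output: float, collection: DataCollection) -> None:
        # Feedback loop back to Data Collection.
        collection.feedback.append(output)

dc = DataCollection()
for s in (1.0, 2.0, 3.0):
    dc.add(s)
model = ModelTraining().train(dc.training_data())
inference = ModelInference(model)
Actor().act(inference.infer(dc.inference_data()), dc)
print(dc.feedback)  # [2.0]
```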
  • Validating the ML model is important to ensure its accuracy. When the model is trained, validating it with a different set of data (i.e., different from the training data) provides an opportunity to further improve the model quality, which helps avoid wrong decisions by the model in real-life prediction.
  • “Data Collection” should also provide validation data to “Model Training”, so that the accuracy of the trained model can be guaranteed.
  • Proposal 13 “Data Collection” function should also provide validation data to “Model Training” function for ML model validation.
  • Proposal 14 “Model Training” should also perform model validation based on the validation data set received from “Data Collection” to further improve model accuracy.
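The validation step in Proposals 13 and 14 can be illustrated with a minimal sketch (the linear toy model, the held-out split, and the acceptance threshold are illustrative assumptions, not part of the proposals):

```python
import random

# Hold out a validation set distinct from the training data and keep the
# candidate model only if its validation error is acceptable.
random.seed(0)
data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(20)]
train, validation = data[:15], data[15:]

# "Train": least-squares slope through the origin on the training set.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Validate on data the model has never seen.
val_error = sum(abs(y - slope * x) for x, y in validation) / len(validation)
accepted = val_error < 0.5  # deployment threshold (illustrative)
print(accepted)
```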
  • the Actor is the one monitoring the performance of the ML Model.
  • Adding feedback on the ML Model performance from the Actor to Model Training allows, in case of model performance degradation, retraining of the ML model while keeping Data Collection ML-Model independent (ML-Model-independent data remains part of the feedback from the Actor to Data Collection).
  • Model Training may request data from Data Collection. In this way, the data needed by Model Training is collected while Data Collection is kept ML-Model independent.

Summary
  • a method performed by a first network node comprises receiving, from at least one other network node, at least one message comprising: (a) a set of one or more historical data associated to an AI or ML (AI/ML) model or algorithm; (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable; and (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or (iii) both (i) and (ii).
  • the method further comprises performing one or more actions with respect to the AI/ML model or algorithm based on information comprised in the at least one message.
  • the AI/ML model or algorithm can be optimized by, e.g., optimizing the associated training function and/or based on drifts in the data distribution of the dataset when the data collected for the AI/ML model or algorithm is not co-located with the node hosting the training function.
  • the one or more actions comprise: (A) training the AI/ML model or algorithm, (B) optimizing the AI/ML model or algorithm, (C) updating the AI/ML model or algorithm, (D) testing the AI/ML model or algorithm, (E) validating the AI/ML model or algorithm, or (F) any combination of two or more of (A)-(E).
  • the one or more actions further comprise sending a message to a third network node comprising an updated version of the AI/ML model or algorithm or a new AI/ML model or algorithm that replaces the AI/ML model or algorithm.
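As a non-limiting sketch of how the first network node might map the received message to the actions (A)-(E) above (the message fields, instruction strings, and action names are hypothetical, not standardized):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative message: historical data, a performance metric for which
# optimization is desirable, and an instruction or recommendation.
@dataclass
class TrainingMessage:
    historical_data: List[float] = field(default_factory=list)
    performance_metric: Optional[str] = None
    instruction: Optional[str] = None  # e.g. "retrain", "validate", "replace"

def handle(msg: TrainingMessage) -> List[str]:
    """Dispatch the instruction to one or more actions."""
    actions: List[str] = []
    if msg.instruction in ("retrain", "update", "optimize"):
        actions.append(f"retrain on {len(msg.historical_data)} samples")
    if msg.instruction in ("validate", "test"):
        actions.append("validate model")
    if msg.instruction == "replace":
        actions.append("deploy new model to inference node")
    return actions

print(handle(TrainingMessage([0.1, 0.2], "accuracy", "retrain")))
```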
  • the method further comprises, prior to receiving the at least one message, transmitting a first message to a second network node.
  • the first message comprises one or more requests or instructions, one or more configurations or indications, or both the one or more requests or instructions and the one or more configurations or indications.
  • the one or more requests or instructions comprise one or more requests or instructions to provide one or more requested historical data associated to at least one AI/ML model or algorithm, monitor and/or report the performance of the at least one AI/ML model or algorithm, monitor and/or report the quality of the data set associated to the at least one AI/ML model, or a combination of any two or more thereof.
  • the one or more configurations or indications are associated to the one or more requests or instructions, and the one or more configurations or indications specify: data to monitor for the at least one AI/ML model or algorithm, information to be reported for the at least one AI/ML model or algorithm, event(s) and/or conditions to provide the one or more requested historical data associated to the AI/ML model or algorithm, or a combination of any two or more thereof.
  • receiving the at least one message comprises receiving at least one second message from the second network node, the at least one second message comprising: (a) a set of one or more historical data associated to AI/ML model or algorithm, (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable, and/or (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or (iii) both (i) and (ii).
  • receiving the at least one message comprises receiving at least one message from a fourth network node comprising: (a) a set of one or more historical data associated to an AI/ML model or algorithm; (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable; and/or (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or (iii) both (i) and (ii).
  • receiving the at least one message comprises receiving at least one message from the second network node and receiving at least one message from a fourth network node.
  • a first network node is adapted to receive, from at least one other network node, at least one message comprising: (a) a set of one or more historical data associated to an AI/ML model or algorithm; (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable; and (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or (iii) both (i) and (ii).
  • the first network node is further adapted to perform one or more actions with respect to the AI/ML model or algorithm based on information comprised in the at least one message.
  • a first network node comprises processing circuitry configured to cause the first network node to receive, from at least one other network node, at least one message comprising: (a) a set of one or more historical data associated to an AI/ML model or algorithm; (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable; and (c) instructions or recommendations to (i) retrain the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or (iii) both (i) and (ii).
  • the processing circuitry is further configured to cause the first network node to perform one or more actions with respect to the AI/ML model or algorithm based on information comprised in the at least one message.
  • a method performed by a second network node comprises transmitting, to a first network node, at least one message comprising: (a) a set of one or more historical data associated to an AI/ML model or algorithm, (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable, and (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or both (i) and (ii).
  • the method further comprises, prior to transmitting the at least one message to the first network node, receiving a first message from the first network node.
  • the first message comprises one or more requests or instructions, one or more configurations or indications, or both.
  • the one or more requests or instructions comprises one or more requests or instructions to: provide one or more requested historical data associated to at least one AI/ML model or algorithm, monitor and/or report the performance of the at least one AI/ML model or algorithm, monitor and/or report the quality of the data set associated to the at least one AI/ML model, or a combination of any two or more thereof.
  • the one or more configurations or indications are associated to the one or more requests or instructions and specify: data to monitor for the at least one AI/ML model or algorithm, information to be reported for the at least one AI/ML model or algorithm, event(s) and/or conditions to provide the one or more requested historical data associated to the AI/ML model or algorithm, or a combination of any two or more thereof.
  • the method further comprises obtaining at least some information comprised in the message transmitted to the first network node from another network node.
  • the method further comprises instructing another network node to provide some of the information requested by the first message to the first network node.
  • a second network node is adapted to transmit, to a first network node, at least one message comprising: (a) a set of one or more historical data associated to an AI/ML model or algorithm, (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable, and (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or both (i) and (ii).
  • a second network node comprises processing circuitry configured to cause the second network node to transmit, to a first network node, at least one message comprising: (a) a set of one or more historical data associated to an AI/ML model or algorithm, (b) indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable, and (c) instructions or recommendations to (i) re-train the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) to test/validate at least one AI/ML model, or both (i) and (ii).
  • a method performed by a second network node comprises receiving a first message from a first network node, the first message comprising one or more requests or instructions, one or more configurations or indications, or both.
  • the one or more requests or instructions comprises one or more requests or instructions to: provide one or more requested historical data associated to at least one AI/ML model or algorithm, monitor and/or report the performance of the at least one AI/ML model or algorithm, monitor and/or report the quality of the data set associated to the at least one AI/ML model, or a combination of any two or more thereof.
  • the one or more configurations or indications are associated to the one or more requests or instructions and specify: data to monitor for the at least one AI/ML model or algorithm, information to be reported for the at least one AI/ML model or algorithm, event(s) and/or conditions to provide the one or more requested historical data associated to the AI/ML model or algorithm, or a combination of any two or more thereof.
  • the method further comprises transmitting, to a fourth network node, a message that subscribes to the fourth network node for information requested by the first message.
  • the method further comprises sending, to the first network node, a message that instructs the first network node to obtain the information requested by the first message from the fourth network node.
  • a second network node is also disclosed.
  • a second network node is adapted to receive a first message from a first network node, the first message comprising one or more requests or instructions, one or more configurations or indications, or both.
  • the one or more requests or instructions comprises one or more requests or instructions to: provide one or more requested historical data associated to at least one AI/ML model or algorithm, monitor and/or report the performance of the at least one AI/ML model or algorithm, monitor and/or report the quality of the data set associated to the at least one AI/ML model, or a combination of any two or more thereof.
  • the one or more configurations or indications are associated to the one or more requests or instructions and specify: data to monitor for the at least one AI/ML model or algorithm, information to be reported for the at least one AI/ML model or algorithm, event(s) and/or conditions to provide the one or more requested historical data associated to the AI/ML model or algorithm, or a combination of any two or more thereof.
  • the second network node is further adapted to transmit, to a fourth network node, a message that subscribes to the fourth network node for information requested by the first message.
  • a second network node comprises processing circuitry configured to cause the second network node to receive a first message from a first network node, the first message comprising one or more requests or instructions, one or more configurations or indications, or both.
  • the one or more requests or instructions comprises one or more requests or instructions to: provide one or more requested historical data associated to at least one AI/ML model or algorithm, monitor and/or report the performance of the at least one AI/ML model or algorithm, monitor and/or report the quality of the data set associated to the at least one AI/ML model, or a combination of any two or more thereof.
  • the one or more configurations or indications are associated to the one or more requests or instructions and specify: data to monitor for the at least one AI/ML model or algorithm, information to be reported for the at least one AI/ML model or algorithm, event(s) and/or conditions to provide the one or more requested historical data associated to the AI/ML model or algorithm, or a combination of any two or more thereof.
  • the processing circuitry is further configured to cause the second network node to transmit, to a fourth network node, a message that subscribes to the fourth network node for information requested by the first message.
  • Figure 1 is a reproduction of Figure 4.2-1 from Third Generation Partnership Project (3GPP) document R3-212978;
  • Figure 2 illustrates one example method executed by a first network node together with a second network node to optimize a training function of an Artificial Intelligence (AI) / Machine Learning (ML) model or algorithm in a radio communication network, wherein a second message provides information and data sent without need for a subscription confirmation (or implicit subscription confirmation), in accordance with an embodiment of the present disclosure;
  • Figure 3 illustrates a procedure to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein a SECOND MESSAGE first provides a subscription confirmation and then the requested data or information, in accordance with another embodiment of the present disclosure;
  • Figure 4 illustrates another embodiment of the method, wherein a first network node receives a second message that was not generated by any previous message signaled from the first network node to the second network node, in accordance with another embodiment of the present disclosure;
  • Figure 5 illustrates another embodiment of the procedure to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein a first network node can send a third message to a third network node;
  • Figure 6 illustrates an embodiment of a procedure, or method, to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein a first network node obtains (at least part of) the requested data from a fourth network node as a result of indications comprised in a second message;
  • Figure 7 illustrates an embodiment of a procedure, or method, to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein a second network node obtains at least part of the requested data from a fourth network node;
  • Figures 8, 9, 10, and 11 show example embodiments related to how the proposed solution can map to the current Functional Framework discussed in 3GPP;
  • Figure 12 shows an example of a communication system in which embodiments of the present disclosure described herein may be implemented;
  • Figure 13 shows a User Equipment (UE) in accordance with some embodiments;
  • Figure 14 shows a network node in accordance with some embodiments;
  • Figure 15 is a block diagram of a host, which may be an embodiment of the host of Figure 12, in accordance with various aspects described herein;
  • Figure 16 is a block diagram illustrating a virtualization environment in which functions implemented by some embodiments may be virtualized; and
  • Figure 17 shows a communication diagram of a host communicating via a network node with a UE over a partially wireless connection in accordance with some embodiments.
  • Certain aspects of the disclosure and their embodiments may provide solutions to these or other challenges.
  • Systems and methods are described herein that provide a solution wherein a first network node hosting a training function of at least an AI/ML model or algorithm sends, to a second network node hosting a data collection function of the same AI/ML model or algorithm, a message comprising a subscription request to obtain historical data required to optimize the AI/ML model or algorithm.
  • the first network node may further transmit, as part of, or together with, or separately from the subscription request, one or more of:
  • the second network node (e.g., a data source or data collection function/node) is configured to detect a systematic change in the data set associated to at least an AI/ML model or algorithm indicated by the first network node, possibly caused by a change in the environment or in node configuration for the equipment collecting the data. Upon such a detection, the second network node could be further configured to inform the first network node (e.g., the model training function/node) that retraining or additional training is needed. As a further option, the second network node may be configured to provide the first network node (i.e., the model training function/node) with new training data samples which could, in one example, implicitly trigger a re-training operation. The new training data could provide, for instance, new information related to the part of the dataset containing data associated to the detected systematic change.
  • the first network node could therefore receive a message from the second network node comprising such new historical data and/or an indication of a change in the data distribution associated to at least an AI/ML model or algorithm and/or an indication to re-train at least one AI/ML model or algorithm.
  • the first network node could receive such message from another network node (a third network node) indicated by the second network node to the first network node.
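The detection of a systematic change in the data set described above is not specified in detail; as a non-limiting sketch (the z-score rule and the threshold are illustrative assumptions), the data collection node could compare newly collected samples against the distribution seen at training time and signal the training node on a shift:

```python
import statistics

# Compare the mean of a new batch against the training-time distribution;
# a large standardized deviation suggests a systematic change (drift).
def drift_detected(train_samples, new_samples, threshold=3.0):
    mu = statistics.mean(train_samples)
    sigma = statistics.stdev(train_samples)
    new_mu = statistics.mean(new_samples)
    # Standard error of the new-batch mean under the training distribution.
    z = abs(new_mu - mu) / (sigma / len(new_samples) ** 0.5)
    return z > threshold

train = [10.0, 11.0, 9.0, 10.5, 9.5, 10.2, 9.8, 10.1]
print(drift_detected(train, [10.1, 9.9, 10.0, 10.2]))   # stable conditions
print(drift_detected(train, [14.8, 15.2, 15.0, 14.9]))  # systematic shift
```

On a positive detection, the node could send the message described above (an indication of the distribution change and/or new training data samples) to trigger re-training.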
  • the second network node may have knowledge of the model performance given the inputs provided to the model.
  • the second network node is the node providing inputs to a model inference function running the model in question.
  • the second network node can therefore identify whether the model, with the inputs provided to it by the second network node, is generating outputs with, e.g., a sufficiently good uncertainty or accuracy. If such performance (e.g., uncertainty/accuracy) is not sufficient, the second network node may signal to the first network node new information, which can be used as training data. Optionally a request for retraining can also be signaled.
  • the second network node may also signal to the first network node training data that were not used during the model training phase. This may be done as a result of the second network node understanding that the model inference function is inferring on data that are not in the range of samples used for training the model.
  • the new training data may be associated with a request from second to first network node to retrain the model with the new data.
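The check that inference inputs fall outside the range of samples used for training can be sketched as follows (the range test and the margin parameter are illustrative assumptions; the disclosure does not prescribe a method):

```python
# Flag inference inputs outside the value range seen during training, so
# the corresponding samples can be forwarded as new training data.
def out_of_training_range(training_inputs, inference_input, margin=0.0):
    low, high = min(training_inputs), max(training_inputs)
    span = high - low
    return not (low - margin * span <= inference_input <= high + margin * span)

training_inputs = [3.0, 4.5, 5.0, 6.2, 7.1]
new_training_data = [x for x in (5.5, 9.4, 2.1)
                     if out_of_training_range(training_inputs, x)]
print(new_training_data)  # samples to forward for retraining
```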
  • the second network node may deduce that the network conditions under which the model has been trained have changed to the point that, even with the provisioning of new training data from the second network node to the first network node, a retrained model would not deliver sufficiently good performance.
  • the second network node may signal to the first network node a message indicating that a process to derive a new model should be triggered. Additionally, the second network node may also signal the change in the network conditions that caused such decision. An example of such changes may be the introduction of a new radio access technology in the network, or the introduction of a new type of mobility for the served UEs.
  • the first network node, having obtained the historical data, can use the historical data to improve its performance.
  • One aspect of the solution described herein is the use of a subscription request to obtain historical data and a subsequent reception of such data, wherein the historical data can be used by an AI/ML model training function to optimize/re-train an AI/ML model or algorithm.
  • the first network node is the host of a model training function associated to at least one AI/ML model or algorithm.
  • the second network node is the host of a data collection function or a data storage function or a data source function responsible for handling data samples for at least one AI/ML model or algorithm (as indicated by the first network node).
  • the first network node, e.g. after an update of the AI/ML model, can decide to send an update of the model and/or indications of performance to a third network node (hosting an inference function of the AI/ML model or algorithm), or to the second network node (hosting the data collection function).
  • when the second network node receives the subscription request from the first network node, it can indicate to the first network node to contact a further node (a fourth network node) to obtain at least part of the historical data.
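The subscription flow described above, including the redirection to a fourth network node, can be sketched as follows (all message and node names are hypothetical illustrations, not standardized signaling):

```python
from dataclasses import dataclass
from typing import Dict, List, Union

# First node (model training) subscribes to the second node (data
# collection); the second node either answers with historical data or
# redirects the subscriber to a fourth node that holds (part of) the data.

@dataclass
class SubscriptionRequest:
    model_id: str

@dataclass
class Redirect:
    target_node: str

class DataNode:
    def __init__(self, store: Dict[str, List[float]], delegate: str = ""):
        self.store = store
        self.delegate = delegate  # fourth node holding the data, if any

    def handle(self, req: SubscriptionRequest) -> Union[List[float], Redirect]:
        if req.model_id in self.store:
            return self.store[req.model_id]
        return Redirect(self.delegate)

second = DataNode({"load-model": [0.7, 0.8]}, delegate="fourth-node")
print(second.handle(SubscriptionRequest("load-model")))      # historical data
print(second.handle(SubscriptionRequest("mobility-model")))  # redirect
```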
  • Certain embodiments may provide one or more of the following technical advantage(s).
  • One of the advantages of the proposed solution is to enable the possibility to optimize an AI/ML model or algorithm deployed in a communication network by optimizing the associated training function.
  • a further advantage is to enable the optimization of an AI/ML model or algorithm based on changes or drifts in the data distribution of the dataset when the data collected for an AI/ML model or algorithm is not co-located with the node hosting the model training function.
  • a network node can be a Radio Access Network (RAN) node, an Operations and Management (OAM) node, a Core Network node, a Service Management and Orchestration (SMO) node, a Network Management System (NMS), a Non-Real Time RAN Intelligent Controller (Non-RT RIC), a Real-Time RAN Intelligent Controller (RT-RIC), a gNodeB (gNB), evolved Node B (eNB), en-gNB, ng-eNB, gNB Central Unit (gNB-CU), gNB-CU Control Plane (gNB-CU-CP), gNB-CU User Plane (gNB-CU-UP), eNB Central Unit (eNB-CU), eNB-CU Control Plane (eNB-CU-CP), eNB-CU User Plane (eNB-CU-UP), Integrated Access and Backhaul (IAB) node, IAB-donor Distributed Unit (DU), IAB-donor Central Unit (CU), IAB-DU, IAB Mobile Termination (IAB-MT)
  • model training, model optimizing, model optimization, and model updating are herein used interchangeably with the same meaning unless explicitly specified otherwise.
  • network nodes may be physical nodes or functions or logical entities of any kind, e.g. software entities implemented in a data center or a cloud, e.g. using one or more virtual machines, and two network nodes may well be implemented as logical software entities in the same data center or cloud.
  • a “physical network node” as used herein should be understood as a physical node having one or more hardware components (e.g., one or more processors, memory, network interface, transmitter(s), receiver(s), and/or the like).
  • Embodiments of the systems and methods disclosed herein are independent with respect to specific AI/ML model types or learning problems/setting (e.g., supervised learning, unsupervised learning, reinforcement learning, hybrid learning, centralized learning, federated learning, distributed learning, etc.)
  • Non-limiting examples of AI/ML algorithms may include supervised learning algorithms, deep learning algorithms, reinforcement learning type of algorithms (such as Deep Q-Network (DQN), A2C, A3C, etc.), contextual multi-armed bandit algorithms, autoregression algorithms, etc., or combinations thereof.
  • AI/ML models such as neural networks (e.g., feedforward neural networks, deep neural networks, recurrent neural networks, convolutional neural networks, etc.).
  • reinforcement learning algorithms may include deep reinforcement learning (such as DQN, proximal policy optimization (PPO), double Q-learning), actor-critic algorithms (such as Advantage actor-critic algorithms, e.g. A2C or A3C, actor-critic with experience replay, etc.), policy gradient algorithms, off-policy learning algorithms, etc.
  • the “first network node” is the host of a model training function associated to at least one AI/ML model or algorithm.
  • the “second network node” is the host of a data collection function or a data storage function or a data source function responsible for handling data samples for at least one AI/ML model or algorithm (as indicated by the first network node).
  • FIG. 2 illustrates one example method executed by a first network node 200 together with a second network node 202 to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein the SECOND MESSAGE provides information and data sent without need for a subscription confirmation (or implicit subscription confirmation) in accordance with an embodiment of the present disclosure.
  • this method comprises the steps of:
  • the first network node 200 transmits at least one FIRST MESSAGE to the second network node 202 of a radio communication network (e.g., a 3GPP cellular communications system).
  • the FIRST MESSAGE comprises a subscription request comprising one or more of:
    o request(s)/instruction(s) specifying one or more of:
      ▪ to provide one or more requested historical data associated to at least one AI/ML model or algorithm
      ▪ to monitor and/or report the quality of the data set associated to the at least one AI/ML model, e.g. to identify changes/drifts in the data distribution of the AI/ML model, which would justify the retraining
    o configurations/indications associated to the requests/instructions, specifying one or more of:
  • the first network node 200 receives at least one SECOND MESSAGE from the second network node 202.
  • the SECOND MESSAGE comprises one or more of:
    o a set of one or more historical data associated to the AI/ML model or algorithm for which optimization is desirable
    o indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable
    o instructions or recommendations:
  • Step 208 The first network node 200 trains, optimizes, updates, tests, and/or validates the AI/ML model, e.g., based on the information comprised in the SECOND MESSAGE.
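  • By way of non-limiting illustration, the exchange of steps 204, 206, and 208 may be sketched as follows; the class and function names are hypothetical and do not form part of the disclosed signaling:

```python
from dataclasses import dataclass, field

@dataclass
class FirstMessage:
    """Subscription request sent by the first network node (step 204)."""
    model_id: str
    requested_historical_data: list = field(default_factory=list)
    monitor_data_quality: bool = False  # e.g., detect drift justifying retraining

@dataclass
class SecondMessage:
    """Response carrying data and/or indications (step 206)."""
    historical_data: list
    performance_metrics: dict
    recommend_retraining: bool

def retrain(data):
    # Placeholder for the model training/optimization function (step 208)
    return {"model": "updated", "n_samples": len(data)}

def first_node_step(second_node):
    # Step 204: transmit the FIRST MESSAGE (here, second_node is a callable
    # standing in for the second network node)
    request = FirstMessage(model_id="energy-saving-model",
                           requested_historical_data=["load_metric"],
                           monitor_data_quality=True)
    response = second_node(request)          # Step 206: receive SECOND MESSAGE
    if response.recommend_retraining or response.historical_data:
        return retrain(response.historical_data)   # Step 208
```

In this sketch the second network node is modeled as a simple callable; in practice it would be a separate node reachable over a network interface.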
  • the AI/ML model or algorithm indicated by the FIRST MESSAGE and/or by the SECOND MESSAGE is an AI/ML model or algorithm of the first network node 200 or of a third network node (not shown, but some network node other than the first network node 200 or the second network node 202).
  • Figure 3 illustrates a procedure to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein the SECOND MESSAGE first provides a subscription confirmation (step 300) and then the requested data or information (step 206), in accordance with another embodiment of the present disclosure. Otherwise, the embodiment of Figure 3 is the same as that of Figure 2.
  • the first network node 200 may receive at least one SECOND MESSAGE from the second network node 202 in response to the FIRST MESSAGE, wherein one or more of the information elements carried by the SECOND MESSAGE depends on one or more information elements of the FIRST MESSAGE
  • the FIRST MESSAGE may comprise an indication for the second network node 202 to provide instructions or recommendations pertaining to retrain at least one AI/ML model of the first network node 200.
  • the first network node 200 transmits (e.g., in the FIRST MESSAGE) at least one of:
  • Figure 4 illustrates another embodiment of the method wherein the first network node 200 receives a SECOND MESSAGE (step 206) that was not generated by any previous message signaled from the first network node 200 to the second network node 202.
  • the first network node 200 may:
  • the information comprised in the SECOND MESSAGE, as well as the subsequent actions that the first network node 200 may take upon receiving a SECOND MESSAGE, may in one case be dependent on the transmission of a FIRST MESSAGE or in other cases may be independent of the transmission of any FIRST MESSAGE.
  • the first network node 200 trains, optimizes, updates, or replaces with a new model at least one AI/ML model or algorithm based on information received with the SECOND MESSAGE (see step 208 of Figure 2, Figure 3, and Figure 4), such as based on one or more of:
  • the AI/ML model or algorithm of the first network node 200 that the SECOND MESSAGE instructs or recommends to re-train/optimize/update/replace with a new model can be indicated by the first network node 200 to the second network node 202 with a FIRST MESSAGE prior to receiving a SECOND MESSAGE.
  • the first network node 200 may train, optimize, update, or replace with a new model the AI/ML model or algorithm based on the received data.
  • the new model may be derived by means of the new data received, as well as with other historical data available at the first network node 200.
  • the first network node 200 may determine whether training, optimizing, updating, or replacing with a new model the AI/ML model or algorithm is necessary based on at least an indication of at least a performance metric associated to the AI/ML model or algorithm or based on instructions or recommendations to re-train/update/optimize the AI/ML model received with a SECOND MESSAGE.
  • the first network node 200 may train, optimize, update, or replace the AI/ML model or algorithm based on the available data.
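  • A minimal, non-limiting sketch of such a determination, assuming a hypothetical accuracy metric and threshold, could be:

```python
def needs_retraining(metrics, recommended, accuracy_floor=0.9):
    """Decide whether to train/optimize/update/replace the model.

    metrics: dict of performance-metric name -> value reported in a
             SECOND MESSAGE (names here are illustrative assumptions).
    recommended: True if the SECOND MESSAGE carries an explicit
                 instruction or recommendation to re-train.
    """
    if recommended:
        return True
    # Fall back to a threshold check on a reported performance metric
    return metrics.get("accuracy", 1.0) < accuracy_floor
```

The metric name and threshold are illustrative; any of the performance metrics disclosed herein could drive the same decision.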
  • the first network node 200 may transmit a FIRST MESSAGE (or a new message) to the second network node 202 comprising a subscription request to obtain from the second network node 202 one or more historical data associated to the AI/ML model or algorithm to be trained, optimized, updated, or replaced.
  • the first network node 200 may receive a message from the second network node 202, which is not derived from any previously signaled message from the first network node 200 to the second node 202, where the second network node 202 indicates to the first network node 200 to train, optimize, update, or replace the AI/ML model or algorithm in question.
  • Such an indication is based on observations carried out at the second network node 202 that determine that one of such actions is needed.
  • the message received from the second network node 202 may include historical data on the basis of which the model should be trained, optimized, updated, or replaced. Additionally, the message may include an indication that more historical data will be provided by the second network node 202 in the future, which can be used by the first network node to train, optimize, update, or replace the AI/ML model or algorithm in question.
  • the first network node 200 may receive a SECOND MESSAGE comprising instructions or recommendations to re-train/update/optimize/replace at least one AI/ML model and a set of one or more historical data associated to the AI/ML model.
  • the first network node 200 may determine to train, optimize, update, or replace the AI/ML model or algorithm based on the received data, instructions, and/or recommendations received from the second network node.
  • the first network node 200 determines to test and/or validate at least one AI/ML model or algorithm based on information received with the SECOND MESSAGE, such as based on one or more of
  • the first network node 200 may further determine one or more performance metric associated to the AI/ML algorithm tested, such as
  • the second network node 202 can issue an unsolicited message, namely a message not derived as the result of a FIRST MESSAGE received from the first network node 200, to test the model or algorithm in question.
  • the testing can be carried out with respect to a set of data received together with the request for testing.
  • the subscription request to obtain, from the second network node 202, one or more historical data associated to the AI/ML model or algorithm transmitted with the FIRST MESSAGE can comprise one or more of the following:
  • a reason or cause value, e.g. "re-train" or "optimize", could be used for the subscription request.
  • an indication of the type of data requested, which may include one or more of historical data samples comprising:
    o Historical data associated to the AI/ML model, such as measurements or estimates of the network state and/or user state that were used for inference of the AI/ML model or algorithm.
  • Inference data associated to the AI/ML model or algorithm such as a set of input data used by the model inference function executing the AI/ML model or algorithm.
  • the first network node could subscribe to historical inference data.
  • the first network node may subscribe to recent or live inference data associated to at least one AI/ML model or algorithm.
  • timing related indications indicating, e.g., a validity time associated to the subscription
  • Non-limiting examples can be: one or more periods of collection, data selected in a random fashion, data associated with one or more radio network procedures (e.g. mobility), data related to one or more user equipments or types of user equipment, data pertaining to performance indicators or to UE or network configuration data, data collected for one or more areas of interest (e.g. one or more coverage areas, one or more cells, one or more carrier frequencies, one or more TAs, one or more TAIs, one or more PLMNs, one or more RATs), data collected for one or more S-NSSAIs, one or more 5QIs, or one or more services, data collected for MDT, data collected for QoE, radio measurements, load metrics, data related to energy savings (e.g. an energy score), data collected at TTI level, per millisecond, per second, per day, or per reporting period.
    o filtering criteria can be combined.
  • the request of historical data can indicate that data of interest is a load metric for a list of cells, or an energy score and corresponding UE configuration data.
  • configurations/indications for providing the requested data, specifying one or more of:
    o a periodic sending with a reporting periodicity
    o a sending based on events (e.g., upon availability of the data)
    o timing indications such as a start time for initiating the sending, an end time to stop the sending, a duration during which the sending can happen, a duration of pause, a time to resume
    o indications of a size of historical data required, such as the number of data samples per batch of historical data to be provided to the first network node, a minimum, or a maximum amount of historical data (overall and/or per attempt of sending)
    o indications to pause or resume sending of historical data
  • Non-limiting examples can be to request to receive a notification:
    o when requested data is available or not available
  • historical data may change when a new service is activated, or a new option becomes available (e.g., a new S-NSSAI or a new PLMN is introduced) or when a new performance indicator is introduced
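  • By way of non-limiting illustration, the subscription request fields listed above could be encoded as follows; all field names are illustrative assumptions, not the disclosed message format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SubscriptionRequest:
    """Illustrative encoding of a FIRST MESSAGE subscription request."""
    cause: str = "re-train"                 # reason/cause value
    data_types: list = field(default_factory=lambda: ["historical"])
    cell_list: list = field(default_factory=list)   # filtering: area of interest
    reporting_periodicity_s: Optional[int] = None   # periodic sending
    event_based: bool = False                       # send upon data availability
    max_samples_per_batch: Optional[int] = None     # size indication
    validity_time_s: Optional[int] = None           # timing indication
    notify_on_unavailable: bool = True              # notification request
```

A request for "a load metric for a list of cells" would then combine `data_types=["load_metric"]` with a non-empty `cell_list`, illustrating how the filtering criteria can be combined.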
  • the at least one request/instruction/configuration/indication comprised in the FIRST MESSAGE may comprise one or more of:
  • Uncertainty can be of different types, e.g. aleatoric, epistemic, homoscedastic, heteroscedastic
    o Loss function optimized to train the AI/ML model or algorithm (e.g., Mean Squared Error, Root Mean Square Error, Mean Absolute Error, Coefficient of Determination, Adjusted R-squared, or a function thereof)
    o Utility function optimized to train the AI/ML model or algorithm (e.g., throughput, spectral efficiency, latency)
  • An indication of a time interval for monitoring the performance of the AI/ML model or algorithm, which may comprise one or more of:
    o A starting time
    o An ending time
    o A reporting periodicity
    o One or more time windows/durations/reporting periods
    o A periodicity of the time interval monitoring
  • An indication of the type of data samples to be monitored or to be used for monitoring the performance of the AI/ML model or algorithm, such as:
    o Inference data samples, i.e., input data samples, fulfilling a diversity condition with respect to the data samples used for training the AI/ML model or algorithm
    o Inference data samples, i.e., input data samples, providing information in one or more components of the data sample, such as:
      ▪ a list of at least one input data feature (i.e., a component of an input data sample) that is required to provide information
  • the indication of a drift or a change may comprise: o An indication of change or data drift associated to a data set of the AI/ML algorithm.
  • the FIRST MESSAGE may additionally indicate one or more of
  • A period of time in which the data samples of the data set should have been collected
  • at least a measure of the change or data drift associated to a data set or parts thereof, associated to the AI/ML algorithms, wherein the composition of the dataset may additionally be indicated by the FIRST MESSAGE as disclosed in other embodiments
  • a change and/or data drift in a data set or part thereof, such as in one or more components of the data samples, may be detected by comparing two or more data samples, e.g. collected or stored at different times.
  • Examples of measurements or metrics that can be used to monitor a change or a drift in a data set, or in the distribution of the data samples of a dataset or parts thereof, may comprise one or more distance metrics.
  • the FIRST MESSAGE may further indicate one or more metrics that the second network node may use and may report to monitor a drift or a change in at least one part of the data distribution of a data set associated to the AI/ML model or algorithm.
  • Non-limiting examples of such metrics may include
  • the configuration of at least one event/condition to provide the requested one or more historical data associated to the AI/ML model or algorithm transmitted by the first network node 200 to the second network node 202 with the FIRST MESSAGE may comprise one or more of:
  • One or more events or conditions for:
    o reporting at least a performance metric associated to the AI/ML algorithm
    o triggering, recommending, or instructing the first network node 200 to train, optimize, or update an AI/ML model or algorithm
    o providing to the first network node one or more historical data for training, optimizing, or updating an AI/ML model or algorithm
  • said event or conditions can be based on
  • such events and/or conditions could be realized by:
    o A list of one or more performance metrics associated to the algorithm that the second network node 202 should report
    o A list of at least one event or condition, associated to each performance metric, such as
  • Providing to the first network node one or more historical data for training, optimizing, or updating the AI/ML model or algorithm if at least a measured or estimated performance of the AI/ML model or algorithm
  • said events or conditions are based on
  • At least a measured or estimated metric associated to the historical data stored or collected for the AI/ML model or algorithm such as a metric indicating drift or a change in at least one part of the data distribution of a data set associated to the AI/ML model or algorithm
  • metrics may include: Manhattan (L1-norm), Euclidean (L2-norm), Minkowski, Cosine, and Chebychev types of distance metrics.
  • Some non-limiting examples include density-based approaches and proximity approaches (maximum distance to other points, average distance to other points, etc.)
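  • A non-limiting sketch of computing such distance metrics between a training-time batch and a recent batch of data samples (here reduced to per-feature means for brevity; a deployed monitor would compare full feature distributions) could be:

```python
import math

def feature_means(batch):
    """Per-feature mean of a batch of equally sized data samples."""
    n = len(batch)
    return [sum(col) / n for col in zip(*batch)]

def drift_distances(reference_batch, recent_batch):
    """Distance metrics between the reference (training-time) batch and a
    recent batch, usable as drift indicators per the metrics named above."""
    a, b = feature_means(reference_batch), feature_means(recent_batch)
    diffs = [x - y for x, y in zip(a, b)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return {
        "manhattan": sum(abs(d) for d in diffs),            # L1-norm
        "euclidean": math.sqrt(sum(d * d for d in diffs)),  # L2-norm
        "chebyshev": max(abs(d) for d in diffs),
        "cosine": 1 - dot / (na * nb) if na and nb else 0.0,
    }
```

A drift event or condition could then be a threshold on any of the returned values.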
  • Such events and/or conditions could be realized by:
    o Inference data samples, i.e., input data samples, fulfilling a diversity condition or providing new or different information with respect to the data samples used for training the AI/ML model or algorithm
    o Inference data samples, i.e., input data samples, that should provide information not previously collected in other data samples
    o Inference data samples, i.e., input data samples, for which information was lacking in historical data samples used to train the AI/ML model
  • such events and conditions based on the inference data samples can be realized by:
    o A list of at least one input data feature (i.e., a component of an input data sample) that is required to provide information
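  • The diversity condition above could, for instance, be realized with a proximity approach, flagging an inference sample whose distance to its nearest training sample exceeds a threshold; the threshold value and the choice of Euclidean distance are illustrative assumptions:

```python
import math

def euclidean(a, b):
    """Euclidean (L2-norm) distance between two data samples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_novel(sample, training_samples, threshold=1.0):
    """True if the inference sample provides information not covered by the
    training data, per a proximity (nearest-neighbor distance) criterion."""
    return min(euclidean(sample, t) for t in training_samples) > threshold
```

Samples flagged in this way could trigger the reporting, retraining, or data-provision events described above.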
  • the SECOND MESSAGE may comprise a confirmation or refusal of a subscription request received with the FIRST MESSAGE.
  • the SECOND MESSAGE confirms a subscription request, such confirmation can precede a subsequent SECOND MESSAGE comprising the set of one or more historical data, indications of at least a performance metric, and instructions or recommendations
  • the SECOND MESSAGE can comprise a confirmation or a refusal associated to a subscription request as a whole or to only part of the subscription request (e.g., only some of the requests/instructions and/or configurations/indications, only to requests/instructions/configurations applicable to data with certain characteristics) received from the first network node with a FIRST MESSAGE.
  • the SECOND MESSAGE may further comprise
  • the SECOND MESSAGE may further comprise a set of data associated to the AI/ML model or algorithm including only inference data, as indicated in the subscription received by the second network node with the FIRST MESSAGE.
  • the SECOND MESSAGE may provide at least one set of historical data samples, or a historical data set that the first network node should use for training the AI/ML algorithm indicated by the FIRST MESSAGE.
  • the SECOND MESSAGE may additionally comprise an indication indicating the scope of at least a data set provided to the first network node, such as training, testing or validation of an AI/ML model or algorithm indicated by the FIRST MESSAGE
  • the SECOND MESSAGE may comprise a set of historical data samples, or a historical data set that the first network node should use for training, testing, or validating the AI/ML algorithm indicated by the FIRST MESSAGE.
  • the SECOND MESSAGE may comprise an indication of which portion of the dataset the first network node should use for training, testing, or validating the AI/ML model or algorithm. In one example, this could be realized by indicating the fraction or percentage of the dataset associated to training, testing, and validation. In an alternative implementation, this could be realized by indicating a range of data samples to be used for training, testing, or validation.
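  • By way of non-limiting illustration, the two indicated encodings of the dataset split (fraction/percentage, or an explicit range of data samples) could be realized as follows; function names are illustrative:

```python
def split_by_fraction(dataset, train=0.7, test=0.2, validation=0.1):
    """Split a dataset by the fractions indicated in the SECOND MESSAGE."""
    n = len(dataset)
    n_train = int(n * train)
    n_test = int(n * test)
    return (dataset[:n_train],
            dataset[n_train:n_train + n_test],
            dataset[n_train + n_test:])

def split_by_range(dataset, train_range, test_range, validation_range):
    """Split a dataset by explicit (start, end) sample ranges."""
    pick = lambda r: dataset[r[0]:r[1]]
    return pick(train_range), pick(test_range), pick(validation_range)
```

Either encoding lets the second network node control how the first network node partitions the provided historical data set.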
  • the SECOND MESSAGE may provide at least one set of historical inference data samples, such as data samples provided by the second network node to a third network node for inference using the AI/ML model or algorithm indicated by the FIRST MESSAGE.
  • the SECOND MESSAGE may additionally comprise an indication of the age of such inference data samples, such as an indication of a time period when such inference data samples have been sent to or by a third network node for inference
  • the SECOND MESSAGE may provide at least one set of actual or live inference data samples, such as data samples provided by the second network node to a third network node for inference using the AI/ML model or algorithm indicated by the FIRST MESSAGE.
  • the first network node may receive multiple SECOND MESSAGES comprising one or more inference data samples currently sent to or used by a third network node for inference with the AI/ML model or algorithm indicated by the FIRST MESSAGE.
  • the at least a performance metric associated to the AI/ML model or algorithm indicated by the SECOND MESSAGE may comprise
  • the indicated performance metric associated to the AI/ML model or algorithm may have been requested by the first network node by means of a FIRST MESSAGE.
  • the SECOND MESSAGE comprises an indication of a drift or a change in at least part of the data distribution of a data set associated to the AI/ML model or algorithm indicated by the FIRST MESSAGE, such as a measurement or an estimate of a metric indicating a change or a drift in the data set or part thereof and/or in the distribution of the data of the dataset.
  • the SECOND MESSAGE may comprise: • an indication of change or data drift associated to a data set of the AI/ML algorithm
  • Examples of measurements or estimates of metrics that the SECOND MESSAGE may comprise or report to indicate changes and/or data drift in a data set, or at least a component of the dataset, could comprise one or more (or a combination) of:
  • the instructions or recommendations to re-train/update/optimize or to test/validate at least an AI/ML model indicated by the SECOND MESSAGE may comprise:
    o An indication of one or more parameters of the algorithm that the first network node should use to train, optimize, or update the AI/ML algorithm, such as:
      ▪ An indication of the optimization method to use for training the algorithm
      ▪ An indication of an exploration strategy and the corresponding exploration parameters
      ▪ An indication of a discounting factor
      ▪ An indication of a loss function or an error function that should be optimized to train the algorithm
  • the SECOND MESSAGE received by the first network node 200 may further comprise
  • timing related indications indicating, e.g., a time of validity/expiration for the subscription
  • a SECOND MESSAGE can be used to confirm or to refuse the subscription request included in the FIRST MESSAGE, or said confirmation or refusal can be sent from the second network node 202 to the first network node 200 using a FOURTH MESSAGE, responsive to the FIRST MESSAGE and preceding a SECOND MESSAGE comprising the requested historical data.
  • a SECOND MESSAGE can be sent synchronously or asynchronously to the FIRST MESSAGE.
  • a SECOND MESSAGE may comprise indications indicating one or more fourth network nodes that can be requested to send historical data, wherein a fourth network node can be comprised within a radio communication network (e.g., a RAN node or a UE) or outside a radio communication network.
  • said indications can be one or more IP addresses, one or more Uniform Resource Locators (URLs), or one or more Uniform Resource Identifiers (URIs).
  • Figure 5 illustrates another embodiment of the procedure to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein the first network node 200 can send a THIRD MESSAGE to a third network node 500 (step 502).
  • the first network node 200 trains, optimizes or updates at least one AI/ML model or algorithm based on information received with the SECOND MESSAGE
  • the first network node 200 may further
  • Transmit a THIRD MESSAGE to the third network node 500 (e.g., a Model Inference function) (see step 502) or to the second network node 202, where the THIRD MESSAGE comprises one or more of:
    o A trained, optimized, or updated AI/ML model (e.g., via a Model Deployment Update)
    o One or more indications of a performance metric associated to the AI/ML algorithm

Method at Second Network Node
  • Embodiments of a method executed by a second network node are also disclosed herein. As illustrated in Figures 2 and 3, in one embodiment, the method performed by the second network node 202 comprises the steps of:
  • Step 204 The second network node 202 receives at least one FIRST MESSAGE from the first network node 200 of the radio communication network, the FIRST MESSAGE comprising a subscription request comprising one or more of: o request(s)/instruction(s):
  • to monitor and/or report the quality of the data set associated to the at least one AI/ML model, e.g. to identify changes/drifts in the data distribution of the AI/ML model, which would justify the retraining
    o configurations/indications associated to the requests/instructions, specifying:
  • the second network node 202 transmits at least one SECOND MESSAGE to the first network node 200.
  • the at least one SECOND MESSAGE comprises one or more of: o a set of one or more historical data associated to the AI/ML model or algorithm for which optimization is desirable o indications of at least a performance metric associated to the AI/ML model or algorithm for which optimization is desirable o instructions or recommendations:
  • the AI/ML model or algorithm indicated by the FIRST MESSAGE and/or by the SECOND MESSAGE is an AI/ML model or algorithm of the first network node or of a third network node (see, e.g., the third network node 500 of Figure 5).
  • the second network node 202 may transmit at least one SECOND MESSAGE to the first network node 200 in response to the FIRST MESSAGE, wherein one or more of the information elements carried by the SECOND MESSAGE depend on one or more information elements of the FIRST MESSAGE
  • the second network node 202 may
  • the second network node 202 can issue an unsolicited message, namely a message not derived as the result of a first message received from the first network node 200, to test the model or algorithm in question.
  • the testing can be carried out with respect to a set of data received together with the request for testing.
  • FIG. 6 illustrates an embodiment of a procedure, or method, to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein the first network node 200 obtains (at least part of) the requested data from a fourth network node 600 as a result of indications comprised in the SECOND MESSAGE (see steps 602, 604, 606, 608, and 610).
  • Figure 7 illustrates an embodiment of a procedure, or method, to optimize a training function of an AI/ML model or algorithm in a radio communication network, wherein the second network node 202 obtains at least part of the requested data from a fourth network node 600.
  • the second network node 202, upon reception of the FIRST MESSAGE, sends a FOURTH MESSAGE to a fourth network node 600 (data storage) (step 602), where the FOURTH MESSAGE comprises a second subscription request to receive data from the fourth network node 600, based on the request of the first network node 200.
  • the second subscription can be the same as the first subscription request comprised in the FIRST MESSAGE (e.g., second network node 202 relays the first subscription) or a new one.
  • the second network node 202 derives the need for sending the second subscription request by checking different criteria, including one or more of the following:
  • the first subscription request pertains to historical data that are not present at the second network node 202 (e.g., requested data are out of range for the second network node 202, data are no longer available, e.g. have been discarded due to expiration of a retention period, data at the second network node 202 have a different aggregation level or granularity)
  • the fourth network node 600 sends to the second network node 202 a FIFTH MESSAGE to confirm or refuse the subscription comprised in the FOURTH MESSAGE (step 604).
  • the second network node 202 sends to the first network node a SECOND MESSAGE comprising a confirmation or refusal of the subscription comprised in the FIRST MESSAGE (step 606).
  • the second network node 202 includes in the SECOND MESSAGE indications for the first network node 200 to contact the fourth network node directly.
  • the first network node 200 can then send to the fourth network node 600 a (new) FIRST MESSAGE to subscribe to information (step 608).
  • the fourth network node 600 sends to the first network node 200 a SIXTH MESSAGE comprising the requested data (step 610).
  • the second network node 202, upon receiving the FIRST MESSAGE, sends to a fourth network node 600 a FOURTH MESSAGE comprising a second subscription request to receive data from the fourth network node 600, based on the first subscription request of the first network node 200 (step 700).
  • a second subscription can comprise the same elements as a first subscription.
  • the second subscription included in the FOURTH MESSAGE can be the same as the (first) subscription comprised in the FIRST MESSAGE.
  • the subscription comprised in the FOURTH MESSAGE has the purpose of forwarding the data associated to the AI/ML model or algorithm indicated by the first network node.
  • the subscription comprised in the FOURTH MESSAGE has the purpose of monitoring the performance of the AI/ML model or algorithm indicated by the first network node.
  • the fourth network node 600 sends to the second network node 202 a FIFTH MESSAGE to confirm or refuse the second subscription comprised in the FOURTH MESSAGE (step 702).
  • the second network node 202 may send to the first network node 200 a SECOND MESSAGE comprising a confirmation or refusal of the subscription comprised in the FIRST MESSAGE (step 300).
  • the fourth network node 600 sends to the first network node 200 and/or the second network node 202 a SIXTH MESSAGE comprising the requested data (steps 704 and 706). If provided to the second network node 202, the second network node 202 includes the requested data in a SECOND MESSAGE sent from the second network node 202 to the first network node 200 (step 206).
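  • A non-limiting sketch of the second network node's relay decision (per the criteria above, e.g. requested data absent locally or stored at a different granularity) could be as follows; all names are illustrative:

```python
def handle_subscription(request, local_store, forward_to_fourth_node):
    """Serve a first subscription request locally, or relay it to a fourth
    network node (FOURTH MESSAGE) when the data cannot be served locally.

    request: dict with at least 'model_id' and 'granularity'
    local_store: dict model_id -> {'granularity': ..., 'samples': [...]}
    forward_to_fourth_node: callable standing in for the fourth network node
    """
    data = local_store.get(request["model_id"])
    if data is None or data["granularity"] != request["granularity"]:
        # Data absent, expired, or held at a different aggregation level:
        # relay the subscription to the fourth network node
        return forward_to_fourth_node(request)
    return {"source": "second_node", "samples": data["samples"]}
```

Per Figure 7, the fourth node's reply (SIXTH MESSAGE) can then be returned to the first network node either directly or wrapped in a SECOND MESSAGE.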
  • the present disclosure further provides a method executed by a third network node (e.g., the third network node 500 of Figure 5) comprising the steps of:
  • THIRD MESSAGE from the first network node 200 (step 502), where the THIRD MESSAGE comprises one or more of o A trained, optimized, or updated AI/ML model o One or more indications of a performance metric associated to the AI/ML algorithm
  • the third network node 500 can execute the inference of the received AI/ML model.
  • the method further discloses a method executed by a fourth network node (e.g., the fourth network node 600 of Figure 6 and Figure 7) comprising the steps of
  • the first network node 200 is the host of a model training function associated with at least one AI/ML model or algorithm.
  • the second network node 202 is the host of a data collection function or a data source function responsible for handling data samples for at least one AI/ML model or algorithm (as indicated by the first network node 200).
  • the fourth network node 600 is the host of a data storage function or a data source function responsible for handling data samples for at least one AI/ML model or algorithm (as indicated by the first network node).
  • the third network node 500 is the host of a model inference function associated with at least one AI/ML model or algorithm that the first network node 200 trains.
  • Figure 8, Figure 9, Figure 10, and Figure 11 show example embodiments related to how the proposed solution can map to the current Functional Framework discussed in 3GPP.
  • Figure 8 illustrates a first example of mapping of the proposed solution to the Functional Framework as defined in 3GPP in R3-212978.
  • Figure 9 illustrates a second example of mapping of the proposed solution to the Functional Framework as defined in 3GPP in R3-212978, with the addition of a fourth network node, wherein the fourth network node sends requested data to the first network node via the third network node.
  • Figure 10 illustrates a third example of mapping of the proposed solution to the Functional Framework as defined in 3GPP in R3-212978, with the addition of a fourth network node, wherein the fourth network node can send requested data to the first network node and to the second network node.
  • Figure 11 illustrates a possible mapping of the method to the current Functional Framework for RAN Intelligence.
6 Additional Embodiments
  • the first network node 200 and the second network node 202 may go through a procedure wherein the first network node 200 and the second network node 202 exchange information about the capabilities of the first network node 200 and the second network node 202 concerning the subscription, the sending of historical data, and the reception of historical data.
  • the data collection function may act on request/indication from the training function (first network node 200) and/or the model inference function (third network node 500) or autonomously, e.g. in accordance with configuration or implementation.
  • the training function's perspective (where the training function/entity may be the previously described first network node 200):
  • the training function may, as one option, send a subscription request or a first request for training data to the data collection function, possibly indicating a certain type or scope of the training data.
  • this relation (i.e., the data collection function sending training data to the training function) may have been established through configuration. If the training function sends the first request for training data to the data collection function, this may have been triggered by an indication (e.g., a message) from the data collection function, indicating that retraining, or further training, may be needed (e.g., because of a detected significant, e.g. systematic, change in the collected data).
  • the first request may thus be either an initial request to initiate a training process or a request pertaining to a training process that has already been ongoing.
  • the training function may send a second request to the data collection function to request the data collection function to monitor the collected training data (or the data collection function may do that by default anyway - without a request from the training function - e.g. according to configuration or implementation) and notify the training function if there is a significant, e.g. systematic, change in the collected data which is significant enough to probably require retraining, or additional training, of a model or algorithm trained on the previously collected training data of the same type.
  • This second request may contain a measure of the significant, e.g. systematic, change which should trigger the data collection function to notify the training function.
  • the training function may thus receive a notification from the data collection function, indicating that a significant, e.g. systematic, change of the collected training data has occurred which may require retraining, or further training, of a model or algorithm trained on the previously collected training data of the same type, e.g. during an ongoing or previously ongoing training process.
  • this notification may be accompanied by new training data (or the new training data constitutes the indication/notification), which reflect the status of the significant, e.g. systematic, change of the collected data.
  • the notification is not accompanied by new training data, but it is up to the training function to send a second request to the data collection function to request such new training data upon receiving the notification of a significant, e.g. systematic, change of the collected data.
  • the training function may be triggered to send said first request and/or second request to the data collection function upon receiving an indication from the model inference function that the performance of the model or algorithm has degraded, e.g. in terms of consistency of the output, uncertainty of the output, evaluation of related affected performance metric, etc.
  • the data collection function's perspective (where the data collection function/entity may be the previously described second network node):
  • the data collection function may receive a subscription request or a first request for training data from a training function.
  • the data collection function may also receive a second request from the training function, requesting the data collection function to monitor the collected training data (or the data collection function may do that by default anyway - without a request from the training function - e.g. according to configuration or implementation) and notify the training function if there is a significant, e.g. systematic, change in the collected data which is significant enough to probably require retraining, or additional training, of a model or algorithm trained on the previously collected training data of the same type.
  • the second request may include a measure of the significant, e.g. systematic, change which should trigger the data collection function to notify the training function.
  • the data collection function may monitor the data it collects to detect significant, e.g. systematic, changes which imply that retraining, or further training, of a model or algorithm trained on the previously collected training data of the same type may be needed.
  • the data collection function may monitor the data it collects and detect a significant, e.g. systematic, change in the collected data samples which are to be used as training data in the training function (where this training function will use this data for training of one or more AI/ML model(s) or algorithm(s)).
  • the data collection function may perform one or more of the following steps (not necessarily in the indicated order):
  • step 4: may send a set of data samples to the training function, to be used as training data for training of a model or algorithm that previously has been trained on collected data of the same type, wherein the set of data samples reflects the status of the significant, e.g. systematic, change of the collected data, e.g. the set of collected data samples determined in step 1.
  • This set of data samples may be sent together with the indication, e.g. notification, of a detected significant, e.g. systematic, change of the collected data (or the set of data samples constitutes the indication/notification), or the data collection function may send the set of data samples separately, e.g. upon reception of a request for training data from the training function, e.g. said first request.
  • these steps may not necessarily be performed in the indicated (i.e., numbered) order.
  • step 1 may alternatively be performed after step 2 or after step 3.
  • the data collection function may be triggered to send new training data to the training function by an indication from the model inference function that new, or more, data is needed for retraining, or additional training, of a model or algorithm associated with the model inference function, and/or an indication from the model inference function that the performance of an associated model or algorithm has degraded, e.g. in terms of consistency of the output, uncertainty of the output, evaluation of related affected performance metric, etc.
  • the indication from the model inference function may trigger the data collection function to analyze collected data, e.g. to determine if a significant, e.g. systematic, change in the collected data has occurred.
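A minimal sketch of the monitoring and notification behaviour described above, i.e. detecting a significant, e.g. systematic, change in the collected data and attaching the new samples to the notification, might use a simple mean-shift check. The function name and the threshold semantics are illustrative assumptions; a real data collection function could apply any change-detection technique.

```python
from statistics import mean

def detect_systematic_change(baseline, new_samples, threshold):
    """Return (changed, payload): flag a systematic change when the mean of
    the newly collected samples drifts from the baseline mean by more than
    `threshold`. On a detected change, the new samples are returned as the
    data accompanying the notification to the training function."""
    drift = abs(mean(new_samples) - mean(baseline))
    if drift > threshold:
        return True, list(new_samples)
    return False, None
```

Under this sketch, a stable stream produces no notification, while a mean shift beyond the configured threshold triggers one together with the reflecting data samples, matching steps 1 to 4 above.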
  • the model inference function's perspective (where the model inference function/entity may be the previously described third network node):
  • the model inference function may monitor the performance of the output from an associated model or algorithm and, upon detection of degraded performance, the model inference function may notify the training function or the data collection function, wherein the performance degradation may manifest itself, e.g., in terms of decreased consistency of the output, increased uncertainty of the output, degradation of an evaluated affected performance metric, etc.
  • the actions performed by the model inference function in the above embodiments may also be performed by an entity denoted “Actor”, which is fed with the output from the model inference function, or, as yet another option, the actions are performed by the model inference function and the Actor together, e.g. in cooperation or performing different parts of the actions separately.
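The performance monitoring described for the model inference function could, as one illustrative sketch, track the spread of recent model outputs as a proxy for decreased consistency or increased uncertainty. The class name, the sliding-window size, and the use of a standard-deviation threshold are assumptions for exposition, not part of the proposed solution.

```python
from statistics import pstdev

class InferenceMonitor:
    """Sketch of a model inference function monitoring its own outputs.
    Output spread (population standard deviation) over a sliding window is
    used as an illustrative proxy for decreased consistency of the output."""
    def __init__(self, window=5, max_spread=1.0):
        self.window = window
        self.max_spread = max_spread
        self.outputs = []
        self.notifications = []  # would be sent to the training/data collection function

    def observe(self, output):
        self.outputs.append(output)
        recent = self.outputs[-self.window:]
        if len(recent) == self.window:
            spread = pstdev(recent)
            if spread > self.max_spread:
                # Notify of degraded performance (decreased output consistency).
                self.notifications.append({"reason": "degraded_consistency",
                                           "spread": spread})
```

A stable output stream produces no notification, while volatile outputs beyond the configured spread trigger one, which could in turn trigger the first or second request described earlier.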
  • the importance of a new sample x, in the data collection node, for the dataset provided to the training node can be based on a function, denoted a decision criterion hereafter.
  • the actions can be to:
  a. include one or more new sample(s) in the database (described below)
  b. increase the weight of existing sample(s) in the database (described below)
  • the data collection node updates the training node with the new updated/historical dataset.
7.1.1 Include one or more new samples
  • One such decision criterion function can be the weighted Euclidean distance to the other samples x_d in the database D.
  • a decision criterion could comprise whether the minimum of the set of weighted Euclidean distances to the new sample x is higher than a certain threshold; if so, the dataset is updated with said sample.
  • Another type of decision criterion could be based on some of the available distance metrics in the literature. Some non-limiting examples include Manhattan (L1-norm), Euclidean (L2-norm), Minkowski, Cosine, and Chebyshev distance metrics. In the literature there are many techniques that can use the distance to reach a decision on the novelty of a sample, and this document does not exclude any of them. Some non-limiting examples include density-based approaches and proximity approaches (maximum distance to other points, average distance to other points, etc.).
  • the data collection node could receive in a FIRST MESSAGE, an instruction/configuration comprising the decision function with associated threshold to be included in the database, and how many new samples the second node should receive prior to sending the SECOND MESSAGE comprising the new updated/historical data.
  • the threshold can be based on the ML model performance; for example, in case the ML model at the training node has high accuracy, one can be more restrictive in receiving new data and hence use a smaller threshold.
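The threshold-based decision criterion above can be sketched as follows, where a new sample is included only if its minimum weighted Euclidean distance to the database samples x_d exceeds a threshold. The function name and the convention that each database entry is an (x_d, w_d) pair are illustrative assumptions.

```python
from math import dist  # Euclidean (L2-norm) distance, Python 3.8+

def is_novel(sample, database, threshold):
    """Illustrative decision criterion: the new sample is novel (and should be
    included) if the minimum weighted Euclidean distance to all samples x_d in
    the database D exceeds `threshold`. Each entry is (x_d, w_d), where the
    weight w_d scales that sample's distance contribution."""
    if not database:
        return True  # an empty database accepts any sample
    return min(w_d * dist(sample, x_d) for x_d, w_d in database) > threshold
```

A sample close to an existing database point is rejected; a sample far from all points is accepted for inclusion.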
  • we store a weight for each sample in the dataset, i.e., the dataset consists of tuples ([x_1, w_1], [x_2, w_2], ..., [x_n, w_n]) where x_i is the i-th sample and w_i is its corresponding weight.
  • the weight of a sample can be increased if a new sample is the closest in terms of a certain distance criterion. For example, we can increase the weight w_i if a new sample is closest to x_i of the database samples and within a threshold range x_threshold. Otherwise, if it is outside, we can include the sample in the database according to the previous section.
  • weights can affect the model training in order to enable the model to perform better on data more likely to be experienced.
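The weight update rule above might be sketched as follows: if the new sample falls within a threshold range of its nearest database sample, that sample's weight is increased; otherwise the new sample is included as in the previous section. The function name and the unit weight increment are illustrative assumptions.

```python
from math import dist  # Euclidean (L2-norm) distance, Python 3.8+

def update_database(database, sample, threshold):
    """Illustrative update of the weighted dataset [(x_i, w_i), ...]:
    if the new sample is within `threshold` of its nearest database sample
    x_i, increase that sample's weight w_i; otherwise include the new
    sample with an initial weight of 1."""
    if database:
        i, (x_near, w_near) = min(enumerate(database),
                                  key=lambda item: dist(sample, item[1][0]))
        if dist(sample, x_near) <= threshold:
            database[i] = (x_near, w_near + 1)
            return database
    database.append((tuple(sample), 1))
    return database
```

Repeatedly observed regions of the input space thus accumulate weight instead of duplicate samples, keeping the dataset compact while recording how likely each sample is to be experienced.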
  • the weight is included in the optimization function.
  • One typical optimization is to minimize the weighted mean squared error between the model output and the true value.
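As a sketch, the weighted mean squared error between the model output and the true values could be computed as follows; the normalization by the sum of the weights is one common convention, assumed here rather than prescribed by the text.

```python
def weighted_mse(y_true, y_pred, weights):
    """Weighted mean squared error: sum_i w_i * (y_i - yhat_i)^2 / sum_i w_i.
    Samples with larger weights w_i contribute more to the loss, steering
    training toward data more likely to be experienced."""
    num = sum(w * (y - p) ** 2 for y, p, w in zip(y_true, y_pred, weights))
    return num / sum(weights)
```

Raising the weight of a sample therefore increases the penalty for mispredicting it, which is how the stored weights influence model training.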
  • the data collection node could receive in a FIRST MESSAGE an instruction/configuration comprising the number of updated weights the second node should count prior to triggering/sending the SECOND MESSAGE including the new updated/historical data (including the updated weights).
  • Figure 12 shows an example of a communication system 1200 in which embodiments of the present disclosure described above with respect to Sections 1 to 7 may be implemented.
  • the first network node 200, the second network node 202, the third network node 500, and the fourth network node 600 may be any network node in the radio access network and/or core network of the communication system 1200.
  • the particular network nodes in the communication system 1200 that operate as the first network node 200, the second network node 202, the third network node 500, and the fourth network node 600 may, in some embodiments, depend on the particular AI/ML model, the data used by the AI/ML model, and/or the network node at which inference (i.e., use) of the AI/ML model is desired.
  • the communication system 1200 includes a telecommunication network 1202 that includes an access network 1204, such as a Radio Access Network (RAN), and a core network 1206, which includes one or more core network nodes 1208.
  • the access network 1204 includes one or more access network nodes, such as network nodes 1210A and 1210B (one or more of which may be generally referred to as network nodes 1210), or any other similar Third Generation Partnership Project (3GPP) access node or non-3GPP Access Point (AP).
  • 3GPP Third Generation Partnership Project
  • the network nodes 1210 facilitate direct or indirect connection of User Equipment (UE), such as by connecting UEs 1212A, 1212B, 1212C, and 1212D (one or more of which may be generally referred to as UEs 1212) to the core network 1206 over one or more wireless connections.
  • UE User Equipment
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 1200 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 1200 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 1212 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 1210 and other communication devices.
  • the network nodes 1210 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 1212 and/or with other network nodes or equipment in the telecommunication network 1202 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 1202.
  • the core network 1206 connects the network nodes 1210 to one or more hosts, such as host 1216. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • the core network 1206 includes one more core network nodes (e.g., core network node 1208) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 1208.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-Concealing Function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 1216 may be under the ownership or control of a service provider other than an operator or provider of the access network 1204 and/or the telecommunication network 1202, and may be operated by the service provider or on behalf of the service provider.
  • the host 1216 may host a variety of applications to provide one or more services.
  • Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • the communication system 1200 of Figure 12 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system 1200 may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable Second, Third, Fourth, or Fifth Generation (2G, 3G, 4G, or 5G) standards, or any applicable future generation standard (e.g., Sixth Generation (6G)); Wireless Local Area Network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any Low Power Wide Area Network (LPWAN) standards such as LoRa and Sigfox.
  • GSM Global System for Mobile Communications
  • UMTS Universal Mobile Telecommunications System
  • the telecommunication network 1202 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 1202 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 1202. For example, the telecommunication network 1202 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing enhanced Mobile Broadband (eMBB) services to other UEs, and/or massive Machine Type Communication (mMTC)/massive Internet of Things (IoT) services to yet further UEs.
  • URLLC Ultra Reliable Low Latency Communication
  • eMBB enhanced Mobile Broadband
  • mMTC massive Machine Type Communication
  • IoT Internet of Things
  • the UEs 1212 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 1204 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 1204.
  • a UE may be configured for operating in single- or multi-Radio Access Technology (RAT) or multi-standard mode.
  • RAT Radio Access Technology
  • a UE may operate with any one or combination of WiFi, New Radio (NR), and LTE, i.e. be configured for Multi-Radio Dual Connectivity (MR-DC), such as Evolved UMTS Terrestrial RAN (E-UTRAN) NR Dual Connectivity (EN-DC).
  • MR-DC Multi-Radio Dual Connectivity
  • E-UTRAN Evolved UMTS Terrestrial RAN
  • EN-DC E-UTRAN NR Dual Connectivity
  • a hub 1214 communicates with the access network 1204 to facilitate indirect communication between one or more UEs (e.g., UE 1212C and/or 1212D) and network nodes (e.g., network node 1210B).
  • the hub 1214 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 1214 may be a broadband router enabling access to the core network 1206 for the UEs.
  • the hub 1214 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • the hub 1214 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 1214 may be a content source. For example, for a UE that is a Virtual Reality (VR) headset, display, loudspeaker or other media delivery device, the hub 1214 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 1214 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • the hub 1214 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
  • the hub 1214 may have a constant/persistent or intermittent connection to the network node 1210B.
  • the hub 1214 may also allow for a different communication scheme and/or schedule between the hub 1214 and UEs (e.g., UE 1212C and/or 1212D), and between the hub 1214 and the core network 1206.
  • the hub 1214 is connected to the core network 1206 and/or one or more UEs via a wired connection.
  • the hub 1214 may be configured to connect to a Machine-to-Machine (M2M) service provider over the access network 1204 and/or to another UE over a direct connection.
  • M2M Machine-to-Machine
  • UEs may establish a wireless connection with the network nodes 1210 while still connected via the hub 1214 via a wired or wireless connection.
  • the hub 1214 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 1210B.
  • the hub 1214 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and the network node 1210B, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • a UE refers to a device capable, configured, arranged, and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, Voice over Internet Protocol (VoIP) phone, wireless local loop phone, desktop computer, Personal Digital Assistant (PDA), wireless camera, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, Laptop Embedded Equipment (LEE), Laptop Mounted Equipment (LME), smart device, wireless Customer Premise Equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • Other examples include any UE identified by the 3GPP, including a Narrowband Internet of Things (NB-IoT) UE, a Machine Type Communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • NB-IoT Narrowband Internet of Things
  • MTC Machine Type Communication
  • eMTC enhanced MTC
  • a UE may support Device-to-Device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), Vehicle-to-Vehicle (V2V), Vehicle-to- Infrastructure (V2I), or Vehicle-to-Everything (V2X).
  • D2D Device-to-Device
  • DSRC Dedicated Short-Range Communication
  • V2V Vehicle-to-Vehicle
  • V2I Vehicle-to- Infrastructure
  • V2X Vehicle-to-Everything
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale to, or operation by, a human user.
  • the UE 1300 includes processing circuitry 1302 that is operatively coupled via a bus 1304 to an input/output interface 1306, a power source 1308, memory 1310, a communication interface 1312, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in Figure 13. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 1302 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 1310.
  • the processing circuitry 1302 may be implemented as one or more hardware- implemented state machines (e.g., in discrete logic, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 1302 may include multiple Central Processing Units (CPUs).
  • the input/output interface 1306 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into the UE 1300.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • USB Universal Serial Bus
  • the power source 1308 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 1308 may further include power circuitry for delivering power from the power source 1308 itself, and/or an external power source, to the various parts of the UE 1300 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging the power source 1308.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 1308 to make the power suitable for the respective components of the UE 1300 to which power is supplied.
  • the memory 1310 may be or be configured to include memory such as Random Access Memory (RAM), Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 1310 includes one or more application programs 1314, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1316.
  • the memory 1310 may store, for use by the UE 1300, any of a variety of various operating systems or combinations of operating systems.
  • the memory 1310 may be configured to include a number of physical drive units, such as Redundant Array of Independent Disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, High Density Digital Versatile Disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, Holographic Digital Data Storage (HDDS) optical disc drive, external mini Dual In-line Memory Module (DIMM), Synchronous Dynamic RAM (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a tamper resistant module in the form of a Universal Integrated Circuit Card (UICC) including one or more Subscriber Identity Modules (SIMs), such as a Universal SIM (USIM) and/or Internet Protocol Multimedia Services Identity Module (ISIM), other memory, or any combination thereof.
  • RAID Redundant Array of Independent Disks
  • HD-DVD High Density Digital Versatile Disc
  • HDDS Holographic Digital Data Storage
  • DIMM Dual In-line Memory Module
  • the UICC may for example be an embedded UICC (eUICC), an integrated UICC (IUICC), or a removable UICC commonly known as a ‘SIM card’.
  • eUICC embedded UICC
  • IUICC integrated UICC
  • SIM card removable UICC
  • the memory 1310 may allow the UE 1300 to access instructions, application programs, and the like stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system, may be tangibly embodied as or in the memory 1310, which may be or comprise a device-readable storage medium.
  • the processing circuitry 1302 may be configured to communicate with an access network or other network using the communication interface 1312.
  • the communication interface 1312 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1322.
  • the communication interface 1312 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 1318 and/or a receiver 1320 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 1318 and receiver 1320 may be coupled to one or more antennas (e.g., the antenna 1322) and may share circuit components, software, or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 1312 may include cellular communication, WiFi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, NFC, location-based communication such as the use of the Global Positioning System (GPS) to determine a location, another like communication function, or any combination thereof.
  • GPS Global Positioning System
  • Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), GSM, LTE, NR, UMTS, WiMax, Ethernet, Transmission Control Protocol/Internet Protocol (TCP/IP), Synchronous Optical Networking (SONET), Asynchronous Transfer Mode (ATM), Quick User Datagram Protocol Internet Connection (QUIC), Hypertext Transfer Protocol (HTTP), and so forth.
  • CDMA Code Division Multiple Access
  • WCDMA Wideband CDMA
  • GSM Global System for Mobile communications
  • LTE Long Term Evolution
  • NR New Radio
  • UMTS Universal Mobile Telecommunications System
  • WiMax Worldwide Interoperability for Microwave Access
  • TCP/IP Transmission Control Protocol/Internet Protocol
  • SONET Synchronous Optical Networking
  • ATM Asynchronous Transfer Mode
  • QUIC Quick User Datagram Protocol Internet Connection
  • HTTP Hypertext Transfer Protocol
  • a UE may provide an output of data captured by its sensors, through its communication interface 1312, or via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected, an alert is sent), in response to a request (e.g., a user-initiated request), or a continuous stream (e.g., a live video feed of a patient).
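The four reporting modes above can be sketched as a small policy object. This is a minimal illustrative sketch, not part of the disclosure; the class name, thresholds, and parameters are all assumptions.

```python
import random

class SensorReporter:
    """Hypothetical sketch of the reporting modes described above:
    periodic, randomized, event-triggered, and on-request."""

    def __init__(self, period_s=900, jitter_s=60, moisture_threshold=0.8):
        self.period_s = period_s                  # e.g., once every 15 minutes
        self.jitter_s = jitter_s                  # randomization evens out the load
        self.moisture_threshold = moisture_threshold

    def next_report_time(self, now):
        """Periodic reporting, with random jitter to spread several sensors' reports."""
        return now + self.period_s + random.uniform(0, self.jitter_s)

    def should_alert(self, moisture):
        """Event-triggered reporting: an alert is sent when moisture is detected."""
        return moisture >= self.moisture_threshold

    def on_request(self, sensed_value):
        """On-demand reporting in response to a (user-initiated) request."""
        return {"type": "on_request", "value": sensed_value}
```

A continuous stream (the fifth mode, e.g. live video) would simply bypass this policy and push data as it is produced.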
  • a UE comprises an actuator, a motor, or a switch related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • the states of the actuator, the motor, or the switch may change.
  • the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input.
  • a UE, when in the form of an IoT device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application, and healthcare.
  • Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a television, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or VR, a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote-controlled surgical robot.
  • a UE may represent a machine or other device that performs monitoring and/or measurements and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship, an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • a first UE might be or be integrated in a drone and provide the drone's speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g., by controlling an actuator) to increase or decrease the drone's speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator and handle communication of data for both the speed sensor and the actuators.
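The drone example above (a sensor UE reporting speed, a controller UE computing an adjustment, an actuator applying it) can be illustrated with a simple feedback step. The proportional rule, function names, and gain are assumptions for illustration only, not part of the disclosure.

```python
def throttle_adjustment(measured_speed, target_speed, gain=0.1):
    """Proportional control: increase throttle below target speed,
    decrease it above target speed (the controller UE's computation)."""
    return gain * (target_speed - measured_speed)

def control_step(measured_speed, target_speed, current_throttle):
    """One round trip: sensor reading in, actuator setting out.
    The throttle is clamped to [0, 1] before the actuator applies it."""
    delta = throttle_adjustment(measured_speed, target_speed)
    return min(1.0, max(0.0, current_throttle + delta))
```

When the measured speed equals the target, the throttle is unchanged; when the drone is too slow or too fast, the throttle is raised or lowered accordingly.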
  • Figure 14 shows a network node 1400 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged, and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment in a telecommunication network.
  • Examples of network nodes include, but are not limited to, APs (e.g., radio APs), Base Stations (BSs) (e.g., radio BSs, Node Bs, evolved Node Bs (eNBs), and NR Node Bs (gNBs)).
  • APs Access Points
  • BSs Base Stations
  • eNBs evolved Node Bs
  • gNBs NR Node Bs
  • BSs may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto BSs, pico BSs, micro BSs, or macro BSs.
  • a BS may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio BS such as centralized digital units and/or Remote Radio Units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such RRUs may or may not be integrated with an antenna as an antenna integrated radio.
  • RRUs Remote Radio Units
  • Parts of a distributed radio BS may also be referred to as nodes in a Distributed Antenna System (DAS).
  • DAS Distributed Antenna System
  • network nodes include multiple Transmission Point (multi-TRP) 5G access nodes, Multi-Standard Radio (MSR) equipment such as MSR BSs, network controllers such as Radio Network Controllers (RNCs) or BS Controllers (BSCs), Base Transceiver Stations (BTSs), transmission points, transmission nodes, Multi-Cell/Multicast Coordination Entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • MSR Multi-Standard Radio
  • RNCs Radio Network Controllers
  • BSCs BS Controllers
  • MCEs Multi-Cell/Multicast Coordination Entities
  • the network node 1400 includes processing circuitry 1402, memory 1404, a communication interface 1406, and a power source 1408.
  • the network node 1400 may be composed of multiple physically separate components (e.g., a Node B component and an RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • the network node 1400 comprises multiple separate components (e.g., BTS and BSC components)
  • one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple Node Bs.
  • each unique Node B and RNC pair may in some instances be considered a single separate network node.
  • the network node 1400 may be configured to support multiple RATs. In such embodiments, some components may be duplicated (e.g., separate memory 1404 for different RATs) and some components may be reused (e.g., an antenna 1410 may be shared by different RATs).
  • the network node 1400 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1400, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, Long Range Wide Area Network (LoRaWAN), Radio Frequency Identification (RFID), or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within the network node 1400.
  • the processing circuitry 1402 may comprise a combination of one or more of a microprocessor, controller, microcontroller, CPU, DSP, ASIC, FPGA, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable, either alone or in conjunction with other network node 1400 components, such as the memory 1404, to provide network node 1400 functionality.
  • the processing circuitry 1402 includes a System on a Chip (SOC). In some embodiments, the processing circuitry 1402 includes one or more of Radio Frequency (RF) transceiver circuitry 1412 and baseband processing circuitry 1414. In some embodiments, the RF transceiver circuitry 1412 and the baseband processing circuitry 1414 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of the RF transceiver circuitry 1412 and the baseband processing circuitry 1414 may be on the same chip or set of chips, boards, or units.
  • SOC System on a Chip
  • the memory 1404 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid state memory, remotely mounted memory, magnetic media, optical media, RAM, ROM, mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD), or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device- readable, and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1402.
  • the memory 1404 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1402 and utilized by the network node 1400.
  • the memory 1404 may be used to store any calculations made by the processing circuitry 1402 and/or any data received via the communication interface 1406.
  • the processing circuitry 1402 and the memory 1404 are integrated.
  • the communication interface 1406 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 1406 comprises port(s)/terminal(s) 1416 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 1406 also includes radio front-end circuitry 1418 that may be coupled to, or in certain embodiments a part of, the antenna 1410.
  • the radio front-end circuitry 1418 comprises filters 1420 and amplifiers 1422.
  • the radio front-end circuitry 1418 may be connected to the antenna 1410 and the processing circuitry 1402.
  • the radio front-end circuitry 1418 may be configured to condition signals communicated between the antenna 1410 and the processing circuitry 1402.
  • the radio front-end circuitry 1418 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 1418 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of the filters 1420 and/or the amplifiers 1422.
  • the radio signal may then be transmitted via the antenna 1410.
  • the antenna 1410 may collect radio signals which are then converted into digital data by the radio front-end circuitry 1418.
  • the digital data may be passed to the processing circuitry 1402.
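The transmit and receive paths just described can be sketched abstractly: the front-end conditions digital data into a radio signal (filters 1420, amplifiers 1422) for the antenna 1410, and converts collected radio signals back into digital data for the processing circuitry 1402. The numeric "signal", gain value, and function names below are purely illustrative assumptions.

```python
def transmit_path(digital_data, gain=2.0):
    """Digital data -> filtered and amplified radio signal -> antenna.
    Here the filter stage is a pass-through and amplification is a
    simple scaling, standing in for the filters 1420 / amplifiers 1422."""
    filtered = list(digital_data)             # conditioning by the filters
    return [gain * x for x in filtered]       # amplification before the antenna

def receive_path(radio_signal, gain=2.0):
    """Antenna -> front-end conversion -> digital data for the
    processing circuitry (the reverse of the transmit path)."""
    return [x / gain for x in radio_signal]
```

With matching gains, a loopback through both paths returns the original digital data, which is the invariant the front-end must preserve.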
  • the communication interface 1406 may comprise different components and/or different combinations of components.
  • the network node 1400 does not include separate radio front-end circuitry 1418; instead, the processing circuitry 1402 includes radio front-end circuitry and is connected to the antenna 1410. Similarly, in some embodiments, all or some of the RF transceiver circuitry 1412 is part of the communication interface 1406. In still other embodiments, the communication interface 1406 includes the one or more ports or terminals 1416, the radio front-end circuitry 1418, and the RF transceiver circuitry 1412 as part of a radio unit (not shown), and the communication interface 1406 communicates with the baseband processing circuitry 1414, which is part of a digital unit (not shown).
  • the antenna 1410 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 1410 may be coupled to the radio front-end circuitry 1418 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 1410 is separate from the network node 1400 and connectable to the network node 1400 through an interface or port.
  • the antenna 1410, the communication interface 1406, and/or the processing circuitry 1402 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node 1400. Any information, data, and/or signals may be received from a UE, another network node, and/or any other network equipment. Similarly, the antenna 1410, the communication interface 1406, and/or the processing circuitry 1402 may be configured to perform any transmitting operations described herein as being performed by the network node 1400. Any information, data, and/or signals may be transmitted to a UE, another network node, and/or any other network equipment.
  • the power source 1408 provides power to the various components of the network node 1400 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 1408 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1400 with power for performing the functionality described herein.
  • the network node 1400 may be connectable to an external power source (e.g., the power grid or an electricity outlet) via input circuitry or an interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1408.
  • the power source 1408 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 1400 may include additional components beyond those shown in Figure 14 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 1400 may include user interface equipment to allow input of information into the network node 1400 and to allow output of information from the network node 1400. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1400.
  • Figure 15 is a block diagram of a host 1500, which may be an embodiment of the host 1216 of Figure 12, in accordance with various aspects described herein.
  • the host 1500 may be or comprise various combinations of hardware and/or software including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, container, or processing resources in a server farm.
  • the host 1500 may provide one or more services to one or more UEs.
  • the host 1500 includes processing circuitry 1502 that is operatively coupled via a bus 1504 to an input/output interface 1506, a network interface 1508, a power source 1510, and memory 1512.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 13 and 14, such that the descriptions thereof are generally applicable to the corresponding components of the host 1500.
  • the memory 1512 may include one or more computer programs including one or more host application programs 1514 and data 1516, which may include user data, e.g. data generated by a UE for the host 1500 or data generated by the host 1500 for a UE.
  • Embodiments of the host 1500 may utilize only a subset or all of the components shown.
  • the host application programs 1514 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), Moving Picture Experts Group (MPEG), VP9) and audio codecs (e.g., Free Lossless Audio Codec (FLAC), Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, and heads-up display systems).
  • VVC Versatile Video Coding
  • HEVC High Efficiency Video Coding
  • AVC Advanced Video Coding
  • MPEG Moving Picture Experts Group
  • FLAC Free Lossless Audio Codec
  • AAC Advanced Audio Coding
  • the host application programs 1514 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, the host 1500 may select and/or indicate a different host for Over-The-Top (OTT) services for a UE.
  • the host application programs 1514 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (DASH or MPEG-DASH), etc.
  • Figure 16 is a block diagram illustrating a virtualization environment 1600 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices, and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more Virtual Machines (VMs) implemented in one or more virtual environments 1600 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • VMs Virtual Machines
  • the virtual node does not require radio connectivity (e.g., a core network node or host)
  • the node may be entirely virtualized.
  • Applications 1602 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1600 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 1604 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1606 (also referred to as hypervisors or VM Monitors (VMMs)), provide VMs 1608A and 1608B (one or more of which may be generally referred to as VMs 1608), and/or perform any of the functions, features, and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 1606 may present a virtual operating platform that appears like networking hardware to the VMs 1608.
  • the VMs 1608 comprise virtual processing, virtual memory, virtual networking, or interface and virtual storage, and may be run by a corresponding virtualization layer 1606
  • Different embodiments of the instance of a virtual appliance 1602 may be implemented on one or more of the VMs 1608, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as Network Function Virtualization (NFV).
  • NFV Network Function Virtualization
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers and customer premise equipment.
  • a VM 1608 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 1608, and that part of the hardware 1604 that executes that VM (be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs 1608), forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 1608 on top of the hardware 1604 and corresponds to the application 1602.
  • the hardware 1604 may be implemented in a standalone network node with generic or specific components.
  • the hardware 1604 may implement some functions via virtualization.
  • the hardware 1604 may be part of a larger cluster of hardware (e.g., such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1610, which, among others, oversees lifecycle management of the applications 1602.
  • the hardware 1604 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a RAN or a BS.
  • some signaling can be provided with the use of a control system 1612 which may alternatively be used for communication between hardware nodes and radio units.
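The NFV arrangement above (applications/VNFs running in VMs on shared hardware, under a management and orchestration function that oversees application lifecycle) can be sketched as a minimal data model. All class and method names below are illustrative assumptions, not terminology from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    """A virtual machine with its share of the underlying hardware 1604."""
    vm_id: str
    vcpus: int
    memory_mb: int

@dataclass
class VirtualNetworkFunction:
    """An application 1602 realized as one or more VMs 1608."""
    name: str
    vms: list = field(default_factory=list)

class Orchestrator:
    """Stands in for management and orchestration 1610: instantiates and
    terminates applications, i.e. their lifecycle management."""

    def __init__(self):
        self.vnfs = {}

    def instantiate(self, name, vm_specs):
        # vm_specs: list of (vcpus, memory_mb) tuples, one per VM to create
        vnf = VirtualNetworkFunction(
            name,
            [VM(f"{name}-{i}", cpus, mem) for i, (cpus, mem) in enumerate(vm_specs)],
        )
        self.vnfs[name] = vnf
        return vnf

    def terminate(self, name):
        return self.vnfs.pop(name, None)
```

Each VNF together with the hardware share of its VMs would correspond to a separate virtual network element, as described above.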
  • Figure 17 shows a communication diagram of a host 1702 communicating via a network node 1704 with a UE 1706 over a partially wireless connection in accordance with some embodiments.
  • embodiments of the host 1702 include hardware, such as a communication interface, processing circuitry, and memory.
  • the host 1702 also includes software, which is stored in or is accessible by the host 1702 and executable by the processing circuitry.
  • the software includes a host application that may be operable to provide a service to a remote user, such as the UE 1706 connecting via an OTT connection 1750 extending between the UE 1706 and the host 1702.
  • a host application may provide user data which is transmitted using the OTT connection 1750.
  • the network node 1704 includes hardware enabling it to communicate with the host 1702 and the UE 1706 via a connection 1760.
  • the connection 1760 may be direct or pass through a core network (like the core network 1206 of Figure 12) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks.
  • an intermediate network may be a backbone network or the Internet.
  • the UE 1706 includes hardware and software, which is stored in or accessible by the UE 1706 and executable by the UE's processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via the UE 1706 with the support of the host 1702.
  • an executing host application may communicate with the executing client application via the OTT connection 1750 terminating at the UE 1706 and the host 1702.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • the OTT connection 1750 may transfer both the request data and the user data.
  • the UE's client application may interact with the user to generate the user data that it provides to the host application
  • the OTT connection 1750 may extend via the connection 1760 between the host 1702 and the network node 1704 and via a wireless connection 1770 between the network node 1704 and the UE 1706 to provide the connection between the host 1702 and the UE 1706.
  • the connection 1760 and the wireless connection 1770, over which the OTT connection 1750 may be provided, have been drawn abstractly to illustrate the communication between the host 1702 and the UE 1706 via the network node 1704, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • the host 1702 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with the UE 1706.
  • the user data is associated with a UE 1706 that shares data with the host 1702 without explicit human interaction.
  • the host 1702 initiates a transmission carrying the user data towards the UE 1706.
  • the host 1702 may initiate the transmission responsive to a request transmitted by the UE 1706.
  • the request may be caused by human interaction with the UE 1706 or by operation of the client application executing on the UE 1706.
  • the transmission may pass via the network node 1704 in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 1712, the network node 1704 transmits to the UE 1706 the user data that was carried in the transmission that the host 1702 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 1714, the UE 1706 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 1706 associated with the host application executed by the host 1702.
  • the UE 1706 executes a client application which provides user data to the host 1702.
  • the user data may be provided in reaction or response to the data received from the host 1702.
  • the UE 1706 may provide user data, which may be performed by executing the client application.
  • the client application may further consider user input received from the user via an input/output interface of the UE 1706. Regardless of the specific manner in which the user data was provided, the UE 1706 initiates, in step 1718, transmission of the user data towards the host 1702 via the network node 1704.
  • the network node 1704 receives user data from the UE 1706 and initiates transmission of the received user data towards the host 1702.
  • the host 1702 receives the user data carried in the transmission initiated by the UE 1706.
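The exchange above (host provides user data, the network node relays the transmission toward the UE, and the UE's client application responds back via the network node) can be sketched as follows. A plain function-call relay stands in for the wired connection 1760 and wireless connection 1770; all names are illustrative assumptions.

```python
def network_node_relay(payload):
    """The network node forwards the transmission unchanged
    (cf. the downlink and uplink relaying steps)."""
    return payload

def host_sends_user_data(host_app_data, client_app):
    """One round trip of the OTT connection: host -> network node -> UE,
    then UE -> network node -> host."""
    downlink = network_node_relay(host_app_data)   # host initiates transmission
    uplink = client_app(downlink)                  # client application responds
    return network_node_relay(uplink)              # response relayed to the host

# Example client application: answers the host's request data with user data.
response = host_sends_user_data(
    {"request": "status"},
    lambda req: {"response": "ok", "for": req["request"]},
)
```

The relay function carrying both request data and user data mirrors how the OTT connection 1750 transfers both directions of traffic over the same path.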
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 1706 using the OTT connection 1750, in which the wireless connection 1770 forms the last segment.
  • factory status information may be collected and analyzed by the host 1702.
  • the host 1702 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • the host 1702 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • the host 1702 may store surveillance video uploaded by a UE.
  • the host 1702 may store or control access to media content such as video, audio, VR, or AR which it can broadcast, multicast, or unicast to UEs.
  • the host 1702 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing, and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency, and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection 1750 may be implemented in software and hardware of the host 1702 and/or the UE 1706.
  • sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 1750 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or by supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 1750 may include changes to the message format, retransmission settings, preferred routing, etc.; the reconfiguring need not directly alter the operation of the network node 1704. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency, and the like by the host 1702.
  • the measurements may be implemented in that software causes messages to be transmitted, in particular empty or 'dummy' messages, using the OTT connection 1750 while monitoring propagation times, errors, etc.
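The dummy-message measurement just described can be sketched as a simple probing loop: send empty messages over the connection and monitor the round-trip propagation time. The send/receive callables are placeholders for the real OTT connection; the function name and probe count are assumptions.

```python
import time

def measure_round_trip(send, receive, n_probes=5):
    """Send empty 'dummy' messages and return the mean round-trip time.
    `send` transmits a message; `receive` blocks until the echo arrives."""
    samples = []
    for _ in range(n_probes):
        t0 = time.monotonic()
        send(b"")                      # empty dummy message
        receive()                      # wait for the corresponding reply
        samples.append(time.monotonic() - t0)
    return sum(samples) / len(samples)
```

In practice the same loop could also count errors or timeouts per probe to estimate loss alongside latency.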
  • While computing devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions, and methods disclosed herein. Determining, calculating, obtaining, or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
  • some or all of the functionality may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hardwired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole and/or by end users and a wireless network generally.
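As an illustrative sketch only (not part of the application), the 'dummy' message measurement described above can be approximated in Python; the UDP echo endpoint, function name, and parameters are assumptions chosen for the example:

```python
import socket
import time

def measure_rtt(host: str, port: int, payload_size: int = 64, samples: int = 5) -> float:
    """Send empty/'dummy' messages and return the average round-trip time in
    seconds, treating timeouts as error events. The UDP echo endpoint here is
    a stand-in for the monitored OTT connection."""
    payload = b"\x00" * payload_size
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(1.0)
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendto(payload, (host, port))
            try:
                sock.recvfrom(payload_size)
            except socket.timeout:
                continue  # a lost dummy message counts as an error, not an RTT sample
            rtts.append(time.perf_counter() - start)
    return sum(rtts) / len(rtts) if rtts else float("inf")
```

Throughput can be estimated the same way by dividing bytes delivered by elapsed time; both quantities can then be reported toward the host as in the measurement procedure above.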
  • Embodiment 1 A method performed by a first network node (200), the method comprising:
  • Embodiment 2 The method of embodiment 1 wherein the one or more actions comprise: (A) training the AI/ML model or algorithm, (B) optimizing the AI/ML model or algorithm, (C) updating the AI/ML model or algorithm, (D) testing the AI/ML model or algorithm, (E) validating the AI/ML model or algorithm, or (F) any combination of two or more of (A)-(E).
  • Embodiment 3 The method of embodiment 2 wherein the one or more actions further comprise sending (502) a message to a third network node (500) comprising an updated version of the AI/ML model or algorithm or a new AI/ML model or algorithm that replaces the AI/ML model or algorithm.
  • Embodiment 4 The method of any of embodiments 1 to 3 further comprising, prior to receiving (206; 608; 610; 706) the at least one message, transmitting (204) a first message to a second network node (202).
  • Embodiment 5 The method of embodiment 4 wherein the first message comprises:
  • one or more configurations or indications associated with the one or more requests or instructions, specifying:
    o data to monitor for the at least one AI/ML model or algorithm,
    o information to be reported for the at least one AI/ML model or algorithm,
    o event(s) and/or conditions under which to provide the one or more requested historical data associated with the AI/ML model or algorithm, or
    o a combination of any two or more thereof; or
  • Embodiment 6 The method of embodiment 4 or 5 wherein receiving (206) the at least one message comprises receiving (206) at least one second message from the second network node (202), the at least one second message comprising:
  • Embodiment 7 The method of embodiment 4 or 5 wherein receiving (610; 706) the at least one message comprises receiving (610; 706) at least one message from a fourth network node (606) comprising:
  • Embodiment 8 The method of embodiment 4 or 5 wherein receiving (206; 610; 706) the at least one message comprises receiving (206) at least one message from the second network node (202) and receiving (610; 706) at least one message from a fourth network node (606).
  • Embodiment 9 A method performed by a second network node (202), the method comprising:
  • Embodiment 10 The method of embodiment 9 further comprising, prior to transmitting (206) the at least one message to the first network node (200), receiving (204) a first message from the first network node (200).
  • Embodiment 11 The method of embodiment 10 wherein the first message comprises:
  • one or more configurations or indications associated with the one or more requests or instructions, specifying:
    o data to monitor for the at least one AI/ML model or algorithm,
    o information to be reported for the at least one AI/ML model or algorithm,
    o event(s) and/or conditions under which to provide the one or more requested historical data associated with the AI/ML model or algorithm, or
    o a combination of any two or more thereof; or
  • Embodiment 12 The method of any of embodiments 9 to 11 further comprising obtaining (700, 704), from another network node (600), at least some information comprised in the message transmitted to the first network node (200).
  • Embodiment 13 The method of any of embodiments 9 to 12 further comprising instructing (700) another network node (600) to provide, to the first network node (200), some of the information requested by the first message.
  • Embodiment 14 A method performed by a second network node (202), the method comprising:
  • provide one or more requested historical data associated with at least one AI/ML model or algorithm
  • Embodiment 15 The method of embodiment 14 further comprising sending (606), to the first network node (600), a message that instructs the first network node (600) to obtain the information requested by the first message from the fourth network node (600).
  • Embodiment 16 A network node adapted to perform the method of any of the previous embodiments.
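The exchange in embodiments 1-3 can be sketched as follows; the class names, message fields, and threshold logic are hypothetical illustrations of the enumerated information elements, not definitions taken from the application:

```python
from dataclasses import dataclass

@dataclass
class ModelReport:
    """Hypothetical message carrying the items listed in the embodiments:
    historical data, a performance metric, and an instruction/recommendation."""
    model_id: str
    historical_data: list
    performance_metric: float
    instruction: str  # e.g. "retrain", "update", "replace", "validate", "monitor"

class FirstNetworkNode:
    """Sketch of the node hosting the training function (node 200)."""
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold  # assumed acceptance level for the metric
        self.model_version = 1

    def receive(self, report: ModelReport) -> str:
        # Perform one or more actions based on the received message (embodiment 2):
        # retrain/update when instructed, or when performance degrades.
        if report.instruction == "retrain" or report.performance_metric < self.threshold:
            self.model_version += 1  # stand-in for training and updating the model
        # Return the (possibly updated) model identifier, which could then be
        # sent onward to a third network node as in embodiment 3.
        return f"{report.model_id}-v{self.model_version}"
```

In this sketch a degraded metric or an explicit "retrain" instruction both trigger an update, mirroring how the embodiments allow either reported performance or received instructions to drive the one or more actions.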

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Systems and methods relating to the optimization of machine learning (ML) or artificial intelligence (AI) models or algorithms are disclosed. In one embodiment, a method performed by a first network node comprises receiving, from at least one other network node, at least one message comprising: (a) a set of one or more historical data associated with an AI or ML (AI/ML) model or algorithm; (b) indications of at least one performance metric associated with the AI/ML model or algorithm for which optimization is desirable; and (c) instructions or recommendations to (i) retrain the AI/ML model or algorithm, update the AI/ML model or algorithm, optimize the AI/ML model or algorithm, or replace the AI/ML model or algorithm with a new AI/ML model or algorithm, (ii) test/validate at least one AI/ML model, or (iii) both (i) and (ii). The method further comprises performing one or more actions related to the AI/ML model or algorithm based on information comprised in the message(s). In this way, the AI/ML model or algorithm can be optimized by, for example, optimizing the associated training function and/or based on drifts in the data distribution of a dataset when the data collected for the AI/ML model or algorithm is not co-located with the node hosting the training function.
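The drift-based trigger mentioned at the end of the abstract can be illustrated with a small sketch; the two-sample Kolmogorov-Smirnov statistic, function names, and threshold used here are assumptions chosen for the example, not elements of the application:

```python
import bisect

def ks_statistic(reference, sample):
    """Empirical two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two datasets. A large value suggests
    the data distribution has drifted from the training-time reference."""
    ref, smp = sorted(reference), sorted(sample)

    def ecdf(data, x):
        return bisect.bisect_right(data, x) / len(data)  # fraction of points <= x

    grid = sorted(set(ref + smp))
    return max(abs(ecdf(ref, x) - ecdf(smp, x)) for x in grid)

def needs_retraining(reference, sample, threshold=0.2):
    """Hypothetical trigger: recommend retraining when drift exceeds a threshold."""
    return ks_statistic(reference, sample) > threshold
```

A node that collects data remotely from the training function could compute such a statistic locally and include only the resulting indication, rather than the raw data, in the message it sends to the node hosting the training function.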
EP22765043.9A 2021-08-13 2022-08-10 Systems and methods for optimizing the training of AI/ML models and algorithms Withdrawn EP4384947A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163232705P 2021-08-13 2021-08-13
PCT/EP2022/072496 WO2023017102A1 (fr) 2022-08-10 Systems and methods for optimizing the training of AI/ML models and algorithms

Publications (1)

Publication Number Publication Date
EP4384947A1 true EP4384947A1 (fr) 2024-06-19

Family

ID=83193585

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22765043.9A Withdrawn EP4384947A1 (fr) 2021-08-13 2022-08-10 Systèmes et procédés pour optimiser l'entraînement de modèles et d'algorithmes d'ia/aa

Country Status (2)

Country Link
EP (1) EP4384947A1 (fr)
WO (1) WO2023017102A1 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10063575B2 (en) * 2015-10-08 2018-08-28 Cisco Technology, Inc. Anomaly detection in a network coupling state information with machine learning outputs

Also Published As

Publication number Publication date
WO2023017102A1 (fr) 2023-02-16

Similar Documents

Publication Publication Date Title
WO2023191682A1 (fr) Managing artificial intelligence/machine learning models between wireless radio nodes
WO2023022642A1 (fr) Signaling of predicted UE overheating
WO2023017102A1 (fr) Systems and methods for optimizing the training of AI/ML models and algorithms
WO2024125362A1 (fr) Method and apparatus for controlling a communication link between communication devices
EP4381707A1 (fr) Controlling and ensuring uncertainty reporting from ML models
WO2024094176A1 (fr) L1 data collection
WO2023239287A1 (fr) Machine learning for radio access network optimization
WO2023140767A1 (fr) Beam sweeping with artificial intelligence (AI) based compressed sensing
WO2023147870A1 (fr) Response variable prediction in a communication network
WO2023232743A1 (fr) Systems and methods for user-equipment-assisted feature correlation estimation feedback
WO2024128945A1 (fr) Systems and methods for artificial-intelligence-assisted scheduling of operational maintenance in a telecommunications network
WO2023209566A1 (fr) Managing random access partitions and priorities
WO2023187678A1 (fr) Network-assisted management of user equipment machine learning models
WO2023192409A1 (fr) User equipment reporting of machine learning model performance
WO2023132775A1 (fr) Systems and methods for updating user equipment history information for conditional handover and conditional change of a primary secondary cell group cell
EP4352658A1 (fr) Selection of global machine learning models for collaborative machine learning in a communication network
EP4396731A1 (fr) Decentralized autoencoder management for detecting or predicting a minority class from an imbalanced dataset
WO2023033687A1 (fr) Decentralized autoencoder management for detecting or predicting a minority class from an imbalanced dataset
WO2023211347A1 (fr) Inactive aperiodic trigger states for energy saving
WO2024012799A1 (fr) Channel state information prediction using machine learning
WO2024117960A1 (fr) Predefined applied frequency band list filter
WO2023131822A1 (fr) Reward for reinforcement learning (RL) based tilt optimization
EP4381812A1 (fr) Signaling approaches for disaster PLMN
WO2023084277A1 (fr) Machine-learning-assisted user prioritization method for asynchronous resource allocation problems
WO2024096805A1 (fr) Communication based on network configuration identifier sharing

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

17P Request for examination filed

Effective date: 20240220

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

18W Application withdrawn

Effective date: 20240517