WO2024028183A1 - Machine learning capability configuration in radio access network - Google Patents


Info

Publication number
WO2024028183A1
Authority
WO
WIPO (PCT)
Application number
PCT/EP2023/070714
Other languages
French (fr)
Inventor
Ioanna Pappa
Luca LUNARDI
Angelo Centonza
Pablo SOLDATI
Germán BASSI
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2024028183A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition

Definitions

  • [001] Disclosed are embodiments related to providing machine learning (ML) capability information and/or ML requirement information to a network node.
  • FIG. 1 illustrates the current 5G RAN (a.k.a., the Next Generation RAN (NG-RAN)) architecture.
  • the NG-RAN architecture is described in 3GPP Technical Specification (TS) 38.401 V17.0.0 (“TS 38.401”).
  • the NG-RAN consists of a set of base stations (denoted “gNBs”) connected to a 5G core network (5GC) through an NG interface.
  • the gNBs can be interconnected through an Xn interface.
  • a gNB may consist of a gNB central unit (gNB-CU) and one or more gNB distributed units (gNB-DU(s)).
  • gNB-CU and a gNB-DU are connected via an F1 interface.
  • One gNB-DU is connected to only one gNB-CU.
  • the NG, Xn and F1 interfaces are logical interfaces.
  • a gNB-CU may comprise a gNB-CU control plane (CP) function (gNB-CU-CP) and a gNB-CU user plane (UP) function (gNB-CU-UP).
  • The 3GPP Technical Report (TR) 37.817 V17.0.0 (“TR 37.817”) has been produced as an outcome of the Study Item (SI) "Enhancement for Data Collection for NR and EN-DC" defined in 3GPP Technical Document (Tdoc) No. RP-201620.
  • the study item aimed to study the functional framework for RAN intelligence enabled by further enhancement of data collection through use cases, examples, etc., and identify the potential standardization impacts on current NG-RAN nodes and interfaces.
  • TR 37.817 identifies the following high-level principles that should be applied for AI-enabled RAN intelligence:
  • the study focuses on AI/ML functionality and corresponding types of inputs/outputs.
  • the input/output and the location of the model training and Model Inference function should be studied case by case.
  • the study focuses on the analysis of data needed at the model training function from Data Collection, while the aspects of how the model training function uses inputs to train a model are out of RAN3 scope.
  • the study focuses on the analysis of data needed at the model inference function from Data Collection, while the aspects of how the model inference function uses inputs to derive outputs are out of RAN3 scope.
  • Where AI/ML functionality resides within the current RAN architecture depends on deployment and on the specific use cases.
  • the Model Training and Model Inference functions should be able to request, if needed, specific information to be used to train or execute the AI/ML algorithm and to avoid reception of unnecessary information.
  • the nature of such information depends on the use case and on the AI/ML algorithm.
  • the Model Inference function should signal the outputs of the model only to nodes that have explicitly requested them (e.g. via subscription), or nodes that take actions based on the output from Model Inference.
  • An AI/ML model used in a Model Inference function has to be initially trained, validated and tested by the model training function before deployment.
  • NG-RAN SA is prioritized; EN-DC and MR-DC are down-prioritized, but not precluded from Rel.18.
  • Radio Access Network Intelligence
  • FIG. 2 illustrates the Functional Framework for RAN Intelligence.
  • the framework includes the following functions: 1) a data collection function; 2) a model training function; 3) a model inference function; and 4) an actor function, or Actor.
  • the data collection function provides training data (e.g., a set of training data samples - i.e., one or more training data samples) to the model training function.
  • Training data is data that is used by the model training function to train a model (e.g., a neural network or other model).
  • the function approximated by the model is the Q-function, which assigns a value to a state-action pair.
  • the Q-function (hence the ML model) determines the behavior (or policy) of the RL agent.
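As a non-normative illustration of the point above, the following Python sketch shows how a Q-function assigns a value to each state-action pair and thereby determines the greedy policy of an RL agent. All states, actions, and Q-values here are invented for illustration; they are not part of any embodiment.

```python
# Illustrative tabular Q-function: maps (state, action) pairs to values.
# The greedy policy simply picks, in each state, the action with the
# highest Q-value -- this is how the Q-function determines the behavior
# (policy) of the agent. States/actions below are invented examples.
Q = {
    ("low_load", "sleep"): 0.9,
    ("low_load", "serve"): 0.2,
    ("high_load", "sleep"): -1.0,
    ("high_load", "serve"): 0.8,
}

def greedy_policy(state, actions=("sleep", "serve")):
    """Behavior of the RL agent: argmax over Q-values for the given state."""
    return max(actions, key=lambda a: Q[(state, a)])

print(greedy_policy("low_load"))   # -> sleep
print(greedy_policy("high_load"))  # -> serve
```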
  • the data collection function also provides inference data to the model inference function, which uses the inference data to produce an output (a.k.a., an inference).
  • ML model specific data preparation may also be carried out in the data collection function.
  • Examples of inference and training data may include measurements from user equipments (UEs) or different network entities, feedback from the Actor, and output from the model inference function.
  • the model training function performs the ML model training, validation, and testing which may generate model performance metrics as part of the model testing procedure.
  • the model training function is also responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on training data delivered by a data collection function, if required.
  • the model training function deploys a trained, validated and tested model (e.g., a model that parameterizes or approximates at least one of a policy function, a value function and a Q-function in a deep reinforcement learning environment) to the model inference function or delivers an updated model to the model inference function.
  • the model inference function provides model inference output (e.g. predictions or decisions).
  • the model inference function may provide model performance feedback to the model training function when applicable.
  • the model inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function, if required.
  • the model inference function may provide model performance feedback information to the model training function, which uses this feedback information for monitoring the performance of the model.
  • the actor is a function that receives the output from the model inference function and triggers or performs corresponding actions.
  • the Actor may trigger actions directed to other entities or to itself.
  • the actions may generate feedback information, provided to the data collection function, that may be needed to derive training or inference data.
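The data flow among the four functions described above can be sketched as a simple loop. This is a toy, non-normative Python illustration; the function bodies, the linear "model," and the data values are placeholders, not anything specified in TR 37.817.

```python
# Toy sketch of the functional framework loop: data collection feeds
# training and inference; inference output drives the actor; the actor's
# actions generate feedback that flows back into data collection.

def data_collection(feedback):
    training_data = [(x, 2 * x) for x in range(5)]   # toy training samples
    inference_data = [10 + f for f in feedback]      # toy inference inputs
    return training_data, inference_data

def model_training(training_data):
    # "Train" a trivial linear model y = w * x by averaging the y/x ratios.
    ratios = [y / x for x, y in training_data if x]
    return sum(ratios) / len(ratios)

def model_inference(model, inference_data):
    return [model * x for x in inference_data]       # predictions/decisions

def actor(outputs):
    # Trigger actions based on the outputs; return feedback for collection.
    return [o / 10 for o in outputs]

feedback = [0.0]
training_data, inference_data = data_collection(feedback)
model = model_training(training_data)
outputs = model_inference(model, inference_data)
feedback = actor(outputs)
print(model, outputs, feedback)
```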
  • TR 37.817 states:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB [5G base station].
  • AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
  • gNB is also allowed to continue model training based on a model trained in the OAM.
  • In the CU-DU split architecture, the following solutions are possible:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
  • AI/ML Model Training and Model Inference are both located in the gNB-CU.
  • TR 37.817 states:
  • the AI/ML Model Training function is deployed in the OAM, while the model inference function resides within the RAN node.
  • Both the AI/ML Model Training function and the AI/ML Model Inference function reside within the RAN node.
  • AI/ML Model Training is located in the CU-CP or OAM and the AI/ML Model Inference function is located in the CU-CP. Note: gNB is also allowed to continue model training based on a model trained in the OAM.
  • TR 37.817 states:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.
  • AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
  • AI/ML Model Training and Model Inference are both located in the gNB-CU.
  • gNB is also allowed to continue model training based on a model trained in the OAM.
  • Tdoc R3-215244 proposes to introduce a model management function in the Functional Framework for RAN Intelligence, as shown in FIG. 3.
  • Model deployment/update should be decided by model management instead of model training.
  • the model management may also host a model repository.
  • the model deployment/update should be performed by model management.
  • Model performance monitoring is a key function to assist and control model inference.
  • the model performance feedback from model inference should be first sent to model management. If the performance is not ideal, the model management may decide to fall back to a traditional algorithm or change/update the model.
  • the model training should be also controlled by model management.
  • the model management function may be taken by either the OAM or the CU or other network entities depending on the use cases. Clearly defining a model management function is useful for future signalling design and analysis.
  • Proposal 1: Introduce a model management function into the AI/ML framework [as shown in FIG. 3].
  • The model management function supports the following roles: i) requesting model training and receiving the model training result; ii) model deployment/updates for inference; iii) model performance monitoring, including receiving performance feedback from model inference and taking necessary action, e.g., keeping the model, falling back to a traditional algorithm, or changing/updating the model; iv) model storage.
  • the main objective of model training is to produce a model (e.g., neural network that parameterizes or approximates at least one of a policy function, a value function and a Q-function) that can generalize to conditions and situations not directly experienced in the training data (i.e., a model that performs well when used with inference data that differs from the training data used in the training process).
  • This process is also known as a training process.
  • the rollout worker uses the received model to interact with an external environment by selecting actions and applying the actions to the environment.
  • the rollout worker can collect experience samples that can be used for further training and improving the model.
  • an experience sample is a tuple that comprises: i) an observation (e.g., state vector) for time step t (denoted St); ii) an action (At) selected based on St; iii) an observation for time step t+1 (denoted St+1); and iv) a reward value Rt based on St and St+1.
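The experience-sample tuple described above can be sketched, for illustration only, as a named tuple. The field names below are assumptions; the source only specifies the four components St, At, St+1, and Rt.

```python
from typing import Any, NamedTuple

class Experience(NamedTuple):
    """One experience sample (St, At, St+1, Rt); field names are assumed."""
    s_t: Any        # observation (state vector) at time step t
    a_t: Any        # action selected based on s_t
    s_t1: Any       # observation at time step t+1
    r_t: float      # reward value based on s_t and s_t1

# Example with invented values (the action name is hypothetical):
e = Experience(s_t=[0.1, 0.5], a_t="increase_power", s_t1=[0.2, 0.4], r_t=1.0)
print(e.r_t)  # -> 1.0
```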
  • Some techniques provide a shared storage memory, also known as “replay buffer” or “experience buffer,” in which the rollout workers store the experience samples (e.g., at each time step, the rollout worker generates and stores an experience in the replay buffer).
  • the Model Trainer function can then filter experiences from the replay buffer to train/update the model (e.g., a new set of weights of a neural network), which is then provided to the distributed rollout workers.
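A minimal sketch of such a replay buffer follows: a rollout worker stores experience samples, and the Model Trainer draws a mini-batch from the shared store. The capacity, batch size, and tuple contents are illustrative assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Sketch of a shared experience buffer: rollout workers append
    experience tuples; the model trainer samples mini-batches from it."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest samples are evicted

    def store(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        # Draw a random mini-batch (without replacement) for training.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for t in range(50):                      # a rollout worker storing samples
    buf.store((f"s{t}", "a", f"s{t+1}", float(t)))
batch = buf.sample(8)                    # the trainer draws a mini-batch
print(len(batch))  # -> 8
```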
  • Parallel and distributed experience sample collection allows the evaluation of multiple versions of a model in parallel and the quick production of a new model. It also allows for improved diversity in the collected information, as different rollout workers can be tasked to test the model against different versions of the environment. This improves the quality of the collected experiences, which in turn enables: producing a model that better generalizes against conditions (e.g., events) unseen during the training process; improving the speed of learning, because updates of the model can be provided more frequently due to the high throughput of the training data generation; and improving learning efficiency (i.e., the improved data diversity provided by parallel and distributed rollout workers enables production of a better model for a given amount of experience samples compared to the case where a single rollout worker is used). Using these techniques in a RAN could achieve a performance that otherwise would not be possible to achieve.
  • ML enabled NG-RAN (a.k.a. “AI enabled NG-RAN” or “AI/ML enabled NG-RAN”)
  • ML capabilities refer to how network nodes are made aware of the ML resources present in the network. The handling of ML capabilities is important for a solution using AI to be able to function properly across a network.
  • a method for providing machine learning (ML) capability information and/or ML requirement information includes a first network node transmitting to a second network node a first message comprising a first information element (IE) comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.
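For illustration only, the first message and its first IE might be modeled as below. All class and field names are hypothetical, invented for this sketch; they are not taken from any 3GPP specification or from the claimed embodiments.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MLCapabilityIE:
    """Hypothetical shape of the first IE: ML capability information
    and/or ML requirement information (field names are assumptions)."""
    ml_capabilities: Optional[List[str]] = None   # one or more ML capabilities
    ml_requirements: Optional[List[str]] = None   # one or more ML requirements

@dataclass
class FirstMessage:
    """Hypothetical first message carrying the first IE."""
    sender: str
    ie: MLCapabilityIE

msg = FirstMessage(
    sender="RAN-node-1",
    ie=MLCapabilityIE(
        ml_capabilities=["energy_saving_inference_output"],
        ml_requirements=["input_frequency<=1s"],
    ),
)
print(msg.ie.ml_capabilities[0])
```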
  • a computer program comprising instructions which, when executed by processing circuitry of a network node, cause the network node to perform any of the methods disclosed herein.
  • a carrier containing the computer program wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium.
  • a network node that is configured to perform the methods disclosed herein.
  • the network node may include memory and processing circuitry coupled to the memory.
  • An advantage of the embodiments disclosed herein is that they facilitate the use of ML and avoid unnecessary network signaling due to interactions between nodes. For example, requests for certain data for ML training purposes towards nodes that cannot provide the requested data (requests that would naturally fail) can be avoided. Similarly, it is possible for an inference function to avoid recommending that a particular network node take a particular action, because the inference function will have knowledge of the node's capabilities and, therefore, will know a priori whether or not the node can perform the recommended action (there is no point in recommending that a node perform a certain action if it is already known that the node is not capable of performing it).
  • FIG. 1 illustrates the current 5G RAN (a.k.a., the Next Generation RAN (NG-RAN)) architecture.
  • FIG. 2 illustrates a Functional Framework for RAN Intelligence.
  • FIG. 3 illustrates the introduction of a model management function in the Functional Framework for RAN Intelligence.
  • FIG. 4 is a message flow diagram according to an embodiment.
  • FIG. 5 is a message flow diagram according to an embodiment.
  • FIG. 6 is a message flow diagram according to an embodiment.
  • FIG. 7 is a flowchart illustrating a process according to an embodiment.
  • FIG. 8 is a block diagram of a network node according to an embodiment.
  • a "network node" can be a node in a radio access network (RAN), an OAM, a core network (CN) node (e.g., a CN function), an SMO, a Network Management System (NMS), a Non-Real Time RAN Intelligent Controller (Non-RT RIC), a Real-Time RAN Intelligent Controller (RT-RIC), a gNB, eNB, en-gNB, ng-eNB, gNB-CU, gNB-CU-CP (gNB-CU control plane (CP)), gNB-CU-UP (gNB-CU user plane (UP)), eNB-CU, eNB-CU-CP, eNB-CU-UP, IAB-node, IAB-donor-DU, IAB-donor-CU, IAB-DU, IAB-MT, O-CU, O-CU-CP, O-CU-UP, O-DU, O-RU, or O-eNB.
  • a network node may be a physical node or a function or logical entity of any kind, e.g. a software entity implemented in a data center or a cloud, e.g. using one or more virtual machines, and two network nodes may well be implemented as logical software entities in the same data center or cloud.
  • a “RAN node” is a node or entity in a radio access network (RAN), such as a gNB, eNB, en-gNB, ng-eNB, gNB-CU, gNB-CU-CP, gNB-CU-UP, eNB-CU, eNB-CU-CP, eNB-CU-UP, IAB-node, IAB-donor-DU, IAB-donor-CU, IAB-DU, IAB-MT, O-CU, O-CU-CP, O-CU-UP, O-DU, O-RU, or O-eNB.
  • A function, or entity, of a RAN node is to be understood as one of the functional entities comprised in a RAN node.
  • a RAN node in split deployment comprises different functions or entities.
  • For example, a gNB comprising a gNB-CU, one or more gNB-DUs, and one or more gNB-CU-CPs.
  • Model training, model optimizing, model optimization, and model updating are herein used interchangeably with the same meaning unless explicitly specified otherwise.
  • Model changing, modifying, and similar terms are herein used interchangeably with the same meaning unless explicitly specified otherwise. In particular, they refer to the fact that the type, structure, parameters, or connectivity of a model may have changed compared to a previous format/configuration of the model.
  • AI model, ML model, AI/ML model, and AIML model are herein used interchangeably with the term model.
  • a model is an application or an algorithm or a function including processes based on ML. For example, it may refer to the ML model itself as well as the support software packages needed for it to run properly; e.g., this may include the software necessary for data preparation.
  • Data collection refers to a process of collecting data for the purpose of model training, data analytics, and/or inference.
  • AI/ML models may include supervised learning algorithms, deep learning algorithms, reinforcement learning type of algorithms (such as DQN, A2C, A3C, etc.), contextual multi-armed bandit algorithms, autoregression algorithms, etc., or combinations thereof.
  • Such algorithms may exploit functional approximation models, hereafter referred to as AI/ML models, such as neural networks (e.g. feedforward neural networks, deep neural networks, recurrent neural networks, convolutional neural networks, etc.).
  • reinforcement learning algorithms may include deep reinforcement learning (such as deep Q-network (DQN), proximal policy optimization (PPO), double Q-learning), actor-critic algorithms (such as advantage actor-critic algorithms, e.g., A2C or A3C, actor-critic with experience replay, etc.), policy gradient algorithms, off-policy learning algorithms, etc.
  • the network nodes described herein can be directly connected (i.e., with a signaling connection between them) or indirectly connected (i.e., with a signaling connection traversing intermediate network nodes between them, relaying the signaling information exchanged between them).
  • transmitting a message to an intended recipient encompasses transmitting the message directly to the intended recipient or transmitting the message indirectly to the intended recipient (i.e., one or more other nodes are used to relay the message from the source node to the intended recipient).
  • receiving a message from a sender encompasses receiving the message directly from the sender or indirectly from the sender (i.e., one or more nodes are used to relay the message from the sender to the receiving node).
  • CU is used herein as short for “gNB-CU” (and may also refer to an eNB-CU or an IAB-donor-CU).
  • CU-CP is used herein as short for “gNB-CU-CP” (and may also refer to an eNB-CU-CP or an IAB-donor-CU-CP).
  • CU-UP is used herein as short for “gNB-CU-UP” (and may also refer to an eNB-CU-UP or an IAB-donor-CU-UP).
  • DU is used herein as short for “gNB-DU” (and may also refer to an eNB-DU or an IAB-donor-DU).
  • AI/ML capabilities refer to the support that a network node can offer when ML is used: for example, the ability to support certain use cases, to produce certain output (and at which pace), which AI/ML model types are supported, and which AI/ML algorithms are supported.
  • ML related requirements and AI/ML related requirements are used interchangeably and indicate requirements for executing an AI/ML model by a network node.
  • Requirements refer to attributes and aspects needed for the AI/ML model to function according to design. They include aspects such as “processing power”, “memory consumption”, “minimum hardware”, “recommended hardware”, “frequency of input”, “security requirements”, “user consent requirements.”
  • In this disclosure, RAN nodes are often taken as an example. However, the person skilled in the art can easily deduce that the methods apply to all nodes involved in ML based processes where an exchange of ML capabilities is needed.
  • Such nodes may be CN nodes, management systems, UE, external systems.
  • This disclosure relates to handling of ML capabilities and requirements for executing ML processes.
  • this disclosure provides embodiments for signaling capabilities and requirements associated to the use of ML in RAN.
  • ML related capabilities and/or ML related requirements are communicated between nodes interworking with each other (e.g., two RAN nodes).
  • the ML related capabilities and/or ML related requirements are signaled between two nodes, e.g., two RAN nodes, via a direct signaling interface (e.g., using the Xn interface).
  • this is signaled in the Xn Setup procedure; for example, a RAN node can declare its ML capabilities using the XN SETUP REQUEST or XN SETUP RESPONSE messages, e.g., a RAN node declares that it can transmit and/or receive the output of an ML Model inference function for Energy Saving (ES) use case.
  • the RAN node signals its ML capabilities using the Xn: NG-RAN node Configuration Update procedure.
  • In case the RAN nodes communicate via a core network, the RAN node signals its ML capabilities via the NG interface. In one example, this is signaled in the NG Setup procedure; for example, a RAN node can declare its ML capabilities using the NG SETUP REQUEST message, e.g., a RAN node declares that it supports transmission and/or reception of output produced by an ML Model inference function for the ES use case. The CN will forward this information to the receiving RAN node. In another example, the RAN node signals its ML related capabilities and/or its ML related requirements using the NG: RAN Configuration Update procedure. Likewise, the CN forwards the information to the receiving RAN node.
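The indirect signaling path can be sketched as follows. Only the NG SETUP REQUEST message name comes from the text above; the dictionary structure, function names, and capability strings are invented for illustration.

```python
# Sketch of indirect signaling via a core network: a RAN node declares its
# ML capabilities towards the CN, and the CN forwards them to the
# receiving RAN node. All message contents here are illustrative.

def build_ng_setup_request(node_id, ml_capabilities):
    """Assemble a toy NG SETUP REQUEST-like message (shape is assumed)."""
    return {"msg": "NG SETUP REQUEST",
            "node": node_id,
            "ml_capabilities": ml_capabilities}

def cn_forward(message, target_node_inbox):
    """The CN relays the ML capability information to the peer RAN node."""
    target_node_inbox.append({"from_cn": True,
                              "node": message["node"],
                              "ml_capabilities": message["ml_capabilities"]})

inbox_of_ran2 = []
req = build_ng_setup_request("RAN-1", ["ES inference output tx/rx"])
cn_forward(req, inbox_of_ran2)
print(inbox_of_ran2[0]["node"])  # -> RAN-1
```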
  • the CN may forward the information to the receiving RAN node by means of a message such as the NG: RAN CONFIGURATION UPDATE ACKNOWLEDGE message, if subsequently the receiving RAN node initiates an NG: RAN Configuration Update procedure.
  • the AMF may store the RAN node's capabilities and requirements related to ML and signal them to other RAN nodes using the NG: AMF Configuration Update procedure.
  • In a disaggregated gNB, the gNB-CU signals its ML related capabilities and/or ML related requirements to the gNB-DU via the F1 interface. In one example, this is signaled at F1 setup. In another example, the ML capabilities are signaled using the gNB-DU Configuration Update and/or the gNB-CU Configuration Update procedure.
  • a node hosting the model training function can request a node hosting the model inference function to provide the ML capabilities of the node hosting the inference function.
  • the OAM can request from a RAN node its ML related capabilities and/or ML related requirements. This information can be used by the node hosting the model training function to determine whether a certain ML model can be deployed to the node hosting the model inference function.
  • the node hosting the model inference function can inform the node hosting the model training function about its ML related capabilities and/or ML related requirements without any prior request from it.
  • the first entity of the RAN node can request a second entity of the RAN node (e.g., a gNB-DU) to provide ML related capabilities and/or ML related requirements of the second entity of the RAN node.
  • This information can be used by the first entity of the RAN node to determine whether a certain ML model can be deployed to the second entity of the RAN node.
  • the second entity of the RAN node can inform the first entity of the RAN node about its ML related capabilities and/or ML related requirements without any prior request from the first entity of the RAN node.
  • two entities involved in ML based processes e.g. two RAN nodes (RAN Node 401 and RAN node 402), exchange ML capability information indicating ML capabilities and ML requirement information indicating ML requirement(s) during the setup of the signaling connection between them (e.g., at setup of Xn signaling connection).
  • the RAN node 401 (a.k.a. “the first RAN node”) will send a message (e.g., an XN SETUP REQUEST message M450) to RAN node 402 (a.k.a. “the second RAN node”) including its ML capabilities (i.e., including ML capability information indicating its ML capabilities) and including its ML requirements (i.e., including ML requirement information indicating its ML requirements).
  • ML capabilities are associated to a network node where the ML model is used or intended to be used.
  • ML requirements are associated to at least one ML model.
  • The association could be realized by means of one or more of: an ML model identifier (ID), an ML model version, a use case identifier, etc.
  • In some embodiments, the ML capability information includes: a) an indication of whether the network node indicating the ML capabilities can support the execution or training of ML models or algorithms; b) a use case indicator indicating a use case for which the RAN node supports ML operations.
  • Non-limiting examples of use cases are: Network Energy Saving, Power Saving, Load Balancing Optimization, Mobility Optimization, Link Adaptation, QoS Optimization, QoE Optimization, Coverage and Capacity Optimization, MIMO Layer Optimization, CSI Prediction, Beam Management, Positioning, Channel Coding, Reference Signal Enhancements, Interference Optimization, etc.
  • c) information indicating a number of UEs/cells for which ML model inference can be produced per unit of time, wherein a unit of time could be expressed, for instance, in milliseconds, seconds, minutes, or hours; d) an indication of whether the ML model can be executed with batch inference; e) information indicating the number of ML model inferences that can be executed per unit of time, wherein a unit of time could be expressed, for instance, in milliseconds, seconds, minutes, or hours. In one example, this information can be interpreted as the capacity of batch inference.
  • Such a value could in some examples be associated to specific types of ML model, where examples of ML model types include feedforward neural network, recurrent neural network, convolutional neural network, graph neural network, decision tree, transformer, autoencoder, etc.
  • Such a value could in some examples be associated to the size of the ML model (described in an item below).
  • f) information indicating available ML model outputs. Each individual output or group of outputs can be associated with any of the following: i) the specific ML model producing the output, where the ML model may be identified, e.g., by means of an ML Model ID, an ML Model Version, a model developer/vendor, a human-readable name for the model, or any other identification; ii) the ML use case related to the output, e.g., Energy Saving, Load Balancing, Mobility Optimization; iii) associated ML model performance (e.g., (root) mean square error or another metric for regression models; accuracy, precision, recall, or another metric for classification models; conditional value at risk across fixed-policy rollouts or another metric for reinforcement learning); iv) whether the output is signaled with some uncertainty metric (e.g., a confidence interval in the case of a regression model) and which metric(s); g) information indicating ML algorithm types and/or types of ML training that the node supports.
  • The model size can be expressed, for instance, in the number of operations required for its execution, in terms of the memory requirements for its storage, etc.
  • The model size can be expressed with one or more of: number of layers, number of units per layer, connectivity degree between layers, number of connections between layers, etc.; k) information indicating whether the node can support retraining, e.g., a flag that indicates whether retraining is allowed in the node; l) information indicating which Lifecycle Management operations are allowed, and which are not allowed, in the node (e.g., monitoring, testing, ...).
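Purely for illustration, the capability items listed above could be grouped into a container such as the following. All field names and types are assumptions made for this sketch, not an ASN.1 or 3GPP-defined structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MLCapabilityInfo:
    """Illustrative container for ML capability items (names/types assumed)."""
    supports_training: bool = False            # item a) training support
    supports_inference: bool = False           # item a) execution support
    use_cases: List[str] = field(default_factory=list)       # item b)
    inferences_per_second: Optional[int] = None              # item e)
    batch_inference: bool = False                            # item d)
    supported_model_types: List[str] = field(default_factory=list)
    supports_retraining: bool = False                        # item k)
    allowed_lcm_operations: List[str] = field(default_factory=list)  # item l)

cap = MLCapabilityInfo(
    supports_inference=True,
    use_cases=["Network Energy Saving", "Load Balancing Optimization"],
    inferences_per_second=100,
    supported_model_types=["feedforward neural network"],
    allowed_lcm_operations=["monitoring"],
)
print("Network Energy Saving" in cap.use_cases)  # -> True
```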
  • the ML requirement information include: a) information indicating minimum or recommended processing power, memory, hardware needed by the sending node to run the specific ML model, b) an indication of the processing unit type (e.g., CPU, GPU) required for executing the AIML model c) information indicating number of processing units (cores) needed for executing the ML model d) an indication of at least a memory type and/or memory size required for executing the AIML model e) information indicating time scale relevant for the supported use cases, namely the frequency of input data needed to carry out ML inference. f) information indicating security requirements for the specific ML model and/or for the specific ML use case.
  • the processing unit type e.g., CPU, GPU
  • aspects such as whether any data associated with the ML process should be security protected, the type of security algorithm/process used to secure such data, whether some or all of the data relative to an ML process, transferred to a neighbor node, are allowed to be forwarded further to other nodes or if they shall be kept internal to the receiving node.
  • information indicating user consent requirements namely whether the use and/or distribution of some or all of the data relative to an ML process are subject to user consent.
• an indication that the use/execution of an ML model depends (functionally) on another (more basic) model or algorithm or software module.
• the ML model size can be expressed, for instance, in number of operations required for its execution, the memory requirements for its storage, etc.
  • the ML model size can be expressed with one or more of: number of layers, number of units per layers, connectivity degree between layers, number of connections between layers, etc.
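Before accepting a model, a receiving node can compare the sender's stated ML requirements against its own resources. The following sketch shows one way to do that; the dictionary keys and the function name are illustrative assumptions, not part of the signaling described above.

```python
# Hypothetical sketch: compare a model's advertised ML requirements
# against a node's local resources. Requirements the sender did not
# state are treated as unconstrained.
def can_host_model(node_resources: dict, ml_requirements: dict) -> bool:
    checks = [
        ("cpu_cores", lambda have, need: have >= need),       # item c) above
        ("memory_bytes", lambda have, need: have >= need),    # item d) above
        ("processing_unit", lambda have, need: need in have), # item b) above
    ]
    for key, ok in checks:
        if key in ml_requirements:
            if not ok(node_resources.get(key, 0), ml_requirements[key]):
                return False
    return True

node = {"cpu_cores": 8, "memory_bytes": 16 * 2**30, "processing_unit": {"CPU", "GPU"}}
print(can_host_model(node, {"cpu_cores": 4, "processing_unit": "GPU"}))  # True
print(can_host_model(node, {"memory_bytes": 64 * 2**30}))                # False
```

A real implementation would also cover the time-scale and security items (e) and (f); those checks are policy decisions rather than simple numeric comparisons.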
• the node, e.g., the second RAN node, receiving the ML capabilities and/or ML requirements of the first node, e.g., the first RAN node, may in turn send its own ML capabilities and/or ML requirements to the first node (e.g., within an XN SETUP RESPONSE message M452). After this step, both nodes are aware of the ML capabilities and/or ML requirements of the other node and can support appropriate ML operations.
[0069] In another option, the first node requests the second node to provide the second node's ML capabilities and/or ML requirements, with or without providing the first node's ML capabilities and/or ML requirements itself.
  • each RAN node can inform the other RAN node about its new/updated ML capabilities and/or ML requirements and/or request to obtain the ML capabilities and/or ML requirements of the other RAN node if needed (for instance if one RAN node wants to use an updated ML model and would like to know if it can receive input and collaborate with the second RAN node).
• one of the RAN nodes can send to the other RAN node the same type of message as used for the case of establishing the signaling connection, or a different message (e.g., an XnAP NG-RAN NODE CONFIGURATION UPDATE message M454) including the ML capabilities (new or updated) and/or ML requirements (new or updated) of the requesting RAN node, and also a request to obtain the ML capabilities and/or ML requirements of the second RAN node.
  • the second RAN node receives the first RAN node’s ML capabilities and/or ML requirements and sends back its ML capabilities and/or ML requirements in a return acknowledgement message M456.
• a number of procedures can be triggered. For example, if a RAN node 1 knows that a neighbor RAN node 2 supports the Energy Saving ML use case, RAN node 1 may be able to request model inference outputs that are specific to the supported ML Energy Saving (ES) use case. Similarly, if RAN node 2 knows the ML capabilities and/or ML requirements of RAN node 1, e.g., that RAN node 1 supports a specific AI use case, then RAN node 2 may request input data that are needed to run ML models supporting the use case supported by RAN node 1.
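The behaviour above can be sketched as a simple mapping from a peer's advertised ML use cases to follow-up requests. The message names in the catalogue below are hypothetical placeholders, not messages defined in any specification.

```python
# Illustrative sketch: once a node learns which ML use cases a neighbour
# supports, it derives the use-case-specific requests it may send.
# All message names are hypothetical.
def requests_to_send(peer_capabilities: set) -> list:
    catalogue = {
        "energy-saving": "REQUEST_ES_INFERENCE_OUTPUT",
        "load-balancing": "REQUEST_LB_INFERENCE_OUTPUT",
        "mobility-optimization": "REQUEST_MO_INFERENCE_OUTPUT",
    }
    return [msg for use_case, msg in catalogue.items() if use_case in peer_capabilities]

print(requests_to_send({"energy-saving"}))  # ['REQUEST_ES_INFERENCE_OUTPUT']
```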
• In the case of a disaggregated RAN node (e.g., a gNB comprising a gNB-CU 504 (see FIG. 5) and at least one gNB-DU 502), the ML capabilities and/or ML requirements can be transferred between two entities of the RAN node (e.g., between the gNB-CU and a gNB-DU).
• A first entity of the RAN node (e.g., the gNB-CU 504) and a second entity of the RAN node (e.g., the gNB-DU 502) can exchange ML capabilities and/or ML requirements during the setup of the signaling connection between them (e.g., at F1 signaling connection setup M550).
• The second entity (e.g., the gNB-DU) can send a message (e.g., an F1 SETUP REQUEST message M550) to the first entity (e.g., the gNB-CU).
  • the first entity requests the second entity to provide second entity's ML capabilities and/or ML requirements, with or without providing first entity’s ML capabilities and/or ML requirements itself.
  • each one of the entities of the RAN node can inform the other entity of the RAN node about its new/updated ML capabilities and/or ML requirements and/or request to obtain the ML capabilities and/or ML requirements of the other entity.
  • one of the entities of the RAN node can send the same type of message as used for the case of establishing the signaling connection, or a different message (e.g., an F1AP GNB-DU CONFIGURATION UPDATE message M554) including the ML capabilities (new or updated) and/or ML requirements (new or updated) of the requesting entity, and also a request to obtain the ML capabilities (new or updated) and/or ML requirements (new or updated) of the second entity (e.g., of the gNB-CU).
  • the second entity receives the first entity's (e.g., the gNB-DU's) ML capabilities and/or ML requirements and sends back its ML capabilities and/or ML requirements in a return message (e.g., an F1AP GNB-DU CONFIGURATION UPDATE ACKNOWLEDGE message M556).
  • the gNB-CU can send a GNB-CU CONFIGURATION UPDATE message M558 including its ML capabilities and ML requirements and also requesting the ML capabilities and ML requirements of the gNB-DU.
  • the gNB-DU receives the gNB-CU's ML capabilities and ML requirements and sends back its ML capabilities and ML requirements in GNB-CU CONFIGURATION UPDATE ACKNOWLEDGE message M560.
• If the connection between the RAN node 401 and RAN node 402 takes place via a core network (CN), then the ML capabilities and/or ML requirements are signaled indirectly between the RAN nodes via one or more CN functions (CNFs) (e.g., over the NG interface).
• RAN node 401 signals its ML capabilities and/or ML requirements, together with details about RAN node 402 (e.g., an ID that identifies the RAN node 402), during the setup of the signaling connection towards CNF 602 (e.g., at NG signaling connection setup).
  • CNF 602 may be an AMF or other CNF.
• RAN node 401 can send an NGAP NG SETUP REQUEST message M650 to the CNF, also including its ML capabilities and/or its ML requirements (i.e., including ML capability information and/or ML requirement information).
• the CNF responds with an NG SETUP RESPONSE message M652 and forwards the ML capability information and/or ML requirement information to the RAN node indicated as the recipient in the message received by the CN node, which, in this case, is RAN node 402.
  • the network node can inform another network node (via CN nodes) about its new/updated ML capabilities and/or ML requirements and/or request to obtain the ML capabilities and/or ML requirements of the other RAN node if needed (e.g., if one RAN node wants to use an ML model and would like to know if it can receive input and collaborate with the second RAN node (via the core network)).
• one of the RAN nodes can send to the other RAN node (via the CN) a message (e.g., an NGAP RAN CONFIGURATION UPDATE message M654) including its ML capabilities and/or ML requirements and the identity of the recipient RAN node, and also requesting the ML capabilities and/or ML requirements of the second RAN node.
• The CN node (e.g., the AMF) can also request from the RAN node, or signal to other RAN nodes, the ML capabilities and/or the ML requirements (e.g., by means of an NGAP AMF CONFIGURATION UPDATE message M656).
• A first node (e.g., a RAN node or a node implementing a CNF) can also send to a second node (e.g., a RAN node or a node implementing a CNF) an ML Capabilities IE that includes its ML capabilities and/or ML requirements.
• the second node treats the message as implicitly requesting the second node to provide to the first node ML capability information and/or ML requirement information indicating its ML capabilities and requirements, respectively, and, hence, responds accordingly (i.e., transmits a message responsive to the message sent by the first node, which responsive message includes the second node's ML capabilities and/or ML requirements).
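The implicit-query behaviour above can be sketched as follows. The dictionary-based message structure and all key names are illustrative assumptions, not the ASN.1 encodings used on real interfaces.

```python
# Sketch of the implicit-query behaviour: if the incoming setup message
# carries the sender's ML Capabilities IE (or an explicit query IE), the
# receiver answers with its own capabilities. Names are hypothetical.
OWN_CAPABILITIES = {"use_cases": ["energy-saving"]}

def build_response(incoming: dict) -> dict:
    response = {"type": "SETUP_RESPONSE"}
    wants_capabilities = (
        "ml_capabilities_query" in incoming or "ml_capabilities" in incoming
    )
    if wants_capabilities:
        # Implicit request: echo back our own ML capability information.
        response["ml_capabilities"] = OWN_CAPABILITIES
    return response

print(build_response({"type": "SETUP_REQUEST",
                      "ml_capabilities": {"use_cases": ["load-balancing"]}}))
```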
• 3GPP TS 38.423 v17.1.0 (“TS 38.423”) describes the Xn Setup procedure.
  • the purpose of the Xn Setup procedure is to exchange application level configuration data needed for two NG-RAN nodes to interoperate correctly over the Xn-C interface.
• An NG-RAN node (hereafter NG-RAN node1) initiates the procedure by sending an XN SETUP REQUEST message to another NG-RAN node (hereafter NG-RAN node2).
  • the NG-RAN node2 replies with the XN SETUP RESPONSE message.
  • the XN SETUP REQUEST message defined in TS 38.423 is extended such that it may include an ML Capabilities Query IE and/or a ML Capabilities IE; similarly, the XN SETUP RESPONSE message defined in TS 38.423 is extended such that it may include the ML Capabilities IE.
• the ML Capabilities IE contains ML capability information indicating ML capabilities and/or ML requirement information indicating ML requirements (e.g., ML capabilities/requirements of the node that originated the message containing the IE).
  • the NG-RAN node2 shall, if supported, include an ML Capabilities IE in an extended XN SETUP RESPONSE message.
• If the ML Capabilities IE is contained in the extended XN SETUP REQUEST message, the NG-RAN node2 shall, if supported, store this information and use it as defined in TS 38.300.
• Similarly, if the ML Capabilities IE is contained in the extended XN SETUP RESPONSE message, the NG-RAN node1 shall, if supported, store this information and use it as defined in TS 38.300.
  • the XN SETUP REQUEST message and the XN SETUP RESPONSE message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
  • Table 1 below illustrates an example of the extended XN SETUP REQUEST message; and table 2 below illustrates an example of the extended XN SETUP RESPONSE message.
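One way to picture the extended messages is as a record with mandatory IEs plus the new optional extension IEs, where an absent optional IE is simply omitted from the encoded message. The sketch below assumes the extension IEs are optional; the Python field names are illustrative and do not reproduce the ASN.1 definitions of TS 38.423.

```python
# Hypothetical encoding of an extended XN SETUP REQUEST with the
# ML Capabilities Query IE and ML Capabilities IE as optional fields.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class XnSetupRequest:
    global_ng_ran_node_id: str                     # mandatory IE
    ml_capabilities_query: Optional[bool] = None   # extension IE (illustrative)
    ml_capabilities: Optional[dict] = None         # extension IE (illustrative)

def encode(msg: XnSetupRequest) -> dict:
    # Omit absent optional IEs from the encoded message.
    return {k: v for k, v in asdict(msg).items() if v is not None}

print(encode(XnSetupRequest("gnb-001", ml_capabilities_query=True)))
# {'global_ng_ran_node_id': 'gnb-001', 'ml_capabilities_query': True}
```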
  • TS 38.423 also describes the NG-RAN node Configuration Update procedure.
  • the purpose of the NG-RAN node Configuration Update procedure is to update application-level configuration data needed for two NG- RAN nodes to interoperate correctly over the Xn-C interface.
• the NG-RAN node1 initiates the procedure by sending the NG-RAN NODE CONFIGURATION UPDATE message to NG-RAN node2.
• NG-RAN node2 responds with the NG-RAN NODE CONFIGURATION UPDATE ACK message.
  • the NG-RAN NODE CONFIGURATION UPDATE message defined in TS 38.423 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the NG- RAN NODE CONFIGURATION UPDATE ACK message defined in TS 38.423 is extended such that it may include the ML Capabilities IE.
  • the NG-RAN node2 shall, if supported, include the ML Capabilities IE in the NG-RAN NODE CONFIGURATION UPDATE ACK message.
• If the ML Capabilities IE is contained in the NG-RAN NODE CONFIGURATION UPDATE message, the NG-RAN node2 shall, if supported, store this information and use it as defined in TS 38.300. Similarly, if the ML Capabilities IE is contained in the NG-RAN NODE CONFIGURATION UPDATE ACK message, the NG-RAN node1 shall, if supported, store this information and use it as defined in TS 38.300.
• the NG-RAN NODE CONFIGURATION UPDATE message and the NG-RAN NODE CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
• Table 3 below illustrates an example of the extended NG-RAN NODE CONFIGURATION UPDATE message; and
• table 4 below illustrates an example of the extended NG-RAN NODE CONFIGURATION UPDATE ACK message.
  • 3GPP TS 38.473 v17.1.0 (“TS 38.473”) describes the F1 Setup procedure.
  • the purpose of the F1 Setup procedure is to exchange application level data needed for the gNB-DU and the gNB-CU to correctly interoperate on the F1 interface.
  • This procedure is the first F1AP procedure triggered for the F1-C interface instance after a TNL association has become operational.
  • the gNB-DU initiates the procedure by sending a F1 SETUP REQUEST message including the appropriate data to the gNB-CU.
  • the gNB-CU responds with a F1 SETUP RESPONSE message including the appropriate data.
  • the F1 SETUP REQUEST message defined in TS 38.473 is extended such that it may include an ML Capabilities Query IE and/or a ML Capabilities IE; similarly, the F1 SETUP RESPONSE message defined in TS 38.473 is extended such that it may include the ML Capabilities IE.
  • the gNB-CU shall, if supported, include an ML Capabilities IE in an extended F1 SETUP RESPONSE message.
• the gNB-CU shall, if supported, store this information and use it as defined in TS 38.401.
• the gNB-DU shall, if supported, store this information and use it as defined in TS 38.401.
  • the F1 SETUP REQUEST message and the F1 SETUP RESPONSE message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
  • Table 5 below illustrates an example of the extended F1 SETUP REQUEST message
  • table 6 below illustrates an example of the extended F1 SETUP RESPONSE message.
  • TS 38.473 also describes the gNB-DU Configuration Update procedure.
  • the purpose of the gNB- DU Configuration Update procedure is to update application level configuration data needed for the gNB-DU and the gNB-CU to interoperate correctly on the F1 interface.
  • the gNB-DU initiates the procedure by sending a GNB-DU CONFIGURATION UPDATE message to the gNB-CU including an appropriate set of updated configuration data that it has just taken into operational use.
• the gNB-CU responds with a GNB-DU CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. If an information element is not included in the GNB-DU CONFIGURATION UPDATE message, the gNB-CU shall interpret that the corresponding configuration data is not changed and shall continue to operate the F1-C interface with the existing related configuration data.
  • the GNB-DU CONFIGURATION UPDATE message defined in TS 38.473 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the GNB-DU CONFIGURATION UPDATE ACK message defined in TS 38.473 is extended such that it may include the ML Capabilities IE.
  • the gNB-CU shall, if supported, include the ML Capabilities IE in the GNB-DU CONFIGURATION UPDATE ACK message.
  • the gNB-CU shall, if supported, store this information and use it as defined in TS 38.401.
  • the gNB-DU shall, if supported, store this information and use it as defined in TS 38.401.
  • the GNB-DU CONFIGURATION UPDATE message and the GNB-DU CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
  • Table 7 below illustrates an example of the extended GNB-DU CONFIGURATION UPDATE message
  • table 8 below illustrates an example of the extended GNB-DU CONFIGURATION UPDATE ACK message.
  • TS 38.473 also describes the gNB-CU Configuration Update procedure.
  • the purpose of the gNB- CU Configuration Update procedure is to update application level configuration data needed for the gNB-DU and gNB-CU to interoperate correctly on the F1 interface.
  • the gNB-CU initiates the procedure by sending a GNB-CU CONFIGURATION UPDATE message including the appropriate updated configuration data to the gNB-DU.
  • the gNB-DU responds with a GNB-CU CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. If an information element is not included in the GNB-CU CONFIGURATION UPDATE message, the gNB-DU shall interpret that the corresponding configuration data is not changed and shall continue to operate the F1-C interface with the existing related configuration data.
  • the updated configuration data shall be stored in the respective node and used as long as there is an operational TNL association or until any further update is performed.
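The update semantics described above — an IE absent from a CONFIGURATION UPDATE message leaves the corresponding stored configuration unchanged — can be sketched as a shallow merge. The dictionary representation is illustrative only.

```python
# Sketch of configuration-update semantics: overwrite only the IEs
# present in the update message; absent IEs keep their stored values.
def apply_configuration_update(stored: dict, update: dict) -> dict:
    merged = dict(stored)
    merged.update(update)
    return merged

stored = {"served_cells": ["cell-1"], "ml_capabilities": {"use_cases": []}}
update = {"ml_capabilities": {"use_cases": ["energy-saving"]}}
print(apply_configuration_update(stored, update))
# served_cells is unchanged; only ml_capabilities is replaced
```

The merged result would then be stored and used as long as the TNL association is operational or until a further update arrives, per the text above.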
  • the GNB-CU CONFIGURATION UPDATE message defined in TS 38.473 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the GNB-CU CONFIGURATION UPDATE ACK message defined in TS 38.473 is extended such that it may include the ML Capabilities IE.
  • the gNB-DU shall, if supported, include the ML Capabilities IE in the GNB-CU CONFIGURATION UPDATE ACK message.
  • the gNB-DU shall, if supported, store this information and use it as defined in TS 38.401.
  • the gNB-CU shall, if supported, store this information and use it as defined in TS 38.401.
  • the GNB-CU CONFIGURATION UPDATE message and the GNB-CU CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
  • Table 9 below illustrates an example of the extended GNB-CU CONFIGURATION UPDATE message
  • table 10 below illustrates an example of the extended GNB-CU CONFIGURATION UPDATE ACK message.
  • 3GPP TS 38.413 V17.1.1 (“TS 38.413”) describes the NG Setup procedure.
  • the purpose of the NG Setup procedure is to exchange application level data needed for the NG-RAN node and the AMF to correctly interoperate on the NG-C interface.
  • This procedure shall be the first NGAP procedure triggered after the TNL association has become operational.
  • the procedure uses non-UE associated signalling.
• This procedure erases any existing application level configuration data in the two nodes, replaces it by the one received, and clears AMF overload state information at the NG-RAN node. If the NG-RAN node and AMF do not agree on retaining the UE contexts, this procedure also re-initialises the NGAP UE-related contexts (if any) and erases all related signalling connections in the two nodes, like an NG Reset procedure would do.
  • the NG-RAN node initiates the procedure by sending an NG SETUP REQUEST message including the appropriate data to the AMF.
  • the AMF responds with an NG SETUP RESPONSE message including the appropriate data.
  • the NG SETUP REQUEST message defined in TS 38.413 is extended such that it may include an ML Capabilities Query IE and/or a ML Capabilities IE; similarly, the NG SETUP RESPONSE message defined in TS 38.413 is extended such that it may include the ML Capabilities IE.
  • the AMF shall, if supported, include an ML Capabilities IE in an extended NG SETUP RESPONSE message.
  • the AMF shall, if supported, store this information and use it as needed.
  • the NG-RAN node shall, if supported, store this information and use it as needed.
  • the NG SETUP REQUEST message and the NG SETUP RESPONSE message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
  • the NG SETUP REQUEST message defined in TS 38.413 is extended further such that it may include a Target RAN Node ID IE containing information identifying a target RAN node.
  • the AMF shall, if supported, transmit to the identified target RAN node a message comprising the content of the ML Capabilities IE and/or the content of the ML Requirements IE.
  • the AMF shall, if supported, respond to a NG SETUP REQUEST message transmitted by the identified target RAN node by sending to the target RAN node an NG SETUP RESPONSE message containing the content of the ML Capabilities IE and/or ML Requirements IE included in the first mentioned NG SETUP REQUEST.
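The AMF relay behaviour above — store the ML capability/requirement IEs received together with a Target RAN Node ID, and deliver them when responding to that target node — can be sketched as follows. The class, method, and key names are hypothetical.

```python
# Illustrative sketch of the AMF relaying ML capability/requirement IEs
# between RAN nodes, keyed by the Target RAN Node ID IE described above.
class AmfRelay:
    def __init__(self):
        self._pending = {}  # target RAN node ID -> IEs awaiting delivery

    def on_ng_setup_request(self, sender_id: str, message: dict) -> dict:
        target = message.get("target_ran_node_id")
        ies = {k: message[k]
               for k in ("ml_capabilities", "ml_requirements") if k in message}
        if target and ies:
            self._pending[target] = ies
        # Respond to the sender, attaching any IEs queued for *it*.
        response = {"type": "NG_SETUP_RESPONSE"}
        response.update(self._pending.pop(sender_id, {}))
        return response

amf = AmfRelay()
amf.on_ng_setup_request("ran-1", {"target_ran_node_id": "ran-2",
                                  "ml_capabilities": {"use_cases": ["energy-saving"]}})
reply = amf.on_ng_setup_request("ran-2", {})
print(reply)  # the NG SETUP RESPONSE carries the IEs queued for ran-2
```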
• Table 11 illustrates an example of the extended NG SETUP REQUEST message.
  • TS 38.413 also describes a RAN Configuration Update procedure.
  • the purpose of the RAN Configuration Update procedure is to update application level configuration data needed for the NG-RAN node and the AMF to interoperate correctly on the NG-C interface. This procedure does not affect existing UE-related contexts, if any.
  • the procedure uses non UE-associated signalling.
  • the NG-RAN node initiates the RAN configuration update procedure by sending a RAN CONFIGURATION UPDATE message to the AMF including an appropriate set of updated configuration data that it has just taken into operational use.
  • the AMF responds with a RAN CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. If an information element is not included in the RAN CONFIGURATION UPDATE message, the AMF shall interpret that the corresponding configuration data is not changed and shall continue to operate the NG-C interface with the existing related configuration data.
  • the RAN CONFIGURATION UPDATE message defined in TS 38.413 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the RAN CONFIGURATION UPDATE ACK message defined in TS 38.413 is extended such that it may include the ML Capabilities IE.
  • the AMF shall, if supported, include the ML Capabilities IE in the RAN CONFIGURATION UPDATE ACK message.
  • the AMF shall, if supported, store this information and use it as needed.
  • the RAN node shall, if supported, store this information and use it as appropriate.
  • the RAN CONFIGURATION UPDATE message and the RAN CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message
  • the RAN CONFIGURATION UPDATE message defined in TS 38.413 is extended further such that it may include a Target RAN Node ID IE containing information identifying a target RAN node.
  • the AMF shall, if supported, transmit to the identified target RAN node a message comprising the content of the ML Capabilities IE and/or the content of the ML Requirements IE.
  • the AMF shall, if supported, respond to a RAN CONFIGURATION UPDATE message transmitted by the identified target RAN node by sending to the target RAN node a RAN CONFIGURATION UPDATE ACK message containing the content of the ML Capabilities IE and/or ML Requirements IE included in the first mentioned RAN CONFIGURATION UPDATE message.
  • Table 13 below illustrates an example of the extended RAN CONFIGURATION UPDATE message
  • table 14 below illustrates an example of the extended RAN CONFIGURATION UPDATE ACK message.
• AMF Configuration Update
[00149] TS 38.413 also describes an AMF Configuration Update procedure. The purpose of the AMF Configuration Update procedure is to update application level configuration data needed for the NG-RAN node and the AMF to interoperate correctly on the NG-C interface.
  • the AMF initiates the AMF Configuration Update procedure by sending an AMF CONFIGURATION UPDATE message including the appropriate updated configuration data to the NG-RAN node.
  • the NG-RAN node responds with an AMF CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data.
• If an information element is not included in the AMF CONFIGURATION UPDATE message, the NG-RAN node shall interpret that the corresponding configuration data is not changed and shall continue to operate the NG-C interface with the existing related configuration data.
  • the AMF CONFIGURATION UPDATE message defined in TS 38.413 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the AMF CONFIGURATION UPDATE ACK message defined in TS 38.413 is extended such that it may include the ML Capabilities IE.
  • the NG-RAN node shall, if supported, include the ML Capabilities IE in the AMF CONFIGURATION UPDATE ACK message.
• If the ML Capabilities IE is contained in the AMF CONFIGURATION UPDATE message, the NG-RAN node shall, if supported, store this information and use it as needed.
• If the ML Capabilities IE is contained in the AMF CONFIGURATION UPDATE ACK message, the AMF shall, if supported, store this information and use it as appropriate.
  • the AMF CONFIGURATION UPDATE message and the AMF CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
• Table 15 below illustrates an example of the extended AMF CONFIGURATION UPDATE message; and table 16 below illustrates an example of the extended AMF CONFIGURATION UPDATE ACK message.
  • FIG. 7 is a flow chart illustrating a process 700, according to an embodiment, for providing ML capability information and/or ML requirement information.
  • Process 700 may begin in step s702.
• Step s702 comprises a first network node transmitting to a second network node a first message comprising a first information element, IE, comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.
  • the first network node is a first radio access network, RAN, node, and the second network node is a second RAN node.
  • the first message is: an extended NG-RAN NODE CONFIGURATION UPDATE message, an extended NG-RAN NODE CONFIGURATION UPDATE ACK message, an extended XN SETUP REQUEST message, or an extended XN SETUP RESPONSE message.
  • the first network node is a distributed unit, DU, of a base station having a central unit, CU, and the second network node is the CU.
  • the first message is: an extended F1 SETUP REQUEST message, an extended F1 SETUP RESPONSE message, an extended GNB-DU CONFIGURATION UPDATE message, or an extended GNB-DU CONFIGURATION UPDATE ACK message.
  • the first network node is a first radio access network, RAN, node, and the second network node is a core network, CN, node.
  • the first message is: an extended NG SETUP REQUEST message, an extended NG SETUP RESPONSE message, an extended RAN CONFIGURATION UPDATE message, or an extended RAN CONFIGURATION UPDATE ACK message.
  • the process further includes receiving a second message transmitted by the second network node, the second message comprising second ML capability and/or second ML requirement information, wherein the second ML capability information indicates one or more ML capabilities of the second network node, and the second ML requirement information indicates one or more ML requirements of the second network node.
  • the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node transmits the first message to the second network node as a result of determining that the second message comprises ML capability and/or ML requirement information.
  • the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node transmits the first message to the second network node as a result of determining that the second message comprises an ML capability query IE.
  • the first network node is a core network, CN, node
  • the process further comprises, prior to transmitting the first message to second network node, receiving a second message transmitted by a first RAN node, the second message comprises the first ML capability information and/or the first ML requirement information, and the second network node is a second RAN node.
  • the second message further comprises a target RAN node identifier (ID) IE comprising an ID identifying the second RAN node.
• the first IE comprises the first ML capability information and the first ML capability information comprises one or more of: an indication of whether the first network node can support the execution or training of ML models or algorithms, a use case indicator indicating a use case for which the first network node supports ML operations, information indicating a number of UEs/cells for which ML model inference can be produced per unit of time, an indication of whether the first network node can execute an ML model with batch inference, information indicating the number of ML model inferences that can be executed per unit of time, information indicating available ML model outputs, information indicating ML algorithm types that can be supported, information indicating ML model types that can be supported, information indicating the type of activation functions that can be supported for a given ML model type, information indicating a combination of ML model type and an indication of the ML model size that can be supported, information indicating whether the first network node supports retraining, or information indicating allowed lifecycle management operations.
  • the first IE comprises the first ML requirement information and the first ML requirement information comprises one or more of: information indicating a minimum or recommended processing power and/or amount of memory, information indicating a required processing unit type, information indicating a required or recommended number of processing units, information indicating a required memory type and/or memory size, information indicating a frequency of input data needed to carry out ML inference, information indicating security requirements, information indicating user consent requirements, information indicating that use of an ML model depends on another model or algorithm or software module, or information indicating an ML model type and a required ML model size.
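Step s702 above can be sketched as building the first message with a first IE carrying whichever of the two kinds of information the node has to share. The message type string and key names below are hypothetical placeholders.

```python
# Minimal sketch of step s702: the first network node builds a first
# message whose first IE carries its ML capability and/or ML requirement
# information. Structure and names are illustrative.
def build_first_message(ml_capability_info=None, ml_requirement_info=None) -> dict:
    first_ie = {}
    if ml_capability_info is not None:
        first_ie["ml_capability_information"] = ml_capability_info
    if ml_requirement_info is not None:
        first_ie["ml_requirement_information"] = ml_requirement_info
    return {"type": "EXTENDED_XN_SETUP_REQUEST", "first_ie": first_ie}

msg = build_first_message(ml_capability_info={"supports_retraining": True})
print(msg["first_ie"])  # {'ml_capability_information': {'supports_retraining': True}}
```

Depending on the embodiment, the same construction would apply to any of the extended message types listed above (Xn, F1, or NG variants).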
  • FIG. 8 is a block diagram of network node 800, according to some embodiments.
• Network node 800 can be used to implement any of the network nodes described herein, such as, for example, RAN node 401, second network node 402, third network node 403, CRF 506.
• network node 800 may comprise: processing circuitry (PC) 802, which may include one or more processors (P) 855 (e.g., one or more general purpose microprocessors and/or one or more other processors, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like), which processors may be co-located in a single housing or in a single data center or may be geographically distributed (i.e., network node 800 may be a distributed computing apparatus); at least one network interface 848 (e.g., a physical interface or air interface) comprising a transmitter (Tx) 845 and a receiver (Rx) 847 for enabling network node 800 to transmit data to and receive data from other nodes connected to a network 110 (e.g., an Internet Protocol (IP) network) to which network interface 848 is connected (physically or wirelessly) (e.g., network interface 848 may be coupled to an antenna arrangement comprising one or more antennas for enabling network node 800 to wirelessly transmit/receive data); and a storage unit (a.k.a., “data storage system”) 808, which may include one or more non-volatile storage devices and/or one or more volatile storage devices.
  • a computer readable storage medium (CRSM) 842 may be provided.
  • CRSM 842 may store a computer program (CP) 843 comprising computer readable instructions (CRI) 844.
  • CRSM 842 may be a non-transitory computer readable medium, such as, magnetic media (e.g., a hard disk), optical media, memory devices (e.g., random access memory, flash memory), and the like.
  • the CRI 844 of computer program 843 is configured such that when executed by PC 802, the CRI causes network node 800 to perform steps described herein (e.g., steps described herein with reference to the flow charts).
  • network node 800 may be configured to perform steps described herein without the need for code. That is, for example, PC 802 may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software. [00167] While various embodiments are described herein, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
  • transmitting a message to a device encompasses transmitting the message directly to the device or transmitting the message indirectly to the device (i.e., one or more nodes are used to relay the message from the source to the device).
  • receiving a message from a device encompasses receiving the message directly from the device or indirectly from the device (i.e., one or more nodes are used to relay the message from the device to the receiving node).
  • en-gNB A gNB acting as a secondary node in an EN-DC scenario, i.e., in a DC scenario with an eNB as the master node and a gNB as the secondary node.
  • NG The interface between an NG-RAN and a 5GC.
  • RNC Radio Network Controller
  • [00233] RRC Radio Resource Control
  • S1 The interface between the RAN and the CN in LTE.

Abstract

A method for providing machine learning, ML, capability information and/or ML requirement information. The method includes a first network node transmitting to a second network node a first message comprising a first information element, IE, comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.

Description

MACHINE LEARNING CAPABILITY CONFIGURATION IN RADIO ACCESS NETWORK
TECHNICAL FIELD
[001] Disclosed are embodiments related to providing machine learning (ML) capability information and/or ML requirement information to a network node.
BACKGROUND
[002] Overall Architecture of NG-RAN
[003] FIG. 1 illustrates the current 5G RAN (a.k.a., the Next Generation RAN (NG-RAN)) architecture.
The NG-RAN architecture is described in 3GPP Technical Specification (TS) 38.401 V17.0.0 (“TS 38.401”). The NG-RAN consists of a set of base stations (denoted “gNBs”) connected to a 5G core network (5GC) through an NG interface. The gNBs can be interconnected through an Xn interface.
[004] As specified in 3GPP TS 38.300 v17.1.0 ("TS 38.300”), a gNB may consist of a gNB central unit (gNB-CU) and one or more gNB distributed units (gNB-DU(s)). A gNB-CU and a gNB-DU are connected via an F1 interface. One gNB-DU is connected to only one gNB-CU. The NG, Xn and F1 interfaces are logical interfaces. A gNB-CU may comprise a gNB-CU control plane (CP) function (gNB-CU-CP) and a gNB-CU user plane (UP) function (gNB-CU-UP).
[005] Ongoing 3GPP discussion
[006] The 3GPP Technical Report (TR) 37.817 V17.0.0 (“TR 37.817”) has been produced as outcome of the Study Item (SI) "Enhancement for Data Collection for NR and EN-DC” defined in 3GPP Technical Document (Tdoc) No. RP-201620.
[007] The study item aimed to study the functional framework for RAN intelligence enabled by further enhancement of data collection through use cases, examples, etc., and identify the potential standardization impacts on current NG-RAN nodes and interfaces.
[008] TR 37.817 identifies the following high-level principles that should be applied for AI-enabled RAN intelligence:
The detailed AI/ML algorithms and models for use cases are implementation specific and out of RAN3 scope.
The study focuses on AI/ML functionality and corresponding types of inputs/outputs.
The input/output and the location of the model training and Model Inference function should be studied case by case. The study focuses on the analysis of data needed at the model training function from Data Collection, while the aspects of how the model training function uses inputs to train a model are out of RAN3 scope.
The study focuses on the analysis of data needed at the model inference function from Data Collection, while the aspects of how the model inference function uses inputs to derive outputs are out of RAN3 scope.
Where AI/ML functionality resides within the current RAN architecture, depends on deployment and on the specific use cases.
The Model Training and Model Inference functions should be able to request, if needed, specific information to be used to train or execute the AI/ML algorithm and to avoid reception of unnecessary information. The nature of such information depends on the use case and on the AI/ML algorithm.
The Model Inference function should signal the outputs of the model only to nodes that have explicitly requested them (e.g. via subscription), or nodes that take actions based on the output from Model Inference.
An AI/ML model used in a Model Inference function has to be initially trained, validated and tested by the model training function before deployment.
NG-RAN SA is prioritized; EN-DC and MR-DC are down-prioritized, but not precluded from Rel.18.
Functional framework and high-level procedures defined in this TR should not prevent from “thinking beyond" them during normative phase if a use case requires so.
User data privacy and anonymisation should be respected during AI/ML operation.
[009] Radio Access Network (RAN) Intelligence
[0010] FIG. 2 illustrates the Functional Framework for RAN Intelligence. As shown in FIG. 2, the framework includes the following functions: 1) a data collection function; 2) a model training function; 3) a model inference function; and 4) an actor function, or Actor.
[0011] The data collection function provides training data (e.g., a set of training data samples - i.e., one or more training data samples) to the model training function. Training data is data that is used by the model training function to train a model (e.g., a neural network or other model). In Machine Learning (ML) (a.k.a., “Artificial Intelligence (AI)”) parlance, a model (e.g., a neural network) is defined as a functional approximation whose parameters (e.g., neural network weights) are optimized to approximate a mathematical function whose input-output behavior is characterized by a data set (i.e., the training set). In many Reinforcement Learning (RL) systems, the function approximated by the model is the Q-function, which assigns a value to a state-action pair. In turn, the Q-function (and hence the ML model) determines the behavior (or policy) of the RL agent. The data collection function also provides inference data to the model inference function, which uses the inference data to produce an output (a.k.a., an inference).
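To illustrate the last point, the following minimal Python sketch shows a tabular Q-function and the greedy policy it induces; the states, actions, and values are invented for illustration and are not taken from the disclosure:

```python
# A Q-function maps (state, action) pairs to values. Here it is a plain
# table rather than a neural network approximation; the greedy policy it
# induces picks, in each state, the action with the highest Q-value --
# this is the sense in which the Q-function determines the agent's behavior.

Q = {
    ("cell_loaded", "handover"): 0.8,
    ("cell_loaded", "stay"): 0.2,
    ("cell_idle", "handover"): 0.1,
    ("cell_idle", "stay"): 0.9,
}

def greedy_policy(state, actions=("handover", "stay")):
    """Return the action with the highest Q-value for the given state."""
    return max(actions, key=lambda a: Q[(state, a)])
```

In a deep RL setting the table would be replaced by a neural network whose weights are the optimized parameters, but the policy derivation is the same.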
[0012] ML model specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) may also be carried out in the data collection function. Examples of inference and training data may include measurements from user equipments (UEs) or different network entities, feedback from the Actor, and output from the model inference function.
[0013] The model training function performs the ML model training, validation, and testing which may generate model performance metrics as part of the model testing procedure. The model training function is also responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on training data delivered by a data collection function, if required. The model training function deploys a trained, validated and tested model (e.g., a model that parameterizes or approximates at least one of a policy function, a value function and a Q-function in a deep reinforcement learning environment) to the model inference function or delivers an updated model to the model inference function.
[0014] The model inference function provides model inference output (e.g., predictions or decisions). The model inference function may provide model performance feedback to the model training function when applicable. The model inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function, if required. The model inference function may provide model performance feedback information to the model training function, which uses this feedback information for monitoring the performance of the model.
[0015] The actor is a function that receives the output from the model inference function and triggers or performs corresponding actions. The Actor may trigger actions directed to other entities or to itself. The actions may generate feedback information, provided to the data collection function, that may be needed to derive training or inference data.
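The data flow among the four functions of FIG. 2 can be sketched as follows. The function bodies are toy placeholders (a one-parameter linear model fitted by least squares); only the wiring between the functions follows the framework described above, and all names are illustrative:

```python
# Toy wiring of the four functions in the Functional Framework for RAN
# Intelligence: Data Collection feeds Model Training and Model Inference,
# Model Training deploys a model to Model Inference, and the Actor acts
# on the inference output and produces feedback.

def data_collection():
    training_data = [(x, 2 * x) for x in range(5)]   # (input, label) samples
    inference_data = [10, 11]
    return training_data, inference_data

def model_training(training_data):
    # "Train" a one-parameter linear model y = w * x by least squares.
    num = sum(x * y for x, y in training_data)
    den = sum(x * x for x, _ in training_data)
    return num / den                                  # deployed model: w

def model_inference(model, inference_data):
    return [model * x for x in inference_data]        # predictions

def actor(outputs):
    # Trigger actions based on the inference output; return feedback
    # that would flow back to the data collection function.
    return [("applied", o) for o in outputs]

training_data, inference_data = data_collection()
model = model_training(training_data)       # Model Training -> Model Inference
outputs = model_inference(model, inference_data)
feedback = actor(outputs)                   # Actor -> feedback
```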
[0016] Three use cases, along with their potential standards impacts, have been identified for RAN Intelligence: 1) Network Energy Saving, 2) Load Balancing, and 3) Mobility Optimization. These are described in TR 37.817.
[0017] For the Network Energy Saving use case, TR 37.817 states:
The following solutions can be considered for supporting AI/ML-based network energy saving:
1. AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB [5G base station].
2. AI/ML Model Training and AI/ML Model Inference are both located in the gNB. Note: gNB is also allowed to continue model training based on model trained in the OAM. In case of CU-DU split architecture, the following solutions are possible:
3. AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
4. AI/ML Model Training and Model Inference are both located in the gNB-CU.
[0018] For the Mobility Optimization use case, TR 37.817 states:
Considering the locations of AI/ML Model Training and AI/ML Model Inference for mobility solution, the following two options are considered:
1. The AI/ML Model Training function is deployed in OAM, while the model inference function resides within the RAN node.
2. Both the AI/ML Model Training function and the AI/ML Model Inference function reside within the RAN node.
Furthermore, for CU-DU split scenario, following option is possible:
3. AI/ML Model Training is located in CU-CP or OAM and AI/ML Model Inference function is located in CU-CP. Note: gNB is also allowed to continue model training based on model trained in the OAM.
[0019] For the Load Balancing use case, TR 37.817 states:
The following solutions can be considered for supporting AI/ML-based load balancing:
1. AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.
2. AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
In case of CU-DU split architecture, the following solutions are possible:
3. AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
4. AI/ML Model Training and Model Inference are both located in the gNB-CU.
Note: gNB is also allowed to continue model training based on model trained in the OAM.
[0020] 3GPP Technical Document (Tdoc) R3-215244 proposes to introduce a model management function in the Functional Framework for RAN Intelligence, as shown in FIG. 3. Tdoc R3-215244 states:
Model deployment/update should be decided by model management instead of model training. The model management may also host a model repository. The model deployment/update should be performed by model management. Model performance monitoring is a key function to assist and control model inference. The model performance feedback from model inference should be first sent to model management. If the performance is not ideal, the model management may decide to fallback to traditional algorithm or change/update the model.
The model training should be also controlled by model management.
The model management function may be taken by either OAM or CU or other network entities depending on the use cases. Clearly defining a model management function is useful for future signalling design and analysis.
Proposal 1: Introduce a model management function into AI/ML framework [as shown in FIG. 3].
Model management function supports following roles: i) Requesting model training and receiving the model training result; ii) Model deployment/updates for inference; iii) Model performance monitoring, including receiving performance feedback from model inference and taking necessary action, e.g. keep the model, fallback to traditional algorithm, change or update the model; iv) Model storage.
[0021] Training architectures for Reinforcement Learning (RL)
[0022] The main objective of model training is to produce a model (e.g., neural network that parameterizes or approximates at least one of a policy function, a value function and a Q-function) that can generalize to conditions and situations not directly experienced in the training data (i.e., a model that performs well when used with inference data that differs from the training data used in the training process). This process is also known as a training process.
[0023] Recent advances in the field of reinforcement learning (RL) have focused on techniques that could improve the quality of learning, learning efficiency (i.e., how much information can be extracted from given training data) and learning speed. Many of these techniques rely on advanced training architectures that exploit parallel and distributed collection of training data which may comprise a set of training data samples, such as, for example, experience samples (or "experiences” for short), combined with either a centralized or distributed training process. In one scenario, each “rollout worker" (i.e., a function that combines the functionality of the model inference function and the Actor function) receives a model update from a Model Training function. The rollout worker (e.g., an RL agent) uses the received model to interact with an external environment by selecting actions and applying the actions to the environment. In return, the rollout worker can collect experience samples that can be used for further training and improving the model. Typically, an experience sample is a tuple that comprises: i) an observation (e.g., state vector) for time step t (denoted St), ii) an action (At) selected based on St, iii) an observation for time step t+1 (denoted St+1 ), and iv) a reward value Rt based on St and St+1 . Some techniques provide a shared storage memory, also known as “replay buffer” or “experience buffer,” in which the rollout workers store the experience samples (e.g., at each time step, the rollout worker generates and stores an experience in the replay buffer). The Model Trainer function can then filter experiences from the replay buffer to train/update the model (e.g., a new set of weights of a neural network), which is then provided to the distributed rollout workers.
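The experience tuple and shared replay buffer described above can be sketched as follows. The class and function names (ReplayBuffer, rollout_step) and the toy policy/environment are illustrative assumptions, not part of the disclosure:

```python
import random
from collections import deque

class ReplayBuffer:
    """Shared storage memory into which rollout workers store experiences."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are evicted

    def store(self, s_t, a_t, r_t, s_t1):
        # An experience sample: observation S_t, action A_t selected based
        # on S_t, reward R_t, and the next observation S_t+1.
        self.buffer.append((s_t, a_t, r_t, s_t1))

    def sample(self, batch_size):
        # The model training function draws a batch of experiences from
        # the buffer to update the model.
        return random.sample(list(self.buffer), batch_size)

def rollout_step(buffer, s_t, select_action, env_step):
    """One rollout-worker step: select an action using the current model,
    apply it to the environment, and store the resulting experience."""
    a_t = select_action(s_t)
    s_t1, r_t = env_step(s_t, a_t)
    buffer.store(s_t, a_t, r_t, s_t1)
    return s_t1

# Toy policy and environment, standing in for the model and the RAN.
buf = ReplayBuffer(capacity=1000)
state = 0
for _ in range(8):
    state = rollout_step(buf, state,
                         select_action=lambda s: s % 2,
                         env_step=lambda s, a: (s + 1, 1.0))
batch = buf.sample(4)  # training batch of experience tuples
```

With parallel rollout workers, several such loops would write into the same buffer concurrently while the training function samples from it.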
[0024] Parallel and distributed experience sample collection allows the evaluation of multiple versions of a model in parallel and the quick production of a new model. It also allows for improved diversity in the collected information, as different rollout workers can be tasked to test the model against different versions of the environment. This improves the quality of the collected experiences, which in turn enables: producing a model that better generalizes against conditions (e.g., events) unseen during the training process; improving the speed of learning, because updates of the model can be provided more frequently due to the high throughput of the training data generation; and improving learning efficiency (i.e., the improved data diversity provided by parallel and distributed rollout workers enables production of a better model for a given amount of experience samples compared to the case where a single rollout worker is used). Using these techniques in a RAN could achieve a performance that otherwise would not be possible to achieve.
SUMMARY
[0025] Certain challenges presently exist. For instance, in the 3GPP study on ML enabled NG-RAN (a.k.a., “AI enabled NG-RAN” or “AI/ML enabled NG-RAN”), a general functional framework was studied and agreed upon, use cases were studied, and general procedures were outlined; however, the issue of ML capabilities (a.k.a., “AI capabilities” or “AI/ML capabilities”) has not been touched upon. In the more general sense, ML capabilities refer to how network nodes are aware of the ML resources present in the network. The handling of ML capabilities is important for a solution using AI to function properly across a network. This lack of proper ML capabilities handling creates a significant drawback because it is not possible, for example, for network nodes to know and be able to request ML model outputs. Hence, not knowing the ML capabilities of network nodes and the requirements of executing certain ML processes prevents network nodes from subscribing to ML processes and data in a targeted way. Without such a framework, network nodes can only guess whether other network nodes are able to provide certain data or are able to execute certain ML processes. The result is a potential excess of unnecessary signaling, a potentially high number of failed requests coming from those nodes that cannot successfully respond to the requested data/process execution, and a potentially high amount of data transfer consisting of unnecessary information.
[0026] Accordingly, in one aspect there is provided a method for providing machine learning (ML) capability information and/or ML requirement information. The method includes a first network node transmitting to a second network node a first message comprising a first information element, IE, comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.
[0027] In another aspect there is provided a computer program comprising instructions which when executed by processing circuitry of a network node causes the network node to perform any of the methods disclosed herein. In some embodiments, there is provided a carrier containing the computer program wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium. In another aspect there is provided a network node that is configured to perform the methods disclosed herein. The network node may include memory and processing circuitry coupled to the memory.
[0028] An advantage of the embodiments disclosed herein is that they facilitate use of ML and avoid unnecessary network signaling due to interactions between nodes. For example, requests of certain data for ML training purpose towards nodes that cannot provide the requested data (that would naturally fail) can be avoided. Similarly, it is possible for an inference function to avoid recommending that a particular network node take a particular action because the inference function will have knowledge as to the node's capabilities and, therefore, will know a priori whether or not the node can perform the recommended action (there is no point in recommending the node to perform a certain action if you already know that the node is not capable of performing the action).
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] The accompanying drawings, which are incorporated herein and form part of the specification, illustrate various embodiments.
[0030] FIG. 1 illustrates the current 5G RAN (a.k.a., the Next Generation RAN (NG-RAN)) architecture.
[0031] FIG. 2 illustrates a Functional Framework for RAN Intelligence.
[0032] FIG. 3 illustrates the introduction of a model management function in the Functional Framework for RAN Intelligence.
[0033] FIG. 4 is a message flow diagram according to an embodiment.
[0034] FIG. 5 is a message flow diagram according to an embodiment.
[0035] FIG. 6 is a message flow diagram according to an embodiment.
[0036] FIG. 7 is a flowchart illustrating a process according to an embodiment.
[0037] FIG. 8 is a block diagram of a network node according to an embodiment.
DETAILED DESCRIPTION
[0038] Terminology
[0039] As used herein a "network node” can be a node in a radio access network (RAN), a core network (CN) node (e.g., a CN function), an OAM, an SMO, a Network Management System (NMS), a Non-Real Time RAN Intelligent Controller (Non-RT RIC), a Real-Time RAN Intelligent Controller (RT-RIC), a gNB, eNB, en-gNB, ng-eNB, gNB-CU, gNB-CU-CP (gNB-CU control plane (CP)), gNB-CU-UP (gNB-CU user plane (UP)), eNB-CU, eNB-CU-CP, eNB-CU-UP, IAB-node, IAB-donor-DU, IAB-donor-CU, IAB-DU, IAB-MT, O-CU, O-CU-CP, O-CU-UP, O-DU, O-RU, O-eNB, or a user equipment (UE). References to network nodes herein should be understood such that a network node may be a physical node or a function or logical entity of any kind, e.g. a software entity implemented in a data center or a cloud, e.g. using one or more virtual machines, and two network nodes may well be implemented as logical software entities in the same data center or cloud.
[0040] As used herein a “RAN node” is a node or entity in a radio access network (RAN), such as a gNB, eNB, en-gNB, ng-eNB, gNB-CU, gNB-CU-CP, gNB-CU-UP, eNB-CU, eNB-CU-CP, eNB-CU-UP, IAB-node, IAB-donor-DU, IAB-donor-CU, IAB-DU, IAB-MT, O-CU, O-CU-CP, O-CU-UP, O-DU, O-RU, or O-eNB.
[0041] A function, or entity, of a RAN node is to be intended as one of the functional entities comprised in a RAN node. A RAN node in split deployment comprises different functions or entities; for example, a gNB may comprise a gNB-CU, one or more gNB-DUs, and one or more gNB-CU-CPs.
[0042] The terms model training, model optimizing, model optimization, model updating are herein used interchangeably with the same meaning unless explicitly specified otherwise.
[0043] The terms model changing, modify or similar are herein used interchangeably with the same meaning unless explicitly specified otherwise. In particular, they refer to the fact that the type, structure, parameters, connectivity of a model may have changed compared to a previous format/configuration of the model.
[0044] The terms artificial intelligence (AI) and machine learning (ML) are used interchangeably. Hence, the abbreviations AI, ML, and AI/ML are used interchangeably.
[0045] The terms AI model, ML model, AI/ML model, and AIML model are herein used interchangeably with the term model. A model is an application or an algorithm or a function including processes based on ML. For example, it may refer to the ML model itself as well as the support software packages needed for it to run properly; e.g., this may include the software necessary for data preparation. [0046] Data collection refers to a process of collecting data for the purpose of model training, data analytics, and/or inference.
[0047] The embodiments disclosed herein are independent with respect to specific AI/ML model types or learning problems/settings (e.g. supervised learning, unsupervised learning, reinforcement learning, hybrid learning, centralized learning, federated learning, distributed learning, ...). Non-limiting examples of AI/ML algorithms may include supervised learning algorithms, deep learning algorithms, reinforcement learning types of algorithms (such as DQN, A2C, A3C, etc.), contextual multi-armed bandit algorithms, autoregression algorithms, etc., or combinations thereof. Such algorithms may exploit functional approximation models, hereafter referred to as AI/ML models, such as neural networks (e.g. feedforward neural networks, deep neural networks, recurrent neural networks, convolutional neural networks, etc.). Examples of reinforcement learning algorithms may include deep reinforcement learning (such as deep Q-network (DQN), proximal policy optimization (PPO), double Q-learning), actor-critic algorithms (such as advantage actor-critic algorithms, e.g. A2C or A3C, actor-critic with experience replay, etc.), policy gradient algorithms, off-policy learning algorithms, etc.
[0048] The network nodes described herein can be directly connected (i.e., with a signaling connection between them) or indirectly connected (i.e., with a signaling connection traversing intermediate network nodes between them, relaying the signaling information exchanged between them). Additionally, as used herein transmitting a message to an intended recipient encompasses transmitting the message directly to the intended recipient or transmitting the message indirectly to the intended recipient (i.e., one or more other nodes are used to relay the message from the source node to the intended recipient). Likewise, as used herein receiving a message from a sender encompasses receiving the message directly from the sender or indirectly from the sender (i.e., one or more nodes are used to relay the message from the sender to the receiving node).
[0049] The term “CU” is used herein as short for “gNB-CU” (and may also refer to an eNB-CU or an IAB-donor-CU). The term “CU-CP” is used herein as short for “gNB-CU-CP” (and may also refer to an eNB-CU-CP or an IAB-donor-CU-CP). The term “CU-UP” is used herein as short for “gNB-CU-UP” (and may also refer to an eNB-CU-UP or an IAB-donor-CU-UP). The term “DU” is used herein as short for “gNB-DU” (and may also refer to an eNB-DU or an IAB-donor-DU).
[0050] The terms “AI/ML capabilities”, “AI capabilities”, “ML capabilities”, and “AI/ML related capabilities” are used interchangeably. ML capabilities refer to the support that a network node can offer when ML is used: for example, the ability to support certain use cases, to produce certain output (and at which pace), which AI/ML model types are supported, and which AI/ML algorithms are supported.
[0051] The terms “AI/ML related requirements", “AI/ML requirements” are used interchangeably and indicate requirements for executing an AI/ML model by a network node. Requirements refer to attributes and aspects needed for the AI/ML model to function according to design. They include aspects such as “processing power”, “memory consumption”, “minimum hardware”, “recommended hardware”, “frequency of input”, “security requirements”, “user consent requirements.”
[0052] In this disclosure, RAN nodes are often taken as an example. However, the person skilled in the art can easily deduce that the methods apply to all the nodes involved in ML based processes where exchange of ML capabilities is needed. Such nodes may be CN nodes, management systems, UEs, or external systems.
[0053] Introduction
[0054] This disclosure relates to handling of ML capabilities and requirements for executing ML processes. Among other embodiments, this disclosure provides embodiments for signaling capabilities and requirements associated to the use of ML in RAN.
[0055] In some embodiments, ML related capabilities and/or ML related requirements are communicated between nodes interworking with each other (e.g., two RAN nodes).
[0056] In some embodiments the ML related capabilities and/or ML related requirements are signaled between two nodes, e.g., two RAN nodes, via a direct signaling interface (e.g., using the Xn interface). In one example, this is signaled in the Xn Setup procedure; for example, a RAN node can declare its ML capabilities using the XN SETUP REQUEST or XN SETUP RESPONSE messages, e.g., a RAN node declares that it can transmit and/or receive the output of an ML Model inference function for the Energy Saving (ES) use case. In another example, the RAN node signals its ML capabilities using the Xn: NG-RAN node Configuration Update procedure.
[0057] In another embodiment, in case the RAN nodes communicate via a core network, the RAN node signals its ML capabilities via the NG interface. In one example, this is signaled in the NG Setup procedure; for example, a RAN node can declare its ML capabilities using the NG SETUP REQUEST message, e.g., a RAN node declares that it supports transmission and/or reception of output produced by an ML Model inference function for the ES use case. The CN will forward this information to the receiving RAN node. In another example, the RAN node signals its ML related capabilities and/or its ML related requirements using the NG: RAN Configuration Update procedure. Likewise, the CN forwards the information to the receiving RAN node. The CN may forward the information to the receiving RAN node by means of a message such as the NG: RAN CONFIGURATION UPDATE ACKNOWLEDGE message, if subsequently the receiving RAN node initiates an NG: RAN Configuration Update procedure. In yet another example, the AMF may store the RAN node's capabilities and requirements related to ML and signal them to other RAN nodes using the NG: AMF Configuration Update procedure.
[0058] In yet another embodiment, in a disaggregated gNB, the gNB-CU signals its ML related capabilities and/or ML related requirements to the gNB-DU via the F1 interface. In one example, this is signaled at F1 setup. In another example the ML capabilities are signaled using the gNB-DU Configuration Update and/or the gNB-CU Configuration Update procedure.
[0059] In another embodiment, a node hosting the model training function can request a node hosting the model inference function to provide the ML capabilities of the node hosting the inference function. As an example, when an ML model training function resides outside the RAN (e.g., in OAM or SMO), the OAM can request from a RAN node its ML related capabilities and/or ML related requirements. This information can be used by the node hosting the model training function to determine whether a certain ML model can be deployed to the node hosting the model inference function. In one variant, the node hosting the model inference function can inform the node hosting the model training function about its ML related capabilities and/or ML related requirements without any prior request from it.
[0060] In another embodiment, related to a RAN node in disaggregated deployment (e.g., a gNB comprising a gNB-CU and a gNB-DU), when an ML model training function resides in a first entity of a RAN node (e.g., in a gNB-CU), the first entity of the RAN node can request a second entity of the RAN node (e.g., a gNB-DU) to provide ML related capabilities and/or ML related requirements of the second entity of the RAN node. This information can be used by the first entity of the RAN node to determine whether a certain ML model can be deployed to the second entity of the RAN node. In one variant, the second entity of the RAN node can inform the first entity of the RAN node about its ML related capabilities and/or ML related requirements without any prior request from the first entity of the RAN node.
[0061] Additional Details
[0062] 1. Signaling of ML capabilities and ML requirements between RAN nodes
[0063] In some embodiments, illustrated in FIG. 4, two entities involved in ML based processes, e.g. two RAN nodes (RAN node 401 and RAN node 402), exchange ML capability information indicating ML capabilities and ML requirement information indicating ML requirement(s) during the setup of the signaling connection between them (e.g., at setup of the Xn signaling connection). RAN node 401 (a.k.a., “the first RAN node”) will send a message (e.g., an XN SETUP REQUEST message M450) to RAN node 402 (a.k.a., “the second RAN node”) including its ML capabilities (i.e., including ML capability information indicating its ML capabilities) and including its ML requirements (i.e., including ML requirement information indicating its ML requirements).
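One way to picture such a message is as a container carrying an ML capability IE and ML requirement IEs. The Python sketch below uses hypothetical field names chosen for illustration; an actual XnAP message would be ASN.1-encoded per the applicable 3GPP specification, and none of these names are defined by this disclosure:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative (hypothetical) structures for the ML capability and
# ML requirement information carried in a setup message.

@dataclass
class MLCapabilityInfo:
    # Capabilities are associated with the network node itself.
    supported_use_cases: list            # e.g. ["energy_saving", "load_balancing"]
    can_train: bool = False
    can_infer: bool = False
    inferences_per_second: Optional[int] = None

@dataclass
class MLRequirementInfo:
    # Requirements are associated with at least one ML model,
    # e.g. via a model identifier and/or version.
    model_id: str
    model_version: Optional[str] = None
    min_memory_mb: Optional[int] = None
    required_processing_unit: Optional[str] = None

@dataclass
class XnSetupRequest:
    ran_node_id: str
    ml_capabilities: Optional[MLCapabilityInfo] = None
    ml_requirements: list = field(default_factory=list)

# The first RAN node declares what it can do and what its models need.
msg = XnSetupRequest(
    ran_node_id="ran-node-401",
    ml_capabilities=MLCapabilityInfo(
        supported_use_cases=["energy_saving"],
        can_infer=True,
        inferences_per_second=100),
    ml_requirements=[MLRequirementInfo(model_id="es-model-1", min_memory_mb=512)],
)
```

The receiving node can then decide, from these fields alone, whether to subscribe to the sender's model outputs or to deploy a model there, without trial-and-error requests.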
[0064] In some embodiments, ML capabilities are associated to a network node where the ML model is used or intended to be used.
[0065] In some embodiments, ML requirements are associated to at least one ML model. In one example, such association could be realized by means of one or more of: an ML model identifier (ID), an ML model version, a use case identifier, etc.
[0066] Some non-exhaustive examples of the ML capability information include:
a) an indication of whether the network node indicating the ML capabilities can support the execution or training of ML models or algorithms;
b) a use case indicator indicating a use case for which the RAN node supports ML operations. Non-limiting examples of use cases are: Network Energy Saving, Power Saving, Load Balancing Optimization, Mobility Optimization, Link Adaptation, QoS Optimization, QoE Optimization, Coverage and Capacity Optimization, MIMO Layer Optimization, CSI Prediction, Beam Management, Positioning, Channel Coding, Reference Signal Enhancements, Interference Optimization, etc.;
c) information indicating a number of UEs/cells for which ML model inference can be produced per unit of time, wherein a unit of time could be expressed, for instance, in milliseconds, seconds, minutes, hours, etc.;
d) an indication of whether the ML model can be executed with batch inference;
e) information indicating the number of ML model inferences that can be executed per unit of time, wherein a unit of time could be expressed, for instance, in milliseconds, seconds, minutes, hours, etc. In one example, this information can be interpreted as the capacity of batch inference. Such a value could in some examples be associated to specific types of ML model, where examples of ML model types include feedforward neural network, recurrent neural network, convolutional neural network, graph neural network, decision tree, transformer, autoencoder, etc. Such a value could in some examples be associated to the size of the ML model (described in an item below);
f) information indicating available ML model outputs, namely the information produced as part of the model inference process by a specific ML model. Each individual output or group of outputs can be associated with any of the following: i) the specific ML model producing the output, where the ML model may be identified, e.g., by means of an ML Model ID, an ML Model Version, the model developer/vendor, a human-readable name for the model, or any other identification; ii) the ML use case related to the output, e.g., Energy Saving, Load Balancing, Mobility Optimization; iii) the associated ML model performance (e.g., (root) mean square error or other metric for regression models; accuracy, precision, recall, or other metric for classification models; conditional value at risk across fixed-policy rollouts or other metric for reinforcement learning); iv) whether the output is signaled with some uncertainty metric (e.g., a confidence interval in the case of a regression model) and which metric(s);
g) information indicating ML algorithm types and/or types of ML training that can be supported (e.g., supervised learning, unsupervised learning, reinforcement learning, hybrid learning, centralized learning, federated learning, distributed learning);
h) information indicating ML model types that can be supported (e.g., feedforward neural network, recurrent neural network, convolutional neural network, graph neural network, decision tree, transformer, autoencoder, etc.);
i) information indicating types of activation functions that can be supported for a given ML model type (e.g., linear function, sigmoid function, hyperbolic tangent function, rectified linear unit, soft-max, etc.);
j) information indicating a combination of ML model type and an indication of the AI/ML model size that can be supported, wherein the AI/ML model size can be expressed, for instance, in the number of operations required for its execution, in terms of the memory requirements for its storage, etc. In the case of neural networks, the model size can be expressed with one or more of: number of layers, number of units per layer, connectivity degree between layers, number of connections between layers, etc.;
k) information indicating whether the node can support retraining, for instance a flag that indicates whether retraining is allowed in the node;
l) information indicating which Lifecycle Management operations are allowed, and which are not allowed, in the node (e.g., monitoring, testing, ...).
[0067] Some non-exhaustive examples of the ML requirement information include:
a) information indicating the minimum or recommended processing power, memory, or hardware needed by the sending node to run the specific ML model;
b) an indication of the processing unit type (e.g., CPU, GPU) required for executing the AI/ML model;
c) information indicating the number of processing units (cores) needed for executing the ML model;
d) an indication of at least a memory type and/or memory size required for executing the AI/ML model;
e) information indicating the time scale relevant for the supported use cases, namely the frequency of input data needed to carry out ML inference;
f) information indicating security requirements for the specific ML model and/or for the specific ML use case, namely aspects such as whether any data associated with the ML process should be security protected, the type of security algorithm/process used to secure such data, and whether some or all of the data relative to an ML process, transferred to a neighbor node, are allowed to be forwarded further to other nodes or shall be kept internal to the receiving node;
g) information indicating user consent requirements, namely whether the use and/or distribution of some or all of the data relative to an ML process are subject to user consent;
h) an indication that the use/execution of an ML model depends (functionally) on another (more basic) model or algorithm or software module;
i) information indicating an ML model type combined with an indication of the required AI/ML model size, wherein the ML model size can be expressed, for instance, in the number of operations required for its execution, the memory requirements for its storage, etc. In the case of neural networks, the ML model size can be expressed with one or more of: number of layers, number of units per layer, connectivity degree between layers, number of connections between layers, etc.
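One natural use of the capability and requirement information listed above is a deployability check: a node hosting a model training function compares a model's ML requirements against a candidate node's ML capabilities before deciding whether the model can be deployed there. The sketch below is illustrative only; the dictionary keys (model_type, min_memory_mb, etc.) are hypothetical and not 3GPP-defined IE names.

```python
def can_deploy(model_requirements: dict, node_capabilities: dict) -> bool:
    """Return True only if every stated requirement is met by the node.
    All keys are hypothetical, chosen for illustration."""
    # Model type must be among the types the node can execute
    required_type = model_requirements.get("model_type")
    if required_type is not None and \
       required_type not in node_capabilities.get("supported_model_types", []):
        return False
    # Memory needed for storage/execution must fit the node's memory
    if model_requirements.get("min_memory_mb", 0) > node_capabilities.get("memory_mb", 0):
        return False
    # Required processing unit type (e.g., GPU) must be available
    required_pu = model_requirements.get("processing_unit")
    if required_pu is not None and \
       required_pu not in node_capabilities.get("processing_units", []):
        return False
    return True
```

A training host could run such a check against each neighbor's reported capabilities and deploy the model only where it returns True.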
[0068] The node, e.g., the second RAN node, receiving the ML capabilities and/or ML requirements of the first node, e.g., first RAN node, may in turn send its ML capabilities and/or ML requirements to the first node (e.g., within an XN SETUP RESPONSE message M452). After this step, both nodes are aware of the ML capabilities and/or ML requirements of the other node and can support appropriate ML operations.
[0069] In another option, the first node requests the second node to provide the second node's ML capabilities and/or ML requirements, with or without providing the first node's ML capabilities and/or ML requirements itself.
[0070] In another example, either because the ML capabilities of the RAN nodes were not exchanged at establishment of the signaling connection (e.g., at the Xn Setup procedure) or because there were subsequent changes (e.g., after the Xn Setup procedure), each RAN node can inform the other RAN node about its new/updated ML capabilities and/or ML requirements and/or request to obtain the ML capabilities and/or ML requirements of the other RAN node if needed (for instance, if one RAN node wants to use an updated ML model and would like to know if it can receive input and collaborate with the second RAN node). For this purpose, one of the RAN nodes can send to the other RAN node the same type of message as used for the case of establishing the signaling connection, or a different message (e.g., an XnAP NG-RAN NODE CONFIGURATION UPDATE message M454), including the ML capabilities (new or updated) and/or ML requirements (new or updated) of the requesting RAN node, and also a request to obtain the ML capabilities and/or ML requirements of the second RAN node. Correspondingly, the second RAN node receives the first RAN node's ML capabilities and/or ML requirements and sends back its ML capabilities and/or ML requirements in a return acknowledgement message M456.
[0071] Once the ML capabilities and/or ML requirements of a network node are known to a neighbour network node, a number of procedures can be triggered. For example, if a RAN node 1 knows that a neighbor RAN node 2 supports the Energy Saving ML use case, the RAN node 1 may be able to request model inference outputs that are specific to the ML supported ES use case. Similarly, if RAN node 2 knows the ML capabilities and/or ML requirements of RAN node 1, e.g., that RAN node 1 supports a specific Al use case, then RAN node 2 may request input data that are needed to run ML models supporting the use case supported by RAN node 1.
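The matching step described above can be sketched in a few lines: once a node knows which ML use cases a neighbor supports, it can compute which model inference outputs it may sensibly request. This is a hypothetical illustration; the field name use_cases and the use-case strings are assumptions, not 3GPP-defined values.

```python
def requestable_outputs(own_use_cases: list, neighbor_capabilities: dict) -> list:
    """Return the use cases for which model inference outputs may be
    requested from the neighbor (keys and values are hypothetical)."""
    supported = set(neighbor_capabilities.get("use_cases", []))
    # Preserve the order of the requesting node's own use-case list
    return [uc for uc in own_use_cases if uc in supported]
```

For example, a node interested in Energy Saving and Positioning would only request Energy Saving outputs from a neighbor that reports support for the Energy Saving use case alone.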
[0072] 2. Signaling of ML capabilities and ML requirements within a RAN node in split deployment:
[0073] In the case of a disaggregated RAN node (e.g., a gNB comprising a gNB-CU 504 (see FIG. 5) and at least one gNB-DU 502), the ML capabilities and/or ML requirements can be transferred between two entities of the RAN node (e.g., between the gNB-CU and a gNB-DU). In some embodiments, a first entity of the RAN node (e.g., the gNB-CU 504) and a second entity of the RAN node (e.g., the gNB-DU 502) exchange ML capabilities and/or ML requirements during the setup of the signaling connection between them (e.g., at F1 signaling connection setup M550).
[0074] In one example, the second entity (e.g., the gNB-DU) can send a message (e.g., an F1 SETUP REQUEST message M550) to the first entity (e.g., the gNB-CU) including the ML capabilities and/or ML requirements of the second entity. The first entity (gNB-CU) receives the ML capabilities and/or ML requirements of the second entity (gNB-DU) and in turn sends its ML capabilities and/or ML requirements in another message (e.g., an F1 SETUP RESPONSE message M552). After this step both entities are aware of the ML capabilities and/or ML requirements of each other and can support appropriate ML operations.
[0075] In another option, the first entity requests the second entity to provide second entity's ML capabilities and/or ML requirements, with or without providing first entity’s ML capabilities and/or ML requirements itself.
[0076] In another example, either because the ML capabilities and/or ML requirements of the two entities of the RAN node (e.g., gNB-CU and gNB-DU) were not exchanged at the establishment of the signaling connection (e.g., at the F1 setup procedure) or because there were subsequent changes (e.g., after the F1 setup), each one of the entities of the RAN node (e.g., the gNB-CU or the gNB-DU) can inform the other entity of the RAN node about its new/updated ML capabilities and/or ML requirements and/or request to obtain the ML capabilities and/or ML requirements of the other entity. For this purpose, in one example, one of the entities of the RAN node (e.g., the gNB-DU) can send the same type of message as used for the case of establishing the signaling connection, or a different message (e.g., an F1AP GNB-DU CONFIGURATION UPDATE message M554) including the ML capabilities (new or updated) and/or ML requirements (new or updated) of the requesting entity, and also a request to obtain the ML capabilities (new or updated) and/or ML requirements (new or updated) of the second entity (e.g., of the gNB-CU). Correspondingly, the second entity (e.g., the gNB-CU) receives the first entity's (e.g., the gNB-DU's) ML capabilities and/or ML requirements and sends back its ML capabilities and/or ML requirements in a return message (e.g., an F1AP GNB-DU CONFIGURATION UPDATE ACKNOWLEDGE message M556).
[0077] In another example, the gNB-CU can send a GNB-CU CONFIGURATION UPDATE message M558 including its ML capabilities and ML requirements and also requesting the ML capabilities and ML requirements of the gNB-DU. Correspondingly, the gNB-DU receives the gNB-CU's ML capabilities and ML requirements and sends back its ML capabilities and ML requirements in GNB-CU CONFIGURATION UPDATE ACKNOWLEDGE message M560.
[0078] The ML capabilities and/or ML requirements that can be signaled are the same as described for the case of signaling between RAN nodes described above.
[0079] 3. Indirect signaling of ML capabilities and ML requirements between RAN nodes
[0080] If the connection between the RAN node 401 and RAN node 402 takes place via a core network (CN), then the ML capabilities and/or ML requirements are signaled indirectly between the RAN nodes via one or more CN functions (CNFs) (e.g., over the NG interface). In some embodiments, which are illustrated in FIG. 6, RAN node 401 signals its ML capabilities and/or ML requirements, together with details about RAN node 402 (e.g., an ID that identifies the RAN node 402), during the setup of the signaling connection towards CNF 602 (e.g., at NG signaling connection setup). CNF 602 may be an AMF or other CNF.
[0081] For example, RAN node 401 can send an NGAP NG SETUP REQUEST message M650 to the CNF also including its ML capabilities and/or its ML requirements (i.e., including ML capability information and/or ML requirement information). After receiving message M650 containing the ML capabilities and/or ML requirements, the CNF responds with an NG SETUP RESPONSE message M652 and forwards the ML capability information and/or ML requirement information to the RAN node indicated as the recipient in the message received by the CN node, which, in this case, is RAN node 402.
[0082] In another embodiment, if the ML capabilities and/or the ML requirements of a network node are not signaled at the establishment of the signaling connection towards the CN (e.g., during the NG Setup procedure), or because there are subsequent changes after the execution of such procedure, the network node can inform another network node (via CN nodes) about its new/updated ML capabilities and/or ML requirements and/or request to obtain the ML capabilities and/or ML requirements of the other RAN node if needed (e.g., if one RAN node wants to use an ML model and would like to know if it can receive input and collaborate with the second RAN node (via the core network)). For this purpose, one of the RAN nodes can send to the other RAN node (via the CN) a message (e.g., an NGAP RAN CONFIGURATION UPDATE message M654) including its ML capabilities and/or ML requirements and the identity of the recipient RAN node, and also requesting the ML capabilities and/or ML requirements of the second RAN node. The CN node (e.g., the AMF) can also request the ML capabilities and/or the ML requirements from a RAN node, or signal them to other RAN nodes (e.g., by means of an NGAP AMF CONFIGURATION UPDATE message M656).
[0083] 4. 3GPP Technical Specification Implementation examples
[0084] The following examples illustrate possible implementations of the embodiments as additions to the current 3GPP technical specifications (TSs). Two different alternatives are described below.
[0085] In the first alternative, a first node (e.g., RAN node or node implementing a CNF) can send to a second node (e.g., RAN node or a node implementing a CNF) an ML Capabilities Query IE requesting the ML capabilities and/or ML requirements of the second node. The first node can also send an ML Capabilities IE that includes its ML capabilities and/or ML requirements.
[0086] In the second alternative, when the first node sends to the second node a message comprising an ML Capabilities IE that includes the node's ML capabilities and/or ML requirements, the second node treats the message as implicitly requesting the second node to provide to the first node ML capability information and/or ML requirement information indicating its ML capabilities and requirements, respectively, and, hence, responds accordingly (i.e., transmits a message responsive to the message sent by the first node, which responsive message includes the second node's ML capabilities and/or ML requirements).
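The difference between the two alternatives can be sketched as receive-side handling logic: under the first alternative the response is driven by an explicit query IE, while under the second the mere presence of an ML Capabilities IE is treated as an implicit query. The message keys below (ml_capabilities, ml_capabilities_query, sender) are hypothetical, not 3GPP IE names.

```python
def handle_xn_setup_request(msg: dict, own_capabilities: dict, peer_store: dict) -> dict:
    """Illustrative second-node handling of an extended XN SETUP REQUEST."""
    response = {"message": "XN SETUP RESPONSE"}
    if "ml_capabilities" in msg:
        # Second alternative: store the peer's info and reply with our own,
        # treating the received IE as an implicit query
        peer_store[msg["sender"]] = msg["ml_capabilities"]
        response["ml_capabilities"] = own_capabilities
    elif msg.get("ml_capabilities_query"):
        # First alternative: an explicit ML Capabilities Query IE
        response["ml_capabilities"] = own_capabilities
    return response

peer_store: dict = {}
resp = handle_xn_setup_request(
    {"sender": "RAN-401", "ml_capabilities": {"use_cases": ["Energy Saving"]}},
    own_capabilities={"use_cases": ["Positioning"]},
    peer_store=peer_store,
)
```

After the exchange, peer_store holds the first node's capabilities and the response carries the second node's, mirroring the mutual awareness described above.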
[0087] Examples of implementation for both alternatives are presented below.
[0088] XN Setup
[0089] 3GPP TS 38.423 v17.1.0 ("TS 38.423") describes the Xn Setup procedure. The purpose of the Xn Setup procedure is to exchange application level configuration data needed for two NG-RAN nodes to interoperate correctly over the Xn-C interface. An NG-RAN node (hereafter NG-RAN node1) initiates the procedure by sending an XN SETUP REQUEST message to another NG-RAN node (hereafter NG-RAN node2). The NG-RAN node2 replies with the XN SETUP RESPONSE message.
[0090] In some embodiments, the XN SETUP REQUEST message defined in TS 38.423 is extended such that it may include an ML Capabilities Query IE and/or an ML Capabilities IE; similarly, the XN SETUP RESPONSE message defined in TS 38.423 is extended such that it may include the ML Capabilities IE. The ML Capabilities IE contains ML capability information indicating ML capabilities and/or ML requirement information indicating ML requirements (e.g., ML capabilities/requirements of the node that originated the message containing the IE).
[0091] In some embodiments, if the ML Capabilities Query IE is contained in the extended XN SETUP REQUEST message, then the NG-RAN node2 shall, if supported, include an ML Capabilities IE in an extended XN SETUP RESPONSE message. In some embodiments, if the ML Capabilities IE is contained in the extended XN SETUP REQUEST message, then the NG-RAN node2 shall, if supported, store this information and use it as defined in TS 38.300. Similarly, if the ML Capabilities IE is contained in the extended XN SETUP RESPONSE message, the NG-RAN node1 shall, if supported, store this information and use it as defined in TS 38.300.
[0092] In some embodiments, if the ML Capabilities IE is contained in the extended XN SETUP REQUEST message, then the NG-RAN node2 shall, if supported, include an ML Capabilities IE in an extended XN SETUP RESPONSE message.
[0093] In some embodiments, the XN SETUP REQUEST message and the XN SETUP RESPONSE message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[0094] Table 1 below illustrates an example of the extended XN SETUP REQUEST message; and table 2 below illustrates an example of the extended XN SETUP RESPONSE message.
TABLE 1 - XN Setup Request
(The table content is reproduced as an image in the original publication.)
TABLE 2 - XN Setup Response
(The table content is reproduced as an image in the original publication.)
[0095] RAN Node Configuration Update
[0096] TS 38.423 also describes the NG-RAN node Configuration Update procedure. The purpose of the NG-RAN node Configuration Update procedure is to update application-level configuration data needed for two NG-RAN nodes to interoperate correctly over the Xn-C interface. The NG-RAN node1 initiates the procedure by sending the NG-RAN NODE CONFIGURATION UPDATE message to NG-RAN node2. NG-RAN node2 responds with the NG-RAN NODE CONFIGURATION UPDATE ACK message.
[0097] In some embodiments, the NG-RAN NODE CONFIGURATION UPDATE message defined in TS 38.423 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the NG-RAN NODE CONFIGURATION UPDATE ACK message defined in TS 38.423 is extended such that it may include the ML Capabilities IE.
[0098] In some embodiments, if the ML Capabilities Query IE is contained in the NG-RAN NODE CONFIGURATION UPDATE message, then the NG-RAN node2 shall, if supported, include the ML Capabilities IE in the NG-RAN NODE CONFIGURATION UPDATE ACK message. In some embodiments, if the ML Capabilities IE is contained in the NG-RAN NODE CONFIGURATION UPDATE message, the NG-RAN node2 shall, if supported, store this information and use it as defined in TS 38.300. Similarly, if the ML Capabilities IE is contained in the NG-RAN NODE CONFIGURATION UPDATE ACK message, the NG-RAN node1 shall, if supported, store this information and use it as defined in TS 38.300.
[0099] In some embodiments, if the ML Capabilities IE is contained in the NG-RAN NODE CONFIGURATION UPDATE message, then the NG-RAN node2 shall, if supported, include the ML Capabilities IE in the NG-RAN NODE CONFIGURATION UPDATE ACK message.
[00100] In some embodiments, the NG-RAN NODE CONFIGURATION UPDATE message and the NG-RAN NODE CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00101] Table 3 below illustrates an example of the extended NG-RAN NODE CONFIGURATION UPDATE message, and table 4 below illustrates an example of the extended NG-RAN NODE CONFIGURATION UPDATE ACK message.
TABLE 3 - NG-RAN Node Configuration Update
(The table content is reproduced as an image in the original publication.)
TABLE 4 - NG-RAN Node Configuration Update Ack
(The table content is reproduced as an image in the original publication.)
[00103] F1 Setup
[00104] 3GPP TS 38.473 v17.1.0 ("TS 38.473") describes the F1 Setup procedure. The purpose of the F1 Setup procedure is to exchange application level data needed for the gNB-DU and the gNB-CU to correctly interoperate on the F1 interface. This procedure is the first F1AP procedure triggered for the F1-C interface instance after a TNL association has become operational. The gNB-DU initiates the procedure by sending an F1 SETUP REQUEST message including the appropriate data to the gNB-CU. The gNB-CU responds with an F1 SETUP RESPONSE message including the appropriate data.
[00105] In some embodiments, the F1 SETUP REQUEST message defined in TS 38.473 is extended such that it may include an ML Capabilities Query IE and/or an ML Capabilities IE; similarly, the F1 SETUP RESPONSE message defined in TS 38.473 is extended such that it may include the ML Capabilities IE.
[00106] In some embodiments, if the ML Capabilities Query IE is contained in the extended F1 SETUP REQUEST message, then the gNB-CU shall, if supported, include an ML Capabilities IE in an extended F1 SETUP RESPONSE message. In some embodiments, if the ML Capabilities IE is contained in the extended F1 SETUP REQUEST message, the gNB-CU shall, if supported, store this information and use it as defined in TS 38.401. Similarly, if the ML Capabilities IE is contained in the extended F1 SETUP RESPONSE message, the gNB-DU shall, if supported, store this information and use it as defined in TS 38.401.
[00107] In some embodiments, if the ML Capabilities IE is contained in the extended F1 SETUP REQUEST message, then the gNB-CU shall, if supported, include an ML Capabilities IE in an extended F1 SETUP RESPONSE message.
[00108] In some embodiments, the F1 SETUP REQUEST message and the F1 SETUP RESPONSE message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00109] Table 5 below illustrates an example of the extended F1 SETUP REQUEST message, and table 6 below illustrates an example of the extended F1 SETUP RESPONSE message.
TABLE 5 - F1 Setup Request
(The table content is reproduced as an image in the original publication.)
TABLE 6 - F1 Setup Response
(The table content is reproduced as an image in the original publication.)
[00110] gNB-DU Configuration Update
[00111] TS 38.473 also describes the gNB-DU Configuration Update procedure. The purpose of the gNB-DU Configuration Update procedure is to update application level configuration data needed for the gNB-DU and the gNB-CU to interoperate correctly on the F1 interface.
[00112] The gNB-DU initiates the procedure by sending a GNB-DU CONFIGURATION UPDATE message to the gNB-CU including an appropriate set of updated configuration data that it has just taken into operational use. The gNB-CU responds with GNB-DU CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. If an information element is not included in the GNB-DU CONFIGURATION UPDATE message, the gNB-CU shall interpret that the corresponding configuration data is not changed and shall continue to operate the F1-C interface with the existing related configuration data.
[00113] In some embodiments, the GNB-DU CONFIGURATION UPDATE message defined in TS 38.473 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the GNB-DU CONFIGURATION UPDATE ACK message defined in TS 38.473 is extended such that it may include the ML Capabilities IE.
[00114] In some embodiments, if the ML Capabilities Query IE is contained in the GNB-DU CONFIGURATION UPDATE message, then the gNB-CU shall, if supported, include the ML Capabilities IE in the GNB-DU CONFIGURATION UPDATE ACK message. In some embodiments, if the ML Capabilities IE is contained in the GNB-DU CONFIGURATION UPDATE message, the gNB-CU shall, if supported, store this information and use it as defined in TS 38.401. Likewise, if the ML Capabilities IE is contained in the GNB-DU CONFIGURATION UPDATE ACK message, the gNB-DU shall, if supported, store this information and use it as defined in TS 38.401.
[00115] In some embodiments, if the ML Capabilities IE is contained in the GNB-DU CONFIGURATION UPDATE message, then the gNB-CU shall, if supported, include the ML Capabilities IE in the GNB-DU CONFIGURATION UPDATE ACK message.
[00116] In some embodiments, the GNB-DU CONFIGURATION UPDATE message and the GNB-DU CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00117] Table 7 below illustrates an example of the extended GNB-DU CONFIGURATION UPDATE message; and table 8 below illustrates an example of the extended GNB-DU CONFIGURATION UPDATE ACK message.
TABLE 7 - gNB-DU Configuration Update
(The table content is reproduced as an image in the original publication.)
TABLE 8 - gNB-DU Configuration Update Ack
(The table content is reproduced as an image in the original publication.)
[00118] gNB-CU Configuration Update
[00119] TS 38.473 also describes the gNB-CU Configuration Update procedure. The purpose of the gNB-CU Configuration Update procedure is to update application level configuration data needed for the gNB-DU and gNB-CU to interoperate correctly on the F1 interface.
[00120] The gNB-CU initiates the procedure by sending a GNB-CU CONFIGURATION UPDATE message including the appropriate updated configuration data to the gNB-DU. The gNB-DU responds with a GNB-CU CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. If an information element is not included in the GNB-CU CONFIGURATION UPDATE message, the gNB-DU shall interpret that the corresponding configuration data is not changed and shall continue to operate the F1-C interface with the existing related configuration data. The updated configuration data shall be stored in the respective node and used as long as there is an operational TNL association or until any further update is performed.
[00121] In some embodiments, the GNB-CU CONFIGURATION UPDATE message defined in TS 38.473 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the GNB-CU CONFIGURATION UPDATE ACK message defined in TS 38.473 is extended such that it may include the ML Capabilities IE.
[00122] In some embodiments, if the ML Capabilities Query IE is contained in the GNB-CU CONFIGURATION UPDATE message, then the gNB-DU shall, if supported, include the ML Capabilities IE in the GNB-CU CONFIGURATION UPDATE ACK message. In some embodiments, if the ML Capabilities IE is contained in the GNB-CU CONFIGURATION UPDATE message, the gNB-DU shall, if supported, store this information and use it as defined in TS 38.401. Similarly, if the ML Capabilities IE is contained in the GNB-CU CONFIGURATION UPDATE ACK message, the gNB-CU shall, if supported, store this information and use it as defined in TS 38.401.
[00123] In some embodiments, if the ML Capabilities IE is contained in the GNB-CU CONFIGURATION UPDATE message, then the gNB-DU shall, if supported, include the ML Capabilities IE in the GNB-CU CONFIGURATION UPDATE ACK message.
[00124] In some embodiments, the GNB-CU CONFIGURATION UPDATE message and the GNB-CU CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00125] Table 9 below illustrates an example of the extended GNB-CU CONFIGURATION UPDATE message; and table 10 below illustrates an example of the extended GNB-CU CONFIGURATION UPDATE ACK message.
TABLE 9 - gNB-CU Configuration Update
(The table content is reproduced as an image in the original publication.)
TABLE 10 - gNB-CU Configuration Update Ack
(The table content is reproduced as an image in the original publication.)
[00126] NG Setup
[00127] 3GPP TS 38.413 V17.1.1 ("TS 38.413") describes the NG Setup procedure. The purpose of the NG Setup procedure is to exchange application level data needed for the NG-RAN node and the AMF to correctly interoperate on the NG-C interface. This procedure shall be the first NGAP procedure triggered after the TNL association has become operational. The procedure uses non-UE associated signalling. This procedure erases any existing application level configuration data in the two nodes, replaces it by the one received, and clears AMF overload state information at the NG-RAN node. If the NG-RAN node and AMF do not agree on retaining the UE contexts, this procedure also re-initialises the NGAP UE-related contexts (if any) and erases all related signalling connections in the two nodes like an NG Reset procedure would do.
[00128] The NG-RAN node initiates the procedure by sending an NG SETUP REQUEST message including the appropriate data to the AMF. The AMF responds with an NG SETUP RESPONSE message including the appropriate data.
[00129] In some embodiments, the NG SETUP REQUEST message defined in TS 38.413 is extended such that it may include an ML Capabilities Query IE and/or an ML Capabilities IE; similarly, the NG SETUP RESPONSE message defined in TS 38.413 is extended such that it may include the ML Capabilities IE.
[00130] In some embodiments, if the ML Capabilities Query IE is contained in the extended NG SETUP REQUEST message, then the AMF shall, if supported, include an ML Capabilities IE in an extended NG SETUP RESPONSE message. In some embodiments, if the ML Capabilities IE is contained in the extended NG SETUP REQUEST message, the AMF shall, if supported, store this information and use it as needed. Likewise, if the ML Capabilities IE is contained in the extended NG SETUP RESPONSE message, the NG-RAN node shall, if supported, store this information and use it as needed.
[00131] In some embodiments, if the ML Capabilities IE is contained in the extended NG SETUP REQUEST message, then the AMF shall, if supported, include an ML Capabilities IE in an extended NG SETUP RESPONSE message.
[00132] In some embodiments, the NG SETUP REQUEST message and the NG SETUP RESPONSE message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00133] In some embodiments, the NG SETUP REQUEST message defined in TS 38.413 is extended further such that it may include a Target RAN Node ID IE containing information identifying a target RAN node.
[00134] In some embodiments, if the ML Capabilities IE and/or ML Requirements IE is contained in the extended NG SETUP REQUEST message and the Target RAN Node ID IE is also contained in the message, then the AMF shall, if supported, transmit to the identified target RAN node a message comprising the content of the ML Capabilities IE and/or the content of the ML Requirements IE.
[00135] In some embodiments, if the ML Capabilities IE is contained in the extended NG SETUP REQUEST message and the Target RAN Node ID IE is also contained in the message, then the AMF shall, if supported, respond to a NG SETUP REQUEST message transmitted by the identified target RAN node by sending to the target RAN node an NG SETUP RESPONSE message containing the content of the ML Capabilities IE and/or ML Requirements IE included in the first mentioned NG SETUP REQUEST.
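The AMF-side relay behavior described above can be sketched as hold-and-deliver logic: ML information addressed to a target RAN node is stored and then returned when that node performs its own NG Setup. This is an illustrative sketch only; the message keys (sender, target_ran_node_id, ml_capabilities, ml_requirements) are hypothetical, not NGAP IE names.

```python
def amf_handle_ng_setup(msg: dict, pending: dict) -> dict:
    """Illustrative AMF handling of an extended NG SETUP REQUEST."""
    # If the request names a target RAN node, hold its ML information for delivery
    target = msg.get("target_ran_node_id")
    if target:
        pending[target] = {k: msg[k]
                           for k in ("ml_capabilities", "ml_requirements") if k in msg}
    # Build the response; deliver anything previously addressed to this sender
    response = {"message": "NG SETUP RESPONSE"}
    response.update(pending.pop(msg["sender"], {}))
    return response

pending: dict = {}
# RAN node 401 announces capabilities addressed to RAN node 402
first = amf_handle_ng_setup(
    {"sender": "RAN-401", "target_ran_node_id": "RAN-402",
     "ml_capabilities": {"use_cases": ["Energy Saving"]}}, pending)
# When RAN node 402 later performs NG Setup, the stored info is delivered
second = amf_handle_ng_setup({"sender": "RAN-402"}, pending)
```

The first response carries no ML information (nothing was addressed to node 401), while the second response delivers node 401's capabilities to node 402.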
[00136] Table 11 below illustrates an example of the extended NG SETUP REQUEST message; and table 12 below illustrates an example of the extended NG SETUP RESPONSE message.
TABLE 11 - NG Setup Request
TABLE 12 - NG Setup Response
[00137] RAN Config Update
[00138] TS 38.413 also describes a RAN Configuration Update procedure. The purpose of the RAN Configuration Update procedure is to update application level configuration data needed for the NG-RAN node and the AMF to interoperate correctly on the NG-C interface. This procedure does not affect existing UE-related contexts, if any. The procedure uses non UE-associated signalling.
[00139] The NG-RAN node initiates the RAN configuration update procedure by sending a RAN CONFIGURATION UPDATE message to the AMF including an appropriate set of updated configuration data that it has just taken into operational use. The AMF responds with a RAN CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. If an information element is not included in the RAN CONFIGURATION UPDATE message, the AMF shall interpret that the corresponding configuration data is not changed and shall continue to operate the NG-C interface with the existing related configuration data.
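The "absent IE means unchanged" semantics of the RAN Configuration Update procedure can be illustrated with a minimal sketch. The representation of configuration data as a dictionary, with `None` standing for an IE that is not included in the update message, is an assumption made for illustration only.

```python
# Sketch of the update semantics described above: an IE absent from the
# RAN CONFIGURATION UPDATE message (modelled as None here) leaves the
# stored configuration unchanged. Names are illustrative only.

def apply_ran_config_update(stored: dict, update: dict) -> dict:
    """Merge an update into the stored configuration; absent (None) IEs
    keep their existing values. Returns a new merged configuration."""
    merged = dict(stored)
    for ie, value in update.items():
        if value is not None:
            merged[ie] = value
    return merged
```

Under this sketch, an update carrying only a changed tracking area code would leave a previously signalled ML Capabilities IE in force at the AMF.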
[00140] In some embodiments, the RAN CONFIGURATION UPDATE message defined in TS 38.413 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the RAN CONFIGURATION UPDATE ACK message defined in TS 38.413 is extended such that it may include the ML Capabilities IE.
[00141] In some embodiments, if the ML Capabilities Query IE is contained in the RAN CONFIGURATION UPDATE message, then the AMF shall, if supported, include the ML Capabilities IE in the RAN CONFIGURATION UPDATE ACK message. In some embodiments, if the ML Capabilities IE is contained in the RAN CONFIGURATION UPDATE message, the AMF shall, if supported, store this information and use it as needed. Similarly, if the ML Capabilities IE is contained in the RAN CONFIGURATION UPDATE ACK message, the RAN node shall, if supported, store this information and use it as appropriate.
[00142] In some embodiments, if the ML Capabilities IE is contained in the RAN CONFIGURATION UPDATE message, then the AMF shall, if supported, include the ML Capabilities IE in the RAN CONFIGURATION UPDATE ACK message.
[00143] In some embodiments, the RAN CONFIGURATION UPDATE message and the RAN CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00144] In some embodiments, the RAN CONFIGURATION UPDATE message defined in TS 38.413 is extended further such that it may include a Target RAN Node ID IE containing information identifying a target RAN node.

[00145] In some embodiments, if the ML Capabilities IE and/or ML Requirements IE is contained in the extended RAN CONFIGURATION UPDATE message and the Target RAN Node ID IE is also contained in the message, then the AMF shall, if supported, transmit to the identified target RAN node a message comprising the content of the ML Capabilities IE and/or the content of the ML Requirements IE.

[00146] In some embodiments, if the ML Capabilities IE is contained in the extended RAN CONFIGURATION UPDATE message and the Target RAN Node ID IE is also contained in the message, then the AMF shall, if supported, respond to a RAN CONFIGURATION UPDATE message transmitted by the identified target RAN node by sending to the target RAN node a RAN CONFIGURATION UPDATE ACK message containing the content of the ML Capabilities IE and/or ML Requirements IE included in the first-mentioned RAN CONFIGURATION UPDATE message.
[00147] Table 13 below illustrates an example of the extended RAN CONFIGURATION UPDATE message; and table 14 below illustrates an example of the extended RAN CONFIGURATION UPDATE ACK message.
TABLE 13 - RAN Config Update
TABLE 14 - RAN Config Update Ack
[00148] AMF Configuration Update

[00149] TS 38.413 also describes an AMF Configuration Update procedure. The purpose of the AMF Configuration Update procedure is to update application level configuration data needed for the NG-RAN node and AMF to interoperate correctly on the NG-C interface. This procedure does not affect existing UE-related contexts, if any. The procedure uses non UE-associated signalling.

[00150] The AMF initiates the AMF Configuration Update procedure by sending an AMF CONFIGURATION UPDATE message including the appropriate updated configuration data to the NG-RAN node. The NG-RAN node responds with an AMF CONFIGURATION UPDATE ACKNOWLEDGE message to acknowledge that it successfully updated the configuration data. Unless stated otherwise, if an information element is not included in the AMF CONFIGURATION UPDATE message, the NG-RAN node shall interpret that the corresponding configuration data is not changed and shall continue to operate the NG-C interface with the existing related configuration data.
[00151] In some embodiments, the AMF CONFIGURATION UPDATE message defined in TS 38.413 is extended such that it may include an ML Capabilities Query IE and/or ML Capabilities IE; similarly, the AMF CONFIGURATION UPDATE ACK message defined in TS 38.413 is extended such that it may include the ML Capabilities IE.
[00152] In some embodiments, if the ML Capabilities Query IE is contained in the AMF CONFIGURATION UPDATE message, then the NG-RAN node shall, if supported, include the ML Capabilities IE in the AMF CONFIGURATION UPDATE ACK message. In some embodiments, if the ML Capabilities IE is contained in the AMF CONFIGURATION UPDATE message, the NG-RAN node shall, if supported, store this information and use it as needed. Similarly, if the ML Capabilities IE is contained in the AMF CONFIGURATION UPDATE ACK message, the AMF shall, if supported, store this information and use it as appropriate.
[00153] In some embodiments, if the ML Capabilities IE is contained in the AMF CONFIGURATION UPDATE message, then the NG-RAN node shall, if supported, include the ML Capabilities IE in the AMF CONFIGURATION UPDATE ACK message.
[00154] In some embodiments, the AMF CONFIGURATION UPDATE message and the AMF CONFIGURATION UPDATE ACK message are further extended to include an ML Requirements IE for containing ML Requirement information indicating ML requirements of the node that transmitted the message.
[00155] Table 15 below illustrates an example of the extended AMF CONFIGURATION UPDATE message; and table 16 below illustrates an example of the extended AMF CONFIGURATION UPDATE ACK message.
TABLE 15 - AMF Configuration Update
TABLE 16 - AMF Configuration Update Ack
[00156] FIG. 7 is a flow chart illustrating a process 700, according to an embodiment, for providing ML capability information and/or ML requirement information. Process 700 may begin in step s702. Step s702 comprises a first network node transmitting to a second network node a first message comprising a first information element, IE, comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.
[00157] In some embodiments, the first network node is a first radio access network, RAN, node, and the second network node is a second RAN node. In some embodiments, the first message is: an extended NG-RAN NODE CONFIGURATION UPDATE message, an extended NG-RAN NODE CONFIGURATION UPDATE ACK message, an extended XN SETUP REQUEST message, or an extended XN SETUP RESPONSE message.
[00158] In some embodiments, the first network node is a distributed unit, DU, of a base station having a central unit, CU, and the second network node is the CU. In some embodiments, the first message is: an extended F1 SETUP REQUEST message, an extended F1 SETUP RESPONSE message, an extended GNB-DU CONFIGURATION UPDATE message, or an extended GNB-DU CONFIGURATION UPDATE ACK message.

[00159] In some embodiments, the first network node is a first radio access network, RAN, node, and the second network node is a core network, CN, node. In some embodiments, the first message is: an extended NG SETUP REQUEST message, an extended NG SETUP RESPONSE message, an extended RAN CONFIGURATION UPDATE message, or an extended RAN CONFIGURATION UPDATE ACK message.
[00160] In some embodiments the process further includes receiving a second message transmitted by the second network node, the second message comprising second ML capability and/or second ML requirement information, wherein the second ML capability information indicates one or more ML capabilities of the second network node, and the second ML requirement information indicates one or more ML requirements of the second network node.
[00161] In some embodiments, the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node transmits the first message to the second network node as a result of determining that the second message comprises ML capability and/or ML requirement information.
[00162] In some embodiments, the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node transmits the first message to the second network node as a result of determining that the second message comprises an ML capability query IE.
[00163] In some embodiments, the first network node is a core network, CN, node, the process further comprises, prior to transmitting the first message to the second network node, receiving a second message transmitted by a first RAN node, the second message comprises the first ML capability information and/or the first ML requirement information, and the second network node is a second RAN node. In some embodiments, the second message further comprises a target RAN node identifier (ID) IE comprising an ID identifying the second RAN node.
[00164] In some embodiments, the first IE comprises the first ML capability information and the first ML capability information comprises one or more of: an indication of whether the first network node can support the execution or training of ML models or algorithms, a use case indicator indicating a use case for which the first network node supports ML operations, information indicating a number of UEs/cells for which ML model inference can be produced per unit of time, an indication of whether the first network node can execute an ML model with batch inference, information indicating the number of ML model inferences that can be executed per unit of time, information indicating available ML model outputs, information indicating ML algorithm types that can be supported, information indicating ML model types that can be supported, information indicating types of activation functions that can be supported for a given ML model type, information indicating a combination of ML model type and an indication of the ML model size that can be supported, information indicating whether the first network node supports retraining, or information indicating allowed lifecycle management operations.
[00165] In some embodiments, the first IE comprises the first ML requirement information and the first ML requirement information comprises one or more of: information indicating a minimum or recommended processing power and/or amount of memory, information indicating a required processing unit type, information indicating a required or recommended number of processing units, information indicating a required memory type and/or memory size, information indicating a frequency of input data needed to carry out ML inference, information indicating security requirements, information indicating user consent requirements, information indicating that use of an ML model depends on another model or algorithm or software module, or information indicating an ML model type and a required ML model size.
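The information items enumerated in paragraphs [00164]-[00165] can be grouped as two optional-field records, as in the sketch below. The field names and the choice of representative fields are hypothetical; every field is optional, matching the "one or more of" phrasing of the embodiments.

```python
# Illustrative grouping of the ML capability and requirement information
# enumerated above; field names are hypothetical and every field is
# optional, matching the "one or more of" phrasing.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MLCapabilityInfo:
    supports_training: Optional[bool] = None       # model execution/training support
    use_case: Optional[str] = None                 # use case supporting ML operations
    inferences_per_second: Optional[int] = None    # inference rate per unit of time
    supports_batch_inference: Optional[bool] = None
    supported_model_types: Optional[List[str]] = None
    supports_retraining: Optional[bool] = None


@dataclass
class MLRequirementInfo:
    min_memory_mb: Optional[int] = None            # minimum/recommended memory
    processing_unit_type: Optional[str] = None     # required processing unit type
    input_data_frequency_hz: Optional[float] = None  # frequency of input data
    depends_on_model: Optional[str] = None         # dependency on another model/module
```

A node would populate only the fields it supports or requires, leaving the rest absent, which mirrors how optional IEs are carried in the extended messages above.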
[00166] FIG. 8 is a block diagram of network node 800, according to some embodiments. Network node 800 can be used to implement any of the network nodes described herein, such as, for example, RAN node 401, second network node 402, third network node 403, CRF 506. As shown in FIG. 8, network node 800 may comprise: processing circuitry (PC) 802, which may include one or more processors (P) 855 (e.g., one or more general purpose microprocessors and/or one or more other processors, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like), which processors may be co-located in a single housing or in a single data center or may be geographically distributed (i.e., network node 800 may be a distributed computing apparatus); at least one network interface 848 (e.g., a physical interface or air interface) comprising a transmitter (Tx) 845 and a receiver (Rx) 847 for enabling network node 800 to transmit data to and receive data from other nodes connected to a network 110 (e.g., an Internet Protocol (IP) network) to which network interface 848 is connected (physically or wirelessly) (e.g., network interface 848 may be coupled to an antenna arrangement comprising one or more antennas for enabling network node 800 to wirelessly transmit/receive data); and a storage unit (a.k.a., "data storage system") 808, which may include one or more non-volatile storage devices and/or one or more volatile storage devices. In embodiments where PC 802 includes a programmable processor, a computer readable storage medium (CRSM) 842 may be provided. CRSM 842 may store a computer program (CP) 843 comprising computer readable instructions (CRI) 844. CRSM 842 may be a non-transitory computer readable medium, such as, magnetic media (e.g., a hard disk), optical media, memory devices (e.g., random access memory, flash memory), and the like.
In some embodiments, the CRI 844 of computer program 843 is configured such that when executed by PC 802, the CRI causes network node 800 to perform steps described herein (e.g., steps described herein with reference to the flow charts). In other embodiments, network node 800 may be configured to perform steps described herein without the need for code. That is, for example, PC 802 may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software.

[00167] While various embodiments are described herein, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
[00168] Additionally, as used herein, transmitting a message to a device encompasses transmitting the message directly to the device or transmitting the message indirectly to the device (i.e., one or more nodes are used to relay the message from the source to the device). Likewise, as used herein, receiving a message from a device encompasses receiving the message directly from the device or indirectly from the device (i.e., one or more nodes are used to relay the message from the device to the receiving node).
[00169] Also, while the processes described above and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, and some steps may be performed in parallel.
[00170] Abbreviations:
[00171] 3GPP 3rd Generation Partnership Project
[00172] 5G 5th Generation
[00173] 5GC 5G Core network
[00174] 5GS 5th Generation System
[00175] AMF Access and Mobility Management Function
[00176] ASN.1 Abstract Syntax Notation One
[00177] AT Attention
[00178] AR Augmented Reality
[00179] AS Access Stratum
[00180] CGI Cell Global Identity
[00181] CN Core Network
[00182] CP Control Plane
[00183] CU Central Unit
[00184] CU-CP Central Unit Control Plane
[00185] CU-UP Central Unit User Plane
[00186] DU Distributed Unit
[00187] DASH Dynamic Adaptive Streaming over HTTP
[00188] DC Dual Connectivity
[00189] DL Downlink
[00190] DNS Domain Name System
[00191] DU Distributed Unit
[00192] E-CGI E-UTRAN CGI
[00193] eNB Evolved Node B / E-UTRAN Node B
[00194] en-gNB A gNB acting as a secondary node in an EN-DC scenario (i.e., in a DC scenario with an eNB as the master node and a gNB as the secondary node).
[00195] EN E-UTRAN-NR
[00196] EPC Evolved Packet Core
[00197] EPS Evolved Packet System
[00198] E-UTRA Evolved UTRA
[00199] E-UTRAN/EUTRAN Evolved UTRAN
[00200] gNB Radio base station in NR
[00201] HSS Home Subscriber Server
[00202] HTTP Hypertext Transfer Protocol
[00203] IAB Integrated Access and Backhaul
[00204] ID Identifier/Identity
[00205] IE Information Element
[00206] LTE Long Term Evolution
[00207] MAC Medium Access Control
[00208] MCC Mobile Country Code
[00209] MCE Measurement Collection Entity / Measurement Collector Entity
[00210] MDT Minimization of Drive Tests
[00211] MME Mobility Management Entity
[00212] MNC Mobile Network Code
[00213] MTSI Multimedia Telephony Service for IMS
[00214] N3IWF Non-3GPP Interworking Function
[00215] NG Next Generation
[00216] NG The interface between an NG-RAN and a 5GC.
[00217] NGAP NG Application Protocol
[00218] NG-RAN NG Radio Access Network
[00219] NID Network identifier
[00220] NR New Radio
[00221] NWDAF Network Data Analytics Function
[00222] O&M Operation and Maintenance
[00223] OAM Operation and Maintenance
[00224] PDCP Packet Data Convergence Protocol
[00225] PDU Protocol Data Unit
[00226] PLMN Public Land Mobile Network
[00227] QMC QoE Measurement Collection
[00228] QoE Quality of Experience
[00229] RAN Radio Access Network
[00230] RAT Radio Access Technology
[00231] RLC Radio Link Control
[00232] RNC Radio Network Controller
[00233] RRC Radio Resource Control
[00234] RVQoE RAN Visible QoE
[00235] S1 The interface between the RAN and the CN in LTE.
[00236] S1AP S1 Application Protocol
[00237] S-NSSAI Single Network Slice Selection Assistance Information
[00238] SMO Service Management and Orchestration
[00239] SRB Signaling Radio Bearer
[00240] TA Tracking Area
[00241] TCE Trace Collection Entity / Trace Collector Entity
[00242] TNGF Trusted Non-3GPP Gateway Function
[00243] TWIF Trusted WLAN Interworking Function
[00244] UDM Unified Data Management
[00245] UE User Equipment
[00246] UMTS Universal Mobile Telecommunication System
[00247] URI Uniform Resource Identifier
[00248] URL Uniform Resource Locator
[00249] UTRA Universal Terrestrial Radio Access
[00250] UTRAN Universal Terrestrial Radio Access Network
[00251] WLAN Wireless Local Area Network
[00252] Xn The interface between two gNBs in NR.
[00253] XnAP Xn Application Protocol

Claims

1. A method (700) for providing machine learning, ML, capability information and/or ML requirement information, the method comprising: a first network node transmitting (s702) to a second network node a first message comprising a first information element, IE, comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.
2. The method of claim 1, wherein the first network node is a first radio access network, RAN, node, and the second network node is a second RAN node.
3. The method of claim 2, wherein the first message is: an extended NG-RAN NODE CONFIGURATION UPDATE message, an extended NG-RAN NODE CONFIGURATION UPDATE ACK message, an extended XN SETUP REQUEST message, or an extended XN SETUP RESPONSE message.
4. The method of claim 1, wherein the first network node is a distributed unit, DU, of a base station having a central unit, CU, and the second network node is the CU.
5. The method of claim 4, wherein the first message is: an extended F1 SETUP REQUEST message, an extended F1 SETUP RESPONSE message, an extended GNB-DU CONFIGURATION UPDATE message, or an extended GNB-DU CONFIGURATION UPDATE ACK message.
6. The method of claim 1, wherein the first network node is a first radio access network, RAN, node, and the second network node is a core network, CN, node.
7. The method of claim 6, wherein the first message is: an extended NG SETUP REQUEST message, an extended NG SETUP RESPONSE message, an extended RAN CONFIGURATION UPDATE message, or an extended RAN CONFIGURATION UPDATE ACK message.
8. The method of any one of claims 1-7, further comprising: receiving a second message transmitted by the second network node, the second message comprising second ML capability and/or second ML requirement information, wherein the second ML capability information indicates one or more ML capabilities of the second network node, and the second ML requirement information indicates one or more ML requirements of the second network node.
9. The method of claim 8, wherein the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node transmits the first message to the second network node as a result of determining that the second message comprises ML capability and/or ML requirement information.
10. The method of claim 8, wherein the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node transmits the first message to the second network node as a result of determining that the second message comprises an ML capability query IE.
11. The method of claim 1, wherein the first network node is a core network, CN, node, the method further comprises, prior to transmitting the first message to the second network node, receiving a second message transmitted by a first RAN node, the second message comprises the first ML capability information and/or the first ML requirement information, and the second network node is a second RAN node.
12. The method of claim 11, wherein the second message further comprises a target RAN node identifier (ID) IE comprising an ID identifying the second RAN node.
13. The method of any one of claims 1-12, wherein the first IE comprises the first ML capability information and the first ML capability information comprises one or more of: an indication of whether the first network node can support the execution or training of ML models or algorithms, a use case indicator indicating a use case for which the first network node supports ML operations, information indicating a number of UEs/cells for which ML model inference can be produced per unit of time, an indication of whether the first network node can execute an ML model with batch inference, information indicating the number of ML model inferences that can be executed per unit of time, information indicating available ML model outputs, information indicating ML algorithm types that can be supported, information indicating ML model types that can be supported, information indicating types of activation functions that can be supported for a given ML model type, information indicating a combination of ML model type and an indication of the ML model size that can be supported, information indicating whether the first network node supports retraining, or information indicating allowed lifecycle management operations.
14. The method of any one of claims 1-13, wherein the first IE comprises the first ML requirement information and the first ML requirement information comprises one or more of: information indicating a minimum or recommended processing power and/or amount of memory, information indicating a required processing unit type, information indicating a required or recommended number of processing units, information indicating a required memory type and/or memory size, information indicating a frequency of input data needed to carry out ML inference, information indicating security requirements, information indicating user consent requirements, information indicating that use of an ML model depends on another model or algorithm or software module, or information indicating an ML model type and a required ML model size.
15. A computer program (843) comprising instructions (844) which when executed by processing circuitry (802) of a first network node (800) causes the first network node to perform the method of any one of claims 1-14.
16. A carrier containing the computer program of claim 15, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium (842).
17. A first network node for providing machine learning, ML, capability information and/or ML requirement information, the first network node being configured to: transmit to a second network node a first message comprising a first information element, IE, comprising first ML capability information and/or first ML requirement information, wherein the first ML capability information indicates one or more ML capabilities of the first network node, and the first ML requirement information indicates one or more ML requirements of the first network node.
18. The first network node of claim 17, wherein the first network node is a first radio access network, RAN, node, and the second network node is a second RAN node.
19. The first network node of claim 18, wherein the first message is: an extended NG-RAN NODE CONFIGURATION UPDATE message, an extended NG-RAN NODE CONFIGURATION UPDATE ACK message, an extended XN SETUP REQUEST message, or an extended XN SETUP RESPONSE message.
20. The first network node of claim 17, wherein the first network node is a distributed unit, DU, of a base station having a central unit, CU, and the second network node is the CU.
21. The first network node of claim 20, wherein the first message is: an extended F1 SETUP REQUEST message, an extended F1 SETUP RESPONSE message, an extended GNB-DU CONFIGURATION UPDATE message, or an extended GNB-DU CONFIGURATION UPDATE ACK message.
22. The first network node of claim 17, wherein the first network node is a first radio access network, RAN, node, and the second network node is a core network, CN, node.
23. The first network node of claim 22, wherein the first message is: an extended NG SETUP REQUEST message, an extended NG SETUP RESPONSE message, an extended RAN CONFIGURATION UPDATE message, or an extended RAN CONFIGURATION UPDATE ACK message.
24. The first network node of any one of claims 17-23, wherein the first network node comprises a receiver for receiving a second message transmitted by the second network node, the second message comprising second ML capability and/or second ML requirement information, wherein the second ML capability information indicates one or more ML capabilities of the second network node, and the second ML requirement information indicates one or more ML requirements of the second network node.
25. The first network node of claim 24, wherein the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node is configured to transmit the first message to the second network node as a result of determining that the second message comprises ML capability and/or ML requirement information.
26. The first network node of claim 24, wherein the second message is received from the second network node before the first network node transmits the first message to the second network node, and the first network node is configured to transmit the first message to the second network node as a result of determining that the second message comprises an ML capability query IE.
27. The first network node of claim 17, wherein the first network node is a core network, CN, node, the first network node comprises a receiver for receiving a second message transmitted by a first RAN node, the second message comprises the first ML capability information and/or the first ML requirement information, and the second network node is a second RAN node.
28. The first network node of claim 27, wherein the second message further comprises a target RAN node identifier (ID) IE comprising an ID identifying the second RAN node.
29. The first network node of any one of claims 17-28, wherein the first IE comprises the first ML capability information and the first ML capability information comprises one or more of: an indication of whether the first network node can support the execution or training of ML models or algorithms, a use case indicator indicating a use case for which the first network node supports ML operations, information indicating a number of UEs/cells for which ML model inference can be produced per unit of time, an indication of whether the first network node can execute an ML model with batch inference, information indicating the number of ML model inferences that can be executed per unit of time, information indicating available ML model outputs, information indicating ML algorithm types that can be supported, information indicating ML model types that can be supported, information indicating types of activation functions that can be supported for a given ML model type, information indicating a combination of ML model type and an indication of the ML model size that can be supported, information indicating whether the first network node supports retraining, or information indicating allowed lifecycle management operations.
30. The first network node of any one of claims 17-29, wherein the first IE comprises the first ML requirement information and the first ML requirement information comprises one or more of: information indicating a minimum or recommended processing power and/or amount of memory, information indicating a required processing unit type, information indicating a required or recommended number of processing units, information indicating a required memory type and/or memory size, information indicating a frequency of input data needed to carry out ML inference, information indicating security requirements, information indicating user consent requirements, information indicating that use of an ML model depends on another model or algorithm or software module, or information indicating an ML model type and a required ML model size.
PCT/EP2023/070714 2022-08-05 2023-07-26 Machine learning capability configuration in radio access network WO2024028183A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GR20220100646 2022-08-05
GR20220100646 2022-08-05

Publications (1)

Publication Number Publication Date
WO2024028183A1 true WO2024028183A1 (en) 2024-02-08

Family

ID=87554761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/070714 WO2024028183A1 (en) 2022-08-05 2023-07-26 Machine learning capability configuration in radio access network

Country Status (1)

Country Link
WO (1) WO2024028183A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220012645A1 (en) * 2021-09-23 2022-01-13 Dawei Ying Federated learning in o-ran
US20220225126A1 (en) * 2021-01-13 2022-07-14 Samsung Electronics Co., Ltd. Data processing method and device in wireless communication network


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
3GPP TECHNICAL REPORT (TR) 37.817
3GPP TECHNICAL SPECIFICATION (TS) 38.401
3GPP TS 38.300
3GPP TS 38.413
3GPP TS 38.423
CMCC: "Revised SID: Study on enhancement for data collection for NR and ENDC", vol. TSG RAN, no. Electronic Meeting; 20200914 - 20200918, 7 September 2020 (2020-09-07), XP052340447, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/TSG_RAN/TSGR_89e/Docs/RP-201620.zip RP-201620 was RP-201304 Revised SID Study on enhancement for data collection for NR and ENDC-rm.docx> [retrieved on 20200907] *
ERICSSON: "BLCR to TS38.423: Support for AI/ML in NG-RAN", vol. RAN WG3, no. Online; 20220815 - 20220824, 8 August 2022 (2022-08-08), XP052264657, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG3_Iu/TSGR3_117-e/Docs/R3-224491.zip R3-224491 - BLCR to 38423 - Support for AI-ML.docx> [retrieved on 20220808] *

Similar Documents

Publication Publication Date Title
CN109891832B (en) Network slice discovery and selection
EP4099635A1 (en) Method and device for selecting service in wireless communication system
EP3934291A1 (en) Method and device for providing connectivity to terminal in order to use edge computing service
WO2018183789A1 (en) Interworking lpwan end nodes in mobile operator network
WO2018232253A1 (en) Network exposure function
CN113179539A (en) Network slice selection in cellular systems
WO2021047781A1 (en) Apparatus for radio access network data collection
US20230239680A1 (en) Method and device for supporting mobility for collecting and analyzing network data in wireless communication network
US20230147409A1 (en) Apparatus and method for network automation in wireless communication system
US20230412513A1 (en) Providing distributed ai models in communication networks and related nodes/devices
WO2020217224A1 (en) Amf and scp behavior in delegated discovery of pcf
US20220086257A1 (en) Flexible data analytics processing and exposure in 5gc
US11805022B2 (en) Method and device for providing network analytics information in wireless communication network
WO2024028183A1 (en) Machine learning capability configuration in radio access network
KR20210144535A (en) Method and apparatus to support mobility for network data collection and analysis function in radio communication networks
WO2024028370A1 (en) Deployment and update of machine learning models in a radio access network
US20230209452A1 (en) Method and apparatus for supporting mobility for collection and analysis of network data in wireless communication network
US20230135667A1 (en) Method and apparatus for providing network slice in wireless communication system
US20230116405A1 (en) Method and device for session breakout of home routed session in visited plmn in wireless communication system
WO2023287808A1 (en) Beamforming for multiple-input multiple-output (mimo) modes in open radio access network (o-ran) systems
WO2023014896A1 (en) User equipment trajectory-assisted handover
WO2022100869A1 (en) Technique for observability of operational data in a radio telecommunications system
WO2024035295A1 (en) Radio access network (ran) visible quality of experience (rvqoe) measurement configuration originating from the distributed unit (du) or control unit-user plane (cu-up)
WO2024036268A1 (en) Support of data transmission measurement action guarantee for data delivery service
WO2023126468A1 (en) Systems and methods for inter-node verification of aiml models

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23750581

Country of ref document: EP

Kind code of ref document: A1