WO2023088593A1 - RAN optimization with the help of a decentralized graph neural network - Google Patents

RAN optimization with the help of a decentralized graph neural network

Info

Publication number
WO2023088593A1
Authority
WO
WIPO (PCT)
Prior art keywords
network node
radio network
gnn
ran
data
Prior art date
Application number
PCT/EP2022/076035
Other languages
French (fr)
Inventor
Wenfeng HU
Konstantinos Vandikas
Adriano MENDO MATEO
Martin Isaksson
Erik SANDERS
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Publication of WO2023088593A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/02: Arrangements for optimising operational condition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/042: Knowledge-based neural networks; Logical representations of neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/0455: Auto-encoder networks; Encoder-decoder networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/09: Supervised learning

Definitions

  • Embodiments herein relate to a first radio network node, a second radio network node, a central network node, and methods performed therein for communication networks. Furthermore, a computer program product and a computer readable storage medium are also provided herein. In particular, embodiments herein relate to handling data, for example, for radio optimization in a communication network.
  • In a typical communication network, wireless devices, also known as mobile stations, stations (STA) and/or user equipments (UE), communicate via a radio access network (RAN) with one or more core networks (CN).
  • the RAN covers a geographical area which is divided into service areas or cell areas, with each service area or cell area being served by a radio network node such as an access node e.g. a Wi-Fi access point or a radio base station (RBS), which in some radio access technologies (RAT) may also be called, for example, a NodeB, an evolved NodeB (eNB) and a gNodeB (gNB).
  • the service area or cell area is a geographical area where radio coverage is provided by the radio network node.
  • the radio network node operates on radio frequencies to communicate over an air interface with the wireless devices within range of the access node.
  • the radio network node communicates over a downlink (DL) to the wireless device and the wireless device communicates over an uplink (UL) to the access node.
  • One way of learning is to use machine learning (ML) algorithms to improve accuracy.
  • Computational graph models such as ML models, e.g., deep learning models or neural network models, are currently used in different applications and are based on different technologies.
  • a computational graph model is a directed graph model where nodes correspond to operations or variables. Variables can feed their value into operations, and operations can feed their output into other operations. This way, every node in the graph model defines a function of the variables.
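  • As a purely illustrative aid, and not part of the application, the following minimal Python sketch shows such a directed computational graph, in which variable nodes feed values into operation nodes and every node thereby defines a function of the variables:

        # Illustrative only: variables feed values into operations, and
        # operations feed their output into other operations, so every
        # node defines a function of the variables.
        class Var:
            def __init__(self, value):
                self.value = value
            def eval(self):
                return self.value

        class Add:
            def __init__(self, a, b):
                self.a, self.b = a, b
            def eval(self):
                return self.a.eval() + self.b.eval()

        class Mul:
            def __init__(self, a, b):
                self.a, self.b = a, b
            def eval(self):
                return self.a.eval() * self.b.eval()

        x, y = Var(2.0), Var(3.0)
        f = Mul(Add(x, y), y)  # the node f computes (x + y) * y
        print(f.eval())        # 15.0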
  • Training of these computational graph models is typically an offline process, meaning that it usually happens in datacenters and the execution of these computational graph models may be done anywhere from an edge of the communication network also called network edge, e.g., in devices, gateways or radio access infrastructure, to centralized clouds, e.g., data centers.
  • Graph neural networks (GNN) have received significant attention in academic and industrial artificial intelligence (AI) research.
  • A GNN is a type of neural network designed to solve an analytic task on large-scale data expressed in the form of a graph structure.
  • A GNN can embed a node’s own features together with its neighbours’ features into a compact representation that can be used for downstream machine learning tasks such as supervised classification or regression problems.
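  • By way of a hedged illustration only, a minimal Python sketch of this idea, embedding a node’s features together with mean-aggregated neighbour features through placeholder weights (all names and shapes here are assumptions, not the application’s model):

        import numpy as np

        # Illustrative sketch: embed a node's own features together with its
        # neighbours' features into one compact vector via mean aggregation
        # and a placeholder linear projection (W_self, W_neigh are assumed
        # weights, not parameters from the application).
        rng = np.random.default_rng(0)
        dim_in, dim_out = 8, 4
        W_self = rng.normal(size=(dim_in, dim_out))
        W_neigh = rng.normal(size=(dim_in, dim_out))

        def embed(x_v, neighbour_feats):
            agg = neighbour_feats.mean(axis=0)     # aggregate over N(v)
            return np.tanh(x_v @ W_self + agg @ W_neigh)

        x_v = rng.normal(size=dim_in)              # node's own features
        neighbours = rng.normal(size=(3, dim_in))  # three neighbours' features
        h_v = embed(x_v, neighbours)               # compact representation of v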
  • RAN optimization is an important task to improve end user experience.
  • GNN techniques can analyze RAN behavior by constructing input features from data sources like performance management (PM) counters and configuration management (CM) parameters, as well as more granular data such as cell/UE trace records (CTR), from radio network nodes.
  • a centralized ML training solution is one where data sources, such as CTR, CM and PM, are collected in a centralized place, for example, an operations support system (OSS), for every RAN node.
  • The data sources are extracted to a GNN training environment, and features are then extracted from them for the needs of downstream GNN training tasks.
  • Fig. 1 illustrates such a system, showing a data pipeline for RAN optimization with centralized GNN training.
  • CTR may include sensitive data such as the RAN UE identity (ID) and ueTraceID.
  • Fig. 2 shows a data pipeline for RAN optimization in a decentralized GNN training.
  • In a decentralized solution, as shown in Fig. 2, feature extraction is done locally within each eNB/gNB, and GNN training is also implemented in the same nodes, scheduled by a training orchestrator.
  • Such localized computation has two main advantages. Firstly, the data privacy concern is mitigated since no data leaves these nodes. Secondly, and for the same reason, the transport-layer payload is reduced, since an eNB/gNB only needs to exchange (the gradients of) GNN parameters with the training orchestrator instead of heavy raw data files.
  • However, in such a decentralized solution, each source cell is not aware of the status of its neighbouring cells, which is essential for RAN optimization use cases.
  • the first radio network node obtains, from a second radio network node, a matrix indication of a second local computation associated with a GNN for predicting characteristics of the RAN, wherein the matrix indication is obtained over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is obtained over an external interface when the first and second network node are separated neighbouring radio network nodes.
  • the first radio network node further executes a first local computation associated with the GNN for predicting the characteristics of the RAN based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient.
  • the first radio network node sends an indication of the gradient to a central network node training the GNN for predicting the characteristics of the RAN.
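  • The three actions above can be summarized, purely as a hedged sketch with toy stand-ins for the real model and transport (local_step, send_to_central and the linear model are illustrative assumptions, not the application’s method):

        import numpy as np

        # Toy stand-in for the first radio network node's actions: use the
        # neighbour's matrix indication (message) plus local features in a
        # local computation, derive a gradient from the loss, and report an
        # indication of it to the central node.
        def local_step(params, local_features, neighbour_message, target):
            h = np.concatenate([local_features, neighbour_message])
            pred = h @ params                  # forward pass of the toy model
            loss = (pred - target) ** 2        # local loss
            grad = 2.0 * (pred - target) * h   # partial derivative w.r.t. params
            return grad

        params = np.zeros(6)
        grad = local_step(params,
                          local_features=np.array([0.5, -0.2, 0.1]),
                          neighbour_message=np.array([0.3, 0.0, -0.4]),
                          target=1.0)
        # send_to_central(grad)  # hypothetical transport to the central node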
  • a second radio network node for handling data of a RAN in a communication network.
  • the second radio network node receives from a central network node, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN.
  • the second radio network node executes a second local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain a matrix indication.
  • The second radio network node provides, to a first radio network node, the matrix indication of the second local computation associated with the GNN, wherein the matrix indication is provided over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is provided over an external interface when the first and second network node are separated neighbouring radio network nodes.
  • It is herein also provided a method performed by a central network node for handling data of a RAN in a communication network. The central network node broadcasts to radio network nodes, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN.
  • the central network node receives indications of gradients from a first radio network node and a second radio network node, wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node.
  • the central network node trains the GNN for predicting the characteristics of the RAN using the received indications of gradients.
  • a computer program product comprising instructions, which, when executed on at least one processor, cause the at least one processor to carry out the method above, as performed by the radio network nodes and the central network node, respectively.
  • a computer-readable storage medium having stored thereon a computer program product comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method above, as performed by the radio network nodes and the central network node, respectively.
  • a first radio network node for handling data of a RAN in a communication network.
  • the first radio network node is configured to obtain, from a second network node, a matrix indication of a second local computation associated with a GNN for predicting characteristics of the RAN, wherein the first radio network node is configured to obtain the matrix indication over an internal interface when the first and second network node are comprised in a same logical radio network node, or to obtain the matrix indication over an external interface when the first and second network node are separated neighbouring radio network nodes.
  • the first radio network node is further configured to execute a first local computation based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient.
  • the first radio network node is configured to send an indication of the gradient to a central network node training the GNN for predicting the characteristics of the RAN.
  • a second radio network node for handling data of a RAN in a communication network.
  • the second radio network node is configured to receive from a central network node, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN.
  • the second radio network node is further configured to execute a second local computation, associated with the GNN for predicting characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain a matrix indication.
  • the second radio network node is configured to provide, to a first radio network node, the matrix indication of the second local computation associated with the GNN, wherein the second radio network node is configured to provide the matrix indication over an internal interface when the first and second network node are comprised in a same logical radio network node, or to provide the matrix indication over an external interface when the first and second network node are separated neighbouring radio network nodes.
  • a central network node for handling data of a RAN in a communication network.
  • the central network node is configured to broadcast to radio network nodes, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN.
  • the central network node is configured to receive indications of gradients from a first radio network node and a second radio network node, wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node.
  • the central network node is further configured to train the GNN for predicting the characteristics of the RAN using the received indications of gradients.
  • Embodiments herein propose a decentralized GNN training method, for example, for RAN optimization use cases taking the prediction of the characteristics into account, where data extraction and model training are done within radio network nodes, and matrix indication(s) are exchanged between different radio network nodes.
  • Fig. 1 shows a system for ML training according to the prior art;
  • Fig. 2 shows RAN optimization in a decentralized ML model training;
  • FIG. 3 is a schematic overview depicting a communication network according to embodiments herein;
  • Fig. 4 is a flowchart depicting a method performed by a first radio network node according to embodiments herein;
  • Fig. 5 is a flowchart depicting a method performed by a second radio network node according to embodiments herein;
  • Fig. 6 is a flowchart depicting a method performed by a central network node according to embodiments herein;
  • Fig. 7 is a combined flowchart and signaling scheme according to embodiments herein;
  • Fig. 8 is a schematic overview depicting a communication network according to embodiments herein;
  • Fig. 9 is a schematic overview depicting a subgraph of cells or nodes according to embodiments herein;
  • Fig. 10 is a block diagram depicting embodiments of the first radio network node according to embodiments herein;
  • Fig. 11 is a block diagram depicting embodiments of the second radio network node according to embodiments herein;
  • Fig. 12 is a block diagram depicting embodiments of the central network node according to embodiments herein;
  • Fig. 13 schematically illustrates a telecommunication network connected via an intermediate network to a host computer
  • Fig. 14 is a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection; and Figs. 15-18 are flowcharts illustrating methods implemented in a communication system including a host computer, a base station and a user equipment.
  • Embodiments herein relate to communication networks in general.
  • Fig. 3 is a schematic overview depicting a communication network 1.
  • the communication network 1 may be any kind of communication network such as a wired communication network or a wireless communication network comprising e.g. a radio access network (RAN) and a core network (CN).
  • the wireless communications network 1 may use one or a number of different technologies, such as Wi-Fi, Long Term Evolution (LTE), LTE-Advanced, Fifth Generation (5G), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile communications/enhanced Data rate for GSM Evolution (GSM/EDGE), Worldwide Interoperability for Microwave Access (WiMax), or Ultra Mobile Broadband (UMB), just to mention a few possible implementations.
  • In the communication network 1, wireless devices, e.g. a UE 10 such as a mobile station, a non-access point (non-AP) station (STA), a user equipment and/or a wireless terminal, communicate via one or more Access Networks (AN), e.g. RAN, to one or more core networks (CN).
  • UE is a non-limiting term which means any terminal, wireless communication terminal, user equipment, Machine Type Communication (MTC) device, Device to Device (D2D) terminal, IoT operable device, or node, e.g. smart phone, laptop, mobile phone, sensor, relay, mobile tablet or even a small base station capable of communicating using radio communication with a network node within an area served by the network node.
  • the communication network 1 comprises a first radio network node 11 providing e.g. radio coverage over a geographical area, a service area, or a first cell, of a radio access technology (RAT), such as NR, LTE, Wi-Fi, WiMAX or similar.
  • The first radio network node 11 may be a transmission and reception point, a computational server, a database, a server communicating with other servers, a server in a server park, a base station, e.g. a network node such as a satellite, a Wireless Local Area Network (WLAN) access point or an Access Point Station (AP STA), an access node, an access controller, a radio base station such as a NodeB, an evolved Node B (eNB, eNodeB), a gNodeB (gNB), a base transceiver station, a baseband unit, an Access Point Base Station, a base station router, a transmission arrangement of a radio base station, a stand-alone access point or any other network unit or node depending e.g. on the radio access technology and terminology used.
  • the first radio network node 11 may be referred to as a serving network node wherein the service area may be referred to as a serving cell or primary cell, and the serving network node communicates with the UE 10 in form of DL transmissions to the UE 10 and UL transmissions from the UE 10.
  • the communication network 1 comprises a second radio network node 12 providing, e.g., radio coverage over a geographical area, a second service area or second cell, of a radio access technology (RAT), such as NR, LTE, Wi-Fi, WiMAX or similar.
  • The second radio network node 12 may be a transmission and reception point, a computational server, a database, a server communicating with other servers, a server in a server park, a base station, e.g. a network node such as a satellite, a Wireless Local Area Network (WLAN) access point or an Access Point Station (AP STA), an access node, an access controller, a radio base station such as a NodeB, an evolved Node B (eNB, eNodeB), a gNodeB (gNB), a base transceiver station, a baseband unit, an Access Point Base Station, a base station router, a transmission arrangement of a radio base station, a stand-alone access point or any other network unit or node depending e.g. on the radio access technology and terminology used.
  • the second radio network node 12 may be referred to as a neighbouring node.
  • the first and second network nodes may be part of a same logical node, or different nodes.
  • the first radio network node may alternatively be denoted as first radio network function and the second radio network node may be denoted as second radio network function.
  • the communication network 1 comprises a central network node 13 for handling data from all the nodes in the communication network.
  • the central network node may be a computational server, a database, a server communicating with other servers, a server in a server park, or similar.
  • the central network node 13 comprises a GNN for predicting characteristics of the RAN. Similar to the radio network nodes, the central network node 13 may alternatively be denoted as central network function.
  • Embodiments herein concern GNN training, being a machine learning (ML) model.
  • the training is performed in a decentralized manner and the first radio network node 11 comprises a first (local) computation related to the GNN and the second radio network node 12 comprises a second (local) computation related to the GNN.
  • the respective computation is using GNN parameters.
  • the first radio network node 11 receives a matrix indication from the second radio network node 12, wherein the matrix indication is received over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is received over an external interface when the first and second network node are separated neighbouring radio network nodes.
  • Neighbouring radio network nodes are radio network nodes controlling cells with a neighbour relationship, also referred to as neighbour cells.
  • A neighbour cell is a cell for which there exists a cell relation, for example proximity and/or frequency, with another cell.
  • One or more cell relations may be set up manually or using a feature such as Automatic Neighbour Relations (ANR).
  • In GNN theory, there is a graph G with a vertex set V and an edge set E.
  • The neighbours of a vertex v, denoted N(v), is the set {u in V : (u, v) in E}, i.e., the vertices connected to v by an edge.
  • A neighbouring radio network node may be a first-hop neighbour; to extend this to more hops, there are communities (subsets of vertices that are densely connected to each other) and clusters.
  • the matrix indication may, for example, be a message passing vector obtained at the second radio network node 12 when executing the second local computation.
  • The first radio network node 11 then executes the first local computation based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient.
  • the first radio network node 11 then sends an indication of the gradient to the central network node 13.
  • the indication may be a gradient or be an average gradient of multiple gradients.
  • the second radio network node 12 performs a similar process.
  • the central network node 13 receives indications of gradients, for example, averaged gradients, from the first radio network node 11 and the second radio network node 12.
  • the gradients are from computations, related to the GNN, executed locally at respective radio network node.
  • the central network node 13 trains the GNN, for example, the GNN at the central network node 13 using the received indications of gradients, and may, when the training is complete, send the trained GNN to a model registry.
  • embodiments herein enable training of the GNN in a more accurate and/or efficient manner.
  • the central network node 13 may comprise a central training agent to orchestrate synchronized decentralized GNN training for RAN optimization.
  • The central network node 13 may prepare mini-batch training samples by sampling data of sub-graphs of the topology of radio network nodes and cells in the communication network. For the mini-batch of training samples, the central network node 13 may then broadcast the most updated GNN parameters to the first and second radio network nodes.
  • The first radio network node 11 and the second radio network node 12 may then compute a message passing vector using, e.g., the updated parameters and local data, and send the respective passing vector to one another.
  • the respective radio network node may transmit the passing vector over the internal interface when two radio network nodes have intra-site cell relations, and over the external interface such as an X2 or Xn interface when two radio network nodes have inter-site cell relations.
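  • A minimal sketch of this interface selection, assuming illustrative stand-in transport callbacks (send_internal and send_x2_xn are not names from the application):

        # Illustrative routing of a message passing vector: intra-site cell
        # relations use the internal interface (e.g. within the same logical
        # node), inter-site relations use an external X2/Xn interface.
        def send_passing_vector(vector, src_site, dst_site,
                                send_internal, send_x2_xn):
            if src_site == dst_site:          # same logical radio network node
                send_internal(vector)
            else:                             # separated neighbouring nodes
                send_x2_xn(vector, dst_site)

        sent = []
        send_passing_vector([0.1, 0.2], "eNB1", "eNB2",
                            send_internal=lambda v: sent.append(("internal", v)),
                            send_x2_xn=lambda v, d: sent.append(("x2/xn", d, v)))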
  • the respective radio network node computes the gradient of cells within respective radio network node and sends the gradient to the central network node 13.
  • The central network node 13 receives gradients from local agents and performs a GNN update.
  • the central network node 13 may commit the GNN model to a model registry, for example, a GNN registry.
  • The model registry is the place to store ML models, for example in a Docker image format, so that an ML model may be instantiated for inference purposes.
  • Embodiments herein propose a decentralized GNN training method for, e.g., RAN optimization use cases, where feature extraction and model training are done within radio network nodes, and interfaces are used to exchange information between different radio network nodes during a message passing phase.
  • embodiments herein enable RAN optimization and thereby that operations of the wireless communication network may be improved in an efficient manner.
  • Message passing over external and internal interfaces enables the decentralized training agent to become aware of the surrounding radio network environment and traffic status. Consequently, better GNNs can be trained in terms of prediction performance and generalization capability.
  • The decentralized training method in the RAN mitigates data privacy concerns and may reduce the payload on the transport network through decentralized computation.
  • the method actions performed by the first radio network node 11 for handling data of the RAN in the communication network will now be described with reference to a flowchart depicted in Fig. 4.
  • the actions do not have to be taken in the order stated below, but may be taken in any suitable order. Actions performed in some embodiments are marked with dashed boxes.
  • the first radio network node 11 may receive one or more updated GNN parameters from the central network node 13 training the GNN for predicting the characteristics of the RAN.
  • the first radio network node 11 obtains from the second radio network node 12, the matrix indication of the second local computation associated with the GNN for predicting characteristics of the RAN, wherein the matrix indication is obtained over the internal interface when the first and second network node are comprised in the same logical radio network node, or the matrix indication is obtained over the external interface when the first and second network node are separated neighbouring radio network nodes.
  • the matrix indication may be a representation of the second local computation.
  • a representation of a computation may be a compact representation such as an embedded or encoded representation.
  • the matrix indication may comprise one or more vector indications such as a vector message or a passing vector.
  • The matrix indication may comprise one or more node features (x_u), embeddings and/or edge features (e_{u→v}).
  • The second local computation may comprise a calculation using trained parameters and local parameters, resulting in model weights; for example, multilayer perceptron (MLP) weights are parameterized model weights, while others are local data values.
  • the second local computation is associated with the GNN of the central network node 13.
  • the second local computation may also be referred to as vector calculation.
  • the matrix indication may be received over the internal interface in case the first and second radio network node are part of a same logical node, or over the external interface, such as X2/Xn interface, in case of being separated nodes.
  • the first radio network node 11 executes the first local computation associated with the GNN for predicting the characteristics of the RAN based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient.
  • The first radio network node may calculate the loss in the forward propagation, see equation (5) below, and take the partial derivative of the loss to obtain the gradient.
  • the first radio network node 11 may perform execution of the first local computation, or executing the first local computation, for one or more neighbouring nodes or cells resulting in one or more derived gradients for the neighbouring nodes or cells.
  • the first local computation may be executed further based on the received one or more updated GNN parameters and data from a local data source or local data sources, such as PM data, CM data, CTR.
  • the output may then be used to derive the gradient through for example taking a partial derivative of the output.
  • the first radio network node 11 may further perform a calculation operation on the gradient and the one or more derived gradients; and wherein the indication of the gradient, in action 405, indicates a result of the calculation operation.
  • The calculation operation may comprise summarizing, averaging, and/or concatenating the gradient and the one or more derived gradients.
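  • A hedged sketch of such a calculation operation over the node’s own gradient and the derived gradients (NumPy-based, with illustrative values):

        import numpy as np

        # Illustrative calculation operations over a node's own gradient and
        # gradients derived for neighbouring cells/subgraphs.
        def combine_gradients(grads, op="average"):
            stacked = np.stack(grads)
            if op == "sum":
                return stacked.sum(axis=0)
            if op == "average":
                return stacked.mean(axis=0)
            if op == "concatenate":
                return stacked.reshape(-1)
            raise ValueError(f"unknown op: {op}")

        g_own = np.array([0.2, -0.1])
        g_derived = [np.array([0.4, 0.0]), np.array([-0.2, 0.3])]
        indication = combine_gradients([g_own] + g_derived, op="average")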
  • The indication may be calculated from multiple subgraphs of radio network nodes, where the center node of each subgraph belongs to the same radio network node.
  • the first radio network node 11 further sends the indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
  • The characteristics may comprise one or more performance indications of the respective radio network node. For example, the GNN may model radio strength or quality, such as a signal to interference plus noise ratio (SINR) value, based on PM, CM and CTR values from local and/or external sources.
  • the second radio network node 12 receives from the central network node 13, the one or more updated GNN parameters of the (second) GNN for predicting the characteristics of the RAN.
  • the second radio network node 12 executes the (second) local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain the matrix indication.
  • the characteristics may comprise one or more performance indications of respective radio network node.
  • the GNN may be for predicting radio performance of the second radio network node, for example, used for RAN optimization decisions.
  • the data of the local data source may comprise one or more of PM data, CM data, and CTR obtained at the second radio network node 12.
  • the second radio network node 12 provides, for example, transmits, to the first radio network node 11 , the matrix indication of the second local computation associated with the GNN.
  • The matrix indication may comprise one or more node features (x_u), embeddings and/or edge features (e_{u→v}).
  • the second radio network node 12 provides the matrix indication over the internal interface when the first and second network node are comprised in the same logical radio network node, or provides the matrix indication over the external interface when the first and second network node are separated neighbouring radio network nodes.
  • the second radio network node 12 may additionally, execute another computation such as the first local computation in action 403 and/or 404 based on one or more matrix indications from one or more radio network nodes such as the first radio network node 11 , wherein an output of the other computation indicates a gradient.
  • the second radio network node 12 may further send an indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
  • the central network node 13 may sample data from data sources of the GNN, wherein the sampling is based on topology information of cells of radio network nodes in the communication network.
  • The central network node 13 may sample a mini-batch of data composed of data of subgraphs from a complete RAN graph. Data of a subgraph may be data of cells controlled by a single radio network node.
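  • A hedged sketch of such topology-based mini-batch sampling, assuming a simple adjacency-set representation of the RAN cell graph (the function names and toy graph are illustrative only):

        import random

        # Hypothetical mini-batch sampler: pick center cells, then take their
        # k-hop neighbourhoods from the RAN topology. The batch size (and the
        # hop count) are hyperparameters.
        def k_hop_subgraph(adj, center, k=2):
            frontier, seen = {center}, {center}
            for _ in range(k):
                frontier = {u for v in frontier for u in adj.get(v, ())} - seen
                seen |= frontier
            return seen

        def sample_minibatch(adj, batch_size=2, k=2, seed=0):
            rng = random.Random(seed)
            centers = rng.sample(sorted(adj), batch_size)
            return {c: k_hop_subgraph(adj, c, k) for c in centers}

        # Toy cell graph shaped like Fig. 9 (V with 1st/2nd-hop neighbours):
        adj = {"V": {"B", "C", "D"}, "B": {"V", "E"}, "C": {"V", "F"},
               "D": {"V"}, "E": {"B"}, "F": {"C"}}
        batch = sample_minibatch(adj, batch_size=2, k=2)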
  • The central network node 13 may further train the GNN to obtain the one or more updated GNN parameters, wherein the training is based on the sampled data.
  • the central network node 13 broadcasts to radio network nodes, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN.
  • Action 604. The central network node 13 receives the indications of gradients from the first radio network node 11 and the second radio network node 12. As stated above, the gradients are from local computations, associated with the GNN, executed locally at the respective radio network node.
  • the central network node 13 trains the GNN for predicting the characteristics of the RAN using the received indications of gradients.
  • the central network node 13 may upon completion of the performed training of the GNN, send the GNN to the model registry.
  • the characteristics may comprise one or more performance indications of respective radio network node.
  • the GNN may be for predicting signal strength values for radio optimization.
  • the GNN may be based on PM, CM and CTR values.
  • Fig. 7 discloses a combined flowchart and signaling scheme according to embodiments herein.
  • The computation graph model is here exemplified as a GNN.
  • the central network node 13 prepares (a mini-batch) training samples by sampling sub-graphs of data from data sources of a GNN taking topology information into account.
  • The central network node 13 has global topology information and keeps track of the GNN model parameters.
  • A gradient descent or similar optimization algorithm is executed iteratively to find the parameter values that minimize a given cost function.
  • GNN training is done via multiple mini-batch iterations, so one or more GNN model parameters are updated at the end of each gradient descent step on mini-batch data. This is an example of actions 601 and 602 in Fig. 6.
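  • A hedged sketch of this central loop, with stand-in broadcast/collect transports (the update rule shown is plain gradient descent; the application itself only names gradient descent or a similar optimizer):

        import numpy as np

        # Stand-in central training loop: broadcast the latest parameters,
        # collect gradient indications from the radio network nodes, and
        # apply a gradient-descent update per mini-batch iteration.
        def train(params, broadcast, collect_gradients, lr=0.01, iterations=100):
            for _ in range(iterations):
                broadcast(params)                       # push updated parameters
                grads = collect_gradients()             # gradient indications
                avg = np.mean(np.stack(grads), axis=0)  # aggregate across nodes
                params = params - lr * avg              # gradient descent step
            return params

        # Example with stub transports:
        final = train(np.zeros(3),
                      broadcast=lambda w: None,
                      collect_gradients=lambda: [np.ones(3), np.ones(3)],
                      iterations=5)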
  • The central network node 13 broadcasts the updated one or more GNN parameters, such as weight parameters W, b or multilayer perceptron (MLP) weights, to one or more radio network nodes.
  • the first radio network node 11 receives the updated one or more GNN parameters
  • the second radio network node 12 receives the updated one or more GNN parameters.
  • the second radio network node 12 executes the second local computation, with the updated one or more GNN parameters and also present data of local data sources such as PM, CM and CTR, and obtains from the executed second local computation the matrix indication being exemplified as a vector indication of the representation of the second local computation.
  • The vector indication may comprise one or more node features (x_u), embeddings and edge features (e_{u→v}).
  • The present data may be obtained at the second radio network node 12. This is an example of action 502 in Fig. 5.
  • the second radio network node 12 provides the vector indication to the first radio network node.
  • the vector indication may be referred to as message passing vector. This is an example of action 503 in Fig. 5 and action 402 in Fig. 4.
  • the first radio network node may similarly provide a vector indication to the second radio network node 12.
  • the vector indication or indications are exchanged over the internal or the external interface between the different radio network nodes.
  • the first radio network node 11 receives the vector indication (message passing vector) and uses the received vector indication and the received updated one or more GNN parameters when executing the first local computation resulting in a gradient of the representation of the first local computation. It should here be noted that the first local computation may be executed for one or more sub-graphs of different radio network nodes resulting in one or more gradients. The first radio network node 11 may then summarize, average, and/or concatenate (and other operations) the gradients calculated from one or more multiple subgraphs where a center node (cell) of a subgraph belongs to the same radio network node. This is an example of actions 403 and 404 in Fig. 4. It should be noted that the second radio network node may similarly receive vector indication and obtain gradient, see action 504 in Fig. 5.
  • the first radio network node 11 further transmits one or more indications of the gradients to the central network node 13.
  • the indication may be an averaged or summarized gradient. This is an example of action 405 in Fig. 4.
  • the second radio network node may similarly transmit one or more gradients to the central network node 13, see action 505 in Fig. 5.
  • the central network node 13 thus receives indications from the first radio network node 11 and the second radio network node 12 and trains the GNN using the received indications. This is an example of actions 604 and 605 in Fig. 6.
  • Action 708. Upon completion of the training of the GNN, the GNN may be committed to an ML model registry. This is an example of action 606 in Fig. 6.
  • a decentralized GNN training method for example, for RAN optimization use cases, where feature extraction and ML model training is done within radio network nodes, and information, i.e., matrix indications, are exchanged between different radio network nodes during the GNN message passing phase.
  • The end-to-end (e2e) training procedure may be orchestrated by a training orchestrator instantiated in a cloud environment, see Fig. 8. An example will now be explained of using a GNN to train a regression model to predict an average physical uplink shared channel (PUSCH) SINR value based on 4G cells’ PM, CM and CTR data sources.
  • Local data is present.
  • data pre-processing can be implemented locally within each radio network node, e.g., node features, edge features, adjacency matrix, labels may be obtained locally.
  • The RAN topological information is available. Using this information, the GNN can capture information about its surrounding nodes and achieve better performance. A potential source of such information is every cell’s Network Relations Table (NRT) (EutrancellRelation).
  • Data pre-processing using CTR, PM and CM may be done locally within each radio network node by a local agent individually per 4G/5G cell.
  • These cells are nodes in the graph formation, and the terms cell and node are used interchangeably herein.
  • The central network node 13 may sample a mini-batch composed of subgraphs from the complete RAN graph; the batch size is a hyperparameter.
  • Fig. 9 illustrates a 2-hop sub-graph centered on node V within a mini-batch. It includes the center node V, its first-hop neighbouring nodes B, C and D, and second-hop neighbouring nodes E and F. Each node has a node feature to represent its status, and the center node V also has a label which is the target variable for the GNN. Different nodes are connected via edges if an EutrancellRelation exists, and each edge has edge features constructed from metrics like handover attempts and inter-site distances.
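  • Purely as an illustration, the Fig. 9 subgraph could be encoded locally along these lines (all feature and metric values below are invented placeholders):

        # Placeholder encoding of the Fig. 9 subgraph: node features per cell,
        # a label on center node V, and edge features such as handover
        # attempts and inter-site distance.
        subgraph = {
            "center": "V",
            "label": 12.5,  # e.g. a target average PUSCH SINR value in dB
            "node_features": {
                "V": [0.7, 0.1], "B": [0.4, 0.3], "C": [0.2, 0.9],
                "D": [0.5, 0.5], "E": [0.1, 0.2], "F": [0.8, 0.0],
            },
            "edge_features": {
                ("V", "B"): {"ho_attempts": 120, "site_distance_km": 0.0, "intra_site": True},
                ("V", "C"): {"ho_attempts": 80,  "site_distance_km": 0.0, "intra_site": True},
                ("V", "D"): {"ho_attempts": 15,  "site_distance_km": 2.1, "intra_site": False},
                ("B", "E"): {"ho_attempts": 40,  "site_distance_km": 1.3, "intra_site": False},
                ("C", "F"): {"ho_attempts": 25,  "site_distance_km": 3.0, "intra_site": False},
            },
        }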
  • The metrics are characteristics that may indicate the radio environment and are relevant for RAN optimization use cases. If two nodes belong to the same logical eNB, the edge is called an intra-site cell relation edge, for example e_{V→B} and e_{V→C}; if two nodes belong to different eNBs, the edge is called an inter-site cell relation edge, for example e_{C→F}.
  • Sub-graphs centered on nodes B and C may also be sampled in the same mini-batch, since V, B and C are 4G cells belonging to the same eNB 1, so the training signaling overhead can be minimized.
  • the central network node 13 may send the most updated GNN parameters to all the nodes (or cells) in the mini-batch.
  • The central network node 13 may send the most updated GNN parameters for a different use case to all the nodes in the mini-batch.
  • For a different use case, the cell relations which construct the graph topology are very stable and do not change; the graph topology will thus be the same for different use cases, while node features and/or edge features may be prepared separately.
  • Source and neighbour nodes may send messages for different use cases sequentially once the message passing handshake has been established and thus training signaling overhead is reduced.
  • Source radio network nodes could send their node features and embedding vectors as messages to the target radio network node; target radio network nodes, such as the first radio network node 11, may aggregate received messages with their own node features and embeddings.
  • The edge-conditioned GNN in “Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs”, https://arxiv.org/pdf/1704.02901.pdf (arxiv.org), is used herein as a GNN algorithm example, and the GNN is exemplified as a two-layer GNN:
  • The 0th-layer node embedding is equal to the node features, as shown in equation (1).
  • σ(·) is a nonlinear activation function to improve the model’s expressiveness.
  • W, b and MLP are parameterized model weights; equation (3) gives the 2nd-layer node embedding for node V, which is a compact representation of node V’s features and its first-hop and second-hop neighbour node features.
  • The prediction output h_out may be calculated as in equation (4), and the loss may then be calculated as in equation (5).
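  • The equations (1) to (5) referenced above appear as figures in the published application and are not reproduced in this text. A hedged LaTeX reconstruction, consistent with the surrounding description and with the cited edge-conditioned GNN formulation (the exact normalization and layer details are assumptions), could read:

        h_v^{(0)} = x_v                                                                                              (1)
        h_v^{(1)} = \sigma\Big( W^{(1)} h_v^{(0)} + \frac{1}{|N(v)|} \sum_{u \in N(v)} \mathrm{MLP}^{(1)}(e_{u \to v})\, h_u^{(0)} + b^{(1)} \Big)   (2)
        h_V^{(2)} = \sigma\Big( W^{(2)} h_V^{(1)} + \frac{1}{|N(V)|} \sum_{u \in N(V)} \mathrm{MLP}^{(2)}(e_{u \to V})\, h_u^{(1)} + b^{(2)} \Big)   (3)
        h_{\mathrm{out}} = W_{\mathrm{out}}\, h_V^{(2)} + b_{\mathrm{out}}                                           (4)
        \mathcal{L} = \big( h_{\mathrm{out}} - y_V \big)^2                                                           (5)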
  • The x_u vectors and message vectors are vector indications or messages that are calculated locally and then sent over X2/Xn interfaces if two nodes have inter-site cell relation edges. If u belongs to the same eNB as V, the messages may be handled within the eNB, either by routing traffic to the same network interface or even by writing the embedding into a shared memory space used by the same processing unit.
  • The size of the vector message is usually smaller than the size of the original vectors x_u and h_u, as the aim is to embed the information into a more compact representation.
  • A node feature may be a very long vector and include rich information; still, that is more compact than transmitting raw data/features over the X2/Xn interface.
  • Each radio network node, such as the first radio network node 11 and/or the second radio network node 12, may perform the calculation operation on the gradients (such as summarizing, averaging, concatenating, and other operations) calculated from multiple subgraphs where the center node or cell belongs to the same radio network node.
  • The gradients may alternatively or additionally only need to be computed when the underlying data changes. Nodes or cells whose features do not change may be considered as drop-outs in the process; if the nodes or cells lack variance, they will not be able to contribute much to the GNN. This may further cut the message exchange costs.
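  • A hedged sketch of this drop-out idea, gating gradient computation and messaging on whether a cell’s features changed (the names and tolerance are illustrative):

        import numpy as np

        # Illustrative change gating: skip gradient computation and messaging
        # for cells whose features have not changed since the last iteration,
        # treating them as drop-outs to cut message exchange costs.
        _last_features = {}

        def should_contribute(cell_id, features, tol=1e-6):
            prev = _last_features.get(cell_id)
            _last_features[cell_id] = np.array(features, copy=True)
            return prev is None or not np.allclose(prev, features, atol=tol)

        f = np.array([1.0, 2.0])
        should_contribute("cellA", f)  # True: first observation
        should_contribute("cellA", f)  # False: unchanged, may be dropped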
  • the central network node 13 may then summarize, average, and/or concatenate the gradients received from all local radio network nodes within the mini-batch, and perform a parameter update.
  • the central network node 13 may commit the model to the model registry.
  • Fig. 10 is a block diagram depicting the first radio network node 11 , in two embodiments, for handling data of the RAN in the communication network according to embodiments herein.
  • the first radio network node 11 may comprise processing circuitry 1001 , e.g. one or more processors, configured to perform the methods herein.
  • the first radio network node 11 may comprise an obtaining unit 1002, e.g. a receiver or a transceiver.
  • the first radio network node 11 , the processing circuitry 1001 and/or the obtaining unit 1002 is configured to obtain from the second network node, the matrix indication of the second local computation associated with the GNN for predicting the characteristics of the RAN.
  • the first radio network node 11 , the processing circuitry 1001 and/or the obtaining unit 1002 is configured to obtain the matrix indication over the internal interface when the first and second network node are comprised in the same logical radio network node, or to obtain the matrix indication over the external interface when the first and second network node are separated neighbouring radio network nodes.
  • the first radio network node 11 may comprise an executing unit 1003.
  • the first radio network node 11 , the processing circuitry 1001 and/or the executing unit 1003 is configured to execute the first local computation, associated with the GNN for predicting the characteristics of the RAN, based on the obtained matrix indication, wherein the output of the first local computation indicates the gradient.
  • the characteristics may comprise one or more performance indications of respective radio network node
  • the first radio network node 11 may comprise a sending unit 1004, e.g., a transmitter or a transceiver.
  • the first radio network node 11 , the processing circuitry 1001 and/or the sending unit 1004 is configured to send the indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
  • the first radio network node 11 , the processing circuitry 1001 and/or the obtaining unit 1002 may be configured to receive the one or more updated GNN parameters from the central network node 13.
  • the first radio network node 11 , the processing circuitry 1001 and/or the executing unit 1003 may then be configured to execute the first local computation further based on the received one or more updated GNN parameters and the data from the local data source.
  • the first radio network node 11 , the processing circuitry 1001 and/or the executing unit 1003 may be configured to execute the first local computation for the one or more neighbouring nodes or cells resulting in the one or more derived gradients for the one or more neighbouring nodes or cells.
  • the first radio network node 11 may comprise a calculating unit 1005.
  • the first radio network node 11 , the processing circuitry 1001 and/or the calculating unit 1005 may be configured to perform the calculation operation on the gradient and the one or more derived gradients; and wherein the indication of the gradient indicates the result of the performed calculation operation.
  • the first radio network node 11 further comprises a memory 1006.
  • the memory comprises one or more units to be used to store data on, such as GNN, local data, subgraph, parameters, values, operational parameters, applications to perform the methods disclosed herein when being executed, and similar.
  • Embodiments herein may disclose a first radio network node for handling data in the communication network, wherein the first radio network node comprises processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry whereby said first radio network node is operative to perform any of the methods herein.
  • the first radio network node 11 comprises a communication interface 1009 comprising, e.g., a transmitter, a receiver, a transceiver and/or one or more antennas.
  • the methods according to the embodiments described herein for the first radio network node 11 are respectively implemented by means of e.g. a computer program product 1007 or a computer program, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the first radio network node 11.
  • the computer program product 1007 may be stored on a computer-readable storage medium 1008, e.g., a universal serial bus (USB) stick, a disc or similar.
  • the computer-readable storage medium 1008, having stored thereon the computer program product may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the first radio network node 11.
  • the computer-readable storage medium may be a non- transitory or a transitory computer-readable storage medium.
  • Fig. 11 is a block diagram depicting the second radio network node 12, in two embodiments, for handling data of the RAN in the communication network according to embodiments herein.
  • the second radio network node 12 may comprise processing circuitry 1101 , e.g., one or more processors, configured to perform the methods herein.
  • the second radio network node 12 may comprise a receiving unit 1102, e.g., a receiver or a transceiver.
  • the second radio network node 12, the processing circuitry 1101 and/or the receiving unit 1102 is configured to receive from the central network node 13, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN.
  • the second radio network node 12 may comprise an executing unit 1103.
  • the second radio network node 12, the processing circuitry 1101 and/or the executing unit 1103 is configured to execute the second local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also the data of the local data source to obtain the matrix indication.
  • the data of the local data source may comprise one or more of PM data, CM data, and CTR obtained at the second radio network node.
  • the second radio network node 12 may comprise a providing unit 1104, e.g., a transmitter or a transceiver.
  • the second radio network node 12, the processing circuitry 1101 and/or the providing unit 1104 is configured to provide, to the first radio network node 11 , the matrix indication of the second local computation associated with the GNN.
  • the characteristics may comprise one or more performance indications of respective radio network node.
  • the matrix indication may comprise one or more node features (x u ), embeddings and/or edge features (e u ⁇ v ).
  • the second radio network node 12, the processing circuitry 1101 and/or the providing unit 1104 is configured to provide the matrix indication over the internal interface when the first and second network node are comprised in the same logical radio network node, or to provide the matrix indication over the external interface when the first and second network node are separated neighbouring radio network nodes.
  • the second radio network node 12, the processing circuitry 1101 and/or the executing unit 1103 may be configured to execute the other computation such as the first local computation based on one or more matrix indications from one or more radio network nodes, wherein the output of the other computation indicates the gradient.
  • the second radio network node 12, the processing circuitry 1101 and/or the providing unit 1104 may be configured to send the indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
  • the second radio network node 12 further comprises a memory 1106.
  • the memory comprises one or more units to be used to store data on, such as GNN, local data, subgraph, parameters, values, operational parameters, applications to perform the methods disclosed herein when being executed, and similar.
  • embodiments herein may disclose a second radio network node for handling data in the communication network, wherein the second radio network node comprises processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry whereby said second radio network node is operative to perform any of the methods herein.
  • the second radio network node 12 comprises a communication interface 1109 comprising, e.g., a transmitter, a receiver, a transceiver and/or one or more antennas.
  • the methods according to the embodiments described herein for the second radio network node 12 are respectively implemented by means of e.g. a computer program product 1107 or a computer program, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the second radio network node 12.
  • the computer program product 1107 may be stored on a computer-readable storage medium 1108, e.g., a universal serial bus (USB) stick, a disc or similar.
  • the computer-readable storage medium 1108, having stored thereon the computer program product may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the second radio network node 12.
  • the computer-readable storage medium may be a non-transitory or a transitory computer-readable storage medium.
  • Fig. 12 is a block diagram depicting the central network node 13, in two embodiments, for handling data, e.g., handling GNN, of the RAN in the communication network according to embodiments herein.
  • the central network node 13 may comprise processing circuitry 1201 , e.g., one or more processors, configured to perform the methods herein.
  • the central network node 13 may comprise a sending unit 1202, e.g., a transmitter or a transceiver.
  • the central network node 13, the processing circuitry 1201 , and/or the sending unit 1202 is configured to broadcast to radio network nodes, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN.
  • the characteristics may comprise one or more performance indications of respective radio network node.
  • the central network node 13 may comprise a receiving unit 1203, e.g., a receiver or a transceiver.
  • The central network node 13, the processing circuitry 1201, and/or the receiving unit 1203 is configured to receive the indications of gradients from the first radio network node 11 and the second radio network node 12, wherein the gradients are from local computations, associated with the GNN, executed locally at the respective radio network node.
  • the received indications of gradients from the first radio network node 11 and the second radio network node 12 may be related to gradients processed in a calculation operation, at respective radio network node.
  • the central network node 13 may comprise a training unit 1204.
  • the central network node 13, the processing circuitry 1201 , and/or the training unit 1204 is configured to train the GNN using the received indications of gradients.
  • the central network node 13 may comprise a sampling unit 1205.
  • the central network node 13, the processing circuitry 1201 , and/or the sampling unit 1205 may be configured to sample the data from the data sources of the GNN, wherein the sampling is based on the topology information of cells of radio network nodes in the communication network.
  • The central network node 13, the processing circuitry 1201, and/or the training unit 1204 may then be configured to train, based on the sampled data, the GNN to obtain the one or more updated GNN parameters.
  • the central network node 13, the processing circuitry 1201 , and/or the sending unit 1202 may be configured to, upon completion of training the GNN using the received indications of gradients, send the GNN to a model registry.
  • the GNN may be for predicting the radio performance of the communication network such as a RAN.
  • the central network node 13 further comprises a memory 1206.
  • the memory comprises one or more units to be used to store data on, such as GNN, local data, subgraph, parameters, values, operational parameters, applications to perform the methods disclosed herein when being executed, and similar.
  • embodiments herein may disclose a central network node for handling data in the communication network, wherein the central network node comprises the processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry whereby said central network node is operative to perform any of the methods herein.
  • the central network node 13 comprises a communication interface 1209 comprising, e.g., a transmitter, a receiver, a transceiver and/or one or more antennas.
  • The methods according to the embodiments described herein for the central network node 13 are respectively implemented by means of, e.g., a computer program product 1207 or a computer program, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the central network node 13.
  • the computer program product 1207 may be stored on a computer-readable storage medium 1208, e.g., a universal serial bus (USB) stick, a disc or similar.
  • the computer-readable storage medium 1208, having stored thereon the computer program product may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the central network node 13.
  • the computer-readable storage medium may be a non-transitory or a transitory computer-readable storage medium.
  • network node can correspond to any type of radio network node or any network node that communicates with a wireless device and/or with another network node.
  • network nodes are NodeB, Master eNB, Secondary eNB, a network node belonging to a Master Cell Group (MCG) or Secondary Cell Group (SCG), base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, Remote Radio Unit (RRU), nodes in a distributed antenna system (DAS), a core network node, e.g., Mobility Switching Centre (MSC), Mobility Management Entity (MME), Operation and Maintenance (O&M), Operation Support System (OSS) or Self-Organizing Network (SON) node, or a positioning node, e.g., Evolved Serving Mobile Location Centre (E-SMLC) or Minimizing Drive Test (MDT) node, etc.
  • wireless device or user equipment (UE) refers to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
  • Examples of UE are target device, device-to-device (D2D) UE, proximity capable UE (aka ProSe UE), machine type UE or UE capable of machine-to-machine (M2M) communication, personal digital assistant (PDA), tablet, mobile terminal, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, etc.
  • the embodiments are described for 5G. However, the embodiments are applicable to any RAT or multi-RAT systems where the UE receives and/or transmits signals, e.g., data, e.g., LTE, LTE FDD/TDD, WCDMA/HSPA, GSM/GERAN, Wi-Fi, WLAN, CDMA2000, etc.
  • ASIC: application-specific integrated circuit
  • Several of the functions may be implemented on a processor shared with other functional components of a wireless device or network node, for example.
  • the terms “processor” or “controller” as used herein do not exclusively refer to hardware capable of executing software and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random-access memory (RAM) for storing software and/or program or application data, and non-volatile memory.
  • a communication system includes a telecommunication network 3210, such as a 3GPP-type cellular network, which comprises an access network 3211 , such as a radio access network, and a core network 3214.
  • the access network 3211 comprises a plurality of base stations 3212a, 3212b, 3212c, such as NBs, eNBs, gNBs or other types of wireless access points being examples of the radio network node 12 herein, each defining a corresponding coverage area 3213a, 3213b, 3213c.
  • Each base station 3212a, 3212b, 3212c is connectable to the core network 3214 over a wired or wireless connection 3215.
  • a first user equipment (UE) 3291 being an example of the UE 10, located in coverage area 3213c is configured to wirelessly connect to, or be paged by, the corresponding base station 3212c.
  • a second UE 3292 in coverage area 3213a is wirelessly connectable to the corresponding base station 3212a. While a plurality of UEs 3291 , 3292 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station 3212.
  • the telecommunication network 3210 is itself connected to a host computer 3230, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm.
  • the host computer 3230 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider.
  • the connections 3221 , 3222 between the telecommunication network 3210 and the host computer 3230 may extend directly from the core network 3214 to the host computer 3230 or may go via an optional intermediate network 3220.
  • the intermediate network 3220 may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network 3220, if any, may be a backbone network or the Internet; in particular, the intermediate network 3220 may comprise two or more subnetworks (not shown).
  • the communication system of Fig. 13 as a whole enables connectivity between one of the connected UEs 3291 , 3292 and the host computer 3230.
  • the connectivity may be described as an over-the-top (OTT) connection 3250.
  • the host computer 3230 and the connected UEs 3291 , 3292 are configured to communicate data and/or signaling via the OTT connection 3250, using the access network 3211 , the core network 3214, any intermediate network 3220 and possible further infrastructure (not shown) as intermediaries.
  • the OTT connection 3250 may be transparent in the sense that the participating communication devices through which the OTT connection 3250 passes are unaware of routing of uplink and downlink communications.
  • a base station 3212 may not or need not be informed about the past routing of an incoming downlink communication with data originating from a host computer 3230 to be forwarded (e.g., handed over) to a connected UE 3291. Similarly, the base station 3212 need not be aware of the future routing of an outgoing uplink communication originating from the UE 3291 towards the host computer 3230.
  • a host computer 3310 comprises hardware 3315 including a communication interface 3316 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system 3300.
  • the host computer 3310 further comprises processing circuitry 3318, which may have storage and/or processing capabilities.
  • the processing circuitry 3318 may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • the host computer 3310 further comprises software 3311 , which is stored in or accessible by the host computer 3310 and executable by the processing circuitry 3318.
  • the software 3311 includes a host application 3312.
  • the host application 3312 may be operable to provide a service to a remote user, such as a UE 3330 connecting via an OTT connection 3350 terminating at the UE 3330 and the host computer 3310. In providing the service to the remote user, the host application 3312 may provide user data which is transmitted using the OTT connection 3350.
  • the communication system 3300 further includes a base station 3320 provided in a telecommunication system and comprising hardware 3325 enabling it to communicate with the host computer 3310 and with the UE 3330.
  • the hardware 3325 may include a communication interface 3326 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system 3300, as well as a radio interface 3327 for setting up and maintaining at least a wireless connection 3370 with a UE 3330 located in a coverage area (not shown in Fig. 14) served by the base station 3320.
  • the communication interface 3326 may be configured to facilitate a connection 3360 to the host computer 3310.
  • the connection 3360 may be direct, or it may pass through a core network (not shown in Fig. 14) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system.
  • the hardware 3325 of the base station 3320 further includes processing circuitry 3328, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • the base station 3320 further has software 3321 stored internally or accessible via an external connection.
  • the communication system 3300 further includes the UE 3330 already referred to.
  • Its hardware 3335 may include a radio interface 3337 configured to set up and maintain a wireless connection 3370 with a base station serving a coverage area in which the UE 3330 is currently located.
  • the hardware 3335 of the UE 3330 further includes processing circuitry 3338, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • the UE 3330 further comprises software 3331 , which is stored in or accessible by the UE 3330 and executable by the processing circuitry 3338.
  • the software 3331 includes a client application 3332.
  • the client application 3332 may be operable to provide a service to a human or non-human user via the UE 3330, with the support of the host computer 3310.
  • an executing host application 3312 may communicate with the executing client application 3332 via the OTT connection 3350 terminating at the UE 3330 and the host computer 3310.
  • the client application 3332 may receive request data from the host application 3312 and provide user data in response to the request data.
  • the OTT connection 3350 may transfer both the request data and the user data.
  • the client application 3332 may interact with the user to generate the user data that it provides.
  • the host computer 3310, base station 3320 and UE 3330 illustrated in Fig. 14 may be identical to the host computer 3230, one of the base stations 3212a, 3212b, 3212c and one of the UEs 3291 , 3292 of Fig. 13, respectively.
  • the inner workings of these entities may be as shown in Fig. 14 and independently, the surrounding network topology may be that of Fig. 13.
  • the OTT connection 3350 has been drawn abstractly to illustrate the communication between the host computer 3310 and the user equipment 3330 via the base station 3320, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • Network infrastructure may determine the routing, which it may be configured to hide from the UE 3330 or from the service provider operating the host computer 3310, or both. While the OTT connection 3350 is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing consideration or reconfiguration of the network).
  • the wireless connection 3370 between the UE 3330 and the base station 3320 is in accordance with the teachings of the embodiments described throughout this disclosure.
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 3330 using the OTT connection 3350, in which the wireless connection 3370 forms the last segment. More precisely, the teachings of these embodiments may improve the performance since the GNN may model the RAN in a more accurate manner and thereby provide benefits such as reduced user waiting time and better responsiveness.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection 3350 may be implemented in the software 3311 of the host computer 3310 or in the software 3331 of the UE 3330, or both.
  • sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection 3350 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software 3311 , 3331 may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 3350 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not affect the base station 3320, and it may be unknown or imperceptible to the base station 3320.
  • measurements may involve proprietary UE signaling facilitating the host computer’s 3310 measurements of throughput, propagation times, latency and the like.
  • the measurements may be implemented in that the software 3311 , 3331 causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 3350 while it monitors propagation times, errors etc.
  • Fig. 15 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 15 will be included in this section.
  • the host computer provides user data.
  • the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE.
  • Fig. 16 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 16 will be included in this section.
  • the host computer provides user data.
  • the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE.
  • the transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure.
  • the UE receives the user data carried in the transmission.
  • Fig. 17 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 17 will be included in this section.
  • the UE receives input data provided by the host computer.
  • the UE provides user data.
  • the UE provides the user data by executing a client application.
  • the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer.
  • the executed client application may further consider user input received from the user.
  • the UE initiates, in an optional third substep 3630, transmission of the user data to the host computer.
  • the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
  • Fig. 18 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 18 will be included in this section.
  • the base station receives user data from the UE.
  • the base station initiates transmission of the received user data to the host computer.
  • the host computer receives the user data carried in the transmission initiated by the base station.


Abstract

Embodiments herein relate, in some examples, to a method performed by a first radio network node (11) for handling data of a RAN in a communication network. The first radio network node obtains, from a second radio network node (12), a matrix indication of a second local computation associated with a GNN for predicting characteristics of the RAN, wherein the matrix indication is obtained over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is obtained over an external interface when the first and second network node are separated neighbouring radio network nodes. The first radio network node executes a first local computation, associated with the GNN for predicting the characteristics of the RAN, based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient; and sends an indication of the gradient to a central network node (13) training the GNN for predicting the characteristics of the RAN.

Description

RAN OPTIMIZATION WITH THE HELP OF A DECENTRALIZED GRAPH
NEURAL NETWORK
TECHNICAL FIELD
Embodiments herein relate to a first radio network node, a second radio network node, a central network node, and methods performed therein for communication networks. Furthermore, a computer program product and a computer readable storage medium are also provided herein. In particular, embodiments herein relate to handling data, for example, for radio optimization in a communication network.
BACKGROUND
In a typical wireless communication network, user equipments (UE), also known as wireless communication devices, mobile stations, stations (STA) and/or wireless devices, communicate via a Radio access Network (RAN) to one or more core networks (CN). The RAN covers a geographical area which is divided into service areas or cell areas, with each service area or cell area being served by a radio network node such as an access node e.g. a Wi-Fi access point or a radio base station (RBS), which in some radio access technologies (RAT) may also be called, for example, a NodeB, an evolved NodeB (eNB) and a gNodeB (gNB). The service area or cell area is a geographical area where radio coverage is provided by the radio network node. The radio network node operates on radio frequencies to communicate over an air interface with the wireless devices within range of the access node. The radio network node communicates over a downlink (DL) to the wireless device and the wireless device communicates over an uplink (UL) to the access node.
To understand an environment, such as a radio environment, images, sounds, etc., different methods are used to detect certain events, objects or similar. One way of learning is to use machine learning (ML) algorithms to improve accuracy. Computational graph models such as ML models, e.g., deep learning models or neural network models, are currently used in different applications and are based on different technologies. A computational graph model is a directed graph model where nodes correspond to operations or variables. Variables can feed their value into operations, and operations can feed their output into other operations. This way, every node in the graph model defines a function of the variables. Training of these computational graph models is typically an offline process, meaning that it usually happens in datacenters, while the execution of these computational graph models may be done anywhere from an edge of the communication network, also called the network edge, e.g., in devices, gateways or radio access infrastructure, to centralized clouds, e.g., data centers.
Graph neural networks (GNN) have attracted attention from both academic and industrial artificial intelligence (AI) research/innovation communities. A GNN is a type of neural network designed to solve an analytic task on large-scale data expressed in the form of a graph structure. In essence, a GNN can embed a node’s own features together with its neighbours’ features into a compact representation that can be used for downstream machine learning tasks such as supervised learning classification or regression problems.
SUMMARY
Upon developing embodiments herein one or more problems have been identified.
RAN optimization is an important task to improve end user experience. GNN techniques can analyze RAN behavior by constructing input features from data sources like performance management (PM) counters and configuration management (CM) parameters, as well as more granular data such as cell/UE trace records (CTR), from radio network nodes. WO2021190772A1 is an example use case of using such data sources for RAN optimization.
A centralized ML training solution is one where data sources, such as CTR, CM and PM, are collected in a centralized place, for example, an operations support system (OSS), for every RAN node. The data sources are then extracted to a GNN training environment, and features are extracted from them for the needs of downstream GNN training tasks. Fig. 1 illustrates such a system, with a data pipeline for RAN optimization in centralized GNN training.
However, operators hesitate to enable CTR traces in a live network due to:
  • CTR is typically very demanding resource-wise, and the collection of such information has a high impact on service performance and adds significant load to the transport layer
• CTR may include sensitive data such as RAN UE identity (ID) & ueTRacelD
Fig. 2 shows a data pipeline for RAN optimization in decentralized GNN training. In a decentralized solution, as shown in Fig. 2, feature extraction is done locally within each eNB/gNB, and GNN training is also implemented in the same nodes, scheduled by a training orchestrator. Such localized computation has two main advantages. Firstly, the data privacy concern can be mitigated as no data leaves these nodes. Secondly, and for the same reason, the transport layer payload is reduced since the eNB/gNB only needs to exchange (the gradients of) GNN parameters with the training orchestrator instead of heavy raw data files.
However, the major limitation of such an approach is that each source cell is not aware of the status of its neighbouring cells, which is essential for RAN optimization use cases.
In the document “Decentralized Inference with Graph Neural Networks in Wireless Communication Systems” https://arxiv.org/pdf/2104.09027.pdf (arxiv.org), a decentralized GNN algorithm for wireless communication systems is shown, where messages are passed over the air interface between a base station and a UE.
It is herein provided a method performed by a first radio network node for handling data of a RAN in a communication network. The first radio network node obtains, from a second radio network node, a matrix indication of a second local computation associated with a GNN for predicting characteristics of the RAN, wherein the matrix indication is obtained over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is obtained over an external interface when the first and second network node are separated neighbouring radio network nodes. The first radio network node further executes a first local computation associated with the GNN for predicting the characteristics of the RAN based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient. The first radio network node sends an indication of the gradient to a central network node training the GNN for predicting the characteristics of the RAN.
It is herein also provided a method performed by a second radio network node for handling data of a RAN in a communication network. The second radio network node receives from a central network node, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN. The second radio network node executes a second local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain a matrix indication. The second radio network node provides, to a first radio network node, the matrix indication of the second local computation associated with the GNN, wherein the matrix indication is provided over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is provided over an external interface when the first and second network node are separated neighbouring radio network nodes.

It is herein also provided a method performed by a central network node for handling data of a RAN in a communication network. The central network node broadcasts to radio network nodes, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN. The central network node receives indications of gradients from a first radio network node and a second radio network node, wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node. The central network node trains the GNN for predicting the characteristics of the RAN using the received indications of gradients.
It is furthermore provided herein a computer program product comprising instructions, which, when executed on at least one processor, cause the at least one processor to carry out the method above, as performed by the radio network nodes and the central network node, respectively. It is additionally provided herein a computer-readable storage medium, having stored thereon a computer program product comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method above, as performed by the radio network nodes and the central network node, respectively.
To perform the methods, it is herein provided a first radio network node for handling data of a RAN in a communication network. The first radio network node is configured to obtain, from a second network node, a matrix indication of a second local computation associated with a GNN for predicting characteristics of the RAN, wherein the first radio network node is configured to obtain the matrix indication over an internal interface when the first and second network node are comprised in a same logical radio network node, or to obtain the matrix indication over an external interface when the first and second network node are separated neighbouring radio network nodes. The first radio network node is further configured to execute a first local computation based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient. The first radio network node is configured to send an indication of the gradient to a central network node training the GNN for predicting the characteristics of the RAN.
It is herein also provided a second radio network node for handling data of a RAN in a communication network. The second radio network node is configured to receive from a central network node, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN. The second radio network node is further configured to execute a second local computation, associated with the GNN for predicting characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain a matrix indication. The second radio network node is configured to provide, to a first radio network node, the matrix indication of the second local computation associated with the GNN, wherein the second radio network node is configured to provide the matrix indication over an internal interface when the first and second network node are comprised in a same logical radio network node, or to provide the matrix indication over an external interface when the first and second network node are separated neighbouring radio network nodes.
It is herein also provided a central network node for handling data of a RAN in a communication network. The central network node is configured to broadcast to radio network nodes, one or more updated GNN parameters of a GNN for predicting characteristics of the RAN. The central network node is configured to receive indications of gradients from a first radio network node and a second radio network node, wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node. The central network node is further configured to train the GNN for predicting the characteristics of the RAN using the received indications of gradients.
Embodiments herein propose a decentralized GNN training method, for example, for RAN optimization use cases taking the prediction of the characteristics into account, where data extraction and model training are done within radio network nodes, and matrix indication(s) are exchanged between different radio network nodes.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments will now be described in more detail in relation to the enclosed drawings, in which:
Fig. 1 shows a system for ML training according to prior art;
Fig. 2 shows a RAN optimization in a decentralized ML model training;
Fig. 3 is a schematic overview depicting a communication network according to embodiments herein;
Fig. 4 is a flowchart depicting a method performed by a first radio network node according to embodiments herein;
Fig. 5 is a flowchart depicting a method performed by a second radio network node according to embodiments herein;
Fig. 6 is a flowchart depicting a method performed by a central network node according to embodiments herein;
Fig. 7 is a combined flowchart and signaling scheme according to embodiments herein;
Fig. 8 is a schematic overview depicting a communication network according to embodiments herein;
Fig. 9 is a schematic overview depicting a subgraph of cells or nodes according to embodiments herein;
Fig. 10 is a block diagram depicting embodiments of the first radio network node according to embodiments herein;
Fig. 11 is a block diagram depicting embodiments of the second radio network node according to embodiments herein;
Fig. 12 is a block diagram depicting embodiments of the central network node according to embodiments herein;
Fig. 13 schematically illustrates a telecommunication network connected via an intermediate network to a host computer;
Fig. 14 is a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection; and
Figs. 15-18 are flowcharts illustrating methods implemented in a communication system including a host computer, a base station and a user equipment.
DETAILED DESCRIPTION
Embodiments herein relate to communication networks in general. Fig. 3 is a schematic overview depicting a communication network 1. The communication network 1 may be any kind of communication network, such as a wired communication network or a wireless communication network comprising, e.g., a radio access network (RAN) and a core network (CN). The wireless communication network 1 may use one or a number of different technologies, such as Wi-Fi, Long Term Evolution (LTE), LTE-Advanced, Fifth Generation (5G), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile communications/enhanced Data rate for GSM Evolution (GSM/EDGE), Worldwide Interoperability for Microwave Access (WiMax), or Ultra Mobile Broadband (UMB), just to mention a few possible implementations. Embodiments herein relate to recent technology trends that are of particular interest in 5G systems; however, embodiments are also applicable in further development of existing communication systems such as, e.g., WCDMA and LTE.
In the communication network 1, wireless devices, e.g., a UE 10 such as a mobile station, a non-access point (non-AP) station (STA), a STA, a user equipment and/or a wireless terminal, communicate via one or more Access Networks (AN), e.g., RAN, to one or more core networks (CN). It should be understood by those skilled in the art that “UE” is a non-limiting term which means any terminal, wireless communication terminal, user equipment, Machine Type Communication (MTC) device, Device to Device (D2D) terminal, IoT operable device, or node, e.g., smart phone, laptop, mobile phone, sensor, relay, mobile tablet or even a small base station capable of communicating using radio communication with a network node within an area served by the network node.
The communication network 1 comprises a first radio network node 11 providing e.g. radio coverage over a geographical area, a service area, or a first cell, of a radio access technology (RAT), such as NR, LTE, Wi-Fi, WiMAX or similar. The first radio network node 11 may be a transmission and reception point, a computational server, a database, a server communicating with other servers, a server in a server park, a base station e.g. a network node such as a satellite, a Wireless Local Area Network (WLAN) access point or an Access Point Station (AP STA), an access node, an access controller, a radio base station such as a NodeB, an evolved Node B (eNB, eNodeB), a gNodeB (gNB), a base transceiver station, a baseband unit, an Access Point Base Station, a base station router, a transmission arrangement of a radio base station, a stand-alone access point or any other network unit or node depending e.g. on the radio access technology and terminology used. The first radio network node 11 may be referred to as a serving network node wherein the service area may be referred to as a serving cell or primary cell, and the serving network node communicates with the UE 10 in form of DL transmissions to the UE 10 and UL transmissions from the UE 10.
The communication network 1 comprises a second radio network node 12 providing, e.g., radio coverage over a geographical area, a second service area or second cell, of a radio access technology (RAT), such as NR, LTE, Wi-Fi, WiMAX or similar. The second radio network node 12 may be a transmission and reception point, a computational server, a database, a server communicating with other servers, a server in a server park, a base station e.g. a network node such as a satellite, a Wireless Local Area Network (WLAN) access point or an Access Point Station (AP STA), an access node, an access controller, a radio base station such as a NodeB, an evolved Node B (eNB, eNodeB), a gNodeB (gNB), a base transceiver station, a baseband unit, an Access Point Base Station, a base station router, a transmission arrangement of a radio base station, a stand-alone access point or any other network unit or node depending e.g. on the radio access technology and terminology used. The second radio network node 12 may be referred to as a neighbouring node. The first and second network nodes may be part of a same logical node, or different nodes. Thus, the first radio network node may alternatively be denoted as first radio network function and the second radio network node may be denoted as second radio network function.
The communication network 1 comprises a central network node 13 for handling data from all the nodes in the communication network. For example, the central network node may be a computational server, a database, a server communicating with other servers, a server in a server park, or similar. The central network node 13 comprises a GNN for predicting characteristics of the RAN. Similar to the radio network nodes, the central network node 13 may alternatively be denoted as central network function.
Embodiments herein concern GNN training, the GNN being a machine learning (ML) model. The training is performed in a decentralized manner, and the first radio network node 11 comprises a first (local) computation related to the GNN and the second radio network node 12 comprises a second (local) computation related to the GNN. For example, the respective computation uses GNN parameters.
According to embodiments herein, the first radio network node 11 receives a matrix indication from the second radio network node 12, wherein the matrix indication is received over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is received over an external interface when the first and second network node are separated neighbouring radio network nodes. Neighbouring radio network nodes are radio network nodes controlling cells with a neighbour relationship, also referred to as neighbour cells. Thus, a neighbour cell is a cell for which there exists a cell relation, for example proximity and/or frequency, with another cell. One or more cell relations may be set up manually or using a feature such as Automatic Neighbour Relations (ANR). In GNN theory, there is a graph G with a vertex set V and an edge set E. For a vertex v in the vertex set V, the neighbours of v, N(v), form the set {u in V | (u,v) in E}, i.e., the vertices that are adjacent to v, meaning that there exists an edge between u and v (denoted by (u,v) here). Note that in a directed graph, there are incoming and outgoing neighbours. Thus, a neighbouring radio network node may be a first-hop neighbour; to extend this to more hops, there are communities, i.e., subsets of vertices that are densely connected to each other, and clusters.
The matrix indication may, for example, be a message passing vector obtained at the second radio network node 12 when executing the second local computation. The first radio network node 11 then executes the first local computation based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient. The first radio network node 11 then sends an indication of the gradient to the central network node 13. The indication may be a gradient or an average of multiple gradients. The second radio network node 12 performs a similar process. Thus, the central network node 13 receives indications of gradients, for example averaged gradients, from the first radio network node 11 and the second radio network node 12. As stated, the gradients are from computations, related to the GNN, executed locally at the respective radio network node. The central network node 13 trains the GNN using the received indications of gradients and may, when the training is complete, send the trained GNN to a model registry. Thus, embodiments herein enable training of the GNN in a more accurate and/or efficient manner.
In an example, the central network node 13 may comprise a central training agent to orchestrate synchronized decentralized GNN training for RAN optimization. The central network node 13 may prepare mini-batch training samples by sampling data of sub-graphs of the topology of radio network nodes and cells in the communication network. The central network node 13 may then broadcast the most updated parameters to the first and second radio network nodes. The first radio network node 11 and the second radio network node 12 may then compute a passing vector using, e.g., the updated parameters and local data, and send the respective passing vectors to one another. For example, the respective radio network node may transmit the passing vector over the internal interface when two radio network nodes have intra-site cell relations, and over the external interface, such as an X2 or Xn interface, when two radio network nodes have inter-site cell relations. The respective radio network node computes the gradient of the cells within the respective radio network node and sends the gradient to the central network node 13. Thus, the central network node 13 receives gradients from local agents and performs a GNN update. Once training is complete, the central network node 13 may commit the GNN model to a model registry, for example a GNN registry. The model registry is the place to store ML models, for example in a docker image format, so that an ML model may be instantiated for inference purposes.
Embodiments herein propose a decentralized GNN training method for, e.g., RAN optimization use cases, where feature extraction and model training are done within radio network nodes, and interfaces are used to exchange information between different radio network nodes during a message passing phase. Thus, embodiments herein enable RAN optimization, whereby operations of the wireless communication network may be improved in an efficient manner. Message passing over external and internal interfaces enables the decentralized training agents to become aware of the surrounding radio network environment and traffic status. Consequently, better GNNs can be trained in terms of prediction performance and generalization capability. Furthermore, the decentralized training method in the RAN mitigates data privacy concerns and may reduce the payload on the transport network through decentralized computation.
The method actions performed by the first radio network node 11 for handling data of the RAN in the communication network according to embodiments will now be described with reference to a flowchart depicted in Fig. 4. The actions do not have to be taken in the order stated below, but may be taken in any suitable order. Actions performed in some embodiments are marked with dashed boxes.
Action 401. The first radio network node 11 may receive one or more updated GNN parameters from the central network node 13 training the GNN for predicting the characteristics of the RAN.
Action 402. The first radio network node 11 obtains from the second radio network node 12, the matrix indication of the second local computation associated with the GNN for predicting characteristics of the RAN, wherein the matrix indication is obtained over the internal interface when the first and second network node are comprised in the same logical radio network node, or the matrix indication is obtained over the external interface when the first and second network node are separated neighbouring radio network nodes. The matrix indication may be a representation of the second local computation. A representation of a computation may be a compact representation such as an embedded or encoded representation. The matrix indication may comprise one or more vector indications such as a vector message or a passing vector. The matrix indication may comprise one or more node features ($x_u$), embeddings ($h_u^{(l)}$), and/or edge features ($e_{u \to v}$). The second local computation may comprise a calculation using trained parameters and local parameters, resulting in terms such as $\mathrm{MLP}(e_{u \to v})\, h_u^{(l)}$, where the multi-layer perceptron (MLP) weights are parameterized model weights and the other values are local data values. Thus, the second local computation is associated with the GNN of the central network node 13. The second local computation may also be referred to as a vector calculation. The matrix indication may be received over the internal interface in case the first and second radio network node are part of a same logical node, or over the external interface, such as the X2/Xn interface, in case of being separated nodes.
Action 403. The first radio network node 11 executes the first local computation associated with the GNN for predicting the characteristics of the RAN based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient. For example, the first radio network node may calculate the loss of the forward propagation, see equation (5) below, and take the partial derivative of the loss to obtain the gradient. The first radio network node 11 may execute the first local computation for one or more neighbouring nodes or cells, resulting in one or more derived gradients for the neighbouring nodes or cells. The first local computation may be executed further based on the received one or more updated GNN parameters and data from one or more local data sources, such as PM data, CM data, and CTR. The output may then be used to derive the gradient through, for example, taking a partial derivative of the output.
Action 404. The first radio network node 11 may further perform a calculation operation on the gradient and the one or more derived gradients, wherein the indication of the gradient, in action 405, indicates a result of the calculation operation. The calculation operation may comprise summarizing, averaging, and/or concatenating the gradient and the one or more derived gradients. Thus, the indication may be calculated from one or multiple subgraphs of radio network nodes, where the center node of each subgraph belongs to the same radio network node.
Action 405. The first radio network node 11 further sends the indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN. The characteristics may comprise one or more performance indications of the respective radio network node. For example, the GNN may model radio strength or quality, such as a signal-to-interference-plus-noise ratio (SINR) value, based on PM, CM and CTR values from local and/or external sources.
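As an illustration of actions 401 to 405, the following is a minimal sketch of one local training step at the first radio network node; numpy is assumed, and the parameter names (W, b, w_out), shapes and the single-layer readout are illustrative rather than taken from the embodiments.

```python
import numpy as np

def local_training_step(gnn_params, neighbour_messages, local_features, label):
    """One local step at the first radio network node (cf. actions 401-405).

    gnn_params: dict with the broadcast weights; neighbour_messages: matrix
    indications received from the second radio network node; local_features
    and label come from local PM/CM/CTR data. All names are illustrative.
    """
    W, b, w_out = gnn_params["W"], gnn_params["b"], gnn_params["w_out"]
    # Aggregate the received message passing vectors with the node's own features.
    aggregated = local_features + sum(neighbour_messages)
    # First local computation: one GNN layer followed by a scalar readout.
    embedding = np.tanh(W @ aggregated + b)
    prediction = w_out @ embedding
    loss = (prediction - label) ** 2
    # Partial derivative of the loss w.r.t. the readout weights; a real agent
    # would differentiate w.r.t. all parameters, e.g. with an autodiff framework.
    grad_w_out = 2.0 * (prediction - label) * embedding
    return loss, grad_w_out
```

The returned gradient, or an average over several subgraphs per action 404, is the indication sent to the central network node 13.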
The method actions performed by the second radio network node 12 for handling data of the RAN in the communication network according to embodiments will now be described with reference to a flowchart depicted in Fig. 5. The actions do not have to be taken in the order stated below, but may be taken in any suitable order. Actions performed in some embodiments are marked with dashed boxes.
Action 501. The second radio network node 12 receives from the central network node 13, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN.
Action 502. The second radio network node 12 executes the (second) local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain the matrix indication. The characteristics may comprise one or more performance indications of respective radio network node. The GNN may be for predicting radio performance of the second radio network node, for example, used for RAN optimization decisions. The data of the local data source may comprise one or more of PM data, CM data, and CTR obtained at the second radio network node 12.
Action 503. The second radio network node 12 provides, for example transmits, to the first radio network node 11, the matrix indication of the second local computation associated with the GNN. The matrix indication may comprise one or more node features ($x_u$), embeddings ($h_u^{(l)}$), and/or edge features ($e_{u \to v}$). The second radio network node 12 provides the matrix indication over the internal interface when the first and second network node are comprised in the same logical radio network node, or provides the matrix indication over the external interface when the first and second network node are separated neighbouring radio network nodes.
Action 504. The second radio network node 12 may additionally execute another computation, such as the first local computation in actions 403 and/or 404, based on one or more matrix indications from one or more radio network nodes such as the first radio network node 11, wherein an output of the other computation indicates a gradient.
Action 505. The second radio network node 12 may further send an indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
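As an illustration of actions 501 to 503, the following sketch computes a message passing vector from the broadcast parameters and local data and then selects the interface; the parameter names (mlp_W, mlp_b) and the send callbacks are hypothetical.

```python
import numpy as np

def compute_message_vector(gnn_params, local_features, edge_features):
    """Actions 501-502: build the message passing vector from the broadcast
    parameters and local PM/CM/CTR data. Names and shapes are illustrative."""
    # An edge-conditioned weighting in the spirit of equations (2)-(3) below:
    # a small projection of the edge features modulates this node's features.
    gate = np.tanh(gnn_params["mlp_W"] @ edge_features + gnn_params["mlp_b"])
    return gate * local_features  # element-wise for simplicity

def provide_matrix_indication(message, same_logical_node, send_internal, send_x2):
    """Action 503: pick the interface according to the node relationship."""
    if same_logical_node:
        send_internal(message)  # intra-site: internal interface
    else:
        send_x2(message)        # inter-site: external X2/Xn interface
```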
The method actions performed by the central network node 13 for handling data in the RAN, or the GNN, in the communication network according to embodiments will now be described with reference to a flowchart depicted in Fig. 6. The actions do not have to be taken in the order stated below, but may be taken in any suitable order. Actions performed in some embodiments are marked with dashed boxes.
Action 601. The central network node 13 may sample data from data sources of the GNN, wherein the sampling is based on topology information of cells of radio network nodes in the communication network. The central network node 13 may sample a mini-batch of data composed of data of subgraphs from the complete RAN graph. The data of a subgraph may be data of cells controlled by a single radio network node.
Action 602. The central network node 13 may further train the GNN to obtain the one or more updated GNN parameters, wherein the training is based on the sampled data.
Action 603. The central network node 13 broadcasts to radio network nodes, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN.

Action 604. The central network node 13 receives the indications of gradients from the first radio network node 11 and the second radio network node 12. As stated above, the gradients are from local computations, associated with the GNN, executed locally at the respective radio network node.
Action 605. The central network node 13 trains the GNN for predicting the characteristics of the RAN using the received indications of gradients.
Action 606. The central network node 13 may, upon completion of the training of the GNN, send the GNN to the model registry. The characteristics may comprise one or more performance indications of the respective radio network node. For example, the GNN may be for predicting signal strength values for radio optimization. The GNN may be based on PM, CM and CTR values.
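Taken together, actions 601 to 606 amount to one synchronized training round. The following sketch illustrates such a round; the agent interface (receive_parameters, compute_local_gradient) and the plain gradient-descent update are assumptions for illustration, not part of the embodiments.

```python
def central_training_round(gnn_params, local_agents, learning_rate=0.01):
    """One synchronized decentralized round (cf. actions 603-605).
    `local_agents` stands in for the first/second radio network nodes."""
    # Action 603: broadcast the most updated GNN parameters.
    for agent in local_agents:
        agent.receive_parameters(gnn_params)
    # Action 604: collect the gradient indications computed locally.
    gradients = [agent.compute_local_gradient() for agent in local_agents]
    # Action 605: gradient-descent update using the averaged indications.
    mean_grad = {name: sum(g[name] for g in gradients) / len(gradients)
                 for name in gnn_params}
    return {name: gnn_params[name] - learning_rate * mean_grad[name]
            for name in gnn_params}
```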
Fig. 7 discloses a combined flowchart and signaling scheme according to embodiments herein. The computational graph model is here exemplified as a GNN.
Action 701. The central network node 13 prepares (mini-batch) training samples by sampling sub-graphs of data from the data sources of the GNN, taking topology information into account. The central network node 13 has global topology information and keeps track of the GNN model parameters. A gradient descent or similar optimization algorithm is executed iteratively to find the parameter values that minimize a given cost function. GNN training is done via multiple mini-batch iterations, so one or more GNN model parameters are updated at the end of each gradient descent step on mini-batch data. This is an example of actions 601 and 602 in Fig. 6.
Action 702. The central network node 13 broadcasts the updated one or more GNN parameters, such as weight parameters W, b, or multi-layer perceptron (MLP) weights, to one or more radio network nodes. Thus, the first radio network node 11 receives the updated one or more GNN parameters and the second radio network node 12 receives the updated one or more GNN parameters. This is an example of action 603 in Fig. 6, action 401 in Fig. 4, and action 501 in Fig. 5.
Action 703. The second radio network node 12 executes the second local computation with the updated one or more GNN parameters and present data of local data sources such as PM, CM and CTR, and obtains from the executed second local computation the matrix indication, here exemplified as a vector indication representing the second local computation. The vector indication may comprise one or more node features ($x_u$), embeddings ($h_u^{(l)}$) and edge features ($e_{u \to v}$). The present data may be obtained at the second radio network node 12. This is an example of action 502 in Fig. 5.

Action 704. The second radio network node 12 provides the vector indication to the first radio network node 11. The vector indication may be referred to as a message passing vector. This is an example of action 503 in Fig. 5 and action 402 in Fig. 4. It should be noted that the first radio network node may similarly provide a vector indication to the second radio network node 12. The vector indication or indications are exchanged over the internal or the external interface between the different radio network nodes.
Action 705. The first radio network node 11 receives the vector indication (message passing vector) and uses the received vector indication and the received updated one or more GNN parameters when executing the first local computation, resulting in a gradient of the representation of the first local computation. It should here be noted that the first local computation may be executed for one or more sub-graphs of different radio network nodes, resulting in one or more gradients. The first radio network node 11 may then summarize, average, concatenate and/or apply other operations to the gradients calculated from one or multiple subgraphs where the center node (cell) of a subgraph belongs to the same radio network node, as sketched below. This is an example of actions 403 and 404 in Fig. 4. It should be noted that the second radio network node may similarly receive a vector indication and obtain a gradient, see action 504 in Fig. 5.
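For instance, the averaging variant of this aggregation could be as simple as the following sketch (numpy assumed; summarizing or concatenating are alternative design choices):

```python
import numpy as np

def aggregate_subgraph_gradients(gradients):
    """Average the gradients of all subgraphs whose center cells belong to
    this radio network node, yielding the single indication of Action 706."""
    return np.mean(np.stack(gradients), axis=0)
```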
Action 706. The first radio network node 11 further transmits one or more indications of the gradients to the central network node 13. The indication may be an averaged or summarized gradient. This is an example of action 405 in Fig. 4. It should be noted that the second radio network node may similarly transmit one or more gradients to the central network node 13, see action 505 in Fig. 5.
Action 707. The central network node 13 thus receives indications from the first radio network node 11 and the second radio network node 12 and trains the GNN using the received indications. This is an example of actions 604 and 605 in Fig. 6.
Action 708. Upon completion of the training of the GNN, the GNN may be committed to an ML model registry. This is an example of action 606 in Fig. 6.
It is herein disclosed a decentralized GNN training method, for example for RAN optimization use cases, where feature extraction and ML model training are done within radio network nodes, and information, i.e., matrix indications, is exchanged between different radio network nodes during the GNN message passing phase. The end-to-end (e2e) training procedure may be orchestrated by a training orchestrator instantiated in a cloud environment, see Fig. 8. An example will now be explained of using a GNN to train a regression model to predict the average physical uplink shared channel (PUSCH) SINR value based on 4G cells’ PM, CM and CTR data sources. Embodiments herein may be applied to any related RAN use cases assuming the following criteria are met:
1. Local data is present. For example, data pre-processing can be implemented locally within each radio network node; e.g., node features, edge features, the adjacency matrix and labels may be obtained locally.
2. The RAN topological information is available. Using this information, the GNN can capture information about its surrounding nodes and achieve better performance. A potential source of such information is every cell’s Network Relations Table (NRT) (EutrancellRelation).
Data pre-processing
Data pre-processing using CTR, PM and CM may be done locally within each radio network node by a local agent, individually per 4G/5G cell. These cells are nodes in the graph formation, and the terms cell and node are used interchangeably herein.
• Node feature
• ctrUlPrbAvail, example of CTR
• ctrUlNoiseInterf_[0..15], example of CTR
• ctrUlUePathloss_[0..20], example of CTR
• ctrUlPowerRestricted, example of CTR
• ctrUlPowerUnrestricted, example of CTR
• AVG_ACTIVE_USERS_UL, example of PM
• AVG_UL_PATHLOSS, example of PM
• pZeroNominalPusch, example of CM
• Edge feature
• ctrIncomingHOLoad, example of CTR
• # of outgoing handover requests, example of PM
• Node label
• AVG_PUSCH_SINR, example of PM
• Adjacency matrix
• EutrancellRelation, example of CM
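A minimal sketch of how such per-cell pre-processing could look is given below. The reader functions read_ctr, read_pm and read_cm, together with their stub return values, are assumptions standing in for the node-local CTR/PM/CM sources; only the counter and attribute names follow the lists above.

```python
# Illustrative per-cell pre-processing; the read_* helpers are stubs.
import numpy as np

rng = np.random.default_rng(1)

def read_ctr(cell, name, relation=None):
    if name == "ctrUlNoiseInterf":
        return rng.normal(size=16)                  # bins [0..15]
    if name == "ctrUlUePathloss":
        return rng.normal(size=21)                  # bins [0..20]
    return float(rng.normal())

def read_pm(cell, name, relation=None):
    return float(rng.normal())

def read_cm(cell, name):
    # pZeroNominalPusch is a CM attribute; EutrancellRelation entries
    # (the adjacency information) would come from the cell's NRT.
    return -100.0 if name == "pZeroNominalPusch" else []

def build_cell_sample(cell):
    node_feature = np.concatenate([
        [read_ctr(cell, "ctrUlPrbAvail")],
        read_ctr(cell, "ctrUlNoiseInterf"),
        read_ctr(cell, "ctrUlUePathloss"),
        [read_ctr(cell, "ctrUlPowerRestricted")],
        [read_ctr(cell, "ctrUlPowerUnrestricted")],
        [read_pm(cell, "AVG_ACTIVE_USERS_UL")],
        [read_pm(cell, "AVG_UL_PATHLOSS")],
        [read_cm(cell, "pZeroNominalPusch")],
    ])
    label = read_pm(cell, "AVG_PUSCH_SINR")         # node label (target)
    edge_features = {rel: np.array([read_ctr(cell, "ctrIncomingHOLoad", rel),
                                    read_pm(cell, "OUTGOING_HO_REQUESTS", rel)])
                     for rel in read_cm(cell, "EutrancellRelation")}
    return node_feature, edge_features, label

features, edges, label = build_cell_sample("cell-A")
print(features.shape, label)                        # e.g. (43,) and a scalar
```

Training initialization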
To initiate the GNN model training, the central network node 13 may sample a mini-batch composed of subgraphs from the complete RAN graph, where the batch size is a hyperparameter. Fig. 9 illustrates a 2-hop sub-graph centered in node V within a mini-batch. It includes the center node V, its first-hop neighbouring nodes B, C and D, and second-hop neighbouring nodes E and F. Each node has a node feature to represent its status, and the center node V also has a label, which is the target variable for the GNN. Different nodes are connected via edges if an EutrancellRelation exists, and each edge has edge features constructed from metrics like handover attempts and inter-site distances. The metrics are characteristics that may indicate the radio environment and are relevant for RAN optimization use cases. If two nodes belong to the same logical eNB, the edge is called an intra-site cell relation edge, for example e_{V→B} and e_{V→C}; if two nodes belong to different eNBs, then the edge is called an inter-site cell relation edge, for example e_{C→F}.
Similarly, sub-graphs centered in nodes B and C may also be sampled in the same mini-batch, since V, B and C are 4G cells belonging to the same eNB 1, so the training signaling overhead can be minimized.
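A sketch of such mini-batch subgraph sampling, assuming the RAN topology is available to the central network node 13 as an adjacency dictionary built from the cells' EutrancellRelation entries (the dictionary layout and the batch_size value are illustrative):

```python
# Illustrative mini-batch sampling of 2-hop subgraphs from the RAN graph.
import random

def sample_minibatch(topology, batch_size, hops=2, seed=0):
    # topology: dict mapping each cell to its neighbour cells (from the NRT).
    random.seed(seed)
    centres = random.sample(sorted(topology), batch_size)  # batch size is a hyperparameter
    batch = []
    for centre in centres:
        nodes, frontier = {centre}, {centre}
        for _ in range(hops):                       # expand hop by hop
            frontier = {n for f in frontier for n in topology[f]} - nodes
            nodes |= frontier
        batch.append((centre, nodes))
    return batch

# Fig. 9 example: centre V, first-hop B, C, D and second-hop E, F.
topo = {"V": ["B", "C", "D"], "B": ["V", "E"], "C": ["V", "F"],
        "D": ["V"], "E": ["B"], "F": ["C"]}
print(sample_minibatch(topo, batch_size=1))
```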
The central network node 13 may send the most updated GNN parameters to all the nodes (or cells) in the mini-batch.
In one embodiment, the central network node 13 may send the most updated GNN parameters for a different use case to all the nodes in the mini-batch. Generally, the cell relations which construct the graph topology are very stable and do not change, so the graph topology will be the same for different use cases, while node features and/or edge features may be prepared separately. Source and neighbour nodes may send messages for different use cases sequentially once the message passing handshake has been established, and thus the training signaling overhead is reduced.
Forward propagation for each subgraph
Message passing GNNs are one of the most popular GNN algorithm families, see "Design Space for Graph Neural Networks", https://arxiv.org/pdf/2011.08843.pdf. Source radio network nodes may send their node features and embedding vectors as messages to a target radio network node, and the target radio network node, such as the first radio network node 11, may aggregate the received messages with its own node features and embeddings. Without loss of generality, an edge-conditioned GNN, see "Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs", https://arxiv.org/pdf/1704.02901.pdf, is exemplified herein as a GNN algorithm example, and the GNN is exemplified as a two-layer GNN:
$$h_u^{(0)} = x_u \qquad (1)$$

$$h_v^{(1)} = \sigma\Big(W^{(1)} h_v^{(0)} + \sum_{u \in N(v)} \mathrm{MLP}^{(1)}\big(e_{u \to v}\big)\, h_u^{(0)} + b^{(1)}\Big) \qquad (2)$$

The 0th layer node embedding is equal to the node features, as shown in equation (1). σ(·) is a nonlinear activation function used to improve the model's expressiveness. W, b and MLP are parameterized model weights.

$$h_V^{(2)} = \sigma\Big(W^{(2)} h_V^{(1)} + \sum_{u \in N(V)} \mathrm{MLP}^{(2)}\big(e_{u \to V}\big)\, h_u^{(1)} + b^{(2)}\Big) \qquad (3)$$

h_V^{(2)} in equation (3) is the 2nd layer node embedding for node V, which is a compact representation of node V's features and its first-hop and second-hop neighbour node features. The prediction output h_V^{(out)} may then be calculated in (4), and the loss, e.g., a squared error for the regression use case, may be calculated in (5):

$$h_V^{(out)} = W^{(out)} h_V^{(2)} + b^{(out)} \qquad (4)$$

$$\mathcal{L} = \big(h_V^{(out)} - y_V\big)^2 \qquad (5)$$

The x_u and h_u^{(1)} vectors are vector indications or messages that are calculated locally and then sent over the X2/Xn interfaces if two nodes have an inter-site cell relation edge. If u belongs to the same eNB as V, then messages may be handled within the eNB, either by routing traffic to the same network interface or even by writing the embedding into a shared memory space used by the same processing unit.
It should be noted that:
Since the message is the output of a series of vector operations, it is more difficult for malicious users to find sensitive information in the node feature x_u, even if the malicious users could tap the X2/Xn interfaces, as opposed to reading raw data;
Depending on the design choice, the size of the vector message is usually smaller than the size of the original vectors x_u and h_u^{(1)}, since the aim is to embed information into a more compact representation. This means that the node feature may be a very long vector that includes rich information; still, the message is more compact than transmitting raw data/features over the X2/Xn interface.
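As a purely illustrative sketch, the two-layer edge-conditioned forward pass of equations (1)-(4) may be written as follows. All dimensions, the ReLU choice for σ(·), the random parameter values and the toy Fig. 9 topology are assumptions, not part of the embodiments; in the decentralized setting, the h1 entries of inter-site neighbours would be received as message passing vectors over the X2/Xn interfaces rather than computed locally.

```python
# Illustrative two-layer edge-conditioned GNN forward pass, eqs. (1)-(4).
import numpy as np

rng = np.random.default_rng(0)
F_NODE, F_EDGE, H1, H2 = 8, 2, 16, 16               # assumed dimensions

def sigma(x):
    return np.maximum(x, 0.0)                       # ReLU as sigma(.)

def edge_filter(e, We, be, out_dim, in_dim):
    # Maps edge features e_{u->v} to an edge-conditioned weight matrix.
    return (We @ e + be).reshape(out_dim, in_dim)

def ecc_layer(h, neigh, W, b, We, be, out_dim, in_dim):
    # One edge-conditioned layer, cf. equations (2)/(3);
    # neigh is a list of (neighbour embedding, edge feature) pairs.
    agg = sum(edge_filter(e, We, be, out_dim, in_dim) @ hu for hu, e in neigh)
    return sigma(W @ h + agg + b)

# Toy 2-hop subgraph centred in V, as in Fig. 9; all values synthetic.
x = {n: rng.normal(size=F_NODE) for n in "VBCDEF"}  # node features
adj = {"V": ["B", "C", "D"], "B": ["V", "E"], "C": ["V", "F"],
       "D": ["V"], "E": ["B"], "F": ["C"]}
e = {frozenset(p): rng.normal(size=F_EDGE)
     for p in [("V", "B"), ("V", "C"), ("V", "D"), ("B", "E"), ("C", "F")]}

W1, b1 = 0.1 * rng.normal(size=(H1, F_NODE)), np.zeros(H1)
We1, be1 = 0.1 * rng.normal(size=(H1 * F_NODE, F_EDGE)), np.zeros(H1 * F_NODE)
W2, b2 = 0.1 * rng.normal(size=(H2, H1)), np.zeros(H2)
We2, be2 = 0.1 * rng.normal(size=(H2 * H1, F_EDGE)), np.zeros(H2 * H1)
Wout, bout = 0.1 * rng.normal(size=(1, H2)), np.zeros(1)

h0 = x                                              # equation (1)
h1 = {n: ecc_layer(h0[n], [(h0[u], e[frozenset((n, u))]) for u in adj[n]],
                   W1, b1, We1, be1, H1, F_NODE) for n in adj}  # equation (2)
hV2 = ecc_layer(h1["V"], [(h1[u], e[frozenset(("V", u))]) for u in adj["V"]],
                W2, b2, We2, be2, H2, H1)           # equation (3)
y_hat = Wout @ hV2 + bout                           # equation (4)
print("predicted AVG_PUSCH_SINR:", float(y_hat[0]))
```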
- Gradient calculation and GNN parameter update
Each radio network node, such as the first and/or second radio network node 11, 12, may perform a calculation operation, such as summarizing, averaging, concatenating, or other operations, on the gradients calculated from multiple subgraphs where the center node or cell belongs to the same radio network node.
The gradients may alternatively or additionally only need to be computed when they change. Nodes or cells whose features do not change may be considered as drop-outs in the process; if the nodes or cells lack variance, they will not be able to contribute much to the GNN. This may further cut the message exchange costs.
The central network node 13 may then summarize, average, and/or concatenate the gradients received from all local radio network nodes within the mini-batch, and perform a parameter update.
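A minimal sketch of this central aggregation and update step, assuming simple gradient averaging and an SGD-style update (the learning rate, the data layout and the treatment of drop-out nodes are assumptions):

```python
# Illustrative central aggregation of gradient indications and update.
import numpy as np

def central_update(params, gradient_indications, lr=1e-3):
    # params: dict name -> ndarray of GNN parameters.
    # gradient_indications: one dict per reporting radio network node;
    # drop-out nodes (unchanged features) simply omit keys or are absent.
    for name in params:
        grads = [g[name] for g in gradient_indications if name in g]
        if grads:                                   # skip if no contributions
            params[name] = params[name] - lr * np.mean(grads, axis=0)
    return params

# Toy usage with two reporting nodes and one parameter tensor.
params = {"W1": np.zeros((2, 2))}
reports = [{"W1": np.ones((2, 2))}, {"W1": 3 * np.ones((2, 2))}]
print(central_update(params, reports)["W1"])        # each entry becomes -0.002
```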
After a certain number of iterations, the central network node 13 may commit the model to the model registry.
Fig. 10 is a block diagram depicting the first radio network node 11 , in two embodiments, for handling data of the RAN in the communication network according to embodiments herein.
The first radio network node 11 may comprise processing circuitry 1001 , e.g. one or more processors, configured to perform the methods herein.
The first radio network node 11 may comprise an obtaining unit 1002, e.g. a receiver or a transceiver. The first radio network node 11 , the processing circuitry 1001 and/or the obtaining unit 1002 is configured to obtain from the second network node, the matrix indication of the second local computation associated with the GNN for predicting the characteristics of the RAN. The first radio network node 11 , the processing circuitry 1001 and/or the obtaining unit 1002 is configured to obtain the matrix indication over the internal interface when the first and second network node are comprised in the same logical radio network node, or to obtain the matrix indication over the external interface when the first and second network node are separated neighbouring radio network nodes.
The first radio network node 11 may comprise an executing unit 1003. The first radio network node 11, the processing circuitry 1001 and/or the executing unit 1003 is configured to execute the first local computation, associated with the GNN for predicting the characteristics of the RAN, based on the obtained matrix indication, wherein the output of the first local computation indicates the gradient. The characteristics may comprise one or more performance indications of respective radio network node.
The first radio network node 11 may comprise a sending unit 1004, e.g., a transmitter or a transceiver. The first radio network node 11 , the processing circuitry 1001 and/or the sending unit 1004 is configured to send the indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
The first radio network node 11 , the processing circuitry 1001 and/or the obtaining unit 1002 may be configured to receive the one or more updated GNN parameters from the central network node 13. The first radio network node 11 , the processing circuitry 1001 and/or the executing unit 1003 may then be configured to execute the first local computation further based on the received one or more updated GNN parameters and the data from the local data source.
The first radio network node 11 , the processing circuitry 1001 and/or the executing unit 1003 may be configured to execute the first local computation for the one or more neighbouring nodes or cells resulting in the one or more derived gradients for the one or more neighbouring nodes or cells.
The first radio network node 11 may comprise a calculating unit 1005. The first radio network node 11 , the processing circuitry 1001 and/or the calculating unit 1005 may be configured to perform the calculation operation on the gradient and the one or more derived gradients; and wherein the indication of the gradient indicates the result of the performed calculation operation.
The first radio network node 11 further comprises a memory 1006. The memory comprises one or more units to be used to store data on, such as GNN, local data, subgraph, parameters, values, operational parameters, applications to perform the methods disclosed herein when being executed, and similar. Thus, embodiments herein may disclose a first radio network node for handling data in the communication network, wherein the first radio network node comprises processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry whereby said first radio network node is operative to perform any of the methods herein. The first radio network node 11 comprises a communication interface 1009 comprising, e.g., a transmitter, a receiver, a transceiver and/or one or more antennas.
The methods according to the embodiments described herein for the first radio network node 11 are respectively implemented by means of, e.g., a computer program product 1007 or a computer program, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the first radio network node 11. The computer program product 1007 may be stored on a computer-readable storage medium 1008, e.g., a universal serial bus (USB) stick, a disc or similar. The computer-readable storage medium 1008, having stored thereon the computer program product, may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the first radio network node 11. In some embodiments, the computer-readable storage medium may be a non-transitory or a transitory computer-readable storage medium.
Fig. 11 is a block diagram depicting the second radio network node 12, in two embodiments, for handling data of the RAN in the communication network according to embodiments herein.
The second radio network node 12 may comprise processing circuitry 1101 , e.g., one or more processors, configured to perform the methods herein.
The second radio network node 12 may comprise a receiving unit 1102, e.g., a receiver or a transceiver. The second radio network node 12, the processing circuitry 1101 and/or the receiving unit 1102 is configured to receive from the central network node 13, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN.
The second radio network node 12 may comprise an executing unit 1103. The second radio network node 12, the processing circuitry 1101 and/or the executing unit 1103 is configured to execute the second local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also the data of the local data source to obtain the matrix indication. The data of the local data source may comprise one or more of PM data, CM data, and CTR obtained at the second radio network node.
The second radio network node 12 may comprise a providing unit 1104, e.g., a transmitter or a transceiver. The second radio network node 12, the processing circuitry 1101 and/or the providing unit 1104 is configured to provide, to the first radio network node 11, the matrix indication of the second local computation associated with the GNN. The characteristics may comprise one or more performance indications of respective radio network node. The matrix indication may comprise one or more node features (x_u), embeddings (h_u^{(1)}) and/or edge features (e_{u→v}). The second radio network node 12, the processing circuitry 1101 and/or the providing unit 1104 is configured to provide the matrix indication over the internal interface when the first and second network node are comprised in the same logical radio network node, or to provide the matrix indication over the external interface when the first and second network node are separated neighbouring radio network nodes. The second radio network node 12, the processing circuitry 1101 and/or the executing unit 1103 may be configured to execute another computation, such as the first local computation, based on one or more matrix indications from one or more radio network nodes, wherein the output of the other computation indicates the gradient. The second radio network node 12, the processing circuitry 1101 and/or the providing unit 1104 may be configured to send the indication of the gradient to the central network node 13 training the GNN for predicting the characteristics of the RAN.
The second radio network node 12 further comprises a memory 1106. The memory comprises one or more units to be used to store data on, such as GNN, local data, subgraph, parameters, values, operational parameters, applications to perform the methods disclosed herein when being executed, and similar. Thus, embodiments herein may disclose a second radio network node for handling data in the communication network, wherein the second radio network node comprises processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry whereby said second radio network node is operative to perform any of the methods herein. The second radio network node 12 comprises a communication interface 1109 comprising, e.g., a transmitter, a receiver, a transceiver and/or one or more antennas.
The methods according to the embodiments described herein for the second radio network node 12 are respectively implemented by means of e.g. a computer program product 1107 or a computer program, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the second radio network node 12. The computer program product 1107 may be stored on a computer-readable storage medium 1108, e.g., a universal serial bus (USB) stick, a disc or similar. The computer-readable storage medium 1108, having stored thereon the computer program product, may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the second radio network node 12. In some embodiments, the computer-readable storage medium may be a non-transitory or a transitory computer-readable storage medium.
Fig. 12 is a block diagram depicting the central network node 13, in two embodiments, for handling data, e.g., handling GNN, of the RAN in the communication network according to embodiments herein.
The central network node 13 may comprise processing circuitry 1201 , e.g., one or more processors, configured to perform the methods herein. The central network node 13 may comprise a sending unit 1202, e.g., a transmitter or a transceiver. The central network node 13, the processing circuitry 1201 , and/or the sending unit 1202 is configured to broadcast to radio network nodes, the one or more updated GNN parameters of the GNN for predicting the characteristics of the RAN. The characteristics may comprise one or more performance indications of respective radio network node.
The central network node 13 may comprise a receiving unit 1203, e.g., a receiver or a transceiver. The central network node 13, the processing circuitry 1201, and/or the receiving unit 1203 is configured to receive the indications of gradients from the first radio network node 11 and the second radio network node 12, wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node. The received indications of gradients from the first radio network node 11 and the second radio network node 12 may be related to gradients processed in a calculation operation at respective radio network node.
The central network node 13 may comprise a training unit 1204. The central network node 13, the processing circuitry 1201 , and/or the training unit 1204 is configured to train the GNN using the received indications of gradients.
The central network node 13 may comprise a sampling unit 1205. The central network node 13, the processing circuitry 1201, and/or the sampling unit 1205 may be configured to sample the data from the data sources of the GNN, wherein the sampling is based on the topology information of cells of radio network nodes in the communication network. The central network node 13, the processing circuitry 1201, and/or the training unit 1204 may then be configured to train, based on the sampled data, the GNN to obtain the one or more updated GNN parameters.
The central network node 13, the processing circuitry 1201 , and/or the sending unit 1202 may be configured to, upon completion of training the GNN using the received indications of gradients, send the GNN to a model registry.
The GNN may be for predicting the radio performance of the communication network such as a RAN.
The central network node 13 further comprises a memory 1206. The memory comprises one or more units to be used to store data on, such as GNN, local data, subgraph, parameters, values, operational parameters, applications to perform the methods disclosed herein when being executed, and similar. Thus, embodiments herein may disclose a central network node for handling data in the communication network, wherein the central network node comprises the processing circuitry and a memory, said memory comprising instructions executable by said processing circuitry whereby said central network node is operative to perform any of the methods herein. The central network node 13 comprises a communication interface 1209 comprising, e.g., a transmitter, a receiver, a transceiver and/or one or more antennas.
The methods according to the embodiments described herein for the central network node 13 are respectively implemented by means of, e.g., a computer program product 1207 or a computer program, comprising instructions, i.e., software code portions, which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the central network node 13. The computer program product 1207 may be stored on a computer-readable storage medium 1208, e.g., a universal serial bus (USB) stick, a disc or similar. The computer-readable storage medium 1208, having stored thereon the computer program product, may comprise the instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions described herein, as performed by the central network node 13. In some embodiments, the computer-readable storage medium may be a non-transitory or a transitory computer-readable storage medium.
In some embodiments a more general term “network node” is used and it can correspond to any type of radio network node or any network node, which communicates with a wireless device and/or with another network node. Examples of network nodes are NodeB, Master eNB, Secondary eNB, a network node belonging to Master cell group (MCG) or Secondary Cell Group (SCG), base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, Remote Radio Unit (RRU), nodes in distributed antenna system (DAS), core network node e.g. Mobility Switching Centre (MSC), Mobile Management Entity (MME) etc., Operation and Maintenance (O&M), Operation Support System (OSS), Self-Organizing Network (SON), positioning node e.g. Evolved Serving Mobile Location Centre (E-SMLC), Minimizing Drive Test (MDT) etc.
In some embodiments the non-limiting term wireless device or user equipment (UE) is used and it refers to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system. Examples of UE are target device, device-to-device (D2D) UE, proximity capable UE (aka ProSe UE), machine type UE or UE capable of machine to machine (M2M) communication, Personal digital assistant (PDA), Tablet, mobile terminals, smart phone, laptop embedded equipped (LEE), laptop mounted equipment (LME), USB dongles etc.
The embodiments are described for 5G. However, the embodiments are applicable to any RAT or multi-RAT system where the UE receives and/or transmits signals, e.g., data, e.g., LTE, LTE FDD/TDD, WCDMA/HSPA, GSM/GERAN, Wi-Fi, WLAN, CDMA2000 etc.
As will be readily understood by those familiar with communications design, functions, means or modules may be implemented using digital logic and/or one or more microcontrollers, microprocessors, or other digital hardware. In some embodiments, several or all of the various functions may be implemented together, such as in a single application-specific integrated circuit (ASIC), or in two or more separate devices with appropriate hardware and/or software interfaces between them. Several of the functions may be implemented on a processor shared with other functional components of a wireless device or network node, for example.
Alternatively, several of the functional elements of the processing means discussed may be provided through the use of dedicated hardware, while others are provided with hardware for executing software, in association with the appropriate software or firmware. Thus, the term “processor” or “controller” as used herein does not exclusively refer to hardware capable of executing software and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random-access memory for storing software and/or program or application data, and non-volatile memory. Other hardware, conventional and/or custom, may also be included. Designers of communications devices will appreciate the cost, performance, and maintenance trade-offs inherent in these design choices.
With reference to Fig. 13, in accordance with an embodiment, a communication system includes a telecommunication network 3210, such as a 3GPP-type cellular network, which comprises an access network 3211, such as a radio access network, and a core network 3214. The access network 3211 comprises a plurality of base stations 3212a, 3212b, 3212c, such as NBs, eNBs, gNBs or other types of wireless access points, being examples of the radio network node 12 herein, each defining a corresponding coverage area 3213a, 3213b, 3213c. Each base station 3212a, 3212b, 3212c is connectable to the core network 3214 over a wired or wireless connection 3215. A first user equipment (UE) 3291, being an example of the UE 10, located in coverage area 3213c is configured to wirelessly connect to, or be paged by, the corresponding base station 3212c. A second UE 3292 in coverage area 3213a is wirelessly connectable to the corresponding base station 3212a. While a plurality of UEs 3291, 3292 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station 3212.
The telecommunication network 3210 is itself connected to a host computer 3230, which may be embodied in the hardware and/or software of a standalone server, a cloud- implemented server, a distributed server or as processing resources in a server farm. The host computer 3230 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider. The connections 3221 , 3222 between the telecommunication network 3210 and the host computer 3230 may extend directly from the core network 3214 to the host computer 3230 or may go via an optional intermediate network 3220. The intermediate network 3220 may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network 3220, if any, may be a backbone network or the Internet; in particular, the intermediate network 3220 may comprise two or more subnetworks (not shown).
The communication system of Fig. 13 as a whole enables connectivity between one of the connected UEs 3291 , 3292 and the host computer 3230. The connectivity may be described as an over-the-top (OTT) connection 3250. The host computer 3230 and the connected UEs 3291 , 3292 are configured to communicate data and/or signaling via the OTT connection 3250, using the access network 3211 , the core network 3214, any intermediate network 3220 and possible further infrastructure (not shown) as intermediaries. The OTT connection 3250 may be transparent in the sense that the participating communication devices through which the OTT connection 3250 passes are unaware of routing of uplink and downlink communications. For example, a base station 3212 may not or need not be informed about the past routing of an incoming downlink communication with data originating from a host computer 3230 to be forwarded (e.g., handed over) to a connected UE 3291. Similarly, the base station 3212 need not be aware of the future routing of an outgoing uplink communication originating from the UE 3291 towards the host computer 3230.
Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference to Fig. 14. In a communication system 3300, a host computer 3310 comprises hardware 3315 including a communication interface 3316 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system 3300. The host computer 3310 further comprises processing circuitry 3318, which may have storage and/or processing capabilities. In particular, the processing circuitry 3318 may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The host computer 3310 further comprises software 3311 , which is stored in or accessible by the host computer 3310 and executable by the processing circuitry 3318. The software 3311 includes a host application 3312. The host application 3312 may be operable to provide a service to a remote user, such as a UE 3330 connecting via an OTT connection 3350 terminating at the UE 3330 and the host computer 3310. In providing the service to the remote user, the host application 3312 may provide user data which is transmitted using the OTT connection 3350.
The communication system 3300 further includes a base station 3320 provided in a telecommunication system and comprising hardware 3325 enabling it to communicate with the host computer 3310 and with the UE 3330. The hardware 3325 may include a communication interface 3326 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system 3300, as well as a radio interface 3327 for setting up and maintaining at least a wireless connection 3370 with a UE 3330 located in a coverage area (not shown in Fig.14) served by the base station 3320. The communication interface 3326 may be configured to facilitate a connection 3360 to the host computer 3310. The connection 3360 may be direct or it may pass through a core network (not shown in Fig.14) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, the hardware 3325 of the base station 3320 further includes processing circuitry 3328, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The base station 3320 further has software 3321 stored internally or accessible via an external connection.
The communication system 3300 further includes the UE 3330 already referred to. Its hardware 3335 may include a radio interface 3337 configured to set up and maintain a wireless connection 3370 with a base station serving a coverage area in which the UE 3330 is currently located. The hardware 3335 of the UE 3330 further includes processing circuitry 3338, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The UE 3330 further comprises software 3331 , which is stored in or accessible by the UE 3330 and executable by the processing circuitry 3338. The software 3331 includes a client application 3332. The client application 3332 may be operable to provide a service to a human or non-human user via the UE 3330, with the support of the host computer 3310. In the host computer 3310, an executing host application 3312 may communicate with the executing client application 3332 via the OTT connection 3350 terminating at the UE 3330 and the host computer 3310. In providing the service to the user, the client application 3332 may receive request data from the host application 3312 and provide user data in response to the request data. The OTT connection 3350 may transfer both the request data and the user data. The client application 3332 may interact with the user to generate the user data that it provides.
It is noted that the host computer 3310, base station 3320 and UE 3330 illustrated in Fig. 14 may be identical to the host computer 3230, one of the base stations 3212a, 3212b, 3212c and one of the UEs 3291 , 3292 of Fig. 13, respectively. This is to say, the inner workings of these entities may be as shown in Fig. 14 and independently, the surrounding network topology may be that of Fig. 13.
In Fig. 14, the OTT connection 3350 has been drawn abstractly to illustrate the communication between the host computer 3310 and the user equipment 3330 via the base station 3320, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure may determine the routing, which it may be configured to hide from the UE 3330 or from the service provider operating the host computer 3310, or both. While the OTT connection 3350 is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing consideration or reconfiguration of the network).
The wireless connection 3370 between the UE 3330 and the base station 3320 is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to the UE 3330 using the OTT connection 3350, in which the wireless connection 3370 forms the last segment. More precisely, the teachings of these embodiments may improve the performance since the GNN may model the RAN in a more accurate manner and thereby provide benefits such as reduced user waiting time, and better responsiveness. A measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection 3350 between the host computer 3310 and UE 3330, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection 3350 may be implemented in the software 3311 of the host computer 3310 or in the software 3331 of the UE 3330, or both. In embodiments, sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection 3350 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software 3311 , 3331 may compute or estimate the monitored quantities. The reconfiguring of the OTT connection 3350 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not affect the base station 3320, and it may be unknown or imperceptible to the base station 3320. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signaling facilitating the host computer’s 3310 measurements of throughput, propagation times, latency and the like. The measurements may be implemented in that the software 3311 , 3331 causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 3350 while it monitors propagation times, errors etc.
Fig. 15 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 15 will be included in this section. In a first step 3410 of the method, the host computer provides user data. In an optional substep 3411 of the first step 3410, the host computer provides the user data by executing a host application. In a second step 3420, the host computer initiates a transmission carrying the user data to the UE. In an optional third step 3430, the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In an optional fourth step 3440, the UE executes a client application associated with the host application executed by the host computer.

Fig. 16 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 16 will be included in this section. In a first step 3510 of the method, the host computer provides user data. In an optional substep (not shown) the host computer provides the user data by executing a host application. In a second step 3520, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In an optional third step 3530, the UE receives the user data carried in the transmission.
Fig. 17 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 17 will be included in this section. In an optional first step 3610 of the method, the UE receives input data provided by the host computer. Additionally or alternatively, in an optional second step 3620, the UE provides user data. In an optional substep 3621 of the second step 3620, the UE provides the user data by executing a client application. In a further optional substep 3611 of the first step 3610, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in an optional third substep 3630, transmission of the user data to the host computer. In a fourth step 3640 of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
Fig. 18 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which may be those described with reference to Figures 13 and 14. For simplicity of the present disclosure, only drawing references to Figure 18 will be included in this section. In an optional first step 3710 of the method, in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In an optional second step 3720, the base station initiates transmission of the received user data to the host computer. In a third step 3730, the host computer receives the user data carried in the transmission initiated by the base station.
It will be appreciated that the foregoing description and the accompanying drawings represent non-limiting examples of the methods and apparatus taught herein. As such, the apparatus and techniques taught herein are not limited by the foregoing description and accompanying drawings. Instead, the embodiments herein are limited only by the following claims and their legal equivalents.

Claims

1. A method performed by a first radio network node (11) for handling data of a radio access network, RAN, in a communication network, the method comprising:
- obtaining (402), from a second radio network node (12), a matrix indication of a second local computation associated with a graph neural network, GNN, for predicting characteristics of the RAN, wherein the matrix indication is obtained over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is obtained over an external interface when the first and second network node are separated neighbouring radio network nodes;
- executing (403) a first local computation associated with the GNN for predicting the characteristics of the RAN based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient; and
- sending (405) an indication of the gradient to a central network node (13) training the GNN for predicting the characteristics of the RAN.
2. The method according to claim 1 , wherein the characteristics comprise one or more performance indications of respective radio network node.
3. The method according to any of the claims 1-2, further comprising:
- receiving (401) one or more updated GNN parameters from the central network node (13), and wherein executing the first local computation is further based on the received one or more updated GNN parameters and data from a local data source.
4. The method according to any of the claims 1-3, wherein executing the first local computation is performed for one or more neighbouring nodes or cells resulting in one or more derived gradients for the neighbouring nodes or cells, and the method further comprises:
- performing (404) a calculation operation on the gradient and one or more derived gradients; and wherein the indication of the gradient indicates a result of the calculation operation.
5. A method performed by a second radio network node (12) for handling data of a radio access network, RAN, in a communication network, the method comprising:
- receiving (501) from a central network node (13), one or more updated graph neural network, GNN, parameters of a GNN for predicting characteristics of the RAN;
- executing (502) a second local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain a matrix indication; and
- providing (503), to a first radio network node (11), the matrix indication of the second local computation associated with the GNN, wherein the matrix indication is provided over an internal interface when the first and second network node are comprised in a same logical radio network node, or the matrix indication is provided over an external interface when the first and second network node are separated neighbouring radio network nodes.
6. The method according to claim 5, wherein the characteristics comprise one or more performance indications of respective radio network node.
7. The method according to any of the claims 5-6, wherein the matrix indication comprises one or more node features (x_u), embeddings (h_u^{(1)}), and/or edge features (e_{u→v}).
8. The method according to any of the claims 5-7, wherein the data of the local data source comprises one or more of performance management data, configuration management data, and Cell or user equipment, UE, trace records obtained at the second radio network node.
9. The method according to any of the claims 5-8, further comprising:
- executing (504) another local computation based on one or more matrix indications from one or more radio network nodes, wherein an output of the other local computation indicates a gradient; and
- sending (505) an indication of the gradient to the central network node (13) training the GNN for predicting the characteristics of the RAN.
10. A method performed by a central network node (13) for handling data of a radio access network, RAN, in a communication network, the method comprising:
- broadcasting (603) to radio network nodes, one or more updated graph neural network, GNN, parameters of a GNN for predicting the characteristics of the RAN;
- receiving (604) indications of gradients from a first radio network node (11) and a second radio network node (12), wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node; and
- training (605) the GNN for predicting the characteristics of the RAN using the received indications of gradients.
11. The method according to claim 10, further comprising:
- sampling (601) data from data sources of the GNN, wherein the sampling is based on topology information of cells of radio network nodes in the communication network; and
- training (602) the GNN to obtain the one or more updated GNN parameters, wherein the training is based on the sampled data.
12. The method according to any of the claims 10-11, further comprising, upon completion of the performed training of the GNN, sending (606) the GNN to a model registry.
13. The method according to any of the claims 10-12, wherein the characteristics comprise one or more performance indications of respective radio network node.
14. A computer program product comprising instructions, which, when executed on at least one processor, cause the at least one processor to carry out a method according to any of the claims 1-13, as performed by the first, the second radio network node or the central network node, respectively.

15. A computer-readable storage medium, having stored thereon a computer program product comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out a method according to any of the claims 1-13, as performed by the first, the second radio network node or the central network node, respectively.

16. A first radio network node (11) for handling data of a radio access network, RAN, in a communication network, wherein the first radio network node (11) is configured to:
- obtain, from a second radio network node (12), a matrix indication of a second local computation associated with a graph neural network, GNN, for predicting characteristics of the RAN, wherein the first radio network node is configured to obtain the matrix indication over an internal interface when the first and second network node are comprised in a same logical radio network node, or to obtain the matrix indication over an external interface when the first and second network node are separated neighbouring radio network nodes;
- execute a first local computation associated with the GNN for predicting the characteristics of the RAN based on the obtained matrix indication, wherein an output of the first local computation indicates a gradient; and
- send an indication of the gradient to a central network node (13) training the GNN for predicting the characteristics of the RAN.

17. The first radio network node according to claim 16, wherein the characteristics comprise one or more performance indications of respective radio network node.

18. The first radio network node according to any of the claims 16-17, wherein the first radio network node is further configured to:
- receive one or more updated GNN parameters from the central network node, and wherein the first radio network node is configured to execute the first local computation further based on the received one or more updated GNN parameters and data from a local data source.

19. The first radio network node according to any of the claims 16-18, wherein the first radio network node is further configured to execute the first local computation for one or more neighbouring nodes or cells resulting in one or more derived gradients for the one or more neighbouring nodes or cells, and the first radio network node is configured to:
- perform a calculation operation on the gradient and the one or more derived gradients; and wherein the indication of the gradient indicates a result of the performed calculation operation.
20. A second radio network node (12) for handling data of a radio access network, RAN, in a communication network, wherein the second radio network node (12) is configured to:
- receive from a central network node (13), one or more updated graph neural network, GNN, parameters of a GNN for predicting characteristics of the RAN;
- execute a second local computation, associated with the GNN for predicting the characteristics of the RAN, with the one or more updated GNN parameters and also data of a local data source to obtain a matrix indication; and
- provide to a first radio network node (11), the matrix indication of the second local computation associated with the GNN, wherein the second radio network node is configured to provide the matrix indication over an internal interface when the first and second network node are comprised in a same logical radio network node, or to provide the matrix indication over an external interface when the first and second network node are separated neighbouring radio network nodes.

21. The second radio network node according to claim 20, wherein the characteristics comprise one or more performance indications of respective radio network node.

22. The second radio network node according to any of the claims 20-21, wherein the matrix indication comprises one or more node features (x_u), embeddings (h_u^{(1)}), and/or edge features (e_{u→v}).

23. The second radio network node according to any of the claims 20-22, wherein the data of the local data source comprises one or more of performance management data, configuration management data, and Cell or user equipment, UE, trace records obtained at the second radio network node.

24. The second radio network node according to any of the claims 20-23, wherein the second radio network node is further configured to:
- execute another local computation based on one or more matrix indications from one or more radio network nodes, wherein an output of the other local computation indicates a gradient; and
- send an indication of the gradient to the central network node training the GNN for predicting the characteristics of the RAN.

25. A central network node (13) for handling data of a radio access network, RAN, in a communication network, wherein the central network node (13) is configured to:
- broadcast to radio network nodes (11, 12), one or more updated graph neural network, GNN, parameters of a GNN for predicting characteristics of the RAN;
- receive indications of gradients from a first radio network node (11) and a second radio network node (12), wherein the gradients are from local computations, associated with the GNN, executed locally at respective radio network node; and
- train the GNN using the received indications of gradients.

26. The central network node according to claim 25, wherein the central network node is further configured to:
- sample data from data sources of the GNN, wherein the sampling is based on topology information of cells of radio network nodes in the communication network; and
- train, based on the sampled data, the GNN to obtain the one or more updated GNN parameters.

27. The central network node according to any of the claims 25-26, wherein the central network node is further configured to, upon completion of training the GNN using the received indications of gradients, send the GNN to a model registry.

28. The central network node according to any of the claims 25-27, wherein the characteristics comprise one or more performance indications of respective radio network node.
PCT/EP2022/076035 2021-11-16 2022-09-20 Ran optimization with the help of a decentralized graph neural network WO2023088593A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GR20210100804 2021-11-16
GR20210100804 2021-11-16

Publications (1)

Publication Number Publication Date
WO2023088593A1 true WO2023088593A1 (en) 2023-05-25

Family

ID=83898217

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/076035 WO2023088593A1 (en) 2021-11-16 2022-09-20 Ran optimization with the help of a decentralized graph neural network

Country Status (1)

Country Link
WO (1) WO2023088593A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210119881A1 (en) * 2018-05-02 2021-04-22 Telefonaktiebolaget Lm Ericsson (Publ) First network node, third network node, and methods performed thereby, for handling a performance of a radio access network
US20190138934A1 (en) * 2018-09-07 2019-05-09 Saurav Prakash Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (mec) networks
WO2021190772A1 (en) 2020-03-27 2021-09-30 Telefonaktiebolaget Lm Ericsson (Publ) Policy for optimising cell parameters

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DECENTRALIZED INFERENCE WITH GRAPH NEURAL NETWORKS IN WIRELESS COMMUNICATION SYSTEMS, Retrieved from the Internet <URL:https://arxiv.org/pdf/2104.09027.pdf>
MENGYUAN LEE ET AL: "Decentralized Inference with Graph Neural Networks in Wireless Communication Systems", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 14 November 2021 (2021-11-14), XP091087938 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22790301

Country of ref document: EP

Kind code of ref document: A1