WO2023209275A1 - Apparatus, method and computer program for load prediction - Google Patents

Apparatus, method and computer program for load prediction

Info

Publication number
WO2023209275A1
Authority
WO
WIPO (PCT)
Prior art keywords
load
incident load
node
user equipment
predicted
Prior art date
Application number
PCT/FI2023/050209
Other languages
French (fr)
Inventor
Janne ALI-TOLPPA
Amaanat ALI
Ahmad AWADA
Alperen GUNDOGAN
Sina KHATIBI
Anna Pantelidou
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy
Publication of WO2023209275A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W36/00 Hand-off or reselection arrangements
    • H04W36/16 Performing reselection for specific purposes
    • H04W36/22 Performing reselection for specific purposes for handling the traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
    • H04L43/0817 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/08 Load balancing or load distribution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W36/00 Hand-off or reselection arrangements
    • H04W36/0005 Control or signalling for completing the hand-off
    • H04W36/0055 Transmission or use of information for re-establishing the radio link
    • H04W36/0064 Transmission or use of information for re-establishing the radio link of control information between different access points
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W36/00 Hand-off or reselection arrangements
    • H04W36/0005 Control or signalling for completing the hand-off
    • H04W36/0083 Determination of parameters used for hand-off, e.g. generation or modification of neighbour cell lists
    • H04W36/00837 Determination of triggering parameters for hand-off
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W40/00 Communication routing or communication path finding
    • H04W40/34 Modification of an existing route

Definitions

  • the present application relates to a method, apparatus, system and computer program and in particular but not exclusively to machine learning driven incident load prediction prior to a mobility event.
  • a communication system can be seen as a facility that enables communication sessions between two or more entities such as user terminals, base stations and/or other nodes by providing carriers between the various entities involved in the communications path.
  • a communication system can be provided for example by means of a communication network and one or more compatible communication devices.
  • the communication sessions may comprise, for example, communication of data for carrying communications such as voice, video, electronic mail (email), text message, multimedia and/or content data and so on.
  • Nonlimiting examples of services provided comprise two-way or multi-way calls, data communication or multimedia services and access to a data network system, such as the Internet.
  • In a wireless communication system at least a part of a communication session between at least two stations occurs over a wireless link.
  • Examples of wireless systems comprise public land mobile networks (PLMN), satellite based communication systems and different wireless local networks, for example wireless local area networks (WLAN).
  • Some wireless systems can be divided into cells and are therefore often referred to as cellular systems.
  • a user can access the communication system by means of an appropriate communication device or terminal.
  • a communication device of a user may be referred to as user equipment (UE) or user device.
  • a communication device is provided with an appropriate signal receiving and transmitting apparatus for enabling communications, for example enabling access to a communication network or communications directly with other users.
  • the communication device may access a carrier provided by a station, for example a base station of a cell, and transmit and/or receive communications on the carrier.
  • the communication system and associated devices typically operate in accordance with a given standard or specification which sets out what the various entities associated with the system are permitted to do and how that should be achieved. Communication protocols and/or parameters which shall be used for the connection are also typically defined.
  • An example of a communication system is UTRAN (3G radio).
  • Other examples of communication systems are the long-term evolution (LTE) of the Universal Mobile Telecommunications System (UMTS) radio-access technology and so-called 5G or New Radio (NR) networks.
  • NR is being standardized by the 3rd Generation Partnership Project (3GPP).
  • an apparatus comprising means for receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
  • the means for determining a predicted incident load may comprise at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
  • the machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
  • the apparatus may comprise means for providing the received measurement values to a host node for use in training the machine learning model.
  • the apparatus may comprise means for receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
  • the apparatus may comprise means for receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
  • the incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
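The two notions of incident load defined above can be illustrated with a short sketch; all numbers are invented, and physical resource block (PRB) counts stand in for the "resources" of the definition:

```python
# Hypothetical illustration of the two incident-load notions: the absolute
# number of resources allocated to the UE in a period, or the ratio of those
# resources to all resources being scheduled in that period.

def incident_load_absolute(ue_allocated_prbs):
    """Absolute incident load: total PRBs allocated to the UE in the period."""
    return sum(ue_allocated_prbs)

def incident_load_ratio(ue_allocated_prbs, total_scheduled_prbs):
    """Relative incident load: the UE's share of all scheduled resources."""
    total = sum(total_scheduled_prbs)
    if total == 0:
        return 0.0
    return sum(ue_allocated_prbs) / total

# Per-slot downlink PRB counts over one measurement period (made-up numbers).
ue_dl = [12, 0, 8, 20]
cell_dl = [50, 40, 60, 50]

print(incident_load_absolute(ue_dl))                   # 40 PRBs
print(round(incident_load_ratio(ue_dl, cell_dl), 2))   # 0.2
```

Either quantity may be reported per downlink, per uplink, or both, as the definition allows.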
  • the apparatus may comprise means for receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
  • the apparatus may comprise means for determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
  • an apparatus comprising means for receiving, from a source node at a target node, a handover request for a user equipment, wherein either: the handover request includes a request to measure incident load, in which case an indication of the measured incident load is provided to at least one of the source node and a host node for use in training a machine learning model; or the handover request includes an indication of a predicted incident load associated with the user equipment.
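The two mutually exclusive handover-request variants described in this aspect might be dispatched as in the following sketch; the field names and the measure/report/admit callbacks are purely hypothetical illustrations, not signalled information elements:

```python
# Sketch of how a target node might handle the two handover-request variants.
# All names here are hypothetical.

def handle_handover_request(request, measure_fn, report_fn, admit_fn):
    if request.get("measure_incident_load"):
        # Variant 1 (training phase): admit the UE, measure its incident load
        # after the handover, and report it to the source node and/or a host
        # node for use in training the machine learning model.
        measured = measure_fn(request["ue_id"])
        report_fn(request["ue_id"], measured)
        return ("measured", measured)
    if "predicted_incident_load" in request:
        # Variant 2 (inference phase): the request already carries a predicted
        # incident load, which the target node can use, e.g. for admission.
        return ("admitted", admit_fn(request["ue_id"],
                                     request["predicted_incident_load"]))
    raise ValueError("handover request carries neither variant")

# Example: a training-phase request followed by an inference-phase request.
reports = []
out1 = handle_handover_request(
    {"ue_id": 7, "measure_incident_load": True},
    measure_fn=lambda ue: 0.25,
    report_fn=lambda ue, load: reports.append((ue, load)),
    admit_fn=lambda ue, load: load < 0.5,
)
out2 = handle_handover_request(
    {"ue_id": 8, "predicted_incident_load": 0.4},
    measure_fn=lambda ue: 0.0,
    report_fn=lambda ue, load: None,
    admit_fn=lambda ue, load: load < 0.5,
)
```

The two branches correspond to the training and inference phases of the model, respectively.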
  • an apparatus comprising means for receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed, is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
  • a method comprising receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
  • Determining a predicted incident load may comprise using at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
  • the machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
  • the method may comprise providing the received measurement values to a host node for use in training the machine learning model.
  • the method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
  • the method may comprise receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
  • the incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
  • the method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
  • the method may comprise determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
  • a method comprising receiving, from a source node at a target node, a handover request for a user equipment, wherein either: the handover request includes a request to measure incident load, in which case the method comprises providing an indication of the measured incident load to at least one of the source node and a host node for use in training a machine learning model; or the handover request includes an indication of a predicted incident load associated with the user equipment.
  • a method comprising receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed, is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
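As a rough illustration of such training, the sketch below fits a plain linear model by gradient descent. The linear model is only a stand-in for the machine learning model (the classifications above suggest a recurrent network such as an LSTM or GRU could be used instead); the feature layout (UE measurement values plus a one-hot target-node identifier) and all numbers are invented:

```python
# Toy training pairs: feature vector = scaled UE measurement values plus a
# one-hot target-node identifier; label = the incident load measured at that
# target node after the handover. All values are invented.
DATA = [
    ([-0.90, -0.10, 1.0, 0.0], 0.30),  # handed over to target node A
    ([-0.95, -0.12, 1.0, 0.0], 0.35),
    ([-0.80, -0.08, 0.0, 1.0], 0.10),  # handed over to target node B
    ([-0.85, -0.09, 0.0, 1.0], 0.12),
]

def train(data, lr=0.1, epochs=5000):
    """Fit weights w and bias b for y ~ w.x + b by per-sample gradient descent."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_incident_load(w, b, measurements, target_onehot):
    """Predicted incident load for a subsequent UE and a candidate target node."""
    x = list(measurements) + list(target_onehot)
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w, b = train(DATA)
# Predict for a subsequent UE whose measurements resemble the node-A examples.
pred = predict_incident_load(w, b, [-0.92, -0.11], [1.0, 0.0])
```

Including the target-node identity as a feature mirrors the statement above that the prediction may also depend on the identity of a target node.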
  • an apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to receive measurement values from a plurality of user equipments at a source node, provide, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determine to perform a handover for a subsequent user equipment to a subsequent target node, determine a predicted incident load based on the received measurement values and measured incident loads, use the predicted incident load in mobility decisions and provide the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
  • the predicted incident load may be determined using at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
  • the machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
  • the apparatus may be configured to provide the received measurement values to a host node for use in training the machine learning model.
  • the apparatus may be configured to receive an indication of the measured incident load from each of the plurality of target nodes and provide an indication of the measured incident load to the host node for use in training the machine learning model.
  • the apparatus may be configured to receive an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
  • the incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
  • the apparatus may be configured to receive an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
  • the apparatus may be configured to determine the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
  • an apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to receive, from a source node at a target node, a handover request for a user equipment, wherein either: the handover request includes a request to measure incident load, in which case the apparatus is caused to provide an indication of the measured incident load to at least one of the source node and a host node for use in training a machine learning model; or the handover request includes an indication of a predicted incident load associated with the user equipment.
  • an apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to receive, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and use the at least one indication and measurement values in training a machine learning model which, when executed, is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
  • a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
  • Determining a predicted incident load may comprise using at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
  • the machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
  • the apparatus may be caused to perform providing the received measurement values to a host node for use in training the machine learning model.
  • the apparatus may be caused to perform receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
  • the apparatus may be caused to perform receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
  • the incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
  • the apparatus may be caused to perform receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
  • the apparatus may be caused to perform determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
  • a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving, from a source node at a target node, a handover request for a user equipment, wherein either: the handover request includes a request to measure incident load, in which case an indication of the measured incident load is provided to at least one of the source node and a host node for use in training a machine learning model; or the handover request includes an indication of a predicted incident load associated with the user equipment.
  • a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed, is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
  • a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the method according to the third or fourth aspect.
  • Figure 1 shows a schematic diagram of an example 5GS communication system
  • Figure 2 shows a schematic diagram of an example mobile communication device
  • Figure 3 shows a schematic diagram of an example control apparatus
  • Figure 4 shows a high level signalling flow for a load balancing use case
  • Figure 5 shows a schematic diagram of a machine learning based predictive mobility concept
  • Figure 6 shows a signalling flow for resource status prediction assisted mobility
  • Figure 7 shows a flowchart of a method according to an example embodiment
  • Figure 8 shows a high level signalling flow according to an example embodiment
  • Figure 9 shows a high level signalling flow according to an example embodiment
  • Figure 10 shows a schematic diagram of a machine learning model according to an example embodiment
  • Figure 11 shows a signalling flow for training of a machine learning model according to an example embodiment
  • Figure 12 shows a signalling flow for training of a machine learning model according to an example embodiment
  • Figure 13 shows a signalling flow for incident load prediction for a baseline handover according to an example embodiment
  • Figure 14 shows a signalling flow for incident load prediction for a conditional handover according to an example embodiment.
  • Network architecture in 5GS may be similar to that of LTE-advanced.
  • Base stations of NR systems may be known as next generation Node Bs (gNBs).
  • Changes to the network architecture may depend on the need to support various radio technologies and finer QoS support, and on-demand requirements, for example QoS levels to support quality of experience (QoE) from the user's point of view.
  • network aware services and applications, and service and application aware networks may bring changes to the architecture. Those are related to Information Centric Network (ICN) and User-Centric Content Delivery Network (UC-CDN) approaches.
  • NR may use multiple input - multiple output (MIMO) antennas, many more base stations or nodes than LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and perhaps also employing a variety of radio technologies for better coverage and enhanced data rates.
  • 5G networks may utilise network functions virtualization (NFV) which is a network architecture concept that proposes virtualizing network node functions into “building blocks” or entities that may be operationally connected or linked together to provide services.
  • a virtualized network function (VNF) may comprise one or more virtual machines running computer program codes using standard or general type servers instead of customized hardware. Cloud computing or data storage may also be utilized.
  • In radio communications this may mean that node operations are carried out, at least partly, in a server, host or node operationally coupled to a remote radio head. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of LTE or even be non-existent.
  • FIG. 1 shows a schematic representation of a 5G system (5GS) 100.
  • the 5GS may comprise a user equipment (UE) 102 (which may also be referred to as a communication device or a terminal), a 5G radio access network (5GRAN) 104, a 5G core network (5GCN) 106, one or more application functions (AF) 108 and one or more data networks (DN) 110.
  • the 5GCN 106 comprises functional entities.
  • the 5GCN 106 may comprise one or more access and mobility management functions (AMF) 112, one or more session management functions (SMF) 114, an authentication server function (AUSF) 116, a unified data management (UDM) 118, one or more user plane functions (UPF) 120, a unified data repository (UDR) 122 and/or a network exposure function (NEF) 124.
  • the UPF is controlled by the SMF (Session Management Function) that receives policies from a PCF (Policy Control Function).
  • the CN is connected to a terminal device via the radio access network (RAN).
  • the 5GRAN may comprise one or more gNodeB (gNB) distributed unit functions connected to one or more gNodeB (gNB) centralized unit functions.
  • the RAN may comprise one or more access nodes.
  • a UPF may act as a Protocol Data Unit (PDU) Session Anchor (PSA) between the UE and a data network (DN).
  • a possible mobile communication device will now be described in more detail with reference to Figure 2 showing a schematic, partially sectioned view of a communication device 200.
  • a communication device is often referred to as user equipment (UE) or terminal.
  • An appropriate mobile communication device may be provided by any device capable of sending and receiving radio signals.
  • Non-limiting examples comprise a mobile station (MS) or mobile device such as a mobile phone or what is known as a ’smart phone’, a computer provided with a wireless interface card or other wireless interface facility (e.g., USB dongle), personal data assistant (PDA) or a tablet provided with wireless communication capabilities, voice over IP (VoIP) phones, portable computers, desktop computer, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premises equipment (CPE), or any combinations of these or the like.
  • a mobile communication device may provide, for example, communication of data for carrying communications such as voice, electronic mail (email), text message, multimedia and so on. Users may thus be offered and provided numerous services via their communication devices. Non-limiting examples of these services comprise two-way or multi-way calls, data communication or multimedia services or simply an access to a data communications network system, such as the Internet. Users may also be provided broadcast or multicast data. Non-limiting examples of the content comprise downloads, television and radio programs, videos, advertisements, various alerts and other information.
  • a mobile device is typically provided with at least one data processing entity 201, at least one memory 202 and other possible components 203 for use in software and hardware aided execution of tasks it is designed to perform, including control of access to and communications with access systems and other communication devices.
  • the data processing, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. This feature is denoted by reference 204.
  • the user may control the operation of the mobile device by means of a suitable user interface such as key pad 205, voice commands, touch sensitive screen or pad, combinations thereof or the like.
  • a display 208, a speaker and a microphone can be also provided.
  • a mobile communication device may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories, for example hands-free equipment, thereto.
  • the mobile device 200 may receive signals over an air or radio interface 207 via appropriate apparatus for receiving and may transmit signals via appropriate apparatus for transmitting radio signals.
  • transceiver apparatus is designated schematically by block 206.
  • the transceiver apparatus 206 may be provided for example by means of a radio part and associated antenna arrangement.
  • the antenna arrangement may be arranged internally or externally to the mobile device.
  • Figure 3 shows an example of a control apparatus 300 for a communication system, for example to be coupled to and/or for controlling a station of an access system, such as a RAN node, e.g. a base station, eNB or gNB, a relay node or a core network node such as an MME or S-GW or P-GW, or a core network function such as AMF/SMF, or a server or host.
  • the method may be implemented in a single control apparatus or across more than one control apparatus.
  • the control apparatus may be integrated with or external to a node or module of a core network or RAN.
  • base stations comprise a separate control apparatus unit or module.
  • control apparatus can be another network element such as a radio network controller or a spectrum controller.
  • each base station may have such a control apparatus as well as a control apparatus being provided in a radio network controller.
  • the control apparatus 300 can be arranged to provide control on communications in the service area of the system.
  • the control apparatus 300 comprises at least one memory 301, at least one data processing unit 302, 303 and an input/output interface 304. Via the interface the control apparatus can be coupled to a receiver and a transmitter of the base station.
  • the receiver and/or the transmitter may be implemented as a radio front end or a remote radio head.
  • gNB load prediction is an enabler for many other use cases which can benefit from accurate load predictions, including mobility optimization, load balancing and energy saving.
  • Load balancing is an example of a use case, where predicted load information may improve the performance by providing higher QoS and enhanced system performance.
  • An example high-level signalling flow for the AI/ML use case related to Load Balancing is shown in Figure 4.
  • an AI/ML Model Training is located at NG-RAN node 1.
  • NG-RAN node 2 is assumed to have capabilities in providing NG-RAN node 1 with useful input information, including predicted resource status and/or mobility predictions.
  • NG-RAN node 1 configures and obtains UE measurements and location information (e.g., RRM measurements, MDT measurements, velocity, position).
  • in step 4, NG-RAN node 1 receives from the neighbouring NG-RAN node 2 input information for load balancing model training.
  • in step 5, Load Balancing model training takes place at NG-RAN node 1.
  • NG-RAN node 1 receives input from the UE which will be used in the inference phase, such as UE measurements and location information.
  • NG-RAN node 1 receives from neighbouring NG-RAN node 2 input data for load balancing inference.
  • NG-RAN node 1 performs Mobility Load Balancing predictions (e.g. for cells of NG-RAN node 1).
  • NG-RAN node 1 takes a Mobility Load Balancing decision/action, based on the Mobility Load Balancing Predictions, and UEs are moved from NG-RAN node 1 to NG-RAN node 2.
  • NG-RAN node 2 sends Feedback information to NG-RAN node 1 (e.g. resource status updates after load balancing). Feedback information may be signalled after receiving a Feedback Request.
  • a gNB may request mobility feedback from a neighbouring node. Predicted resource status and performance information may be provided for a candidate target NG-RAN node to a source NG-RAN node.
  • a resource status reporting initiation procedure may be used by a NG-RAN node to request the reporting of load measurements to another NG-RAN node.
  • NG-RAN node1 initiates the procedure by sending the RESOURCE STATUS REQUEST message to NG-RAN node2 to start a measurement, stop a measurement or add cells to report for a measurement.
  • the Report Characteristics IE in the RESOURCE STATUS REQUEST indicates the type of objects NG-RAN node2 shall perform measurements on.
  • the Radio Resource Status IE may be included as part of the RESOURCE STATUS RESPONSE, if requested, indicating the usage of the PRBs per cell and per SSB area for all traffic in Downlink and Uplink, and the usage of PDCCH CCEs for Downlink and Uplink scheduling.
  • Figure 5 illustrates a concept using ML-based mobility predictions for an optimal target and time to trigger a handover based on UE radio measurements (RSRP, RSRQ, SINR) using Recurrent Neural Networks (RNNs).
  • Training frames with time series measurements and labels are used as input to train the ML Classification Model.
  • a new frame comprising time series measurements is input into the model with predicted probabilities of target cells as the output from the model.
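As an illustration of the framing described above, the following sketch arranges time-series radio measurements into fixed-length training frames with target-cell labels for a sequence classification model. The frame length, feature layout and cell identifiers are assumptions for illustration, not taken from the specification.

```python
import numpy as np

def build_frames(measurements, labels, frame_len=10):
    """Slice a measurement time series into overlapping training frames.

    measurements: array of shape (T, 3) with one (RSRP, RSRQ, SINR) row per step
    labels: array of shape (T,) with the target cell observed after each step
    Returns (frames, frame_labels) for supervised training.
    """
    frames, frame_labels = [], []
    for start in range(len(measurements) - frame_len):
        frames.append(measurements[start:start + frame_len])
        # label each frame with the cell the UE ended up in after the window
        frame_labels.append(labels[start + frame_len])
    return np.stack(frames), np.array(frame_labels)

# Example: 50 time steps of synthetic measurements, 3 candidate target cells
rng = np.random.default_rng(0)
meas = rng.normal(size=(50, 3))
cells = rng.integers(0, 3, size=50)
X, y = build_frames(meas, cells)
print(X.shape, y.shape)  # (40, 10, 3) (40,)
```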
  • Another concept involves predicting when and where to trigger Layer-3 handovers based on serving beam indexes reported by a UE. The results from studies of these concepts indicate the potential of machine learning based sequence prediction in optimizing mobility use cases.
  • the current concepts for gNB load prediction are performed locally in a gNB.
  • Load changes caused by UE mobility, especially inbound mobility, can be predicted statistically, but not accurately in short time frames per UE.
  • This so-called incident load, caused by inbound mobility, may have a significant impact on the load of the gNB.
  • the incident load that a UE causes to the radio interface depends on its traffic demand, type (GBR, non-GBR) and QCI, as well as the radio conditions once the UE is handed over to the target cell, which in turn depend on the UE trajectory and the radio environment conditions at the time.
  • the incident load also causes a load on the transport and backhaul interfaces (e.g., on the RAN-RAN and RAN-core network interfaces as well as on the software and hardware processing components of the network node).
  • One challenge in predicting the incident load before a handover takes place is that the data and knowledge to achieve a meaningful prediction is split between the source and the target nodes.
  • a UE is reporting its measurements to the source and the source is also aware of the current traffic throughput and may have learnt some traffic patterns for this UE.
  • the target node knows which load values, i.e., load values of which managed elements, are relevant for its operation and load predictions.
  • the source node forwards the UE access stratum context (containing the source RRC configuration of the UE, including the PDU sessions and DRBs allocated to the UE, and the UE's radio access capabilities) to the target node.
  • the target node performs admission control for this UE which is based on the PDU session, QoS requirement, load in target cell, UE measurements etc. to finally come up with a radio reconfiguration message (the handover command) that is returned to the source node to be provided to the UE.
  • the whole resource allocation process takes into account only the time instant the UE is admitted into the target node and does not include information on various aspects that may help the target node make better and/or more reliable admission control decisions, such as factoring in the immediate and short/medium-term aspect (e.g., in terms of 10-100’s of milliseconds worth of traffic) of admitting the PDU session(s) of the UE.
  • Figure 6 illustrates a method in a conditional Handover (CHO) where a source node may provide information to a target node necessary to predict the incident load. Such information may include the predicted trajectory and the predicted traffic demand of the UE. Using these inputs, the target node may estimate the additional incident load caused by the UE after its handover.
  • the source node provides the target nodes with a Resource Utilization Request Info element, which contains the trajectory and traffic demand predictions required to estimate the incident load.
  • the target nodes may use the predicted incident load in their admission control and may also provide it in a Resource Utilization Response in the CHO Request Acknowledge back to the source node.
  • the source node may also later trigger an update procedure using Resource Utilization Request Info and Resource Utilization Request elements in the Mobility Update Request and Response messages to get updated incident load estimates from the target nodes.
  • to obtain the predicted incident loads, the source node needs to request them from the target node.
  • the source node will have the predicted incident load only after a CHO request has been sent.
  • the source node cannot utilize the predictions in choosing which target candidates to prepare.
  • if the source node wants to get an update on the predicted incident loads, it needs to request them from the target nodes and send the required request information in a Resource Utilization Request Info, which adds signalling.
  • the method of Figure 6 involves chaining both the trajectory and the traffic demand predictions to the model to determine the induced incident load.
  • the challenge in chaining of ML models is that the inaccuracy of each of the models is accumulated, and inaccuracy in chained ML model inputs may have significant impact on the model performance. Often it is better to build one prediction model from the observed, measured input values to the predicted KPI, which is also trained as one entity, rather than chain several independently trained models.
  • a method to predict the incident load directly at the source node is desirable.
  • Figure 7 shows a flowchart of a method according to an example embodiment.
  • the method comprises receiving measurement values from a plurality of user equipments at a source node.
  • the method comprises providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load.
  • the method comprises determining to perform a handover for a subsequent user equipment to a subsequent target node.
  • the method comprises determining a predicted incident load based on the received measurement values and measured incident loads.
  • the method comprises using the predicted incident load in mobility decisions.
  • the method comprises providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
  • Means for determining a predicted incident load may comprise at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
  • the machine learning model when executed, may be configured to determine the predicted incident load based on the identity of a target node.
  • the method may provide a learning method (e.g., based on supervised learning) to predict the incident load caused by a UE handover on the target node.
  • the incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
  • the given time interval may be a configured time interval, Tincident, after the UE has connected to the target cell and is transmitting and/or receiving data over the air interface.
  • the incident load may be defined as the absolute number of Physical Resource Blocks (PRBs) allocated to the handed-over UE in downlink (DL) and/or uplink (UL) in Tincident.
  • the incident load may be defined as the ratio of PRBs allocated to the handed-over UE compared to all PRBs being scheduled in Tincident in DL and/or UL.
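The two definitions above can be sketched as follows; the per-TTI record layout and function names are illustrative assumptions, not part of the specification.

```python
def incident_load_absolute(ue_prbs_per_tti):
    """Absolute number of PRBs allocated to the UE during Tincident."""
    return sum(ue_prbs_per_tti)

def incident_load_ratio(ue_prbs_per_tti, all_prbs_per_tti):
    """Ratio of PRBs allocated to the UE to all scheduled PRBs in Tincident."""
    total = sum(all_prbs_per_tti)
    return sum(ue_prbs_per_tti) / total if total else 0.0

ue = [4, 6, 0, 10]       # PRBs given to the handed-over UE per TTI
cell = [40, 50, 30, 40]  # all PRBs scheduled in the cell per TTI
print(incident_load_absolute(ue))     # 20
print(incident_load_ratio(ue, cell))  # 0.125
```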
  • the definition of incident load may be extended to cover other additional loads caused by a UE when it’s handed over to a cell.
  • the definition of the incident load may be for different contexts, i.e. the context attributes may include a combination of the managed element, where the load is aggregated: gNB (DU), cell, SSB area, beam, uplink / downlink, RAN slice, multi-connectivity and CA setups, transport connectivity (e.g. IP throughput to external nodes) and software/hardware resources (i.e. memory and CPU).
  • the incident load may be affected by network congestion, in which case the UE might not get the throughput it is requesting and therefore will also create lower incident load than the requested traffic would have done.
  • it may be more relevant to be able to predict the load that will satisfy a UE’s requirements. Therefore the following focuses on predicting the incident load of the requested traffic. This should be also considered when collecting the training data.
  • target nodes measure the incident load.
  • the training for the machine learning model may be performed at a training host located at the RAN (e.g., at a source node) or at a host node external to the RAN (e.g., OAM).
  • the method may comprise providing the received measurement values to a host node for use in training the machine learning model.
  • the method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
  • the method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
  • the method may comprise receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
  • a target gNB may also report its (average) Composite Available Capacity (CAC) at the time of the measurement of the incident load, which may be used either to filter out the measurements affected by congestion or to train the impact of target node congestion into the prediction model.
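The CAC-based filtering option mentioned above might look like the following sketch; the threshold value and the training-sample record layout are assumptions for illustration.

```python
# Below a given available-capacity threshold, assume congestion may have
# capped the UE's achievable load, so drop the sample before training.
CAC_THRESHOLD = 20  # percent of capacity still available (assumed value)

def filter_congested(samples, threshold=CAC_THRESHOLD):
    """Keep only samples whose CAC indicates the target was not congested."""
    return [s for s in samples if s["cac"] >= threshold]

samples = [
    {"features": [...], "incident_load": 0.12, "cac": 55},
    {"features": [...], "incident_load": 0.03, "cac": 5},   # congested: dropped
    {"features": [...], "incident_load": 0.08, "cac": 30},
]
print(len(filter_congested(samples)))  # 2
```

The alternative mentioned in the text, training the congestion impact into the model, would instead keep all samples and add the CAC as an input feature.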
  • the measurement values may be L1 beam measurements or L3 cell quality measurements.
  • the measurement values may comprise a time series of beam indexes of the past serving beams, a time series of indexes of the past serving cells, a time series of UE radio measurements (e.g., RSRP, RSRQ, SINR), a time series of UE location (in coordinates), a list of carriers a UE is having and their priority, e.g. their QoS Class Identifiers (QCIs), a time series of the past traffic demand for each carrier and (Slice specific) traffic demand for each PDU session (and corresponding DRBs).
  • the incident load measurements may be used to train a ML model to predict the incident load from UE measurements and KPIs reported to the source node.
  • the input features for the prediction model depend on the implementation and on the managed object for which the load is predicted, and may include, for example, source and target node IDs (if the same ML model instance covers several source and/or target nodes) and/or UE measurement values.
  • a model may cover several potential mobility source and target nodes, in which case an identifier of those nodes may be provided as an input to the model.
  • the identifier of the nodes may be the physical cell ID (PCI) or NR-CGI (Cell Global ID). That is, the method may comprise determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
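A possible way to combine the node identifiers with the measurement time series into a single model input is sketched below; the field names, encodings and dimensions are assumptions, not part of the specification.

```python
import numpy as np

def build_model_input(sample):
    """Concatenate static node identifiers with the measurement time series."""
    # Static context: identifiers of source/target nodes (e.g. PCI or NR-CGI),
    # repeated so they can be fed alongside each time step of the sequence.
    static = np.array([sample["source_pci"], sample["target_pci"]], dtype=float)
    # Time series: per-step serving beam index, RSRP, RSRQ, SINR, traffic demand
    series = np.column_stack([
        sample["beam_idx"], sample["rsrp"], sample["rsrq"],
        sample["sinr"], sample["demand"],
    ]).astype(float)
    tiled = np.tile(static, (series.shape[0], 1))
    return np.hstack([tiled, series])  # shape: (steps, 2 + 5)

sample = {
    "source_pci": 101, "target_pci": 202,
    "beam_idx": [3, 3, 4, 4], "rsrp": [-92, -90, -88, -87],
    "rsrq": [-12, -11, -11, -10], "sinr": [8, 9, 11, 12],
    "demand": [1.2, 1.1, 1.4, 1.3],
}
print(build_model_input(sample).shape)  # (4, 7)
```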
  • the incident load may be predicted directly from the reported measurements, without the need for intermediate predictions for the UE trajectory and traffic demand.
  • Figure 8 shows a signalling flow according to an example embodiment.
  • the source gNB configures a UE to take measurements for training data collection.
  • UE reports the measurements, e.g. L1 beam measurements or L3 cell quality measurements.
  • L1 beam measurements including neighboring cells will be reported to gNB-DU and L3 cell quality measurements will be reported to gNB-CU.
  • in step 3, measurements are stored by the source gNB, associated with the UE context.
  • a handover is prepared to a neighboring cell.
  • the source gNB may request the target gNB to measure the incident load caused by the UE being handed over.
  • in step 5, the handover is executed.
  • in step 6, the target gNB measures the incident load over Tincident as requested by the source gNB.
  • the target gNB sends feedback information to the source by reporting the measured incident load back to the source gNB.
  • the target gNB may also report its (average) Composite Available Capacity (CAC) at the time of the measurement, which may be used either to filter out the measurements affected by congestion or to train the impact of target node congestion into the prediction model.
  • in step 8, the source gNB stores and attaches the measured load to the collected input data.
  • the Training Host trains a ML model with the provided training dataset in step 10.
  • the trained model is deployed back to the source gNB in step 11.
  • the training host is external to the source gNB.
  • the source gNB may comprise the training host
  • the source gNB configures another UE with events triggering data collection for the inference load prediction. These may be mobility events, e.g., A3 event.
  • the UE reports the configured measurements.
  • in step 14, the source gNB uses the reported measurements and the trained ML model to predict the incident load if the UE is handed over to the target gNB.
  • the source gNB may utilize the predicted incident load, e.g. to optimize load balancing in its mobility decisions.
  • in step 16, a handover to the target gNB is requested.
  • the source gNB includes the predicted incident load.
  • the target gNB may use the predicted incident load e.g. in its admission control.
  • Figure 9 shows a signalling flow according to an alternative example embodiment.
  • the target gNB reports the measured incident load directly to the training host.
  • the target gNB reports the measured incident load and, optionally, its average CAC, directly to the Training Host in step 7.
  • the source gNB reports only the collected input feature data to the Training Host in step 8.
  • the Training Host correlates the input data with the corresponding incident load measurement(s).
  • the correlation may require the generation of a “common token” to identify a particular UE and a handover of that UE.
  • Figure 10 shows an example machine learning model.
  • the incident load is predicted based on a time series of the past serving beam indexes, traffic QoS Class Identifiers (QCIs) and traffic demand (for all carriers).
  • the ID of the target node, for which the incident load is predicted, is also given as an input.
  • the model uses a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), which is well suited to predicting from sequences of data.
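The LSTM-based model shape can be illustrated with a minimal, untrained forward pass. The gate equations below are the standard LSTM formulation; the dimensions, random weights and the linear output head are purely illustrative assumptions, and a real model would be trained with a deep learning framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_state(x, W, U, b, hidden=8):
    """Run a single-layer LSTM over x (steps, features); return the final h."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x_t in x:
        z = W @ x_t + U @ h + b      # all four gate pre-activations at once
        i, f, o, g = np.split(z, 4)  # input, forget, output gates and candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

rng = np.random.default_rng(42)
steps, feats, hidden = 6, 5, 8
x = rng.normal(size=(steps, feats))  # e.g. beam index, QCI, traffic demand...
W = rng.normal(size=(4 * hidden, feats)) * 0.1
U = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h_final = lstm_last_state(x, W, U, b)

# Append the encoded target node ID and apply a linear head for the load.
target_id = np.array([1.0])
head_w = rng.normal(size=(hidden + 1,)) * 0.1
predicted_incident_load = float(head_w @ np.concatenate([h_final, target_id]))
print(h_final.shape)  # (8,)
```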
  • Figure 11 shows a signalling diagram for training an incident load prediction model according to an example embodiment.
  • AMF provides the mobility configuration information, which may include configuration to collect training data for incident load prediction.
  • the source gNB configures a UE with an event to start the measurements for the incident load prediction. This may be an existing event or a new one.
  • the UE reports the configured measurements, which may be from the input feature list above.
  • the source gNB records the collected data and keeps track of it with the UE context to identify the UE.
  • the source gNB decides a handover.
  • the source gNB sends a Handover Request to the target gNB.
  • This request may contain configuration for measuring the incident load.
  • the incident load measurement is started.
  • the target gNB measures the incident load caused by the UE over the configured period Tincident.
  • the target gNB reports the measured incident load back to the source gNB. This could be done, for example, in an existing message such as the UE Context Release message (3GPP TS 38.300 Figure 9.2.3.2.1-1) or in a newly defined message. Reporting the measurement may delay the UE Context Release message.
  • the target gNB may also report its (average) Composite Available Capacity (CAC) at the time of the measurement, which may be used either to filter out the measurements affected by congestion or to train the impact of target node congestion into the prediction model.
  • the source gNB stores the measured incident load together with the input data collected from the UE before its handover.
  • the steps are repeated a pre-configured number N of times to collect a sufficient amount of training data.
  • the source gNB reports the collected training data to the Training Host.
  • the Training Host trains an incident load prediction model with the provided data.
  • the Training Host deploys the trained incident load prediction model to the source gNB.
  • Figure 12 shows a signalling flow for training an incident load prediction model for an example embodiment where the required data, both the input data and the measured incident load, is collected directly at the Training Host. It is otherwise the same as the method presented in Figure 11, except that both the target and source gNBs report the measured incident load and the input to the prediction, respectively, directly to the Training Host (Steps 15 and 17).
  • the duration of the target node incident load measurement may be configured by the source node and the timing of sending measurements from source node to the training host can align with the duration of incident load measurement at the target node.
  • the Training Host correlates the two data sources in step 18 and uses the dataset for training the model.
  • the training host maps the incident load to the input data based on a “common token” that identifies these as coming from the same UE, created in step 2.
  • the “common token” is unique to a given UE but does not reveal its identity. The token does not need to be retained at the training host once the data has been consumed to train the model.
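One possible realisation of such a token, and of the correlation step at the Training Host, is sketched below; the random-nonce-plus-hash construction and the report layouts are assumptions for illustration.

```python
import hashlib
import secrets

def make_common_token():
    """An unguessable, one-time token shared by source and target reports.

    Being derived from a random nonce, it identifies one UE handover without
    revealing the UE identity, and can be discarded after training.
    """
    return hashlib.sha256(secrets.token_bytes(32)).hexdigest()

def correlate(input_reports, load_reports):
    """Join the two report streams on the common token at the Training Host."""
    loads = {r["token"]: r["incident_load"] for r in load_reports}
    return [
        {"features": r["features"], "incident_load": loads[r["token"]]}
        for r in input_reports if r["token"] in loads
    ]

token = make_common_token()
inputs = [{"token": token, "features": [0.1, 0.2]}]   # from the source gNB
loads = [{"token": token, "incident_load": 0.15}]     # from the target gNB
dataset = correlate(inputs, loads)
print(dataset[0]["incident_load"])  # 0.15
```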
  • all training data collection is consolidated in the source gNB, which may avoid the complexity of more distributed data collection and the need for the Training Host to be able to correlate which reported measured loads correspond to which input sequences. Additionally, in this embodiment, the target gNB is not required to integrate with the Training Host, which may be simpler in certain scenarios.
  • the embodiment shown in Figure 12 allows for longer load measurement periods after the handover.
  • the measured load needs to be reported in the UE Context Release message to be able to correlate it to the input data based on the UE context, before it is released.
  • such restrictions are not needed, which allows measuring incident load of UEs with bursty traffic, which may not be sending or receiving right after the handover.
  • the training host may, for example, be either the source gNB or in the OAM. Communication between the source and target gNB happens over the Xn interface.
  • Figure 13 shows a signalling flow for inference with the incident load prediction model for a baseline HO.
  • the AMF may additionally configure activation of the incident load prediction and the required events to trigger the data collection at the UE.
  • a UE reports its measurements and KPIs.
  • An event may trigger the incident load prediction additional data collection. This may be an existing event, such as A3, or a new one may be defined.
  • the source gNB collects the necessary input data for the prediction and may use the incident load prediction model to predict the load if the UE is handed over to a cell in the target gNB.
  • the inference may be triggered by mobility events.
  • the source gNB may use the predicted incident load in its handover decision, e.g. for load balancing or optimizing the network quality.
  • the source gNB sends a Handover Request to the target gNB.
  • the handover preparation message may additionally include the predicted incident load.
  • the target gNB may use the predicted incident load e.g. in its admission control procedure or predicting its future load.
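As a sketch of how a target node might use the predicted incident load in its admission control, the following shows a simple headroom check; the threshold logic, the margin value and the function name are assumptions, not part of the specification.

```python
def admit(current_load, predicted_incident_load, capacity=1.0, margin=0.1):
    """Admit the UE only if the predicted post-handover load leaves headroom."""
    return current_load + predicted_incident_load <= capacity - margin

print(admit(0.70, 0.10))  # True  (approx. 0.80 <= 0.90)
print(admit(0.85, 0.10))  # False (0.95 > 0.90)
```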
  • Figure 14 shows the method of inference in CHO preparation.
  • the incident load prediction is done as above, and the predicted load is given to both target gNB candidates in CHO preparation.
  • the target gNBs may consider the predicted load e.g. in their admission control. Additionally, the source gNB may consider the predicted load in its decision of which target candidates to prepare.
  • a separate ML model instance may be trained.
  • the method may provide an ability to predict the UE incident loads on target (candidate) nodes at the source node, where it is needed for e.g., load balancing or choosing CHO preparations. Learning to predict incident loads end-to-end from UE measurements, adapted to the particular context (e.g., a particular cell border), may reach a higher accuracy, since predictions are not chained. Further, less signalling may be required between a source and target node.
  • An apparatus may comprise means for receiving measurement values from a plurality of user equipments at a source node; providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load; determining to perform a handover for a subsequent user equipment to a subsequent target node; determining a predicted incident load based on the received measurement values and measured incident loads; using the predicted incident load in mobility decisions; and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
  • an apparatus may comprise means for receiving, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load, in which case an indication of measured incident load is provided to at least one of the source node and a host node for use in training a machine learning model, or the handover request includes an indication of a predicted incident load associated with the user equipment.
  • an apparatus may comprise means for receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
  • apparatuses may comprise or be coupled to other units or modules etc., such as radio parts or radio heads, used in or for transmission and/or reception.
  • apparatuses have been described as one entity, different modules and memory may be implemented in one or more physical or logical entities.
  • circuitry may refer to one or more or all of the following:
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • the embodiments of this disclosure may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
  • Computer software or program, also called a program product, including software routines, applets and/or macros, may be stored in any apparatus-readable data storage medium and comprises program instructions to perform particular tasks.
  • a computer program product may comprise one or more computer- executable components which, when the program is run, are configured to carry out embodiments.
  • the one or more computer-executable components may be at least one software code or portions of it.
  • any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
  • the physical media is a non-transitory media.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment and may comprise one or more of general-purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), FPGA, gate level circuits and processors based on multi core processor architecture, as non-limiting examples.
  • Embodiments of the disclosure may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process.
  • Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

Abstract

There is provided an apparatus, said apparatus comprising means for receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.

Description

APPARATUS, METHOD AND COMPUTER PROGRAM FOR LOAD PREDICTION
Field
The present application relates to a method, apparatus, system and computer program and in particular but not exclusively to machine learning driven incident load prediction prior to a mobility event.
Background
A communication system can be seen as a facility that enables communication sessions between two or more entities such as user terminals, base stations and/or other nodes by providing carriers between the various entities involved in the communications path. A communication system can be provided for example by means of a communication network and one or more compatible communication devices. The communication sessions may comprise, for example, communication of data for carrying communications such as voice, video, electronic mail (email), text message, multimedia and/or content data and so on. Nonlimiting examples of services provided comprise two-way or multi-way calls, data communication or multimedia services and access to a data network system, such as the Internet.
In a wireless communication system at least a part of a communication session between at least two stations occurs over a wireless link. Examples of wireless systems comprise public land mobile networks (PLMN), satellite based communication systems and different wireless local networks, for example wireless local area networks (WLAN). Some wireless systems can be divided into cells and are therefore often referred to as cellular systems.
A user can access the communication system by means of an appropriate communication device or terminal. A communication device of a user may be referred to as user equipment (UE) or user device. A communication device is provided with an appropriate signal receiving and transmitting apparatus for enabling communications, for example enabling access to a communication network or communications directly with other users. The communication device may access a carrier provided by a station, for example a base station of a cell, and transmit and/or receive communications on the carrier. The communication system and associated devices typically operate in accordance with a given standard or specification which sets out what the various entities associated with the system are permitted to do and how that should be achieved. Communication protocols and/or parameters which shall be used for the connection are also typically defined. One example of a communications system is UTRAN (3G radio). Other examples of communication systems are the long-term evolution (LTE) of the Universal Mobile Telecommunications System (UMTS) radio-access technology and so-called 5G or New Radio (NR) networks. NR is being standardized by the 3rd Generation Partnership Project (3GPP).
Summary
In a first aspect there is provided an apparatus comprising means for receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
The means for determining a predicted incident load may comprise at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
The machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
The apparatus may comprise means for providing the received measurement values to a host node for use in training the machine learning model.
The apparatus may comprise means for receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
The apparatus may comprise means for receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes. The incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
The apparatus may comprise means for receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
The apparatus may comprise means for determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
In a second aspect, there is provided an apparatus comprising means for receiving, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load and if so, providing an indication of measured incident load to at least one of the source node and a host node for use in training a machine learning model or the handover request includes an indication of a predicted incident load associated with the user equipment.
In a third aspect, there is provided an apparatus comprising means for receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
In a fourth aspect there is provided a method comprising receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment. Determining a predicted incident load may comprise using at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
The machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
The method may comprise providing the received measurement values to a host node for use in training the machine learning model.
The method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
The method may comprise receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
The incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
The method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
The method may comprise determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
In a fifth aspect there is provided a method comprising receiving, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load and if so, providing an indication of measured incident load to at least one of the source node and a host node for use in training a machine learning model or the handover request includes an indication of a predicted incident load associated with the user equipment.
In a sixth aspect there is provided a method comprising receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
In a seventh aspect there is provided an apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to receive measurement values from a plurality of user equipments at a source node, provide, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determine to perform a handover for a subsequent user equipment to a subsequent target node, determine a predicted incident load based on the received measurement values and measured incident loads, use the predicted incident load in mobility decisions and provide the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
Determining a predicted incident load may comprise using at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
The machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
The apparatus may be configured to provide the received measurement values to a host node for use in training the machine learning model.
The apparatus may be configured to receive an indication of the measured incident load from each of the plurality of target nodes and provide an indication of the measured incident load to the host node for use in training the machine learning model.
The apparatus may be configured to receive an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
The incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
The apparatus may be configured to receive an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
The apparatus may be configured to determine the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
In an eighth aspect there is provided an apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to receive, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load and if so, provide an indication of measured incident load to at least one of the source node and a host node for use in training a machine learning model or the handover request includes an indication of a predicted incident load associated with the user equipment.
In a ninth aspect there is provided an apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to receive, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and use the at least one indication and measurement values in training a machine learning model which, when executed is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
In a tenth aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
Determining a predicted incident load may comprise using at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
The machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
The apparatus may be caused to perform providing the received measurement values to a host node for use in training the machine learning model.
The apparatus may be caused to perform receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
The apparatus may be caused to perform receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
The incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
The apparatus may be caused to perform receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
The apparatus may be caused to perform determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
In an eleventh aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load and if so, providing an indication of measured incident load to at least one of the source node and a host node for use in training a machine learning model or the handover request includes an indication of a predicted incident load associated with the user equipment.
In a twelfth aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
In a thirteenth aspect there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the method according to the fourth, fifth or sixth aspect.
In the above, many different embodiments have been described. It should be appreciated that further embodiments may be provided by the combination of any two or more of the embodiments described above.
Description of Figures
Embodiments will now be described, by way of example only, with reference to the accompanying Figures in which:
Figure 1 shows a schematic diagram of an example 5GS communication system;
Figure 2 shows a schematic diagram of an example mobile communication device;
Figure 3 shows a schematic diagram of an example control apparatus;
Figure 4 shows a high level signalling flow for a load balancing use case;
Figure 5 shows a schematic diagram of a machine learning based predictive mobility concept;
Figure 6 shows a signalling flow for resource status prediction assisted mobility;
Figure 7 shows a flowchart of a method according to an example embodiment;
Figure 8 shows a high level signalling flow according to an example embodiment;
Figure 9 shows a high level signalling flow according to an example embodiment;
Figure 10 shows a schematic diagram of a machine learning model according to an example embodiment;
Figure 11 shows a signalling flow for training of a machine learning model according to an example embodiment;
Figure 12 shows a signalling flow for training of a machine learning model according to an example embodiment;
Figure 13 shows a signalling flow for incident load prediction for a baseline handover according to an example embodiment;
Figure 14 shows a signalling flow for incident load prediction for a conditional handover according to an example embodiment.
Detailed description
Before explaining in detail the examples, certain general principles of a wireless communication system and mobile communication devices are briefly explained with reference to Figures 1 to 3 to assist in understanding the technology underlying the described examples.
An example of a suitable communications system is the 5G System (5GS). Network architecture in 5GS may be similar to that of LTE-advanced. Base stations of NR systems may be known as next generation Node Bs (gNBs). Changes to the network architecture may depend on the need to support various radio technologies and finer QoS support, and some on-demand requirements, for example QoS levels to support QoE from the user's point of view. Also network aware services and applications, and service and application aware networks, may bring changes to the architecture. Those are related to Information Centric Network (ICN) and User-Centric Content Delivery Network (UC-CDN) approaches. NR may use multiple input - multiple output (MIMO) antennas, many more base stations or nodes than LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and perhaps also employing a variety of radio technologies for better coverage and enhanced data rates.
5G networks may utilise network functions virtualization (NFV) which is a network architecture concept that proposes virtualizing network node functions into “building blocks” or entities that may be operationally connected or linked together to provide services. A virtualized network function (VNF) may comprise one or more virtual machines running computer program codes using standard or general type servers instead of customized hardware. Cloud computing or data storage may also be utilized. In radio communications this may mean node operations to be carried out, at least partly, in a server, host or node operationally coupled to a remote radio head. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of the LTE or even be non-existent.
Figure 1 shows a schematic representation of a 5G system (5GS) 100. The 5GS may comprise a user equipment (UE) 102 (which may also be referred to as a communication device or a terminal), a 5G radio access network (5GRAN) 104, a 5G core network (5GCN) 106, one or more application functions (AF) 108 and one or more data networks (DN) 110.
An example 5G core network (CN) comprises functional entities. The 5GCN 106 may comprise one or more access and mobility management functions (AMF) 112, one or more session management functions (SMF) 114, an authentication server function (AUSF) 116, a unified data management (UDM) 118, one or more user plane functions (UPF) 120, a unified data repository (UDR) 122 and/or a network exposure function (NEF) 124. The UPF is controlled by the SMF, which receives policies from a policy control function (PCF).
The CN is connected to a terminal device via the radio access network (RAN). The 5GRAN may comprise one or more gNodeB (GNB) distributed unit functions connected to one or more gNodeB (GNB) centralized unit functions. The RAN may comprise one or more access nodes.
A UPF (User Plane Function) whose role is called PSA (Protocol Data Unit (PDU) Session Anchor) may be responsible for forwarding frames back and forth between the DN (data network) and the tunnels established over the 5G towards the UE(s) exchanging traffic with the DN.
A possible mobile communication device will now be described in more detail with reference to Figure 2 showing a schematic, partially sectioned view of a communication device 200. Such a communication device is often referred to as user equipment (UE) or terminal. An appropriate mobile communication device may be provided by any device capable of sending and receiving radio signals. Non-limiting examples comprise a mobile station (MS) or mobile device such as a mobile phone or what is known as a 'smart phone', a computer provided with a wireless interface card or other wireless interface facility (e.g., USB dongle), personal data assistant (PDA) or a tablet provided with wireless communication capabilities, voice over IP (VoIP) phones, portable computers, desktop computers, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart devices, wireless customer-premises equipment (CPE), or any combinations of these or the like. A mobile communication device may provide, for example, communication of data for carrying communications such as voice, electronic mail (email), text message, multimedia and so on. Users may thus be offered and provided numerous services via their communication devices. Non-limiting examples of these services comprise two-way or multi-way calls, data communication or multimedia services or simply an access to a data communications network system, such as the Internet. Users may also be provided broadcast or multicast data. Non-limiting examples of the content comprise downloads, television and radio programs, videos, advertisements, various alerts and other information.
A mobile device is typically provided with at least one data processing entity 201 , at least one memory 202 and other possible components 203 for use in software and hardware aided execution of tasks it is designed to perform, including control of access to and communications with access systems and other communication devices. The data processing, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. This feature is denoted by reference 204. The user may control the operation of the mobile device by means of a suitable user interface such as key pad 205, voice commands, touch sensitive screen or pad, combinations thereof or the like. A display 208, a speaker and a microphone can be also provided. Furthermore, a mobile communication device may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories, for example hands-free equipment, thereto. The mobile device 200 may receive signals over an air or radio interface 207 via appropriate apparatus for receiving and may transmit signals via appropriate apparatus for transmitting radio signals. In Figure 2 transceiver apparatus is designated schematically by block 206. The transceiver apparatus 206 may be provided for example by means of a radio part and associated antenna arrangement. The antenna arrangement may be arranged internally or externally to the mobile device.
Figure 3 shows an example of a control apparatus 300 for a communication system, for example to be coupled to and/or for controlling a station of an access system, such as a RAN node, e.g. a base station, eNB or gNB, a relay node or a core network node such as an MME or S-GW or P-GW, or a core network function such as AMF/SMF, or a server or host. The method may be implemented in a single control apparatus or across more than one control apparatus. The control apparatus may be integrated with or external to a node or module of a core network or RAN. In some embodiments, base stations comprise a separate control apparatus unit or module. In other embodiments, the control apparatus can be another network element such as a radio network controller or a spectrum controller. In some embodiments, each base station may have such a control apparatus as well as a control apparatus being provided in a radio network controller. The control apparatus 300 can be arranged to provide control on communications in the service area of the system. The control apparatus 300 comprises at least one memory 301 , at least one data processing unit 302, 303 and an input/output interface 304. Via the interface the control apparatus can be coupled to a receiver and a transmitter of the base station. The receiver and/or the transmitter may be implemented as a radio front end or a remote radio head.
The following relates to the enablement of AI in RAN and the functional framework including the AI functionality and the inputs and outputs needed by an ML algorithm, as well as the standardization impacts at a node in the existing architecture or in the network interfaces to transfer this input/output data through them. gNB load prediction is an enabler for many other use cases which can benefit from accurate load predictions, including mobility optimization, load balancing and energy saving.
Load balancing is an example of a use case where predicted load information may improve the performance by providing higher QoS and enhanced system performance. An example high-level signalling flow for the AI/ML use case related to load balancing is shown in Figure 4. In step 0 of Figure 4, an AI/ML Model Training function is located at NG-RAN node 1. NG-RAN node 2 is assumed to have capabilities in providing NG-RAN node 1 with useful input information, including predicted resource status and/or mobility predictions.
In steps 1-3, NG-RAN node 1 configures and obtains UE measurements and location information (e.g., RRM measurements, MDT measurements, velocity, position).
In step 4, the NG-RAN node 1 receives from the neighbouring NG-RAN node 2 input information for load balancing model training.
In step 5, Load Balancing model training takes place at NG-RAN node 1.
In step 6, NG-RAN node 1 receives input from the UE which will be used in the inference phase, such as UE measurements and location information.
In step 7, NG-RAN node 1 receives from neighbouring NG-RAN node 2 input data for load balancing inference.
In step 8, NG-RAN node 1 performs Mobility Load Balancing predictions (e.g. for cells of NG-RAN node 1).
In step 9, NG-RAN node 1 takes a Mobility Load Balancing decision/action, based on the Mobility Load Balancing Predictions, and UEs are moved from NG-RAN node 1 to NG-RAN node 2.
In step 10, NG-RAN node 2 sends Feedback information to NG-RAN node 1 (e.g. resource status updates after load balancing). Feedback information may be signalled after receiving a Feedback Request.
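The step sequence above can be sketched as a minimal exchange between two node objects. This is an illustrative Python sketch only; the class and method names are assumptions for exposition and do not correspond to any 3GPP-defined API, and the "model" is a trivial stand-in for a trained AI/ML model.

```python
# Illustrative sketch of the load-balancing flow of Figure 4.
# Class/method names and the threshold are assumptions, not 3GPP APIs.

class NgRanNode:
    def __init__(self, name):
        self.name = name
        self.training_data = []
        self.model = None

    def collect_ue_measurements(self, ue_reports):      # steps 1-3
        self.training_data.extend(ue_reports)

    def receive_training_input(self, neighbour_input):  # step 4
        self.training_data.append(neighbour_input)

    def train_model(self):                              # step 5
        # Placeholder: a real node would fit an AI/ML model here.
        self.model = lambda load: min(1.0, load)

    def predict_load(self, current_load):               # steps 6-8
        return self.model(current_load)

    def decide_load_balancing(self, predicted_load, threshold=0.8):  # step 9
        # True means: offload UEs towards the neighbouring node.
        return predicted_load > threshold


node1 = NgRanNode("NG-RAN node 1")
node2 = NgRanNode("NG-RAN node 2")
node1.collect_ue_measurements([{"rsrp": -95, "velocity": 3.0}])
node1.receive_training_input({"predicted_resource_status": 0.6})
node1.train_model()
decision = node1.decide_load_balancing(node1.predict_load(0.9))
print(decision)  # True: move UEs from node 1 to node 2
```

Step 10 (feedback) would then flow back from `node2` to `node1`, e.g. as resource status updates after the UEs have been moved.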
To improve mobility decisions, a gNB may request mobility feedback from a neighbouring node. Predicted resource status and performance information may be provided for a candidate target NG-RAN node to a source NG-RAN node.
A resource status reporting initiation procedure may be used by a NG-RAN node to request the reporting of load measurements to another NG-RAN node. For example, NG-RAN node 1 initiates the procedure by sending the RESOURCE STATUS REQUEST message to NG-RAN node 2 to start a measurement, stop a measurement or add cells to report for a measurement. When starting a measurement, the Report Characteristics IE in the RESOURCE STATUS REQUEST indicates the type of objects NG-RAN node 2 shall perform measurements on. The Radio Resource Status IE may be included as part of the RESOURCE STATUS RESPONSE if requested; it indicates the usage of the PRBs per cell and per SSB area for all traffic in downlink and uplink and the usage of PDCCH CCEs for downlink and uplink scheduling.
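The request/response pair described above can be modelled roughly as follows. This is a sketch only: the real messages are XnAP ASN.1 structures, and the field names and example values here are simplifications introduced for illustration.

```python
from dataclasses import dataclass, field

# Simplified stand-ins for the XnAP resource status reporting messages.
# Field names are illustrative, not the actual ASN.1 IE names.

@dataclass
class ResourceStatusRequest:
    action: str                              # "start", "stop" or "add"
    report_characteristics: set = field(default_factory=set)
    cells_to_report: list = field(default_factory=list)

@dataclass
class RadioResourceStatus:
    dl_prb_usage: float                      # PRB usage ratio, downlink
    ul_prb_usage: float                      # PRB usage ratio, uplink
    dl_pdcch_cce_usage: float                # CCE usage, DL scheduling
    ul_pdcch_cce_usage: float                # CCE usage, UL scheduling

def handle_request(req: ResourceStatusRequest) -> dict:
    """NG-RAN node 2 answers a 'start' request with per-cell status."""
    if req.action != "start":
        return {}
    return {
        cell: RadioResourceStatus(0.55, 0.30, 0.40, 0.25)  # measured values
        for cell in req.cells_to_report
        if "radio_resource_status" in req.report_characteristics
    }

req = ResourceStatusRequest("start", {"radio_resource_status"}, ["cell-1"])
resp = handle_request(req)
print(resp["cell-1"].dl_prb_usage)  # 0.55
```

The Report Characteristics set plays the role of the IE of the same name: only the requested measurement objects are included in the response.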
In Machine Learning (ML) based predictive mobility concepts, a machine learning model is trained to make mobility decisions based on a series of UE measurements over time.
Figure 5 illustrates a concept using ML-based mobility predictions for an optimal target and time to trigger a handover based on UE radio measurements (RSRP, RSRQ, SINR) using Recurrent Neural Networks (RNNs). Training frames with time series measurements and labels are used as input to train the ML classification model. A new frame comprising time series measurements is input into the model, with predicted probabilities of target cells as the output from the model. Another concept involves predicting when and where to trigger Layer-3 handovers based on serving beam indexes reported by a UE. The results from studies of these concepts indicate the potential of machine learning based sequence prediction in optimizing mobility use cases.
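The framing of time-series measurements described above can be illustrated with a simple sliding-window builder. This is an assumption-laden sketch: the window length, measurement tuples and labels are invented for exposition, and in practice an RNN (e.g. an LSTM) would consume these frames rather than the plain lists shown here.

```python
def build_training_frames(measurements, labels, window=4):
    """Slice a per-UE time series of (RSRP, RSRQ, SINR) tuples into
    fixed-length frames, each labelled with the handover target that
    was eventually chosen at the end of the window."""
    frames = []
    for i in range(len(measurements) - window + 1):
        frames.append((measurements[i:i + window], labels[i + window - 1]))
    return frames

# Toy series: one (RSRP, RSRQ, SINR) tuple per step; label = chosen target.
series = [(-90, -10, 12), (-92, -11, 10), (-95, -12, 8),
          (-97, -13, 6), (-99, -14, 4)]
targets = ["cell-A", "cell-A", "cell-A", "cell-B", "cell-B"]

frames = build_training_frames(series, targets, window=4)
print(len(frames))   # 2
print(frames[0][1])  # cell-B
```

Each (frame, label) pair is one training example for the classification model; at inference time a new unlabelled frame is fed in and the model outputs per-target-cell probabilities.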
The current concepts for gNB load prediction are performed locally in a gNB. Load changes caused by UE mobility, especially inbound mobility (e.g., UE handing over from other cells to the gNB), can be predicted statistically, but not accurately in short time frames per UE. This so-called incident load, caused by inbound mobility, may have a significant impact on the load of the gNB. The incident load that a UE causes to the radio interface depends on its traffic demand and type (GBR, non-GBR) and QCI as well as the radio conditions, once the UE is handed over to the target cell, which again depends on the UE trajectory and the radio environment conditions at the time. The incident load also causes a load on the transport and backhaul interfaces (e.g., on the RAN-RAN and RAN-core network interfaces as well as on the software and hardware processing components of the network node).
One challenge in predicting the incident load before a handover takes place is that the data and knowledge to achieve a meaningful prediction is split between the source and the target nodes. A UE is reporting its measurements to the source, and the source is also aware of the current traffic throughput and may have learnt some traffic patterns for this UE. The target node, however, knows which load values, i.e., load values of which managed elements, are relevant for its operation and load predictions. During a handover preparation procedure (assuming a Rel-17 network implementation) the source node forwards the UE access stratum context (containing the source RRC configuration of the UE including the PDU sessions and DRBs allocated to the UE and the UE's radio access capabilities) to the target node. The target node performs admission control for this UE which is based on the PDU session, QoS requirement, load in target cell, UE measurements etc. to finally come up with a radio reconfiguration message (the handover command) that is returned to the source node to be provided to the UE.
The whole resource allocation process takes into account only the time instant the UE is admitted into the target node and does not include information on various aspects that may help the target node make better and/or more reliable admission control decisions, such as factoring in the immediate and short/medium-term aspects (e.g., in terms of 10-100s of milliseconds' worth of traffic) of admitting the PDU session(s) of the UE.
Figure 6 illustrates a method in a conditional Handover (CHO) where a source node may provide information to a target node necessary to predict the incident load. Such information may include the predicted trajectory and the predicted traffic demand of the UE. Using these inputs, the target node may estimate the additional incident load caused by the UE after its handover.
As part of the CHO request message, the source node provides the target nodes with a Resource Utilization Request Info element, which contains the trajectory and traffic demand predictions required to estimate the incident load. The target nodes may use the predicted incident load in their admission control and may also provide it in a Resource Utilization Response in the CHO Request Acknowledge back to the source node.
The source node may also later trigger an update procedure using Resource Utilization Request Info and Resource Utilization Request elements in the Mobility Update Request and Response messages to get updated incident load estimates from the target nodes.
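The CHO exchange above could be modelled as follows. The element names mirror the description (Resource Utilization Request Info, Resource Utilization Response) but are not standardized information elements, and the incident load estimate is a deliberately trivial stub.

```python
# Illustrative CHO resource-utilization exchange (Figure 6 description).
# Names mirror the text but are not actual XnAP information elements.

def build_cho_request(ue_id, trajectory, traffic_demand):
    """Source node attaches prediction inputs to the CHO request."""
    return {"ue": ue_id,
            "resource_utilization_request_info": {
                "predicted_trajectory": trajectory,
                "predicted_traffic_demand": traffic_demand}}

def acknowledge(request, current_load):
    """Target node estimates the incident load from the provided inputs
    (here: a stub that simply adds the demand to its current load)."""
    demand = request["resource_utilization_request_info"][
        "predicted_traffic_demand"]
    return {"ue": request["ue"],
            "resource_utilization_response": {
                "predicted_incident_load": current_load + demand}}

req = build_cho_request("ue-7", ["cell-A", "cell-B"], 0.25)
ack = acknowledge(req, current_load=0.5)
print(ack["resource_utilization_response"]["predicted_incident_load"])  # 0.75
```

The later update procedure would repeat the same pattern via the Mobility Update Request and Response messages, with refreshed prediction inputs.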
In this approach, to get the incident load predictions, the source node needs to request them from the target node. The source node will have the predicted incident load only after a CHO request has been sent, so it cannot utilize the predictions in choosing which target candidates to prepare. Similarly, whenever the source node wants an update on the predicted incident loads, it needs to request them from the target nodes and send the required request information in a Resource Utilization Request Info element, which adds signalling. The method of Figure 6 involves chaining both the trajectory and the traffic demand predictions to the model that determines the induced incident load. The challenge in chaining ML models is that the inaccuracy of each of the models is accumulated, and inaccuracy in chained ML model inputs may have a significant impact on model performance. It is often better to build one prediction model from the observed, measured input values to the predicted KPI, trained as one entity, rather than to chain several independently trained models.
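By way of a non-limiting, purely hypothetical illustration of this accumulation effect, consider a toy simulation in which each chained stage is an unbiased predictor with independent error; the mean squared error of the chain grows roughly linearly with the number of stages:

```python
import random

random.seed(0)

def noisy(value, sigma):
    """An unbiased predictor with Gaussian error of standard deviation `sigma`."""
    return value + random.gauss(0.0, sigma)

def chained(true_value, sigma=1.0, stages=3):
    """Chained approach: e.g. trajectory -> traffic demand -> incident load.
    Each stage predicts from the previous stage's already-noisy output."""
    v = true_value
    for _ in range(stages):
        v = noisy(v, sigma)
    return v

def direct(true_value, sigma=1.0):
    """End-to-end approach: one model with the same per-model error."""
    return noisy(true_value, sigma)

n = 20000
mse_chained = sum((chained(10.0) - 10.0) ** 2 for _ in range(n)) / n
mse_direct = sum((direct(10.0) - 10.0) ** 2 for _ in range(n)) / n
# With independent stage errors, mse_chained is roughly `stages` times mse_direct.
```

The simulation is not from the embodiment; it only illustrates why a single end-to-end model, trained as one entity, can outperform a chain of independently trained models.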
A method to predict the incident load directly at the source node is desirable.
Figure 7 shows a flowchart of a method according to an example embodiment.
In S1, the method comprises receiving measurement values from a plurality of user equipments at a source node.
In S2, the method comprises providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load.
In S3, the method comprises determining to perform a handover for a subsequent user equipment to a subsequent target node.
In S4, the method comprises determining a predicted incident load based on the received measurement values and measured incident loads.
In S5, the method comprises using the predicted incident load in mobility decisions.
In S6, the method comprises providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
Means for determining a predicted incident load may comprise at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment. The machine learning model, when executed, may be configured to determine the predicted incident load based on the identity of a target node.
The method may provide a learning method (e.g., based on supervised learning) to predict the incident load caused by a UE handover on the target node. The incident load may comprise, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
The given time interval may be a configured time interval, Tincident, after the UE has connected to the target cell and is transmitting and/or receiving data over the air interface.
The incident load may be defined as the absolute number of Physical Resource Blocks (PRBs) allocated to the handed-over UE in downlink (DL) and/or uplink (UL) during Tincident.
Alternatively, or in addition, the incident load may be defined as the ratio of PRBs allocated to the handed-over UE to all PRBs being scheduled during Tincident in DL and/or UL.
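By way of a non-limiting sketch, the two definitions above can be computed from per-slot PRB counts over the window Tincident as follows (function and variable names are illustrative, not from the embodiment):

```python
def incident_load(ue_prbs, scheduled_prbs):
    """Compute both incident-load definitions over the window Tincident.

    ue_prbs: PRBs allocated to the handed-over UE per scheduling slot
    scheduled_prbs: all PRBs scheduled in the cell per slot over the same window
    Returns (absolute_load, load_ratio).
    """
    absolute = sum(ue_prbs)
    total = sum(scheduled_prbs)
    ratio = absolute / total if total else 0.0
    return absolute, ratio

# Example: over three slots the UE was allocated 10, 12 and 8 PRBs
# out of the 50, 60 and 40 PRBs scheduled in the cell.
absolute, ratio = incident_load([10, 12, 8], [50, 60, 40])
```

Here `absolute` is 30 PRBs and `ratio` is 0.2; either value (or both, per DL/UL) could serve as the measured incident load.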
The definition of incident load may be extended to cover other additional loads caused by a UE when it is handed over to a cell. The incident load may be defined for different contexts, i.e., the context attributes may include a combination of the managed element where the load is aggregated: gNB (DU), cell, SSB area, beam, uplink/downlink, RAN slice, multi-connectivity and CA setups, transport connectivity (e.g., IP throughput to external nodes) and software/hardware resources (i.e., memory and CPU).
The incident load may be affected by network congestion, in which case the UE might not get the throughput it is requesting and will therefore also create a lower incident load than the requested traffic would have done. For load balancing and other similar use cases, however, it may be more relevant to be able to predict the load that would satisfy a UE's requirements. Therefore, the following focuses on predicting the incident load of the requested traffic. This should also be considered when collecting the training data.
In order to learn to predict the load, rather than merely estimate it, the target nodes measure the actual incident load.
The training for the machine learning model may be performed at a training host located at the RAN (e.g., at a source node) or at a host node external to the RAN (e.g., OAM).
The method may comprise providing the received measurement values to a host node for use in training the machine learning model. The method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
Alternatively, or in addition, the method may comprise receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
The method may comprise receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes. For example, a target gNB may also report its (average) Composite Available Capacity (CAC) at the time of the measurement of the incident load, which may be used either to filter out the measurements affected by congestion or to train the impact of target node congestion into the prediction model.
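The first of these two options, filtering out congestion-affected samples before training, could look like the following non-limiting sketch (the threshold value, field names and CAC encoding as a percentage are illustrative assumptions):

```python
CONGESTION_CAC_THRESHOLD = 20  # hypothetical: percent of available capacity below
                               # which the target cell is treated as congested

def filter_congestion_affected(samples, threshold=CONGESTION_CAC_THRESHOLD):
    """Keep only training samples whose incident load was measured while the
    target reported enough Composite Available Capacity, i.e. samples where
    the UE's requested traffic was unlikely to have been throttled."""
    return [s for s in samples if s["cac"] >= threshold]

samples = [
    {"features": [0.3, 0.7], "incident_load": 30, "cac": 55},
    {"features": [0.2, 0.9], "incident_load": 12, "cac": 5},  # congested, dropped
]
training_set = filter_congestion_affected(samples)
```

Only the first sample survives the filter; the alternative option would instead keep all samples and feed the reported CAC to the model as an additional input feature.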
The measurement values may be L1 beam measurements or L3 cell quality measurements. The measurement values may comprise a time series of beam indexes of the past serving beams, a time series of indexes of the past serving cells, a time series of UE radio measurements (e.g., RSRP, RSRQ, SINR), a time series of UE locations (in coordinates), a list of carriers a UE has and their priorities, e.g., their QoS Class Identifiers (QCIs), a time series of the past traffic demand for each carrier and a (slice-specific) traffic demand for each PDU session (and corresponding DRBs).
The incident load measurements may be used to train an ML model to predict the incident load from UE measurements and KPIs reported to the source node. The input features for the prediction model depend on the implementation and on the managed object for which the load is predicted, and may include, for example, source and target node IDs (if the same ML model instance covers several source and/or target nodes) and/or UE measurement values. A model may cover several potential mobility source and target nodes, in which case an identifier of those nodes may be provided as an input to the model. The identifier of the nodes may be the physical cell ID (PCI) or the NR-CGI (Cell Global ID). That is, the method may comprise determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
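As a non-limiting sketch of how the source node might assemble such an input sample from the collected time series (the key names, the fixed sequence length and the left-padding scheme are illustrative choices, not mandated by the embodiment):

```python
def build_model_input(ue_report, target_node_id, seq_len=4):
    """Assemble one fixed-length input sample for the incident load model.

    ue_report: dict of equally long time series collected at the source node.
    target_node_id: e.g. a PCI or NR-CGI mapped to a model-internal index.
    """
    def tail(series, fill):
        window = list(series)[-seq_len:]
        # left-pad short histories so every sample has the same shape
        return [fill] * (seq_len - len(window)) + window

    return {
        "serving_beams": tail(ue_report["serving_beams"], 0),
        "rsrp": tail(ue_report["rsrp"], 0.0),
        "traffic_demand": tail(ue_report["traffic_demand"], 0.0),
        "target_node_id": target_node_id,
    }

sample = build_model_input(
    {"serving_beams": [3, 3, 7],
     "rsrp": [-92.0, -90.5, -95.0],
     "traffic_demand": [1.2, 0.8, 2.1]},
    target_node_id=4)
# each series is padded to length 4, e.g. serving_beams becomes [0, 3, 3, 7]
```

The same assembly routine would be applied both when collecting training data and at inference time, so that the model always sees identically shaped inputs.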
Using this method, the incident load may be predicted directly from the reported measurements, without the need for intermediate predictions for the UE trajectory and traffic demand.
Figure 8 shows a signalling flow according to an example embodiment. In step 1, the source gNB configures a UE to take measurements for training data collection.
In step 2, the UE reports the measurements, e.g., L1 beam measurements or L3 cell quality measurements. In the case of a split architecture, L1 beam measurements, including those of neighboring cells, will be reported to the gNB-DU and L3 cell quality measurements will be reported to the gNB-CU.
In step 3, the measurements are stored by the source gNB, associated with the UE context.
In step 4, a handover is prepared to a neighboring cell. In the handover request, the source gNB may request the target gNB to measure the incident load caused by the UE being handed over.
In step 5, the handover is executed.
In step 6, the target gNB measures the incident load over Tincident as requested by the source gNB.
In step 7, the target gNB sends feedback to the source gNB by reporting the measured incident load. The target gNB may also report its (average) Composite Available Capacity (CAC) at the time of the measurement, which may be used either to filter out the measurements affected by congestion or to train the impact of target node congestion into the prediction model.
In step 8, the source gNB stores and attaches the measured load to the collected input data.
Once a configured number of training samples is collected by the source gNB, it reports the training dataset to the Training Host in step 9.
The Training Host trains an ML model with the provided training dataset in step 10.
The trained model is deployed back to the source gNB in step 11. In this example, the training host is external to the source gNB. In an alternative embodiment, the source gNB may comprise the training host.
In step 12, the source gNB configures another UE with events triggering data collection for the incident load prediction inference. These may be mobility events, e.g., the A3 event. In step 13, the UE reports the configured measurements.
In step 14, the source gNB uses the reported measurements and the trained ML model to predict the incident load if the UE is handed over to the target gNB.
In step 15, the source gNB may utilize the predicted incident load, e.g., to optimize load balancing in its mobility decisions.
In step 16, a handover to the target gNB is requested. In the handover request, the source gNB includes the predicted incident load.
In step 17, the target gNB may use the predicted incident load e.g. in its admission control.
Figure 9 shows a signalling flow according to an alternative example embodiment. In this embodiment, the target gNB reports the measured incident load directly to the training host.
After the HO is executed, as in Figure 8, the target gNB reports the measured incident load and, optionally, its average CAC, directly to the Training Host in step 7.
The source gNB reports only the collected input feature data to the Training Host in step 8.
In step 9, the Training Host correlates the input data with the corresponding incident load measurement(s). The correlation may require the generation of a “common token” to identify a particular UE and a handover of that UE.
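By way of a non-limiting illustration, one possible realization of such a token is an opaque random identifier generated per (UE, handover), carried in both reports, and used by the Training Host only as a join key (the message field names here are assumptions):

```python
import secrets

def new_common_token():
    """Opaque, random per-(UE, handover) token: lets the Training Host match
    the two reports without the token revealing the UE identity."""
    return secrets.token_hex(16)

def correlate(feature_reports, load_reports):
    """Join the source gNB's input-feature reports with the target gNB's
    measured incident load reports on the shared token."""
    loads = {r["token"]: r["incident_load"] for r in load_reports}
    return [
        {"features": f["features"], "label": loads[f["token"]]}
        for f in feature_reports if f["token"] in loads
    ]

tok = new_common_token()
dataset = correlate(
    [{"token": tok, "features": [0.1, 0.2]}],   # from the source gNB
    [{"token": tok, "incident_load": 30}],      # from the target gNB
)
```

After the join, each labelled sample can be added to the training set and the token discarded, since it is no longer needed once the data has been consumed.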
Figure 10 shows an example machine learning model. In this example, the incident load is predicted based on a time series of the past serving beam indexes, traffic QoS Class Identifiers and traffic demand (for all carriers). As an additional input, the ID of the target node for which the incident load is predicted is given. The model uses a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), which is well suited to predicting from sequences of data. In the presented model, there are two LSTM layers, one agnostic of the target node ID and one aware of it.
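A minimal PyTorch sketch of such a two-stage LSTM is given below; the layer sizes, the embedding of beam indexes and target node IDs, and the feature encoding are illustrative assumptions, since the figure does not fix them:

```python
import torch
import torch.nn as nn

class IncidentLoadModel(nn.Module):
    """Two-stage LSTM in the spirit of Figure 10: the first LSTM is agnostic
    of the target node; the second conditions on a learned target-node
    embedding. Sizes are illustrative only."""

    def __init__(self, n_beams=64, n_targets=16, feat_dim=8, hidden=32):
        super().__init__()
        self.beam_emb = nn.Embedding(n_beams, feat_dim)
        self.target_emb = nn.Embedding(n_targets, hidden)
        # input: beam embedding + per-step QCI and traffic-demand scalars
        self.lstm1 = nn.LSTM(feat_dim + 2, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted incident load (e.g. PRBs)

    def forward(self, beams, qci, demand, target_id):
        # beams: (B, T) long; qci, demand: (B, T) float; target_id: (B,) long
        x = torch.cat(
            [self.beam_emb(beams), qci.unsqueeze(-1), demand.unsqueeze(-1)],
            dim=-1)
        h1, _ = self.lstm1(x)  # target-agnostic stage
        t = self.target_emb(target_id).unsqueeze(1).expand(-1, h1.size(1), -1)
        h2, _ = self.lstm2(torch.cat([h1, t], dim=-1))  # target-aware stage
        return self.head(h2[:, -1])  # predict from the last timestep
```

Such a model would be trained end-to-end on the (input features, measured incident load) pairs collected as described above, rather than as a chain of separate trajectory and demand predictors.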
Figure 11 shows a signalling diagram for training an incident load prediction model according to an example embodiment. The AMF provides the mobility configuration information, which may include a configuration to collect training data for incident load prediction. The source gNB configures a UE with an event to start the measurements for the incident load prediction. This may be an existing event or a new one.
The UE reports the configured measurements, which may be from the input feature list above.
The source gNB records the collected data and keeps track of it with the UE context to identify the UE.
The source gNB decides a handover.
The source gNB sends a Handover Request to the target gNB. This request may contain configuration for measuring the incident load.
Once the UE starts receiving and transmitting with the target node, the incident load measurement is started.
The target gNB measures the incident load caused by the UE over the configured period Tincident.
The target gNB reports the measured incident load back to the source gNB. This could be done, for example, in an existing message such as the UE Context Release message (3GPP TS 38.300, Figure 9.2.3.2.1-1) or in a newly defined message. Reporting the measurement may delay the UE Context Release message. The target gNB may also report its (average) Composite Available Capacity (CAC) at the time of the measurement, which may be used either to filter out the measurements affected by congestion or to train the impact of target node congestion into the prediction model.
The source gNB stores the measured incident load together with the input data collected from the UE before its handover. The steps are repeated a pre-configured number N of times to collect a sufficient amount of training data.
The source gNB reports the collected training data to the Training Host.
The Training Host trains an incident load prediction model with the provided data. The Training Host deploys the trained incident load prediction model to the source gNB.
Figure 12 shows a signalling flow for training an incident load prediction model for an example embodiment where the required data, both the input data and the measured incident load, is collected directly at the Training Host. It is otherwise the same as the method presented in Figure 11, except that the target and source gNBs report the measured incident load and the input to the prediction, respectively, directly to the Training Host (steps 15 and 17). The duration of the target node incident load measurement may be configured by the source node, and the timing of sending measurements from the source node to the training host can align with the duration of the incident load measurement at the target node. The Training Host correlates the two data sources in step 18 and uses the dataset for training the model. The training host maps the incident load to the input data based on a “common token”, created in step 2, that identifies data from the same UE. The “common token” is unique to a given UE but does not reveal its identity. The token does not need to be retained at the training host once the data has been consumed to train the model.
In the embodiment shown in Figure 11, all training data collection is consolidated in the source gNB, which may avoid the complexity of more distributed data collection and the need for the Training Host to be able to determine which reported measured loads correspond to which input sequences. Additionally, in this embodiment, the target gNB is not required to integrate with the Training Host, which may be simpler in certain scenarios.
The embodiment shown in Figure 12 allows for longer load measurement periods after the handover. In the embodiment of Figure 11, the measured load needs to be reported in the UE Context Release message so that it can be correlated to the input data based on the UE context before it is released. In the embodiment shown in Figure 12, such restrictions are not needed, which allows measuring the incident load of UEs with bursty traffic, which may not be sending or receiving right after the handover.
The training host may, for example, be either at the source gNB or in the OAM. Communication between the source and target gNBs happens over the Xn interface.
Figure 13 shows a signalling flow for inference with the incident load prediction model for a baseline HO.
The AMF may additionally configure the activation of the incident load prediction and the required events to trigger the data collection at the UE. A UE reports its measurements and KPIs. An event may trigger the additional data collection for the incident load prediction. This may be an existing event, such as A3, or a new one may be defined.
The source gNB collects the necessary input data for the prediction and may use the incident load prediction model to predict the load if the UE is handed over to a cell in the target gNB. The inference may be triggered by mobility events.
The source gNB may use the predicted incident load in its handover decision, e.g. for load balancing or optimizing the network quality.
The source gNB sends a Handover Request to the target gNB. The handover preparation message may additionally include the predicted incident load.
The target gNB may use the predicted incident load, e.g., in its admission control procedure or in predicting its future load.
Figure 14 shows the method of inference in CHO preparation.
The incident load prediction is done as above, and the predicted load is given to both target gNB candidates in the CHO preparation. The target gNBs may consider the predicted load, e.g., in their admission control. Additionally, the source gNB may consider the predicted load in its decision on which target candidates to prepare.
For other contexts, such as, for example, predicting other incident load KPIs instead of or in addition to the radio interface PRB usage, a separate ML model instance may be trained.
The method may provide an ability to predict the UE incident loads on target (candidate) nodes at the source node, where it is needed, e.g., for load balancing or choosing CHO preparations. Learning to predict incident loads end-to-end from UE measurements, adapted to the particular context, e.g., a particular cell border, may reach a higher accuracy, since predictions are not chained. Further, less signalling may be required between a source and a target node.
An apparatus may comprise means for receiving measurement values from a plurality of user equipments at a source node, providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load, determining to perform a handover for a subsequent user equipment to a subsequent target node, determining a predicted incident load based on the received measurement values and measured incident loads, using the predicted incident load in mobility decisions and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
Alternatively, an apparatus may comprise means for receiving, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load and if so, providing an indication of measured incident load to at least one of the source node and a host node for use in training a machine learning model or the handover request includes an indication of a predicted incident load associated with the user equipment.
Alternatively, an apparatus may comprise means for receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment and using the at least one indication and measurement values in training a machine learning model which, when executed, is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
It should be understood that the apparatuses may comprise or be coupled to other units or modules etc., such as radio parts or radio heads, used in or for transmission and/or reception. Although the apparatuses have been described as one entity, different modules and memory may be implemented in one or more physical or logical entities.
It is noted that whilst some embodiments have been described in relation to 5G networks, similar principles can be applied in relation to other networks and communication systems. Therefore, although certain embodiments were described above by way of example with reference to certain example architectures for wireless networks, technologies and standards, embodiments may be applied to any other suitable forms of communication systems than those illustrated and described herein.
It is also noted herein that while the above describes example embodiments, there are several variations and modifications which may be made to the disclosed solution without departing from the scope of the present invention. In general, the various embodiments may be implemented in hardware or special purpose circuitry, software, logic or any combination thereof. Some aspects of the disclosure may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the disclosure is not limited thereto. While various aspects of the disclosure may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
As used in this application, the term “circuitry” may refer to one or more or all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) combinations of hardware circuits and software, such as (as applicable):
(i) a combination of analog and/or digital hardware circuit(s) with software/firmware and
(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and
(c) hardware circuit(s) and or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
The embodiments of this disclosure may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Computer software or program, also called program product, including software routines, applets and/or macros, may be stored in any apparatus-readable data storage medium and they comprise program instructions to perform particular tasks. A computer program product may comprise one or more computer-executable components which, when the program is run, are configured to carry out embodiments. The one or more computer-executable components may be at least one software code or portions of it.
Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD. The physical media is a non-transitory media.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment and may comprise one or more of general-purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), FPGA, gate level circuits and processors based on multi core processor architecture, as non-limiting examples.
Embodiments of the disclosure may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
The scope of protection sought for various embodiments of the disclosure is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the disclosure.
The foregoing description has provided by way of non-limiting examples a full and informative description of the exemplary embodiment of this disclosure. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this disclosure will still fall within the scope of this invention as defined in the appended claims. Indeed, there is a further embodiment comprising a combination of one or more embodiments with any of the other embodiments previously discussed.

Claims

1. An apparatus comprising means for: receiving measurement values from a plurality of user equipments at a source node; providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load; determining to perform a handover for a subsequent user equipment to a subsequent target node; determining a predicted incident load based on the received measurement values and measured incident loads; using the predicted incident load in mobility decisions; and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
2. An apparatus according to claim 1, wherein the means for determining a predicted incident load comprises at least one machine learning model, which, when executed, is configured to determine the predicted incident load based on measurement values of the subsequent user equipment.
3. An apparatus according to claim 2, wherein the machine learning model, when executed, is configured to determine the predicted incident load based on the identity of a target node.
4. An apparatus according to any of claim 2 and claim 3, comprising means for providing the received measurement values to a host node for use in training the machine learning model.
5. An apparatus according to claim 4, comprising means for receiving an indication of the measured incident load from each of the plurality of target nodes and providing an indication of the measured incident load to the host node for use in training the machine learning model.
6. An apparatus according to any of claim 2 and claim 3, comprising means for receiving an indication of the measured incident load from each of the plurality of target nodes for use in training the machine learning model at the source node.
7. An apparatus according to claim 5 or claim 6, comprising means for receiving an indication of capacity at the time of the measured incident load from each of the plurality of target nodes.
8. An apparatus according to any of claims 1 to 7, comprising means for determining the predicted incident load further based on an identifier of at least one of the subsequent target node and the source node.
9. An apparatus according to any of claims 1 to 8, wherein the incident load comprises, for a given time period, the absolute number of resources allocated to the user equipment in at least one of downlink and uplink or the ratio of resources allocated to the user equipment to all resources being scheduled in at least one of uplink and downlink.
10. An apparatus comprising means for: receiving, from a source node at a target node, a handover request for a user equipment, wherein either the handover request includes a request to measure incident load; and if so, providing an indication of measured incident load to at least one of the source node and a host node for use in training a machine learning model; or the handover request includes an indication of a predicted incident load associated with the user equipment.
11. An apparatus comprising means for: receiving, from one of a source node and a plurality of target nodes, at least one indication of a measured incident load for a user equipment at a respective one of the plurality of target nodes and measurement values for the user equipment; and using the at least one indication and measurement values in training a machine learning model which, when executed, is configured to determine a predicted incident load based on measurement values of a subsequent user equipment.
12. A method comprising: receiving measurement values from a plurality of user equipments at a source node; providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load; determining to perform a handover for a subsequent user equipment to a subsequent target node; determining a predicted incident load based on the received measurement values and measured incident loads; using the predicted incident load in mobility decisions; and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
13. An apparatus comprising: at least one processor and at least one memory including a computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus at least to: receive measurement values from a plurality of user equipments at a source node; provide, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load; determine to perform a handover for a subsequent user equipment to a subsequent target node; determine a predicted incident load based on the received measurement values and measured incident loads; use the predicted incident load in mobility decisions; and provide the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
14. A computer readable medium comprising program instructions for causing an apparatus to perform at least the following: receiving measurement values from a plurality of user equipments at a source node; providing, from the source node to a respective one of a plurality of target nodes, a handover request for each of the plurality of user equipments, the handover request including a request to measure incident load; determining to perform a handover for a subsequent user equipment to a subsequent target node; determining a predicted incident load based on the received measurement values and measured incident loads; using the predicted incident load in mobility decisions; and providing the predicted incident load to the subsequent target node in a handover request for the subsequent user equipment.
PCT/FI2023/050209 2022-04-26 2023-04-14 Apparatus, method and computer program for load prediction WO2023209275A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20225348 2022-04-26
FI20225348 2022-04-26

Publications (1)

Publication Number Publication Date
WO2023209275A1 true WO2023209275A1 (en) 2023-11-02

Family

ID=88517972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2023/050209 WO2023209275A1 (en) 2022-04-26 2023-04-14 Apparatus, method and computer program for load prediction

Country Status (1)

Country Link
WO (1) WO2023209275A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016072893A1 (en) * 2014-11-05 2016-05-12 Telefonaktiebolaget L M Ericsson (Publ) Training of models predicting the quality of service after handover for triggering handover
US20170026888A1 (en) * 2015-07-24 2017-01-26 Cisco Technology, Inc. System and method to facilitate radio access point load prediction in a network environment
WO2021088766A1 (en) * 2019-11-08 2021-05-14 中兴通讯股份有限公司 Handover method, handover device, and network system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA) and NR; Study on enhancement for Data Collection for NR and EN-DC (Release 17)", 3GPP TR 37.817, V17.0.0, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre, 650, route des Lucioles, F-06921 Sophia-Antipolis Cedex, France, 6 April 2022 (2022-04-06), pages 1-23, XP052146515 *
INTEL CORPORATION: "AI/ML based load balancing", 3GPP Draft R3-213470, RAN WG3, electronic meeting, 16-26 August 2021, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre, 650, route des Lucioles, F-06921 Sophia-Antipolis Cedex, France, 6 August 2021 (2021-08-06), XP052035298 *

Similar Documents

Publication Publication Date Title
US11917527B2 (en) Resource allocation and activation/deactivation configuration of open radio access network (O-RAN) network slice subnets
EP3869847B1 (en) Multi-access traffic management in open ran, o-ran
US20190200406A1 (en) Signaling for multiple radio access technology dual connectivity in wireless network
US20180049080A1 (en) Network controlled sharing of measurement gaps for intra and inter frequency measurements for wireless networks
US10461900B2 (en) Hierarchical arrangement and multiplexing of mobile network resource slices for logical networks
US20180115927A1 (en) Flexible quality of service for inter-base station handovers within wireless network
US20230300686A1 (en) Communication system for machine learning metadata
EP4203542A1 (en) Data transmission method and apparatus
US11785480B2 (en) Method and apparatus to support RACH optimization and delay measurements for 5G networks
US20230269606A1 (en) Measurement configuration for local area machine learning radio resource management
CN111727595B (en) Method, apparatus and computer readable storage medium for communication
WO2022053233A1 (en) Beam failure reporting
WO2015176613A1 (en) Measurement device and method, and control device and method for wireless network
US20240057139A1 (en) Optimization of deterministic and non-deterministic traffic in radio-access network (ran)
CN111713177B (en) Processing SMTC information at a user equipment
US20210075687A1 (en) Communication apparatus, method and computer program
WO2023209275A1 (en) Apparatus, method and computer program for load prediction
WO2023240589A1 (en) Apparatus, method and computer program
Andrés-Maldonado: NB-IoT M2M communications in 5G cellular networks
US20240137783A1 (en) Signalling support for split ml-assistance between next generation random access networks and user equipment
EP4106273A1 (en) Apparatus, methods, and computer programs
US20230261792A1 (en) Apparatus, methods, and computer programs
WO2024033180A1 (en) Dual connectivity
US20220386155A1 (en) Apparatus, Method and Computer Program
WO2024068396A1 (en) Lower layer mobility release preparation

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23795703

Country of ref document: EP

Kind code of ref document: A1