WO2021191176A1 - Reporting in wireless networks - Google Patents

Reporting in wireless networks

Info

Publication number
WO2021191176A1
WO2021191176A1 (PCT/EP2021/057341)
Authority
WO
WIPO (PCT)
Prior art keywords
physical layer
layer parameters
machine learning
learning model
inferring
Prior art date
Application number
PCT/EP2021/057341
Other languages
French (fr)
Inventor
Wolfgang Zirwas
István Zsolt KOVÁCS
Luis Guilherme Uzeda Garcia
Muhammad Majid BUTT
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of WO2021191176A1 publication Critical patent/WO2021191176A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/10Scheduling measurement reports ; Arrangements for measurement reports
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition

Definitions

  • Various example embodiments relate to reporting in wireless networks.
  • a wireless access network node, which may also be referred to as a base station, determines a transmission format, a transmission block size, a modulation and coding scheme, and the like to be used in a downlink (DL) and an uplink (UL).
  • the network node needs information about the performance of a current DL channel from a wireless (user) device, and the information is generally referred to as channel state information (CSI).
  • CSI channel state information
  • Machine learning (ML) algorithms may be used in network optimization and management. For example, ML algorithms may predict potential network problems before they actually happen. ML models require a lot of data for training. Reporting e.g. measurement data from user equipments, e.g. mobile phones, may cause reporting overhead.
  • an apparatus comprising means for: determining physical layer parameters based on real world measurements to obtain measured physical layer parameters; receiving inferred physical layer parameters; comparing the measured physical layer parameters with the inferred physical layer parameters; and transmitting, to a network node, a delta report generated based on the comparison.
  • the delta report comprises at least one or more of: one or more difference values between the measured physical layer parameters and the inferred physical layer parameters; inference probabilities; a list of physical layer parameters generated based on the inference probabilities; an acknowledgement message if the one or more difference values are below one or more pre-determined threshold values; a negative-acknowledgement message if the one or more difference values are above one or more pre-determined threshold values.
  • the inferred physical layer parameters are received as an output from a machine learning model residing at the apparatus, the model being configured to infer physical layer parameters, the inferring comprising receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the apparatus.
  • the machine learning model residing at the apparatus is similar to a machine learning model residing at the network node.
  • the inferred physical layer parameters are received from a network node as an output from a machine learning model residing at the network node, the model being configured to infer physical layer parameters, the inferring comprising at least receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the apparatus.
  • the inferred physical layer parameters are received as a cyclic redundancy check or a sparse representation of low priority bits.
  • an apparatus comprising means for receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
  • the apparatus further comprises means for calculating transmit signals for the updated physical layer parameters; and transmitting pre-coded signals to the one or more user equipments.
  • the apparatus further comprises means for calculating a cyclic redundancy check or sparse representation of low priority bits based on the inferred physical layer parameters; reporting the cyclic redundancy check or the sparse representation of low priority bits to the one or more user equipments.
  • the means comprises at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the performance of the apparatus.
  • a method comprising: determining, by a user equipment, physical layer parameters based on real world measurements to obtain measured physical layer parameters; receiving inferred physical layer parameters; comparing the measured physical layer parameters with the inferred physical layer parameters; and transmitting, to a network node, a delta report generated based on the comparison.
  • the delta report comprises at least one or more of: one or more difference values between the measured physical layer parameters and the inferred physical layer parameters; inference probabilities; a list of physical layer parameters generated based on the inference probabilities; an acknowledgement message if the one or more difference values are below one or more pre-determined threshold values; a negative-acknowledgement message if the one or more difference values are above one or more pre-determined threshold values.
  • the method comprises receiving the inferred physical layer parameters as an output from a machine learning model residing at the user equipment, the model being configured to infer physical layer parameters, the inferring comprising receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the user equipment.
  • the machine learning model residing at the apparatus is similar to a machine learning model residing at the network node.
  • the method comprises receiving the inferred physical layer parameters from a network node as an output from a machine learning model residing at the network node, the model being configured to infer physical layer parameters, the inferring comprising at least receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the user equipment.
  • the inferred physical layer parameters are received as a cyclic redundancy check or a sparse representation of low priority bits.
  • a method comprising receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
  • the method further comprises calculating transmit signals for the updated physical layer parameters; and transmitting pre-coded signals to the one or more user equipments.
  • the method further comprises calculating a cyclic redundancy check or sparse representation of low priority bits based on the inferred physical layer parameters; reporting the cyclic redundancy check or the sparse representation of low priority bits to the one or more user equipments.
  • a non-transitory computer readable medium comprising program instructions that, when executed by at least one processor, cause an apparatus to perform at least the method according to the third aspect above and the embodiments thereof.
  • a non-transitory computer readable medium comprising program instructions that, when executed by at least one processor, cause an apparatus to perform at least the method according to the fourth aspect above and the embodiments thereof.
  • a computer program configured to cause performance of a method in accordance with at least the third aspect above and the embodiments thereof.
  • a computer program configured to cause performance of a method in accordance with at least the fourth aspect above and the embodiments thereof.
  • FIG. 1 shows, by way of example, a system architecture of communication system
  • Fig. 2 shows, by way of example, a flowchart of a method
  • FIG. 3 shows, by way of example, signalling between a network node and a user equipment
  • Fig. 4 shows, by way of example, a diagram of a machine learning model
  • Fig. 5 shows, by way of example, delta reporting of inference values
  • Fig. 6 shows, by way of example, a typical inference error distribution
  • Fig. 7 shows, by way of example, inference value reconstruction
  • Fig. 8 shows, by way of example, high level illustration of delta reporting
  • Fig. 9 shows, by way of example, a flowchart of a method
  • Fig. 10 shows, by way of example, a block diagram of an apparatus.
  • Fig. 1 shows, by way of example, a system architecture of a communication system.
  • the embodiments are described using a radio access architecture based on long term evolution advanced (LTE Advanced, LTE-A) or new radio (NR), also known as fifth generation (5G), without, however, restricting the embodiments to such an architecture.
  • LTE Advanced long term evolution advanced
  • NR new radio
  • 5G fifth generation
  • UMTS universal mobile telecommunications system
    • UTRAN UMTS terrestrial radio access network
  • LTE long term evolution
  • WLAN wireless local area network
    • WiMAX worldwide interoperability for microwave access
  • PCS personal communications services
  • WCDMA wideband code division multiple access
  • UWB ultra-wideband
  • IMS Internet Protocol multimedia subsystems
  • Fig. 1 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown.
  • the connections shown in Fig. 1 are logical connections; the actual physical connections may be different.
  • the system typically comprises also other functions and structures than those shown in Fig. 1.
  • the embodiments are not, however, restricted to the system given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties. Examples of such other communication systems include microwave links and optical fibers, for example.
  • Fig. 1 shows a part of an exemplifying radio access network.
  • Fig. 1 shows user devices 100 and 102 configured to be in a wireless connection on one or more communication channels in a cell with an access node, such as gNB, i.e. next generation NodeB, or eNB, i.e. evolved NodeB (eNodeB), 104 providing the cell.
  • the physical link from a user device to the network node is called uplink (UL) or reverse link and the physical link from the network node to the user device is called downlink (DL) or forward link.
  • UL uplink
  • DL downlink
  • network nodes or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage.
  • a communications system typically comprises more than one network node in which case the network nodes may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signalling purposes.
  • the network node is a computing device configured to control the radio resources of the communication system it is coupled to.
  • the network node may also be referred to as a base station (BS), an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment.
  • the network node includes or is coupled to transceivers. From the transceivers of the network node, a connection is provided to an antenna unit that establishes bi-directional radio links to user devices.
  • the antenna unit may comprise a plurality of antennas or antenna elements.
  • the network node is further connected to core network 110 (CN or next generation core NGC).
  • the counterpart on the CN side can be a serving gateway (S-GW, routing and forwarding user data packets), packet data network gateway (P-GW), for providing connectivity of user devices (UEs) to external packet data networks, or mobile management entity (MME), etc.
  • S-GW serving gateway
  • P-GW packet data network gateway
  • MME mobile management entity
  • the user device also called UE, user equipment, user terminal, terminal device, etc.
  • UE user equipment
  • any feature described herein with a user device may be implemented with a corresponding apparatus, also including a relay node.
  • An example of such a relay node is a layer 3 relay (self-backhauling relay) towards a base station.
  • the user device typically refers to a portable computing device that includes wireless mobile communication devices operating with or without a subscriber identification module (SIM), including, but not limited to, the following types of devices: a mobile station (mobile phone), smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device.
  • SIM subscriber identification module
  • a user device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network.
  • a user device may also be a device having capability to operate in Internet of Things (IoT) network which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
  • IoT Internet of Things
  • 5G enables using multiple input - multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available.
  • MIMO multiple input - multiple output
  • 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine-type applications (such as (massive) machine-type communications (mMTC)), including vehicular safety, different sensors and real-time control.
  • 5G is expected to have multiple radio interfaces, namely below 6GHz, cmWave and mmWave, and to be integratable with existing legacy radio access technologies, such as the LTE.
  • Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE.
  • 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6GHz - cmWave, below 6GHz - cmWave - mmWave).
  • inter-RAT operability such as LTE-5G
  • inter-RI operability inter-radio interface operability, such as below 6GHz - cmWave, below 6GHz - cmWave - mmWave.
  • network slicing may be supported, in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
  • the current architecture in LTE networks is distributed in the radio and centralized in the core network.
  • the low-latency applications and services in 5G require bringing the content close to the radio, which leads to local break out and multi-access edge computing (MEC).
  • MEC multi-access edge computing
  • 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors.
  • MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time.
  • Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications).
  • the communication system is also able to communicate with other networks, such as a public switched telephone network or the Internet 112, or utilize services provided by them.
  • the communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Fig. 1 by “cloud” 114).
  • the communication system may also comprise a central control entity, or a like, providing facilities for networks of different operators to cooperate for example in spectrum sharing.
  • Edge cloud may be brought into the radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN).
  • RAN radio access network
    • NFV network function virtualization
  • SDN software defined networking
  • edge cloud may mean access node operations to be carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts.
  • Application of cloud RAN architecture enables RAN real time functions being carried out at the RAN side (in a distributed unit, DU 104) and non-real time functions being carried out in a centralized manner (in a centralized unit, CU 108).
  • 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling.
  • Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board vehicles, or ensuring service availability for critical communications, and future railway/maritime/aeronautical communications.
  • Satellite communication may utilise geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano)satellites are deployed).
  • GEO geostationary earth orbit
  • LEO low earth orbit
  • Each satellite 106 in the constellation may cover several satellite-enabled network entities that create on-ground cells.
  • the on-ground cells may be created through an on-ground relay node 104 or by a gNB located on-ground or in a satellite.
  • Machine learning (ML) algorithms may be used e.g. in network optimization and management, and physical (PHY) layer and medium access control (MAC) layer procedures of a communication system.
  • ML algorithms may predict potential network problems before they actually happen. Capacity requirements may be detected early using ML algorithms.
  • the ML algorithms may identify network problems and make recommendations to fix them.
  • Reinforcement learning is a type of a ML technique that enables an agent to learn in an interactive environment using feedback from its own actions and experiences.
  • an agent or an optimization algorithm performs an action according to its policy, which changes the environment state, and receives a new state and a reward for the action.
  • the agent's policy is then updated based on the reward of the state-action pair.
  • the agent may try unexplored state-action pairs to find a new and better policy. Therefore, learning an optimal policy requires some level of trial and error. This process may be referred to as exploration, and it may be achieved e.g. by performing a random action and/or adding noise to the action.
  • exploration is part of the policy and is not explicitly selected.
  • the amount of exploration decreases over time and exploitation increases when the agent is confident about the policy, i.e. the agent acts according to its best knowledge.
  • a network may continuously measure its key performance indicators (KPIs) and perform radio network actions to learn their impact on the network based on RL principles. Over time, the network may be able to optimize the initial network configuration and follow the dynamicity of the environment in a fully automated way by exploiting the learnings.
  • KPIs key performance indicators
  • the trial and error mechanism of RL takes time and increases the amount of errors in the network before converging to an optimized configuration. This is especially problematic for some applications, for example ultra-reliable low-latency communication (URLLC), due to their low error tolerance. Therefore, there is a need to perform exploration in a radio network without affecting network performance, e.g. customers' data traffic.
  • URLLC ultra-reliable low-latency communication
  • the ML models may be run at a network node, e.g. gNB, and at the UE, or at the network node and not at the UE side.
  • the ML models may be trained based on measurement data collected from UEs. Measurement data is available at the UE, and may comprise data such as reference signal received power (RSRP) or channel state information (CSI) estimations based on CSI reference signals (RSs) or beam signal strength based on beam sweeping procedures.
  • RSRP reference signal received power
  • CSI channel state information
  • the measurement data is assumed to be the true values. This way, exploration and exploitation for reinforcement learning (RL) strategies can run more or less in parallel.
  • One ML model is used for doing the actual inferences (exploitation), while a parallel ML model is constantly updated based on new delta estimates between the current ML inference and the estimated inferences based on the measurements (exploration). From time to time the UE sends an update of the hyperparameters of the newly trained ML model to the gNB, and then both the gNB and the UE switch from the old to the new ML model.
  • Training of the ML model may be adapted depending on which parameters are to be inferred. For parameter estimation based on a deep neural network (DNN), different phases may be defined for exploration and exploitation. For that purpose, the gNB may use, for exploration, feedback from the UE about the inferred PHY layer parameters and compare these results with its own inference. The difference between the ML inference and the parameters reported from the UE is then the cost function for updating and training of the DNN weights, as sketched below.
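  • As an illustrative, hypothetical sketch of this exploitation/exploration split (the tiny linear "DNN", its training step and the switching interval are assumptions for illustration, not taken from the embodiments): one model serves inferences while a shadow copy is trained on the delta between inference and measurement, and the models are swapped from time to time.

```python
# Hypothetical sketch of parallel exploitation/exploration ML models at the UE.
# The DNN is abstracted as a tiny linear model; the cost is the delta between
# ML-inferred and measured PHY parameters, as described above.
import copy
import numpy as np

class TinyDnn:
    """Toy stand-in for the PHY-parameter DNN (illustrative only)."""
    def __init__(self, n_in, n_out, rng):
        self.w = rng.normal(scale=0.1, size=(n_in, n_out))

    def infer(self, x):
        return x @ self.w                      # inferred PHY parameters

    def update(self, x, measured, lr=1e-3):
        delta = self.infer(x) - measured       # delta = inference error
        self.w -= lr * np.outer(x, delta)      # gradient step on the squared delta
        return delta

rng = np.random.default_rng(0)
serving = TinyDnn(n_in=8, n_out=4, rng=rng)    # exploitation model
shadow = copy.deepcopy(serving)                # exploration / training model

for step in range(1000):
    features = rng.normal(size=8)              # e.g. raytraced MPC features
    measured_phy = rng.normal(size=4)          # UE measurements (treated as truth)
    _ = serving.infer(features)                # actual inference (exploitation)
    shadow.update(features, measured_phy)      # train on the delta (exploration)
    if step % 200 == 199:                      # "from time to time" ...
        serving = copy.deepcopy(shadow)        # ... both sides switch models
```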
  • Important input data for the ML models are e.g. geometries of the environment of the network node, UE position information, and UE measurements.
  • the geometries represent a detailed model of the environment, typically in the form of a vector or raster data map of the surrounding buildings, trees, or other structures. Depending on the level of detail and the features of interest, these maps are denoted as Digital Surface Models (DSM), Digital Elevation Models (DEM) and Digital Terrain Models (DTM).
  • a building vector data map (BVDM) is a digital vector map of a building.
  • a digital representation of an industrial plant is a basic component of a digital twin.
  • the term Mirror World has been introduced to describe a more general digital representation of the real world.
  • Inferences about the UE channel conditions and their evolution, made by the ML models for network management, comprise or relate to e.g. beam management (massive MIMO), best beam selection, small cell on-off, link adaptation, i.e. selection of the best modulation and coding schemes (MCS) taking inter-cell interference into account, per-beam received signal power, potential beam failures, best fitting beams for beam failure recovery or, more generally, for suitable handover decisions, and channel state information (CSI) estimation and reporting to multiple TRPs.
  • MCS modulation and coding schemes
  • the inferences are closely related to the UE positions. For example, global navigation satellite system (GNSS) provides accurate position information.
  • GNSS global navigation satellite system
  • Transfer of complex and large amounts of detailed information from the UE to the network node causes reporting overhead, traffic, and interference on the physical uplink control channel (PUCCH).
  • Information reported from the UE to the network node comprises e.g. CSI for multiple TRPs (Tx/Rx point), reference signal received power (RSRP) per TRP, on-off of small cell (SC), etc.
  • the UE may determine differences between inferences of the model(s) based on measurements performed by the UE. Then, the UE may perform delta reporting for those differences instead of full reports, e.g. RSRP and/or CSI reports, on the physical uplink shared channel (PUSCH) and PUCCH channels.
  • the delta reports may be used for further improving the ML model(s) at the network node and possibly also at the UE.
  • Fig. 2 shows, by way of example, a flowchart of a method for delta reporting.
  • the method 200 may be performed e.g. by a user equipment.
  • the method 200 comprises determining 210 physical layer parameters based on real world measurements to obtain measured physical layer parameters.
  • the method 200 comprises receiving 220 inferred physical layer parameters.
  • the method 200 comprises comparing 230 the measured physical layer parameters with the inferred physical layer parameters.
  • the method 200 comprises transmitting 240, to a network node, a delta report generated based on the comparison.
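  • A minimal sketch of method 200 at the UE side is given below; the dictionary layout, the parameter names and the threshold values are illustrative assumptions, not part of the method itself.

```python
# Hypothetical sketch of the UE-side steps 210-240: compare measured and inferred
# PHY parameters and build a delta report for the network node.
def build_delta_report(measured: dict, inferred: dict, thresholds: dict) -> dict:
    deltas = {k: measured[k] - inferred[k] for k in measured}
    if all(abs(d) <= thresholds[k] for k, d in deltas.items()):
        return {"ack": True}                          # inference agrees within limits
    return {"ack": False, "deltas": deltas}           # report only the differences

measured = {"rsrp_dbm": -81.2, "cqi": 9}              # from real-world measurements (210)
inferred = {"rsrp_dbm": -83.0, "cqi": 9}              # received inferred parameters (220)
thresholds = {"rsrp_dbm": 1.0, "cqi": 0}
report = build_delta_report(measured, inferred, thresholds)   # comparison (230)
print(report)   # {'ack': False, 'deltas': {'rsrp_dbm': ~1.8, 'cqi': 0}}
# transmitting `report` to the network node corresponds to step 240
```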
  • Delta reporting allows the exchange of a large amount of data between the UE and the network node, e.g. gNB, with minimal reporting overhead.
  • Delta reporting on the UL channels also enables reducing the load on the physical downlink control channel (PDCCH).
  • Delta reporting allows accurate modelling of the mirror world at the network side, and thus improves the ML model at the network node and possibly also at the UE.
  • Delta reporting can replace, or at least reduce and/or enhance, the conventional PHY layer measurements by direct inference of the main parameters. For example, if the ML models in the UE and in the network node become very precise and aligned, the PHY measurement rate in the UE may be reduced. Delta reporting ensures that, despite the low reporting overhead, the intended minimum accuracy for the PHY layer values is achieved.
  • the inferred physical layer parameters may be received e.g. from a memory of the UE.
  • the UE may receive the inferred physical layer parameters either as an output from its ML model, or from the network node. In case the UE does not have any ML model available, the comparison is carried out between the physical layer parameters measured by the UE and the physical layer parameters inferred by the network node.
  • Fig. 3 shows, by way of example, signalling between the network node 310, e.g. gNB, and the UE 320.
  • the entity 310 may, alternatively, be referred to as a cloud.
  • the BVDM may be downloaded to the UE in advance, or regularly updated over a broadcast data channel (BDCH) by broadcasting and/or multicasting.
  • BDCH broadcast data channel
  • the UE has the information from the latest measurements, e.g. CSI or demodulation reference signal (DMRS) measurements from relevant beams and/or cells.
  • the network node 310 may transmit 330 periodically, e.g. every 5 ms, CSI RS to the UE 320.
  • the UE estimates and stores 335 e.g. channel transfer function (CTF), RSRP, position, etc.
  • CTF channel transfer function
  • the UE determines 345 physical layer parameters based on real world measurements to obtain measured physical layer parameters.
  • the UE reports 340 its current position with a pre-defined accuracy to the gNB.
  • the positioning may be based on location techniques, e.g. GNSS, and/or indoor positioning techniques.
  • UE position may be reported periodically, e.g. every 160 ms.
  • the gNB and the UE perform, in parallel, an inference of the intended physical layer parameters using the same ML model for the same given BVDM.
  • PHY layer inference at the gNB 350 and at the UE 355 is based on the BVDM, deep neural network (DNN), and raytracer for UE position.
  • the inferred physical layer parameters may comprise e.g. CSI to one or multiple TRPs, channel quality indicator (CQI), modulation and coding scheme (MCS), rank indicator (RI), best beam selection, inter-cell interference, RSRP to close-by small cells, birth and death of multipath components (MPCs), blocking due to moving objects, etc.
  • the gNB and the UE use the same ML models. This way, the UE is able to reconstruct what the gNB has inferred.
  • the UE receives the inferred physical layer parameters as an output from an ML model, or deep neural network model, residing at the UE.
  • the inference based on ML model at the UE 355 is illustrated in Fig. 3 with dashed lines. Later below a scenario is described where the UE does not perform PHY layer inference based on ML model of its own.
  • the UE compares 360 the measured physical layer parameters with the inferred physical layer parameters, and thus identifies possible discrepancies.
  • a report, e.g. a delta report, may be generated based on the comparison and transmitted 365 to the gNB.
  • the gNB may update 370 the PHY layer inferences based on the information received in the delta report.
  • The delta report comprises the unavoidable mismatch between the inferred and the real, measured values.
  • the updated PHY layer parameters may be inferred using the current ML model with the delta report as input.
  • the ML model may be updated, and the updated parameters may be inferred using the updated model.
  • the ML model at the gNB may be tuned or re-trained e.g. continuously, with the goal to minimize the delta of the PHY parameters.
  • the adjustments of the PHY inference may comprise e.g. adjusting the weights of the DNN.
  • the gNB may calculate 375 transmit signals for the updated PHY layer values, and transmit 380 pre-coded signals to the UE.
  • the pre-coded signals from the gNB are the DMRS signals, which may be adjusted in time and frequency, e.g. with increased or decreased resolution, so that the UE may perform more accurate CSI estimation.
  • Fig. 4 shows, by way of example, a diagram 400 of a machine learning (ML) model.
  • the ML models at UE and gNB are expected to generate the same inferences for a given metric or feature and within a pre-determined error margin. Therefore, both may use approximately the same set of hyperparameters, e.g. DNN 410 type, number of layers and nodes, etc., for the ML model as well as the same input data.
  • Such input data may comprise, for example, at least the output 425 of a raytracing simulation 420 for a given BVDM and a known UE location.
  • This raytracer 420 may use input information 422 like BVDM, UE sensor data (orientations, antenna patterns, etc.), positions of SCs and gNBs together with their transmit power, beam shapes etc.
  • the raytracer 420 may comprise information of moving objects like cars, people, bikes etc. together with their locations and directions of movement.
  • the ML model, or DNN 410 may use the output 425 of the raytracer, like the raytraced multipath component (MPC) parameters, estimated RSRP values (Path Loss (PL)), received beam strengths per cell etc. as input data, together with further available data like UE locations (Ray Tracing (RT)), moving objects 430, etc.
  • the ML model may generate as an output 440 e.g. an improved inference, for example, regarding the CSI, UE location, or of BVDM parameters.
  • This improved inference then inherently overcomes or minimizes, for example, effects due to the RF characteristics of the BVDM or inaccuracies of the BVDM geometry.
  • the ML inference may then be in the form of multipath component (MPC) parameters, a sampled channel transfer function (CTF), or a channel impulse response (CIR).
  • MPC multi path component
  • CTF sampled channel transfer function
  • CIR channel impulse response
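  • The Fig. 4 pipeline described above can be sketched roughly as below; run_raytracer and dnn_infer are hypothetical placeholders standing in for the raytracing simulation 420 and the DNN 410, not real APIs.

```python
# Hypothetical sketch of the Fig. 4 pipeline: a BVDM and the UE position feed a
# raytracing step, whose output (plus further context) feeds the DNN-based inference.
# Both functions below are illustrative placeholders, not a real raytracer or DNN.
def run_raytracer(bvdm, ue_position, cell_info, moving_objects):
    """Stand-in for raytracing simulation 420: would return e.g. MPC parameters,
    estimated RSRP / path loss and per-cell received beam strengths."""
    return {"mpc_params": [], "rsrp_per_cell": {}, "beam_strengths": {}}

def dnn_infer(raytracer_output, ue_position, moving_objects):
    """Stand-in for DNN 410: would return improved inferences, e.g. CSI,
    a refined UE location or corrected BVDM parameters."""
    return {"csi": None, "refined_position": ue_position}

def infer_phy_parameters(bvdm, ue_position, cell_info, moving_objects):
    rt_out = run_raytracer(bvdm, ue_position, cell_info, moving_objects)
    return dnn_infer(rt_out, ue_position, moving_objects)
```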
  • Hyperparameters and input data to the ML models at both the gNB and the UE are aligned. Both ML models run in parallel, and the UE then compares its latest knowledge of the inferred parameters, based on additional measurements, to generate a delta report between the ML-inferred and the actually estimated parameters. The delta report is then reported to the gNB. The gNB can then reconstruct the full data based on the ML inference together with the delta reports.
  • the delta report may comprise an acknowledgement (ACK) or negative-acknowledgement (NACK) message. If the ML inferences from the UE and from the gNB agree within pre-defined limits, the UE may transmit a corresponding ACK message to the gNB. Otherwise, the UE may transmit a NACK message.
  • ACK acknowledgement
    • NACK negative-acknowledgement
  • a threshold difference value or delta value may be pre-determined for this purpose.
  • a more comprehensive delta report may be generated.
  • the delta report comprises information relating to the comparison between the measured physical layer parameters and the inferred physical layer parameters.
  • the delta report may comprise the delta, or the difference between the measured physical layer parameters and the inferred physical layer parameters.
  • inferred CSI per PRB versus measured CSI per PRB may be reported.
  • the content of the delta report may be related to the inference probabilities.
  • the ML algorithm may output the probability of the best beam, or best small cell, or best cell for handover, etc. for all possible beams.
  • the delta report may indicate to choose the second, or third, most likely beam, instead of the one with the strongest likelihood, as this will be the right choice.
  • the delta report may comprise e.g. the probabilities of the beams, cells, etc., and the network may make decisions based on the probabilities.
  • the delta report may comprise a limited set of best choices determined based on the probabilities, from which the network may choose.
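  • As an illustrative sketch of such probability-based delta reporting (the beam identifiers, probabilities and function names below are assumptions for illustration): the ML output is a probability per candidate beam, and the report either points to the correct entry in the ranked list or carries a short list of best choices.

```python
# Hypothetical sketch: delta reporting against ML-inferred beam probabilities.
def rank_beams(beam_probs: dict) -> list:
    """Sort candidate beam IDs by inferred probability, best first."""
    return sorted(beam_probs, key=beam_probs.get, reverse=True)

def probability_delta_report(beam_probs: dict, measured_best_beam, k: int = 3) -> dict:
    ranking = rank_beams(beam_probs)
    if ranking[0] == measured_best_beam:
        return {"ack": True}
    # Otherwise indicate which ranked candidate is actually the right choice,
    # or fall back to a limited set of best choices for the network to pick from.
    if measured_best_beam in ranking[:k]:
        return {"use_rank": ranking.index(measured_best_beam)}
    return {"top_k": ranking[:k], "probs": {b: beam_probs[b] for b in ranking[:k]}}

beam_probs = {"beam_3": 0.55, "beam_7": 0.30, "beam_1": 0.10, "beam_5": 0.05}
print(probability_delta_report(beam_probs, measured_best_beam="beam_7"))
# -> {'use_rank': 1}: choose the second most likely beam instead of the strongest one
```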
  • Delta reporting results in a relatively low feedback overhead. Especially for very accurate BVDMs, the delta values become small, so that the related reporting overhead is low.
  • the UE may have an ML model available, but implementation of large models may be challenging.
  • small models that fit the limited UE battery power may have limited performance.
  • the UE receives the inferred physical layer parameters from the network node, e.g. from the gNB.
  • the UE may receive the inferred physical layer parameters from the network node as an output from an ML model, e.g. a deep neural network model, residing at the network node.
  • the gNB may transmit to the UE a hash computed over one or more relevant parameters.
  • the UE may then compare the outcome of the hash function with the measured physical layer parameters. This way, the UE may identify how the determinations of the physical layer parameters match between the UE and the gNB.
  • the gNB may provide some redundancy allowing correction of inference errors to a certain extent, so that UEs may identify some mismatch between the measured PHY parameters and the inferred PHY parameters.
  • the hash function may be for example a cyclic redundancy check (CRC) for the combined beam IDs.
  • CRC cyclic redundancy check
  • the gNB may report in DL a CRC check or a sparse representation of lower priority information to the UE. The UE may then find the fine-granular differences for estimating the delta report.
  • the accuracy levels of the measurements as well as of the ML based inferences may vary, and therefore, the strength of this CRC check may be adapted, e.g. by a similar mechanism as known for open loop link adaptation.
  • Open loop link adaptation is a long term adaptation of parameters based on e.g. the number of ACK/NACK feedback.
  • the strength of the CRC check may be adapted e.g. to slowly varying radio conditions.
  • the gNB may send in DL the low priority bits of relevant taps of the channel impulse response, of the multi path components, or of the reference signal subcarriers of the channel transfer function.
  • the gNB may report the low priority bits.
  • the gNB 310 may calculate 390 the CRC or sparse representation of low priority bits and report 395 the CRC check or the sparse representation of low priority bits. Based on the CRC, the UE may identify the right choices from a limited set of possible options from the measurements.
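  • A minimal sketch of this CRC-based consistency check is given below, using Python's standard zlib.crc32 as a stand-in for whatever hash the network would actually use; the beam-ID encoding and the candidate search are illustrative assumptions.

```python
# Hypothetical sketch: the gNB sends a CRC over its inferred (combined) beam IDs;
# the UE checks candidate beam combinations from its measurements against it.
import zlib
from itertools import permutations

def beam_crc(beam_ids):
    """CRC32 over a fixed-order byte encoding of the combined beam IDs."""
    return zlib.crc32(bytes(beam_ids))

# gNB side: CRC over the ML-inferred best beams (reported in DL instead of the IDs).
gnb_inferred_beams = [3, 7, 12]
dl_crc = beam_crc(gnb_inferred_beams)

# UE side: try the limited set of plausible options from its own measurements
# and identify which one matches the gNB inference.
ue_candidates = [12, 7, 3]
match = next((list(p) for p in permutations(ue_candidates)
              if beam_crc(list(p)) == dl_crc), None)
print(match)   # [3, 7, 12] -> UE and gNB inferences agree; no full report needed
```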
  • Fig. 5 shows, by way of example, delta reporting of inference values combining hashing, CRC reports and delta reports.
  • Any physical layer parameter, either inferred by the ML model at the UE and/or the gNB or measured by the UE, may be represented as a data vector 510.
  • this data vector might represent the bits of a quantized CSI value or a sorted list of best fitting beam identifiers or indices.
  • the allocation of bits depends on the inference error distribution.
  • the assumption is that a typical outcome of the ML inference has a certain error distribution, as provided in Fig. 6.
  • Fig. 6 shows, by way of example, a typical error distribution 600 as outcome of ML inference, in this case for a MPC delay.
  • the error distribution is close to a Gaussian normal distribution.
  • the delta reporting should take care of this error distribution per inference, as illustrated in the example of Fig. 5. This may be learned by certain ML algorithms.
  • the inference data has been separated into ‘coarse’ 512, ‘medium’ 514 and ‘fine’ 516 bits.
  • a further assumption is that the ‘coarse’ inference part is provided by the current ML models with very high reliability. That is the part which does not have to be reported at all.
  • the ‘medium’ part has a high reliability with seldom occurring errors. Therefore, the ‘medium’ bits are combined 525 into larger data blocks together with a CRC check - or alternatively sufficient redundancy - to correct these seldom error events.
  • a hash function 520 may be used to combine the data into larger data blocks. In this case the UE does not report the data itself, but instead generates a hash report as the CRC check or redundancy bits 530. This saves a lot of overhead, while it still allows correcting a limited number of errors.
  • the ‘fine’ part 516 is then the main part of the delta report. It reports the delta signal relative to the already inferred data parts, according to the error distribution in Fig. 6.
  • Fig. 7 shows, by way of example, inference value reconstruction. It illustrates how the coarse 710, medium 720, 722, 724 and fine 730 inference data might be separated at the UE side and recombined at the gNB to get back to the full inference values.
  • the coarse data 710 is not reported, the medium data is reported using hash reports (CRC) 720, 722, 724, and the fine data 730 is reported via delta reports (Huffman code).
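  • A simple sketch of the bit split of Fig. 5 and the recombination of Fig. 7 is given below, assuming a plain fixed-point quantisation of a single inference value; the bit widths, the use of CRC32 for the hash report and the raw fine bits (in place of a Huffman code) are illustrative assumptions.

```python
# Hypothetical sketch: split a quantized inference value into coarse / medium / fine
# bits (Fig. 5) and recombine them at the gNB (Fig. 7).
import zlib

def split_bits(value: int, coarse: int = 4, medium: int = 4, fine: int = 4):
    bits = format(value, f"0{coarse + medium + fine}b")
    return bits[:coarse], bits[coarse:coarse + medium], bits[coarse + medium:]

# UE side: coarse bits are inferred reliably and never reported; medium bits are
# summarised by a CRC (hash report); fine bits form the actual delta report.
measured_value = 0b1011_0110_0101
c, m, f = split_bits(measured_value)
hash_report = zlib.crc32(m.encode())
delta_report = f

# gNB side: its own inference supplies the coarse part and a medium candidate;
# the CRC confirms (or corrects) the medium bits, and the delta report adds the fine bits.
gnb_coarse, gnb_medium, _ = split_bits(0b1011_0110_0000)     # gNB inference
assert zlib.crc32(gnb_medium.encode()) == hash_report        # medium bits agree
reconstructed = int(gnb_coarse + gnb_medium + delta_report, 2)
print(reconstructed == measured_value)   # True: full inference value recovered
```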
  • the range, i.e. the minimum and maximum values or the cardinality of the inference outcomes, may be known at the gNB. For example, phase values can vary between 0 and 2π, or amplitudes might be normalized to one.
  • Fig. 8 shows, by way of example, a high-level illustration of delta reporting of differences between measured and inferred PHY parameters.
  • UE 810 reports 840 its position to the network node 820, e.g. the gNB.
  • the UE determines physical layer parameters based on real world measurements 816 to obtain measured physical layer parameters.
  • the measured values over time (t) may comprise amplitude (a) and delay (τ) values of the MPCs constituting a radio channel.
  • the inferred physical layer parameters may be inferred by the UE itself, or received from the network node 820.
  • both the UE and the network node perform the inference using the same ML models, comprising e.g. deep neural networks 812, 822.
  • the same BVDM 818, 826 may be used as input for a raytracer 814, 824.
  • Raytracing simulation output for the given BVDM and UE location may be used as input for the neural network.
  • the measured physical layer parameters are then compared 830 with the inferred parameters outputted from the ML model.
  • a delta report generated based on the comparison is then transmitted 845 to the network node.
  • Fig. 9 shows, by way of example, a flowchart of a method.
  • the method 900 may be performed e.g. by a network node, or by a network entity, e.g. a cloud.
  • the method 900 comprises receiving 910 position information from one or more user equipments.
  • the method 900 comprises inferring 920 physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments.
  • the method 900 comprises receiving 930 a delta report from a user equipment.
  • the method 900 comprises inferring 940 updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating 945 the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
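  • A minimal sketch of method 900 at the network node side is given below; the ml_model object with infer/retrain methods, the delta-report layout and the retraining criterion are illustrative assumptions.

```python
# Hypothetical sketch of the network-node side (method 900): infer PHY parameters
# from the BVDM and the UE position, then refine them when a delta report arrives.
def delta_is_large(delta_report, limit=3.0):
    """Illustrative retraining trigger: any reported difference above `limit`."""
    return any(abs(d) > limit for d in delta_report.get("deltas", {}).values())

def network_node_step(ml_model, bvdm, ue_position, delta_report):
    inferred = ml_model.infer(bvdm, ue_position)                 # 920 (position from 910)
    if delta_report.get("ack"):                                  # 930: report received
        return inferred
    if delta_is_large(delta_report):
        ml_model.retrain(delta_report)                           # 945: update the model ...
        return ml_model.infer(bvdm, ue_position)                 # ... and re-infer
    return ml_model.infer(bvdm, ue_position,                     # 940: re-infer with the
                          delta=delta_report["deltas"])          # delta report as input
```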
  • Fig. 10 shows, by way of example, an apparatus capable of performing the method(s) as disclosed herein.
  • device 1000, which may comprise, for example, a mobile communication device such as the user device 100 of Fig. 1, or a network node, e.g. the access node 104 of Fig. 1.
  • processor 1010 which may comprise, for example, a single- or multi-core processor wherein a single-core processor comprises one processing core and a multi-core processor comprises more than one processing core.
  • Processor 1010 may comprise, in general, a control device.
  • Processor 1010 may comprise more than one processor.
  • Processor 1010 may be a control device.
  • a processing core may comprise, for example, a Cortex-A8 processing core manufactured by ARM Holdings or a Steamroller processing core designed by Advanced Micro Devices Corporation.
  • Processor 1010 may comprise at least one Qualcomm Snapdragon and/or Intel Atom processor.
  • Processor 1010 may comprise at least one application-specific integrated circuit, ASIC.
  • Processor 1010 may comprise at least one field-programmable gate array, FPGA.
  • Processor 1010 may be means for performing method steps in device 1000.
  • Processor 1010 may be configured, at least in part by computer instructions, to perform actions.
  • a processor may comprise circuitry, or be constituted as circuitry or circuitries, the circuitry or circuitries being configured to perform phases of methods in accordance with example embodiments described herein.
  • circuitry may refer to one or more or all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry; (b) combinations of hardware circuits and software, such as, as applicable: (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • Device 1000 may comprise memory 1020.
  • Memory 1020 may comprise random-access memory and/or permanent memory.
  • Memory 1020 may comprise at least one RAM chip.
  • Memory 1020 may comprise solid-state, magnetic, optical and/or holographic memory, for example.
  • Memory 1020 may be at least in part accessible to processor 1010.
  • Memory 1020 may be at least in part comprised in processor 1010.
  • Memory 1020 may be means for storing information.
  • Memory 1020 may comprise computer instructions that processor 1010 is configured to execute. When computer instructions configured to cause processor 1010 to perform certain actions are stored in memory 1020, and device 1000 overall is configured to run under the direction of processor 1010 using computer instructions from memory 1020, processor 1010 and/or its at least one processing core may be considered to be configured to perform said certain actions.
  • Memory 1020 may be at least in part comprised in processor 1010.
  • Memory 1020 may be at least in part external to device 1000 but accessible to device 1000.
  • Device 1000 may comprise a transmitter 1030.
  • Device 1000 may comprise a receiver 1040.
  • Transmitter 1030 and receiver 1040 may be configured to transmit and receive, respectively, information in accordance with at least one cellular or non-cellular standard.
  • Transmitter 1030 may comprise more than one transmitter.
  • Receiver 1040 may comprise more than one receiver.
  • Transmitter 1030 and/or receiver 1040 may be configured to operate in accordance with global system for mobile communication, GSM, wideband code division multiple access, WCDMA, 5G, long term evolution, LTE, IS-95, wireless local area network, WLAN, Ethernet and/or worldwide interoperability for microwave access, WiMAX, standards, for example.
  • Device 1000 may comprise a near-field communication, NFC, transceiver 1050.
  • NFC transceiver 1050 may support at least one NFC technology, such as NFC, Bluetooth, Wibree or similar technologies.
  • Device 1000 may comprise user interface, UI, 1060.
  • UI 1060 may comprise at least one of a display, a keyboard, a touchscreen, a vibrator arranged to signal to a user by causing device 1000 to vibrate, a speaker and a microphone.
  • a user may be able to operate device 1000 via UI 1060, for example to accept incoming telephone calls, to originate telephone calls or video calls, to browse the Internet, to manage digital files stored in memory 1020 or on a cloud accessible via transmitter 1030 and receiver 1040, or via NFC transceiver 1050, and/or to play games.
  • Device 1000 may comprise or be arranged to accept a user identity module 1070.
  • User identity module 1070 may comprise, for example, a subscriber identity module, SIM, card installable in device 1000.
  • a user identity module 1070 may comprise information identifying a subscription of a user of device 1000.
  • a user identity module 1070 may comprise cryptographic information usable to verify the identity of a user of device 1000 and/or to facilitate encryption of communicated information and billing of the user of device 1000 for communication effected via device 1000.
  • Processor 1010 may be furnished with a transmitter arranged to output information from processor 1010, via electrical leads internal to device 1000, to other devices comprised in device 1000.
  • Such a transmitter may comprise a serial bus transmitter arranged to, for example, output information via at least one electrical lead to memory 1020 for storage therein.
  • the transmitter may comprise a parallel bus transmitter.
  • processor 1010 may comprise a receiver arranged to receive information in processor 1010, via electrical leads internal to device 1000, from other devices comprised in device 1000.
  • Such a receiver may comprise a serial bus receiver arranged to, for example, receive information via at least one electrical lead from receiver 1040 for processing in processor 1010.
  • the receiver may comprise a parallel bus receiver.
  • Processor 1010, memory 1020, transmitter 1030, receiver 1040, NFC transceiver 1050, UI 1060 and/or user identity module 1070 may be interconnected by electrical leads internal to device 1000 in a multitude of different ways.
  • each of the aforementioned devices may be separately connected to a master bus internal to device 1000, to allow for the devices to exchange information.
  • this is only one example and depending on the embodiment various ways of interconnecting at least two of the aforementioned devices may be selected.

Abstract

There is provided an apparatus comprising means for: determining physical layer parameters based on real world measurements to obtain measured physical layer parameters; receiving inferred physical layer parameters; comparing the measured physical layer parameters with the inferred physical layer parameters; and transmitting, to a network node, a delta report generated based on the comparison.

Description

Reporting in wireless networks
FIELD
[0001] Various example embodiments relate to reporting in wireless networks.
BACKGROUND
[0002] In modern mobile communication systems, a wireless access network node, which may also be referred to as a base station, determines a transmission format, a transmission block size, a modulation and coding scheme, and the like to be used in a downlink (DL) and an uplink (UL). To perform such determination for the DL, the network node needs information about the performance of a current DL channel from a wireless (user) device, and this information is generally referred to as channel state information (CSI).
[0003] Machine learning (ML) algorithms may be used in network optimization and management. For example, ML algorithms may predict potential network problems before they actually happen. ML models require a lot of data for training. Reporting e.g. measurement data from user equipments, e.g. mobile phones, may cause reporting overhead.
SUMMARY
[0004] According to some aspects, there is provided the subject-matter of the independent claims. Some example embodiments are defined in the dependent claims. The scope of protection sought for various example embodiments is set out by the independent claims. The example embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various example embodiments.
[0005] According to a first aspect, there is provided an apparatus comprising means for: determining physical layer parameters based on real world measurements to obtain measured physical layer parameters; receiving inferred physical layer parameters; comparing the measured physical layer parameters with the inferred physical layer parameters; and transmitting, to a network node, a delta report generated based on the comparison.
[0006] According to an embodiment, the delta report comprises at least one or more of: one or more difference values between the measured physical layer parameters and the inferred physical layer parameters; inference probabilities; a list of physical layer parameters generated based on the inference probabilities; an acknowledgement message if the one or more difference values are below one or more pre-determined threshold values; a negative-acknowledgement message if the one or more difference values are above one or more pre-determined threshold values.
[0007] According to an embodiment, the inferred physical layer parameters are received as an output from a machine learning model residing at the apparatus, the model being configured to infer physical layer parameters, the inferring comprising receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the apparatus.
[0008] According to an embodiment, the machine learning model residing at the apparatus is similar to a machine learning model residing at the network node.
[0009] According to an embodiment, the inferred physical layer parameters are received from a network node as an output from a machine learning model residing at the network node, the model being configured to infer physical layer parameters, the inferring comprising at least receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the apparatus.
[0010] According to an embodiment, the inferred physical layer parameters are received as a cyclic redundancy check or a sparse representation of low priority bits.
[0011] According to a second aspect, there is provided an apparatus comprising means for receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
[0012] According to an embodiment, the apparatus further comprises means for calculating transmit signals for the updated physical layer parameters; and transmitting pre-coded signals to the one or more user equipments.
[0013] According to an embodiment, the apparatus further comprises means for calculating a cyclic redundancy check or sparse representation of low priority bits based on the inferred physical layer parameters; reporting the cyclic redundancy check or the sparse representation of low priority bits to the one or more user equipments.
[0014] According to an embodiment, the means comprises at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the performance of the apparatus.
[0015] According to a third aspect, there is provided a method comprising: determining, by a user equipment, physical layer parameters based on real world measurements to obtain measured physical layer parameters; receiving inferred physical layer parameters; comparing the measured physical layer parameters with the inferred physical layer parameters; and transmitting, to a network node, a delta report generated based on the comparison.
[0016] According to an embodiment, the delta report comprises at least one or more of one or more difference values between the measured physical layer parameters and the inferred physical layer parameters; inference probabilities; a list of physical layer parameters generated based on the inference probabilities; an acknowledgement message if the one or more difference values are below one or more pre-determined threshold values; a negative-acknowledgement message if the one or more difference values are above one or more pre-determined threshold values.
[0017] According to an embodiment, the method comprises receiving the inferred physical layer parameters as an output from a machine learning model residing at the user equipment, the model being configured to infer physical layer parameters, the inferring comprising receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the user equipment.
[0018] According to an embodiment, the machine learning model residing at the apparatus is similar to a machine learning model residing at the network node. [0019] According to an embodiment, the method comprises receiving the inferred physical layer parameters from a network node as an output from a machine learning model residing at the network node, the model being configured to infer physical layer parameters, the inferring comprising at least receiving at least a building vector data map as input; performing raytracing simulation based on the building vector data map; inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the user equipment.
[0020] According to an embodiment, the inferred physical layer parameters are received as a cyclic redundancy check or a sparse representation of low priority bits.
[0021] According to a fourth aspect, there is provided a method comprising receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
[0022] According to an embodiment, the method further comprises calculating transmit signals for the updated physical layer parameters; transmitting pre-coded signals to the one or more user equipments.
[0023] According to an embodiment, the method further comprises calculating a cyclic redundancy check or sparse representation of low priority bits based on the inferred physical layer parameters; reporting the cyclic redundancy check or the sparse representation of low priority bits to the one or more user equipments.
[0024] According to a fifth aspect, there is provided a non-transitory computer readable medium comprising program instructions that, when executed by at least one processor, cause an apparatus to perform at least the method according to the third aspect above and the embodiments thereof.
[0025] According to a sixth aspect, there is provided a non-transitory computer readable medium comprising program instructions that, when executed by at least one processor, cause an apparatus to perform at least the method according to the fourth aspect above and the embodiments thereof. [0026] According to a seventh aspect, there is provided a computer program configured to cause a method in accordance with at least the third aspect above and the embodiments thereof to be performed.
[0027] According to an eighth aspect, there is provided a computer program configured to cause a method in accordance with at least the fourth aspect above and the embodiments thereof to be performed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] Fig. 1 shows, by way of example, a system architecture of a communication system; [0029] Fig. 2 shows, by way of example, a flowchart of a method;
[0030] Fig. 3 shows, by way of example, signalling between a network node and a user equipment;
[0031] Fig. 4 shows, by way of example, a diagram of a machine learning model; [0032] Fig. 5 shows, by way of example, delta reporting of inference values; [0033] Fig. 6 shows, by way of example, a typical inference error distribution;
[0034] Fig. 7 shows, by way of example, inference value reconstruction; [0035] Fig. 8 shows, by way of example, high level illustration of delta reporting; [0036] Fig. 9 shows, by way of example, a flowchart of a method; [0037] Fig. 10 shows, by way of example, a block diagram of an apparatus.
DETAILED DESCRIPTION
[0038] Fig. 1 shows, by way of an example, a system architecture of communication system. In the following, different exemplifying embodiments will be described using, as an example of an access architecture to which the embodiments may be applied, a radio access architecture based on long term evolution advanced (LTE Advanced, LTE-A) or new radio (NR), also known as fifth generation (5G), without restricting the embodiments to such an architecture, however. It is obvious for a person skilled in the art that the embodiments may also be applied to other kinds of communications networks having suitable means by adjusting parameters and procedures appropriately. Some examples of other options for suitable systems are the universal mobile telecommunications system (UMTS) radio access network (UTRAN or E-UTRAN), long term evolution (LTE, the same as E-UTRA), wireless local area network (WLAN or WiFi), worldwide interoperability for microwave access (WiMAX), Bluetooth®, personal communications services (PCS), ZigBee®, wideband code division multiple access (WCDMA), systems using ultra-wideband (UWB) technology, sensor networks, mobile ad-hoc networks (MANETs) and Internet Protocol multimedia subsystems (IMS) or any combination thereof.
[0039] Fig. 1 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown. The connections shown in Fig. 1 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown in Fig. 1. The embodiments are not, however, restricted to the system given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties. Examples of such other communication systems include microwave links and optical fibers, for example.
[0040] The example of Fig. 1 shows a part of an exemplifying radio access network. Fig. 1 shows user devices 100 and 102 configured to be in a wireless connection on one or more communication channels in a cell with an access node, such as gNB, i.e. next generation NodeB, or eNB, i.e. evolved NodeB (eNodeB), 104 providing the cell. The physical link from a user device to the network node is called uplink (UL) or reverse link and the physical link from the network node to the user device is called downlink (DL) or forward link. It should be appreciated that network nodes or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage. A communications system typically comprises more than one network node in which case the network nodes may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signalling purposes. The network node is a computing device configured to control the radio resources of the communication system it is coupled to. The network node may also be referred to as a base station (BS), an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment. The network node includes or is coupled to transceivers. From the transceivers of the network node, a connection is provided to an antenna unit that establishes bi-directional radio links to user devices. The antenna unit may comprise a plurality of antennas or antenna elements. The network node is further connected to core network 110 (CN or next generation core NGC). Depending on the system, the counterpart on the CN side can be a serving gateway (S-GW, routing and forwarding user data packets), packet data network gateway (P-GW), for providing connectivity of user devices (UEs) to external packet data networks, or mobile management entity (MME), etc.
[0041] The user device (also called UE, user equipment, user terminal, terminal device, etc.) illustrates one type of an apparatus to which resources on the air interface are allocated and assigned, and thus any feature described herein with a user device may be implemented with a corresponding apparatus, also including a relay node. An example of such a relay node is a layer 3 relay (self-backhauling relay) towards a base station.
[0042] The user device, or user equipment UE, typically refers to a portable computing device that includes wireless mobile communication devices operating with or without a subscriber identification module (SIM), including, but not limited to, the following types of devices: a mobile station (mobile phone), smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a user device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network. A user device may also be a device having capability to operate in Internet of Things (IoT) network which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
[0043] Additionally, although the apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown in Fig. 1) may be implemented inside these apparatuses, to enable the functioning thereof.
[0044] 5G enables using multiple input - multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available. 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications (such as (massive) machine-type communications (mMTC), including vehicular safety, different sensors and real-time control. 5G is expected to have multiple radio interfaces, namely below 6GHz, cmWave and mmWave, and also being integratable with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE. In other words, 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter- RI operability (inter-radio interface operability, such as below 6GHz - cmWave, below 6GHz - cmWave - mmWave). One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
[0045] The current architecture in LTE networks is distributed in the radio and centralized in the core network. The low latency applications and services in 5G require bringing the content close to the radio which leads to local break out and multi-access edge computing (MEC). 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors. MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time. Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications).
[0046] The communication system is also able to communicate with other networks, such as a public switched telephone network or the Internet 112, or utilize services provided by them. The communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Fig. 1 by “cloud” 114). The communication system may also comprise a central control entity, or the like, providing facilities for networks of different operators to cooperate for example in spectrum sharing. [0047] Edge cloud may be brought into radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN). Using edge cloud may mean access node operations to be carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. Application of cloud RAN architecture enables RAN real time functions being carried out at the RAN side (in a distributed unit, DU 104) and non-real time functions being carried out in a centralized manner (in a centralized unit, CU 108).
[0048] 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling. Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board of vehicles, or ensuring service availability for critical communications, and future railway/maritime/aeronautical communications. Satellite communication may utilise geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano)satellites are deployed). Each satellite 106 in the constellation may cover several satellite-enabled network entities that create on-ground cells. The on-ground cells may be created through an on-ground relay node 104 or by a gNB located on-ground or in a satellite.
[0049] Machine learning (ML) algorithms may be used e.g. in network optimization and management, and physical (PHY) layer and medium access control (MAC) layer procedures of a communication system. For example, ML algorithms may predict potential network problems before they actually happen. Capacity requirements may be detected early using ML algorithms. The ML algorithms may identify network problems and make recommendations to fix them.
[0050] Reinforcement learning (RL) is a type of ML technique that enables an agent to learn in an interactive environment using feedback from its own actions and experiences. In RL, in a certain state of the environment, an agent, or an optimization algorithm, performs an action according to its policy that changes the environment state, and receives a new state and a reward for the action. The agent’s policy is then updated based on the reward of the state-action pair. Sometimes the agent may try unexplored state-action pairs to find a new and better policy. Therefore, learning an optimal policy requires some level of trial and error. This process may be referred to as exploration, and it may be achieved e.g. by performing a random action and/or adding noise to the action. Sometimes in policy-based RL, exploration is part of the policy and is not explicitly selected.
[0051] Typically, the amount of exploration decreases over time and exploitation increases when the agent is confident about the policy, i.e. the agent acts according to its best knowledge. For example, a network may continuously measure network’s key performance indicators (KPIs) and perform radio network actions to learn their impact on the network based on RL principles. Over time, the network may be able to optimize the initial network configuration, and follow the dynamicity of the environment in a fully automated way by exploiting the learnings. However, the trial and error mechanism of RL takes time and increases the amount of errors in the network before converging to an optimized configuration. This is problematic especially for some applications, for example, ultra-reliable low-latency communication (URLLC) due to low error tolerance. Therefore, there is a need to perform exploration in a radio network without affecting network performance, e.g., customers’ data traffic.
[0052] The ML models may be run at a network node, e.g. gNB, and at the UE, or at the network node and not at the UE side. The ML models may be trained based on measurement data collected from UEs. Measurement data is available at the UE, and may comprise data such as reference signal received power (RSRP) or channel state information (CSI) estimations based on CSI reference signals (RSs) or beam signal strength based on beam sweeping procedures. The measurement data is assumed to be the true values. This way, exploration and exploitation for reinforcement learning (RL) strategies can run more or less in parallel. One ML model is used for doing actual inferences (exploitation), while a parallel ML model is constantly updated based on new delta estimates between the current ML inference and the estimated inferences based on the measurements (exploration). From time to time the UE sends an update of the hyperparameters of the newly trained ML model to the gNB and then both the gNB and the UE switch from the old to the new ML model.
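By way of illustration only, the following Python sketch shows one possible reading of this parallel exploitation/exploration arrangement: a frozen model serves the actual inferences while a shadow copy is trained on the deltas between inference and measurement and periodically replaces the active model. The linear model, learning rate and switching period are illustrative assumptions, not features taken from the embodiments.

```python
import copy
import numpy as np

class DeltaTrainedPredictor:
    """Toy stand-in for the ML model: a linear map from input features
    (e.g. normalized UE position) to a PHY parameter such as RSRP."""

    def __init__(self, n_features):
        self.weights = np.zeros(n_features)

    def infer(self, features):
        return float(self.weights @ features)

    def update_from_delta(self, features, delta, lr=0.01):
        # One gradient step that shrinks the squared delta between
        # inference and measurement (the "exploration" update).
        self.weights += lr * delta * features


active_model = DeltaTrainedPredictor(3)          # exploitation
shadow_model = copy.deepcopy(active_model)       # exploration, trained in parallel

rng = np.random.default_rng(0)
for step in range(1000):
    features = rng.normal(size=3)
    measured = 2.0 * features[0] - features[2]   # placeholder "true" measurement

    inferred = active_model.infer(features)      # used for actual decisions
    delta = measured - inferred                  # basis of a delta report

    shadow_model.update_from_delta(features, measured - shadow_model.infer(features))

    if step % 200 == 199:                        # occasional model switch, as in [0052]
        active_model = copy.deepcopy(shadow_model)
```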
[0053] Training of the ML model may be adapted depending on which parameters are to be inferred. For parameter estimation based on a deep neural network (DNN), different phases may be defined for exploration and exploitation. For that purpose, the gNB may use, for exploration, feedback from the UE about the inferred PHY layer parameters and compare these results with its own inference. The difference between the ML inference and the parameters reported from the UE is then the cost function for updating and training the DNN weights. [0054] Important input data for the ML models are e.g. geometries of the environment of the network node, UE position information, and UE measurements. The geometries represent a detailed model of the environment, typically in the form of a vector or raster data map of the surrounding buildings, trees, or other structures. Depending on the level of detail and features of interest, these maps are denoted as Digital Surface Models (DSM), Digital Elevation Models (DEM) and Digital Terrain Models (DTM). A building vector data map (BVDM) is a digital vector map of a building. For example, in the Industry 4.0 context a digital representation of the industrial plant is a basic component of a digital twin. Lately, the term Mirror World has been introduced to describe a more general digital representation of the real world. Accurate geometries of the network node’s environment may be provided e.g. by various map services, light detection and ranging (LIDAR) measurements, radar data or by other related means. Constant ML training of the environment leads to a more and more accurate digital twin or mirror world.
[0055] Inferences about the UE channel conditions and their evolution made by the ML models for network management comprise or relate to e.g. beam management (massive MIMO), best beam selection, small cell on-off, link adaptation, i.e. selection of the best modulation and coding schemes (MCS) taking inter cell interference into account, per beam received signal power, potential beam failures, best fitting beams for beam failure recovery or more generally for suitable handover decisions, channel state information (CSI) estimation and reporting to multiple TRPs. The inferences are closely related to the UE positions. For example, global navigation satellite system (GNSS) provides accurate position information.
[0056] Transfer of complex and large amount of detailed information from the UE to the network node causes reporting overhead, traffic, and interference on the physical uplink control channel (PUCCH). Information reported from the UE to the network node comprises e.g. CSI for multiple TRPs (Tx/Rx point), reference signal received power (RSRP) per TRP, on-off of small cell (SC), etc.
[0057] Let us consider two ML models running in parallel at the network node and at the UE. Assuming that the geometry of the environment, e.g. a digital surface model (DSM) or a building vector data map (BVDM), which perfectly corresponds to reality, is used as a basis for the inferences, the ML algorithms at the network node and the UE should output under ideal conditions the same inference results. The inference results may comprise e.g. RSRP, CSI, and/or channel quality indicator (CQI). In this case, it might be possible that reporting merely the UE position and the UE’s movement would be sufficient to run the mobile radio network. Then, the network would have accurate knowledge of the UE positions within the BVDMs.
[0058] In reality, the non-idealities of the geometry of the environment, of the UE position data and diffuse scatterer processes, etc. may lead to a mismatch between the inferences of the network side and the inferences of the UE side.
[0059] There is provided a reporting mechanism with low signalling overhead. The UE may determine differences between inferences of the model(s) based on measurements performed by the UE. Then, the UE may perform delta reporting for those differences instead of full reports, e.g. RSRP and/or CSI reports, on the physical uplink shared channel (PUSCH) and PUCCH channels. The delta reports may be used for further improving the ML model(s) at the network node and possibly also at the UE.
[0060] Fig. 2 shows, by way of example, a flowchart of a method for delta reporting. The method 200 may be performed e.g. by a user equipment. The method 200 comprises determining 210 physical layer parameters based on real world measurements to obtain measured physical layer parameters. The method 200 comprises receiving 220 inferred physical layer parameters. The method 200 comprises comparing 230 the measured physical layer parameters with the inferred physical layer parameters. The method 200 comprises transmitting 240, to a network node, a delta report generated based on the comparison.
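A minimal UE-side sketch of method 200 in Python is given below; the parameter names, thresholds and the report structure are hypothetical and only meant to make steps 210-240 concrete.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeltaReport:
    ack: bool
    differences: Optional[dict] = None   # per-parameter deltas, only filled on NACK


def build_delta_report(measured: dict, inferred: dict, thresholds: dict) -> DeltaReport:
    """Steps 230-240: compare measured and inferred PHY parameters and build
    the delta report (ACK when all deltas stay within their thresholds)."""
    differences = {name: measured[name] - inferred[name]
                   for name in measured if name in inferred}
    within_limits = all(abs(d) <= thresholds.get(name, 0.0)
                        for name, d in differences.items())
    if within_limits:
        return DeltaReport(ack=True)
    return DeltaReport(ack=False, differences=differences)


# Example: measured (210) vs. inferred (220) RSRP [dBm] and CQI index.
report = build_delta_report(measured={"rsrp": -81.2, "cqi": 9},
                            inferred={"rsrp": -80.0, "cqi": 9},
                            thresholds={"rsrp": 3.0, "cqi": 0})
print(report)   # ACK: the deltas are within the configured limits
```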
[0061] The method disclosed herein saves reporting overhead. Delta reporting allows the exchange of a large amount of data between the UE and the network node, e.g. gNB, with minimum reporting overhead. Delta reporting on the UL channels also enables a reduction of the load in the physical downlink control channels (PDCCH). Delta reporting allows accurate modelling of the mirror world at the network side, and thus improves the ML model at the network node and possibly also at the UE. Delta reporting can replace, or at least reduce and/or enhance, the conventional PHY layer measurements by direct inference of the main parameters. For example, if the ML models in the UE and in the network node become very precise and aligned, the PHY measurement rate in the UE may be reduced. Delta reporting ensures that, despite the low reporting overhead, the intended minimum accuracy for the PHY layer values will be achieved.
[0062] The inferred physical layer parameters may be received e.g. from a memory of the UE. The UE may receive the inferred physical layer parameters either as an output from its ML model, or from the network node. In case the UE does not have any ML model available, the comparison is carried out between the physical layer parameters measured by the UE and the physical layer parameters inferred by the network node.
[0063] Fig. 3 shows, by way of example, signalling between the network node 310, e.g. gNB, and the UE 320. The entity 310 may, alternatively, be referred to as a cloud. First, let us consider a scenario where the UE has an ML model available. The gNB and the UE share the knowledge about the BVDM for the given environment. The BVDM may be downloaded to the UE in advance, or regularly updated over a broadcast data channel (BDCH) by broad- and/or multicasting.
[0064] The UE has the information from the latest measurements, e.g. CSI or demodulation reference signal (DMRS) measurements from relevant beams and/or cells. For example, the network node 310 may transmit 330 periodically, e.g. every 5 ms, CSI RS to the UE 320. The UE estimates and stores 335 e.g. channel transfer function (CTF), RSRP, position, etc. The UE determines 345 physical layer parameters based on real world measurements to obtain measured physical layer parameters.
[0065] The UE reports 340 its current position with a pre-defined accuracy to the gNB. The positioning may be based on location techniques, e.g. GNSS, and/or indoor positioning techniques. The UE position may be reported periodically, e.g. every 160 ms.
[0066] The gNB and the UE perform in parallel an inference of the intended physical layer parameters from the same given BVDM using the same ML model. PHY layer inference at the gNB 350 and at the UE 355 is based on the BVDM, a deep neural network (DNN), and a raytracer for the UE position. The inferred physical layer parameters may comprise e.g. CSI to one or multiple TRPs, channel quality indicator (CQI), modulation coding scheme (MCS), rank indicator (RI), best beam selection, inter cell interference, RSRP to close-by small cells, birth and death of multi path components (MPCs), blocking due to moving objects, etc. For the inference, the gNB and the UE use the same ML models. This way, the UE is able to reconstruct what the gNB has inferred. Thus, the UE receives the inferred physical layer parameters as an output from an ML model, or deep neural network model, residing at the UE.
[0067] The inference based on the ML model at the UE 355 is illustrated in Fig. 3 with dashed lines. A scenario is described later below where the UE does not perform PHY layer inference based on an ML model of its own. [0068] The UE compares 360 the measured physical layer parameters with the inferred physical layer parameters, and thus identifies possible discrepancies. A report, e.g. a delta report, may be generated based on the comparison, and transmitted 365 to the gNB.
[0069] Then, the gNB may update 370 the PHY layer inferences based on the information received in the delta report. The delta report comprises the unavoidable mismatch between the inferred and the real, measured values. The updated PHY layer parameters may be inferred using the current ML model with the delta report as input. Or, the ML model may be updated, and the updated parameters may be inferred using the updated model. The ML model at the gNB may be tuned or re-trained e.g. continuously, with the goal to minimize the delta of the PHY parameters. The adjustments of the PHY inference may comprise e.g. adjusting the weights, i.e. giving more or less weight to the UE delta reports used as input, switching to lower or higher time-resolution inference, etc. The gNB may calculate 375 transmit signals for the updated PHY layer values, and transmit 380 pre-coded signals to the UE. The pre-coded signals from the gNB are the DMRS signals, which may be adjusted in time and frequency, with increased or decreased resolution, so that the UE may perform more accurate CSI estimation.
[0070] Fig. 4 shows, by way of example, a diagram 400 of a machine learning (ML) model. The ML models at UE and gNB are expected to generate the same inferences for a given metric or feature and within a pre-determined error margin. Therefore, both may use approximately the same set of hyperparameters, e.g. DNN 410 type, number of layers and nodes, etc., for the ML model as well as the same input data. Such input data may comprise, for example, at least the output 425 of a raytracing simulation 420 for a given BVDM and a known UE location. This raytracer 420 may use input information 422 like BVDM, UE sensor data (orientations, antenna patterns, etc.), positions of SCs and gNBs together with their transmit power, beam shapes etc. In addition, the raytracer 420 may comprise information of moving objects like cars, people, bikes etc. together with their locations and directions of movement.
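To make the data flow of Fig. 4 concrete, the sketch below assembles a feature vector from a stubbed raytracer output 425, the known UE location and a list of moving objects 430, and passes it through a toy two-layer network; the raytracer interface, feature scaling and layer sizes are assumptions made only for this illustration.

```python
import numpy as np

def raytrace_stub(ue_position, bvdm):
    """Placeholder for raytracing simulation 420: per-MPC delays [s],
    path gains [dB] and an estimated RSRP [dBm] for the given BVDM/position."""
    return {"mpc_delays": [1.2e-7, 3.4e-7], "mpc_gains": [-72.0, -85.0], "rsrp": -71.5}

def assemble_features(ue_position, rt_out, moving_objects):
    return np.concatenate([
        np.asarray(ue_position, dtype=float),            # known UE location
        np.asarray(rt_out["mpc_delays"]) * 1e7,          # scaled MPC delays
        np.asarray(rt_out["mpc_gains"]) / 100.0,         # scaled path gains
        [rt_out["rsrp"] / 100.0],
        [float(len(moving_objects))],                    # crude mobility context
    ])

def dnn_forward(x, w1, b1, w2, b2):
    hidden = np.maximum(0.0, w1 @ x + b1)                # ReLU hidden layer
    return w2 @ hidden + b2                              # e.g. refined RSRP/CSI inference 440

rng = np.random.default_rng(1)
x = assemble_features([10.0, 4.5, 1.5], raytrace_stub([10.0, 4.5, 1.5], bvdm=None), ["car"])
w1, b1 = rng.normal(size=(16, x.size)), np.zeros(16)
w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)
print(dnn_forward(x, w1, b1, w2, b2))                    # toy output vector
```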
[0071] For example, the ML model, or DNN 410, may use the output 425 of the raytracer, like the raytraced multipath component (MPC) parameters, estimated RSRP values (Path Loss (PL)), received beam strengths per cell etc. as input data, together with further available data like UE locations (Ray Tracing (RT)), moving objects 430, etc. Based on this information the ML model may generate as an output 440 e.g. an improved inference, for example, regarding the CSI, the UE location, or the BVDM parameters. This improved inference then inherently overcomes or minimizes, for example, effects due to the RF characteristics of the BVDM or inaccuracies of the BVDM geometry. For CSI, for example, the ML inference may then be in the form of multi path component (MPC) parameters or of a sampled channel transfer function (CTF) or channel impulse response (CIR).
[0072] By observing multiple successive measurements, i.e. UE observations, some offsets, like parasitic effects related to antenna patterns, UE rotations, frequency offsets, or RF characteristics of the reflections in the BVDM, may be found and corrected by the ML model. Besides the parallel implementation of the raytracer and the ML model described here, one can also consider a combined deep neural network, which inherently includes the BVDM as the result of an intensive and longer supervised or reinforcement learning process.
[0073] Hyperparameters and input data to the ML models at both at the gNB and the UE are aligned. Both ML models run in parallel and the UE will then compare its latest knowledge of the inferred parameters, based on additional measurements, to generate a delta report between ML inferred and actually estimated parameters. The delta report is then reported to the gNB. The gNB can then reconstruct the full data based on the ML inference together with the delta reports.
[0074] The delta report may comprise an acknowledgement (ACK) or negative-acknowledgement (NACK) message. If the ML inferences from the UE and from the gNB agree within pre-defined limits, the UE may transmit a corresponding ACK message to the gNB. Otherwise, the UE may transmit a NACK message. A threshold difference value or delta value may be pre-determined for this purpose.
[0075] Alternatively, in case of meaningful deviations between the measured physical layer parameters and the inferred physical layer parameters, a more comprehensive delta report may be generated. The delta report comprises information relating to the comparison between the measured physical layer parameters and the inferred physical layer parameters. For example, the delta report may comprise the delta, or the difference between the measured physical layer parameters and the inferred physical layer parameters. For example, inferred CSI per PRB versus measured CSI per PRB may be reported. Alternatively or in addition, the content of the delta report may be related to the inference probabilities. For example, the ML algorithm may output the probability of the best beam, or best small cell, or best cell for handover, etc. for all possible beams. For example, the delta report may indicate to choose the second, or third, most likely beam, instead of the one with the strongest likelihood, as this will be the right choice. Thus, the delta report may comprise e.g. the probabilities of the beams, cells, etc., and the network may make decisions based on the probabilities. Alternatively, or in addition, the delta report may comprise a limited set of best choices determined based on the probabilities, from which the network may choose.
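The probability-based variant can be illustrated as follows; the beam probabilities and the report fields are invented for the example, and only the idea of reporting the rank of the measured best beam within the inferred ordering is taken from the text above.

```python
def rank_of_measured_beam(inference_probs, measured_best_beam):
    """Rank (1 = most likely) of the beam the UE actually measured as best,
    within the model's probability-ordered beam list."""
    ordered = sorted(inference_probs, key=inference_probs.get, reverse=True)
    return ordered.index(measured_best_beam) + 1


probs = {0: 0.55, 1: 0.30, 2: 0.10, 3: 0.05}   # assumed ML output probabilities per beam
measured_best = 1                               # beam found best by the UE measurement

rank = rank_of_measured_beam(probs, measured_best)
if rank == 1:
    report = {"ack": True}                      # inference already points to the right beam
else:
    report = {"ack": False, "use_rank": rank}   # "take the rank-th candidate of your own list"
print(report)                                   # {'ack': False, 'use_rank': 2}
```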
[0076] Delta reporting results in a relatively low feedback overhead. Especially for very accurate BVDMs, the delta values become small, so that the related delta reporting causes only very little feedback overhead.
[0077] Referring back to Fig. 3, let us then consider a scenario where the UE does not have any ML model available. Or, the UE may have an ML model available, but the implementation of large models may be challenging. On the other hand, small models that fit the limited UE battery power may have limited performance. In this case, where the gNB and the UE have either ML models of different size, or the UE might not use an ML model at all, the UE receives the inferred physical layer parameters from the network node, e.g. from the gNB. In other words, the UE may receive the inferred physical layer parameters from the network node as an output from an ML model, e.g. a deep neural network model, residing at the network node.
[0078] To avoid a situation where the gNB has to download the full information on its inference to the UE, the gNB may transmit a hash function over one or more relevant parameters to the UE. The UE may then compare the outcome of the hash function with the measured physical layer parameters. This way, the UE may identify how the determinations of the physical layer parameters are matched between the UE and the gNB.
[0079] Besides the hash function, the gNB may provide some redundancy allowing correction of inference errors to a certain extent, so that UEs may identify some mismatch between the measured PHY parameters and the inferred PHY parameters.
[0080] In case of beam selection for multiple beams the hash function may be for example a cyclic redundancy check (CRC) for the combined beam IDs. For accurate ML inferences the maximum possible deviation will be limited. Thus, the gNB may report in DL a CRC check or a sparse representation of lower priority information to the UE. The UE may then find the fine-granular differences for estimating the delta report.
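A possible realization of such a CRC over combined beam IDs is sketched below using a truncated CRC-32; the beam identifiers, the 8-bit truncation and the candidate set are illustrative assumptions.

```python
import zlib
from itertools import permutations

def beam_crc(beam_ids, bits=8):
    """CRC over the combined beam IDs, truncated to the configured strength."""
    return zlib.crc32(bytes(beam_ids)) & ((1 << bits) - 1)


# gNB side: inferred best-beam combination; only the short CRC is sent in DL.
gnb_beams = (7, 3, 12)
dl_crc = beam_crc(gnb_beams)

# UE side: a limited set of plausible combinations from its own measurements.
candidates = list(permutations((3, 7, 12))) + [(7, 3, 11)]
matching = [c for c in candidates if beam_crc(c) == dl_crc]
print(matching)   # candidate combinations consistent with the gNB inference
```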
[0081] The accuracy levels of the measurements as well as of the ML based inferences may vary, and therefore, the strength of this CRC check may be adapted, e.g. by a similar mechanism as known for open loop link adaptation. Open loop link adaptation is a long term adaptation of parameters based on e.g. the number of ACK/NACK feedback. The strength of the CRC check may be adapted e.g. to slowly varying radio conditions. [0082] In case of CSI, in some way being represented by complex tap or subcarrier values, the gNB may send in DL the low priority bits of relevant taps of the channel impulse response, of the multi path components, or of the reference signal subcarriers of the channel transfer function. In other words, instead of the CRC check, the gNB may report the low priority bits. The gNB 310 may calculate 390 the CRC or sparse representation of low priority bits and report 395 the CRC check or the sparse representation of low priority bits. Based on the CRC, the UE may identify the right choices from a limited set of possible options from the measurements.
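One way such an open-loop-style adaptation of the CRC strength could look is sketched below; the window length, NACK-rate thresholds and step size are placeholders chosen only for the illustration.

```python
from collections import deque

class CrcStrengthController:
    """Slowly adapts the number of CRC bits in the DL report: more NACKs lead to a
    stronger check, a long run of ACKs lets the check (and DL overhead) shrink."""

    def __init__(self, bits=8, min_bits=4, max_bits=16, window=100):
        self.bits, self.min_bits, self.max_bits = bits, min_bits, max_bits
        self.history = deque(maxlen=window)

    def record(self, ack):
        self.history.append(bool(ack))
        if len(self.history) == self.history.maxlen:
            nack_rate = 1.0 - sum(self.history) / len(self.history)
            if nack_rate > 0.10 and self.bits < self.max_bits:
                self.bits += 2        # degraded radio conditions: strengthen the check
            elif nack_rate < 0.01 and self.bits > self.min_bits:
                self.bits -= 2        # stable conditions: save DL overhead
        return self.bits
```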
[0083] Fig. 5 shows, by way of example, delta reporting of inference values combining hashing, CRC reports and delta reports. Any physical layer parameter, either inferred by an ML model at the UE and/or the gNB or measured by the UE, may be represented as a data vector 510. For example, this data vector might represent the bits of a quantized CSI value or a sorted list of best-fitting beam identifiers or indices. The allocation of bits depends on the inference error distribution. The assumption is that a typical outcome of the ML inference has a certain error distribution, as provided in Fig. 6. Fig. 6 shows, by way of example, a typical error distribution 600 as an outcome of ML inference, in this case for an MPC delay. In the example of Fig. 6, the error distribution is close to a Gaussian normal distribution. The delta reporting should take care of this error distribution per inference, as illustrated in the example of Fig. 5. This may be learned by certain ML algorithms. In Fig. 5, the inference data has been separated into ‘coarse’ 512, ‘medium’ 514 and ‘fine’ 516 bits. A further assumption is that the ‘coarse’ inference part is provided by the current ML models with very high reliability. That is the part which does not have to be reported at all.
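The separation into ‘coarse’, ‘medium’ and ‘fine’ bits can be illustrated with a plain bit-field split; the 12-bit value and the 4/4/4 allocation below are arbitrary examples, since the real allocation would follow the inference error distribution of Fig. 6.

```python
def split_bits(value, total_bits, coarse, medium):
    """Split a quantized inference value into coarse / medium / fine bit fields
    (most significant bits first), as in data vector 510 of Fig. 5."""
    fine = total_bits - coarse - medium
    coarse_bits = value >> (medium + fine)
    medium_bits = (value >> fine) & ((1 << medium) - 1)
    fine_bits = value & ((1 << fine) - 1)
    return coarse_bits, medium_bits, fine_bits


# 12-bit quantized CSI value split into 4 coarse, 4 medium and 4 fine bits.
c, m, f = split_bits(0b101101110010, total_bits=12, coarse=4, medium=4)
print(bin(c), bin(m), bin(f))   # 0b1011 0b111 0b10  (fields 1011 / 0111 / 0010)
```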
[0084] The ‘medium’ part has a high reliability with seldom-occurring errors. Therefore, the ‘medium’ bits are combined 525 into larger data blocks together with a CRC check - or alternatively sufficient redundancy - to correct these rare error events. A hash function 520 may be used to combine the data into larger data blocks. In this case the UE does not report the data itself, but instead generates a hash report as the CRC check or redundancy bits 530. This saves a lot of overhead, while it still allows correcting a limited number of errors.
[0085] The ‘fine’ part 516 is then the main part of the delta report. It reports the delta signal relative to the already inferred data parts above, according to the error distribution in Fig. 6.
[0086] Fig. 7 shows, by way of example, inference value reconstruction. It illustrates how the coarse 710, medium 720, 722, 724 and fine 730 inference data might be separated at the UE side and recombined at the gNB to get back to the full inference values. The coarse data 710 is not reported, the medium data is reported using hash reports (CRC) 720, 722, 724, and the fine data 730 is reported via delta reports (Huffman code).
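Continuing the 4/4/4-bit example, the recombination of Fig. 7 at the gNB might look as follows; Huffman coding of the fine delta is omitted here, and all field widths are again illustrative.

```python
def reconstruct(coarse, medium, fine_delta, medium_bits, fine_bits):
    """Recombine the full inference value: coarse bits come from the gNB's own
    inference, medium bits from the CRC-verified hash report, fine bits from the
    decoded delta report."""
    return (coarse << (medium_bits + fine_bits)) | (medium << fine_bits) | fine_delta


value = reconstruct(0b1011, 0b0111, 0b0010, medium_bits=4, fine_bits=4)
print(bin(value))   # 0b101101110010 -- the full 12-bit inference value
```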
[0087] For the reporting it is assumed that for each inference, the range, i.e. the minimum and maximum values or the cardinality of the inference outcomes, is known at the UE as well as at the gNB. For example, phase values can vary between 0 and 2π or amplitudes might be normalized to one.
[0088] Fig. 8 shows, by way of example, high level illustration of delta reporting of differences between measured and inferred PHY parameters. UE 810 reports 840 its position to the network node 820, e.g. the gNB. The UE determines physical layer parameters based on real world measurements 816 to obtain measured physical layer parameters. For example, the measured values over time (t) may comprise amplitude (a) and delay values (t) of MPCs comprising a radio channel. The inferred physical layer parameters may be inferred by the UE itself, or received from the network node 820. In this example, both UE and the network node perform the inference using the same ML models comprising e.g. deep neural networks 812, 822. The same BVDM 818, 826 may be used as input for a raytracer 814, 824. Raytracing simulation output for the given BVDM and UE location may be used as input for the neural network. The measured physical layer parameters are then compared 830 with the inferred parameters outputted from the ML model. A delta report generated based on the comparison is then transmitted 845 to the network node.
[0089] Fig. 9 shows, by way of example, a flowchart of a method. The method 900 may be performed e.g. by a network node, or by a network entity, e.g. a cloud. The method 900 comprises receiving 910 position information from one or more user equipments. The method 900 comprises inferring 920 physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments. The method 900 comprises receiving 930 a delta report from a user equipment. The method 900 comprises inferring 940 updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating 945 the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
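A compact network-side sketch of method 900 is given below; the StubModel, its RSRP formula and the report fields are invented for the example and stand in for the actual machine learning model, BVDM handling and delta-report format.

```python
class StubModel:
    """Illustrative stand-in for the gNB's ML model (not the actual model)."""

    def __init__(self):
        self.bias = 0.0

    def infer(self, bvdm, position, hint=None):
        # Toy mapping: predicted RSRP falls off with distance from a site at (0, 0).
        rsrp = -60.0 - 0.5 * (position[0] ** 2 + position[1] ** 2) ** 0.5 + self.bias
        if hint and "rsrp_delta" in hint:
            rsrp += hint["rsrp_delta"]                 # step 940: reuse the reported delta
        return {"rsrp": rsrp}

    def train_on_delta(self, bvdm, position, report):
        self.bias += 0.1 * report.get("rsrp_delta", 0.0)   # step 945: crude model update


def network_side_step(model, bvdm, ue_positions, delta_reports, update_model=False):
    """One pass of method 900: infer PHY parameters from the BVDM and the reported
    UE positions (910-920), then refine the inference (940) or the model (945)."""
    inferred = {ue: model.infer(bvdm, pos) for ue, pos in ue_positions.items()}
    for ue, report in delta_reports.items():        # step 930
        if report.get("ack"):
            continue                                 # inference already accurate enough
        if update_model:
            model.train_on_delta(bvdm, ue_positions[ue], report)
            inferred[ue] = model.infer(bvdm, ue_positions[ue])
        else:
            inferred[ue] = model.infer(bvdm, ue_positions[ue], hint=report)
    return inferred


print(network_side_step(StubModel(), bvdm=None,
                        ue_positions={"ue1": (30.0, 40.0)},
                        delta_reports={"ue1": {"ack": False, "rsrp_delta": 2.5}}))
```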
[0090] Fig. 10 shows, by way of example, an apparatus capable of performing the method(s) as disclosed herein. Illustrated is device 1000, which may comprise, for example, a mobile communication device such as mobile 100 of Fig. 1, or a network node, e.g. access point 104 of Fig. 1. Comprised in device 1000 is processor 1010, which may comprise, for example, a single- or multi-core processor wherein a single-core processor comprises one processing core and a multi-core processor comprises more than one processing core. Processor 1010 may comprise, in general, a control device. Processor 1010 may comprise more than one processor. Processor 1010 may be a control device. A processing core may comprise, for example, a Cortex-A8 processing core manufactured by ARM Holdings or a Steamroller processing core designed by Advanced Micro Devices Corporation. Processor 1010 may comprise at least one Qualcomm Snapdragon and/or Intel Atom processor. Processor 1010 may comprise at least one application-specific integrated circuit, ASIC. Processor 1010 may comprise at least one field-programmable gate array, FPGA. Processor 1010 may be means for performing method steps in device 1000. Processor 1010 may be configured, at least in part by computer instructions, to perform actions.
[0091] A processor may comprise circuitry, or be constituted as circuitry or circuitries, the circuitry or circuitries being configured to perform phases of methods in accordance with example embodiments described herein. As used in this application, the term “circuitry” may refer to one or more or all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of hardware circuits and software, such as, as applicable: (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions), and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
[0092] This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
[0093] Device 1000 may comprise memory 1020. Memory 1020 may comprise random-access memory and/or permanent memory. Memory 1020 may comprise at least one RAM chip. Memory 1020 may comprise solid-state, magnetic, optical and/or holographic memory, for example. Memory 1020 may be at least in part accessible to processor 1010. Memory 1020 may be at least in part comprised in processor 1010. Memory 1020 may be means for storing information. Memory 1020 may comprise computer instructions that processor 1010 is configured to execute. When computer instructions configured to cause processor 1010 to perform certain actions are stored in memory 1020, and device 1000 overall is configured to run under the direction of processor 1010 using computer instructions from memory 1020, processor 1010 and/or its at least one processing core may be considered to be configured to perform said certain actions. Memory 1020 may be at least in part comprised in processor 1010. Memory 1020 may be at least in part external to device 1000 but accessible to device 1000.
[0094] Device 1000 may comprise a transmitter 1030. Device 1000 may comprise a receiver 1040. Transmitter 1030 and receiver 1040 may be configured to transmit and receive, respectively, information in accordance with at least one cellular or non-cellular standard. Transmitter 1030 may comprise more than one transmitter. Receiver 1040 may comprise more than one receiver. Transmitter 1030 and/or receiver 1040 may be configured to operate in accordance with global system for mobile communication, GSM, wideband code division multiple access, WCDMA, 5G, long term evolution, LTE, IS-95, wireless local area network, WLAN, Ethernet and/or worldwide interoperability for microwave access, WiMAX, standards, for example.
[0095] Device 1000 may comprise a near-field communication, NFC, transceiver 1050. NFC transceiver 1050 may support at least one NFC technology, such as NFC, Bluetooth, Wibree or similar technologies.
[0096] Device 1000 may comprise user interface, UI, 1060. UI 1060 may comprise at least one of a display, a keyboard, a touchscreen, a vibrator arranged to signal to a user by causing device 1000 to vibrate, a speaker and a microphone. A user may be able to operate device 1000 via UI 1060, for example to accept incoming telephone calls, to originate telephone calls or video calls, to browse the Internet, to manage digital files stored in memory 1020 or on a cloud accessible via transmitter 1030 and receiver 1040, or via NFC transceiver 1050, and/or to play games.
[0097] Device 1000 may comprise or be arranged to accept a user identity module 1070. User identity module 1070 may comprise, for example, a subscriber identity module, SIM, card installable in device 1000. A user identity module 1070 may comprise information identifying a subscription of a user of device 1000. A user identity module 1070 may comprise cryptographic information usable to verify the identity of a user of device 1000 and/or to facilitate encryption of communicated information and billing of the user of device 1000 for communication effected via device 1000. [0098] Processor 1010 may be furnished with a transmitter arranged to output information from processor 1010, via electrical leads internal to device 1000, to other devices comprised in device 1000. Such a transmitter may comprise a serial bus transmitter arranged to, for example, output information via at least one electrical lead to memory 1020 for storage therein. Alternatively to a serial bus, the transmitter may comprise a parallel bus transmitter. Likewise processor 1010 may comprise a receiver arranged to receive information in processor 1010, via electrical leads internal to device 1000, from other devices comprised in device 1000. Such a receiver may comprise a serial bus receiver arranged to, for example, receive information via at least one electrical lead from receiver 1040 for processing in processor 1010. Alternatively to a serial bus, the receiver may comprise a parallel bus receiver.
[0099] Processor 1010, memory 1020, transmitter 1030, receiver 1040, NFC transceiver 1050, UI 1060 and/or user identity module 1070 may be interconnected by electrical leads internal to device 1000 in a multitude of different ways. For example, each of the aforementioned devices may be separately connected to a master bus internal to device 1000, to allow for the devices to exchange information. However, as the skilled person will appreciate, this is only one example and depending on the embodiment various ways of interconnecting at least two of the aforementioned devices may be selected.

Claims

CLAIMS:
1. An apparatus comprising means for:
- determining physical layer parameters based on real world measurements to obtain measured physical layer parameters;
- receiving inferred physical layer parameters;
- comparing the measured physical layer parameters with the inferred physical layer parameters; and
- transmitting, to a network node, a delta report generated based on the comparison.
2. The apparatus of claim 1, wherein the delta report comprises at least one or more of one or more difference values between the measured physical layer parameters and the inferred physical layer parameters; inference probabilities; a list of physical layer parameters generated based on the inference probabilities; an acknowledgement message if the one or more difference values are below one or more pre-determined threshold values; a negative-acknowledgement message if the one or more difference values are above one or more pre-determined threshold values.
3. The apparatus of claim 1 or 2, wherein the inferred physical layer parameters are received as an output from a machine learning model residing at the apparatus, the model being configured to infer physical layer parameters, the inferring comprising
- receiving at least a building vector data map as input;
- performing raytracing simulation based on the building vector data map;
- inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the apparatus.
4. The apparatus of claim 3, wherein the machine learning model residing at the apparatus is similar to a machine learning model residing at the network node.
5. The apparatus of claim 1 or 2, wherein the inferred physical layer parameters are received from a network node as an output from a machine learning model residing at the network node, the model being configured to infer physical layer parameters, the inferring comprising at least
- receiving at least a building vector data map as input;
- performing raytracing simulation based on the building vector data map;
- inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the apparatus.
6. The apparatus of claim 5, wherein the inferred physical layer parameters are received as a cyclic redundancy check or a sparse representation of low priority bits.
7. An apparatus comprising means for receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
8. The apparatus of claim 7, further comprising means for calculating transmit signals for the updated physical layer parameters; transmitting pre-coded signals to the one or more user equipments.
9. The apparatus of claim 7 or 8, further comprising means for calculating a cyclic redundancy check or sparse representation of low priority bits based on the inferred physical layer parameters; reporting the cyclic redundancy check or the sparse representation of low priority bits to the one or more user equipments.
10. The apparatus of any preceding claim, wherein the means comprises at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the performance of the apparatus.
11. A method comprising:
- determining, by a user equipment, physical layer parameters based on real world measurements to obtain measured physical layer parameters;
- receiving inferred physical layer parameters;
- comparing the measured physical layer parameters with the inferred physical layer parameters; and
- transmitting, to a network node, a delta report generated based on the comparison.
12. The method of claim 11, further comprising receiving the inferred physical layer parameters as an output from a machine learning model residing at the user equipment, the model being configured to infer physical layer parameters, the inferring comprising
- receiving at least a building vector data map as input;
- performing raytracing simulation based on the building vector data map;
- inferring the physical layer parameters at least based on an output of the raytracing simulation and a known location of the user equipment.
13. A method comprising receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
14. A non-transitory computer readable medium comprising program instructions that, when executed by at least one processor, cause an apparatus to perform at least: - determining physical layer parameters based on real world measurements to obtain measured physical layer parameters;
- receiving inferred physical layer parameters;
- comparing the measured physical layer parameters with the inferred physical layer parameters; and
- transmitting, to a network node, a delta report generated based on the comparison.
15. A non-transitory computer readable medium comprising program instructions that, when executed by at least one processor, cause an apparatus to perform at least: receiving position information from one or more user equipments; inferring physical layer parameters using a machine learning model at least based on a building vector data map and the position information from the one or more user equipments; receiving a delta report from a user equipment; inferring updated physical layer parameters using at least part of the delta report as input for the machine learning model; and/or updating the machine learning model based on the delta report and inferring updated physical layer parameters using the updated machine learning model.
PCT/EP2021/057341 2020-03-27 2021-03-23 Reporting in wireless networks WO2021191176A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20205306 2020-03-27
FI20205306 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021191176A1 true WO2021191176A1 (en) 2021-09-30

Family

ID=75278007

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/057341 WO2021191176A1 (en) 2020-03-27 2021-03-23 Reporting in wireless networks

Country Status (1)

Country Link
WO (1) WO2021191176A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190191425A1 (en) * 2017-12-15 2019-06-20 Qualcomm Incorporated Methods and apparatuses for dynamic beam pair determination
WO2019138156A1 (en) * 2018-01-12 2019-07-18 Nokia Technologies Oy Profiled channel impulse response for accurate multipath parameter estimation
US20190372644A1 (en) * 2018-06-01 2019-12-05 Samsung Electronics Co., Ltd. Method and apparatus for machine learning based wide beam optimization in cellular network

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11304063B2 (en) * 2020-07-09 2022-04-12 Industry Foundation Of Chonnam National University Deep learning-based beamforming communication system and method
WO2022229018A1 (en) * 2021-04-26 2022-11-03 Nokia Technologies Oy Enhancement of data map of objects via object specific radio frequency parameters
WO2023113657A1 (en) * 2021-12-13 2023-06-22 Telefonaktiebolaget Lm Ericsson (Publ) Managing a wireless device which has available a machine learning model that is operable to connect to a communication network
WO2023110087A1 (en) * 2021-12-15 2023-06-22 Nokia Technologies Oy Control signalling
WO2023123379A1 (en) * 2021-12-31 2023-07-06 Nec Corporation Methods, devices, and computer readable medium for communication
WO2023206114A1 (en) * 2022-04-27 2023-11-02 Qualcomm Incorporated Inference error information feedback for machine learning-based inferences
WO2023208021A1 (en) * 2022-04-27 2023-11-02 Qualcomm Incorporated Inference error information feedback for machine learning-based inferences
WO2024000385A1 (en) * 2022-06-30 2024-01-04 Qualcomm Incorporated Blockage prediction report
WO2024073192A1 (en) * 2022-09-28 2024-04-04 Qualcomm Incorporated Virtual instance for reference signal for positioning
WO2024081744A1 (en) * 2022-10-12 2024-04-18 Tektronix, Inc. Ad hoc machine learning training through constraints, predictive traffic loading, and private end-to-end encryption

Similar Documents

Publication Publication Date Title
WO2021191176A1 (en) Reporting in wireless networks
WO2019102064A1 (en) Joint beam reporting for wireless networks
US20240107429A1 (en) Machine Learning Non-Standalone Air-Interface
US10652782B1 (en) Latency reduction based on packet error prediction
US11012133B2 (en) Efficient data generation for beam pattern optimization
US20220131625A1 (en) Selecting Uplink Transmission Band in Wireless Network
US20220038931A1 (en) Radio link adaptation in wireless network
US11811527B2 (en) Detecting control information communicated in frame using a neural network
US11337100B1 (en) Transfer of channel estimate in radio access network
EP4184804A1 (en) Algorithm for mitigation of impact of uplink/downlink beam mis-match
US20220264514A1 (en) Rf-fingerprinting map update
US20220394509A1 (en) Link adaptation
CN113905385B (en) Radio resource parameter configuration
WO2023208021A1 (en) Inference error information feedback for machine learning-based inferences
US20230300671A1 (en) Downlink congestion control optimization
WO2023201613A1 (en) Measurement report resource management in wireless communications
WO2022178706A1 (en) Out-of-distribution detection and reporting for machine learning model deployment
WO2024050655A1 (en) Event-triggered beam avoidance prediction report
US20240064516A1 (en) Schemes for identifying corrupted datasets for machine learning security
WO2022227081A1 (en) Techniques for channel state information and channel compression switching
US20240104384A1 (en) Management of federated learning
WO2023184070A1 (en) Differential demodulation reference signal based channel quality reporting
WO2023216178A1 (en) Sensing-aided radio access technology communications
WO2023231041A1 (en) Machine learning based predictive initial beam pairing for sidelink
WO2023083431A1 (en) Reconfigurable demodulation and decoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21715197

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21715197

Country of ref document: EP

Kind code of ref document: A1