CN118158700A - Artificial intelligence/machine learning method and device based on beam management


Info

Publication number
CN118158700A
Authority
CN
China
Prior art keywords
subset
monitoring
metric
beams
performance metrics
Prior art date
Legal status
Pending
Application number
CN202311670027.7A
Other languages
Chinese (zh)
Inventor
辜禹仁
庆奎范
Current Assignee
MediaTek Inc
Original Assignee
MediaTek Inc
Priority date
Filing date
Publication date
Priority claimed from US 18/529,092 (published as US 2024/0196242 A1)
Application filed by MediaTek Inc
Publication of CN118158700A

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

Artificial intelligence/machine learning methods and apparatus based on beam management are disclosed. A User Equipment (UE) receives a first monitoring configuration from a base station for monitoring an artificial intelligence/machine learning (AI/ML) model of a set of beams. The UE measures a first subset of the set of beams. The UE performs inference using the AI/ML model based on measurements of the first subset to determine predicted values for a second subset of the set of beams. The second subset is selected based on the first monitoring configuration. The UE measures the beams of the second subset to determine the measurements of the second subset. The UE calculates one or more performance metrics from the predicted values and measured values of the second subset. The one or more performance metrics are selected based on the first monitoring configuration.

Description

Artificial intelligence/machine learning method and device based on beam management
Technical Field
The present disclosure relates to communication systems, and more particularly, to AI/ML techniques for beam management.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcast. A typical wireless communication system may employ multiple-access technologies capable of supporting communication with multiple users by sharing the available system resources. Examples of such multiple-access technologies include Code Division Multiple Access (CDMA) systems, Time Division Multiple Access (TDMA) systems, Frequency Division Multiple Access (FDMA) systems, Orthogonal Frequency Division Multiple Access (OFDMA) systems, Single-Carrier Frequency Division Multiple Access (SC-FDMA) systems, and Time Division Synchronous Code Division Multiple Access (TD-SCDMA) systems.
These multiple-access technologies have been adopted in various telecommunication standards to provide a common protocol that enables different wireless devices to communicate at the municipal, national, regional, and even global levels. An example of a telecommunication standard is 5G New Radio (NR). 5G NR is part of the continued mobile broadband evolution promulgated by the Third Generation Partnership Project (3GPP) to meet new requirements associated with latency, reliability, security, scalability (e.g., related to the Internet of Things (IoT)), and other requirements. Certain aspects of 5G NR may be based on the 4G Long Term Evolution (LTE) standard. Further improvements to 5G NR technology are needed. These improvements may also be applicable to other multiple-access technologies and the telecommunication standards that employ them.
Disclosure of Invention
The following presents a simplified summary of one or more aspects of the invention in order to provide a basic understanding of such aspects. This summary is neither an extensive overview of all contemplated aspects of the invention, nor is it intended to identify key or critical elements of any or all aspects, nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In one aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may be a User Equipment (UE). The UE receives a first monitoring configuration from a base station for monitoring an artificial intelligence/machine learning (AI/ML) model that manages a set of beams. The UE measures a first subset of the set of beams. The UE performs inference using the AI/ML model based on measurements of the first subset to determine predicted values for a second subset of the set of beams. The second subset is selected based on the first monitoring configuration. The UE measures the beams of the second subset to determine measurements of the second subset. The UE calculates one or more performance metrics based on the predicted values and the measured values of the second subset. The selection of the one or more performance metrics is based on the first monitoring configuration.
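As an illustration only (not the claimed implementation), one such monitoring round can be sketched in Python. The callback names (`measure`, `model`), the metric names, and the RSRP values in the usage note are hypothetical assumptions for the sketch:

```python
def monitoring_round(measure, model, set_a_ids, set_b_ids, metric="top1_accuracy"):
    """One round of AI/ML beam-prediction model monitoring (illustrative sketch).

    measure(ids) -> dict mapping beam id -> measured L1-RSRP (dBm), hypothetical.
    model(meas)  -> dict mapping beam id -> predicted L1-RSRP for Set B, hypothetical.
    """
    # Step 1: measure the first subset (Set A) of beams.
    set_a_meas = measure(set_a_ids)
    # Step 2: run inference to predict Set B beam qualities from Set A measurements.
    predicted = model(set_a_meas)
    # Step 3: measure the second subset (Set B) to obtain ground truth.
    set_b_meas = measure(set_b_ids)
    # Step 4: compute the configured performance metric.
    if metric == "top1_accuracy":
        best_pred = max(predicted, key=predicted.get)
        best_meas = max(set_b_meas, key=set_b_meas.get)
        return float(best_pred == best_meas)
    if metric == "rsrp_mae":
        errs = [abs(predicted[b] - set_b_meas[b]) for b in set_b_ids]
        return sum(errs) / len(errs)
    raise ValueError(f"unknown metric {metric}")
```

For example, with toy RSRP tables, `monitoring_round(measure, model, [0, 1], [2, 3], "top1_accuracy")` would return `1.0` whenever the model's best-predicted beam matches the best-measured beam in Set B.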
To the accomplishment of the foregoing and related ends, one or more aspects of the invention comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects of the invention. These features are indicative, however, of but a few of the various ways in which the principles of the invention may be employed and the present description is intended to include all such aspects and their equivalents.
Drawings
Fig. 1 shows an example of a wireless communication system and an access network.
Fig. 2 shows a base station in communication with a UE in an access network.
Fig. 3 illustrates an example logical architecture of a distributed access network.
Fig. 4 illustrates an example physical architecture of a distributed access network.
Fig. 5 shows an example of a DL-centric slot.
Fig. 6 shows an example of a UL-centric slot.
Fig. 7 is a schematic diagram schematically showing an artificial intelligence/machine learning model for spatial and time domain beam prediction.
FIG. 8 is a schematic diagram schematically illustrating a round of artificial intelligence/machine learning model monitoring.
Fig. 9 is an exemplary diagram schematically showing the joint use of different model monitoring methods and different monitoring periods.
Figure 10 is a diagram schematically illustrating an embodiment in which a user equipment reports calculated metrics.
Fig. 11 is a diagram schematically illustrating an embodiment in which a user equipment reports a monitoring event.
Figure 12 is a diagram schematically illustrating an embodiment of user equipment reporting metric statistics.
Fig. 13 is a diagram schematically showing a user equipment configured with different resource sets and different monitoring frequencies.
Fig. 14 is a flowchart of a method (process) for monitoring an AI/ML model.
Detailed Description
The following detailed description with respect to the drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be implemented. The detailed description includes specific details necessary to provide a thorough understanding of the various concepts. However, it will be understood by those skilled in the art that the concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts.
Several aspects of the telecommunications system will now be presented with reference to various apparatus and methods. These devices and methods will be described and illustrated in the following detailed description by various modules, components, circuits, processes, algorithms, etc. (collectively referred to as "elements"). These elements may be implemented using electronic hardware, computer software, or any combination of the two. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
For example, any portion of an element, or any combination of elements, can be implemented as a "processing system" that includes one or more processors. Examples of processors include microprocessors, microcontrollers, Graphics Processing Units (GPUs), Central Processing Units (CPUs), application processors, Digital Signal Processors (DSPs), Reduced Instruction Set Computing (RISC) processors, Systems on a Chip (SoC), baseband processors, Field Programmable Gate Arrays (FPGAs), Programmable Logic Devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described in this disclosure. One or more processors in the processing system may execute software. Software should be construed broadly to mean instructions, instruction sets, code segments, program code, programs, software components, applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
For example, in one or more example aspects, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on, or encoded as one or more instructions or code on, a computer-readable medium. Computer-readable media includes computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer-executable code in the form of instructions or data structures that can be accessed by a computer.
Fig. 1 illustrates an example of a wireless communication system and an access network 100. The wireless communication system, also referred to as a Wireless Wide Area Network (WWAN), includes base stations 102, UEs 104, an Evolved Packet Core (EPC) 160, and another core network 190 (e.g., a 5G Core network (5GC)). The base stations 102 may include macro cells (high-power cellular base stations) and/or small cells (low-power cellular base stations). The macro cells include base stations. The small cells include home base stations, pico base stations, and micro base stations.
Base stations 102 configured for 4G LTE, collectively referred to as the Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN), may connect with the EPC 160 through backhaul links 132 (e.g., an S1 interface). Base stations 102 configured for 5G NR, collectively referred to as the Next Generation RAN (NG-RAN), may connect with the core network 190 through backhaul links 184. Among other functions, base stations 102 may perform one or more of: transfer of user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, delivery of Non-Access Stratum (NAS) messages, NAS node selection, synchronization, Radio Access Network (RAN) sharing, Multimedia Broadcast Multicast Service (MBMS), subscriber and equipment trace, RAN Information Management (RIM), paging, positioning, and delivery of warning messages. Base stations 102 may communicate with each other directly or indirectly (e.g., through the EPC 160 or the core network 190) over backhaul links 134 (e.g., an X2 interface). The backhaul links 134 may be wired or wireless.
The base stations 102 may communicate wirelessly with the UEs 104. Each base station 102 may provide communication coverage for a respective geographic coverage area 110. There may be overlapping geographic coverage areas 110. For example, the small cell 102' may have a coverage area 110' that overlaps the coverage areas 110 of one or more macro base stations 102. A network that includes both small cells and macro cells may be referred to as a heterogeneous network. A heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs), which may serve a restricted group known as a Closed Subscriber Group (CSG). The communication links 120 between the base stations 102 and the UEs 104 may include Uplink (UL) (also referred to as reverse link) transmissions from a UE 104 to a base station 102 and/or Downlink (DL) (also referred to as forward link) transmissions from a base station 102 to a UE 104. The communication links 120 may use Multiple-Input Multiple-Output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links may be through one or more carriers. The base stations 102/UEs 104 may use bandwidth of up to Y megahertz (e.g., 5, 10, 15, 20, 100, 400 MHz) per carrier. Carrier aggregation may be used, with up to a total of Yx megahertz (x component carriers) in each transmission direction. The carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more carriers may be allocated to DL than to UL). The component carriers may include a primary component carrier and one or more secondary component carriers. The primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell).
Some UEs 104 may communicate with each other using a device-to-device (D2D) communication link 158. The D2D communication link 158 may use the DL/UL WWAN spectrum. The D2D communication link 158 may use one or more sidelink channels, such as a Physical Sidelink Broadcast Channel (PSBCH), a Physical Sidelink Discovery Channel (PSDCH), a Physical Sidelink Shared Channel (PSSCH), and a Physical Sidelink Control Channel (PSCCH). D2D communication may be through a variety of wireless D2D communication systems, such as FlashLinQ, WiMedia, Bluetooth, ZigBee, Wi-Fi based on the IEEE 802.11 standard, LTE, or NR.
The wireless communication system may also include a Wi-Fi Access Point (AP) 150 in the 5 GHz unlicensed spectrum, which communicates with Wi-Fi Stations (STAs) 152 via communication links 154. When communicating in the unlicensed spectrum, the STAs 152/AP 150 may perform a Clear Channel Assessment (CCA) prior to communicating in order to determine whether the channel is available.
The small cell 102' may operate in licensed and/or unlicensed spectrum. When operating in unlicensed spectrum, the small cell 102' may employ NR and use the same 5 GHz unlicensed spectrum as the Wi-Fi AP 150. A small cell 102' employing NR in the unlicensed spectrum may boost coverage and/or increase capacity of the access network.
A base station 102, whether a small cell 102' or a large cell (e.g., a macro base station), may include an eNB, a gNodeB (gNB), or another type of base station. Some base stations, such as the gNB 180, may operate in the traditional sub-6 GHz spectrum, in millimeter wave (mmW) frequencies, and/or in near-mmW frequencies in communication with the UE 104. When the gNB 180 operates in mmW or near-mmW frequencies, the gNB 180 may be referred to as a mmW base station. Extremely High Frequency (EHF) is a part of the radio frequency (RF) portion of the electromagnetic spectrum. EHF ranges from 30 GHz to 300 GHz, with wavelengths between 1 mm and 10 mm. Radio waves in this band may be referred to as millimeter waves. Near-mmW may extend down to a frequency of 3 GHz, with a wavelength of 100 mm. The Super High Frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW/near-mmW radio frequency band (e.g., 3 GHz-300 GHz) have extremely high path loss and a short range. The mmW base station 180 may utilize beamforming 182 with the UE 104 to compensate for the extremely high path loss and short range.
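The band-edge wavelengths quoted above follow from the free-space relation λ = c/f. A small sketch (the function name is ours, not from the disclosure):

```python
def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimeters: lambda = c / f."""
    c = 299_792_458.0  # speed of light in m/s
    return c / freq_hz * 1000.0  # convert meters to millimeters

# Band edges quoted in the text:
# 30 GHz -> ~10 mm and 300 GHz -> ~1 mm (EHF / millimeter wave)
# 3 GHz  -> ~100 mm (lower edge of near-mmW)
```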
The base station 180 may transmit the beamformed signals to the UEs 104 in one or more transmit directions 108 a. The UE 104 may receive the beamformed signals from the base station 180 in one or more receive directions 108 b. The UE 104 may also transmit the beamformed signals in one or more transmit directions to the base station 180. The base station 180 may receive the beamformed signals from the UEs 104 in one or more receive directions. The base station 180/UE 104 may perform beam training to determine the best reception and transmission direction of the base station 180/UE 104. The transmission and reception directions of the base station 180 may be the same or different. The transmit and receive directions of the UE 104 may be the same or different.
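Beam training of the kind described above can be thought of as a sweep over transmit/receive direction pairs that keeps the pair with the best measured quality. A hypothetical sketch (the `measure_rsrp` callback and the direction grids are assumptions for illustration, not part of the disclosure):

```python
def best_beam_pair(measure_rsrp, tx_dirs, rx_dirs):
    """Exhaustive beam sweep: try every (tx, rx) direction pair and keep
    the one with the highest measured RSRP (illustrative sketch).

    measure_rsrp(tx, rx) -> measured RSRP in dBm for that beam pair (hypothetical).
    """
    best_pair, best_rsrp = None, float("-inf")
    for tx in tx_dirs:
        for rx in rx_dirs:
            rsrp = measure_rsrp(tx, rx)
            if rsrp > best_rsrp:
                best_pair, best_rsrp = (tx, rx), rsrp
    return best_pair, best_rsrp
```

In practice the transmit and receive directions may be trained jointly, as here, or in separate sweeps; the sketch shows only the selection principle.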
The EPC 160 may include a Mobility Management Entity (MME) 162, other MMEs 164, a Serving Gateway 166, a Multimedia Broadcast Multicast Service (MBMS) Gateway 168, a Broadcast Multicast Service Center (BM-SC) 170, and a Packet Data Network (PDN) Gateway 172. The MME 162 may communicate with a Home Subscriber Server (HSS) 174. The MME 162 is the control node that processes the signaling between the UEs 104 and the EPC 160. Generally, the MME 162 provides bearer and connection management. All user Internet Protocol (IP) packets are transferred through the Serving Gateway 166, which itself is connected to the PDN Gateway 172. The PDN Gateway 172 provides UE IP address allocation as well as other functions. The PDN Gateway 172 and the BM-SC 170 are connected to IP Services 176. The IP Services 176 may include the Internet, an intranet, an IP Multimedia Subsystem (IMS), a PS streaming service, and/or other IP services. The BM-SC 170 may provide functions for MBMS user service provisioning and delivery. The BM-SC 170 may serve as an entry point for content provider MBMS transmission, may be used to authorize and initiate MBMS bearer services within a Public Land Mobile Network (PLMN), and may be used to schedule MBMS transmissions. The MBMS Gateway 168 may be used to distribute MBMS traffic to the base stations 102 belonging to a Multicast Broadcast Single Frequency Network (MBSFN) area broadcasting a particular service, and may be responsible for session management (start/stop) and for collecting eMBMS-related charging information.
The core network 190 may include an access and mobility management function (AMF) 192, other AMFs 193, a Location Management Function (LMF) 198, a Session Management Function (SMF) 194, and a User Plane Function (UPF) 195. The AMF 192 may communicate with a Unified Data Management (UDM) 196. The AMF 192 is a control node that handles signaling between the UE 104 and the core network 190. In general, SMF 194 provides QoS flows and session management. All user Internet Protocol (IP) packets are forwarded through UPF 195. The UPF 195 provides UE IP address assignment as well as other functions. The UPF 195 is connected to an IP service 197. The IP services 197 may include the internet, intranets, IP Multimedia Subsystem (IMS), PS streaming services, and/or other IP services.
A base station may also be referred to as a gNB, a Node B, an evolved Node B (eNB), an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a Basic Service Set (BSS), an Extended Service Set (ESS), a Transmission and Reception Point (TRP), or some other suitable terminology. The base station 102 provides an access point to the EPC 160 or the core network 190 for a UE 104. Examples of UEs 104 include a cellular phone, a smart phone, a Session Initiation Protocol (SIP) phone, a laptop computer, a Personal Digital Assistant (PDA), a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., an MP3 player), a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a fuel pump, a large, medium, or small kitchen appliance, a medical device, an implanted device, a sensor/actuator, a display, or any other similar functioning device. Some UEs 104 may be referred to as Internet of Things devices (e.g., parking meters, fuel pumps, toasters, vehicles, heart monitors, etc.). The UE 104 may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, or some other suitable terminology.
Although the present disclosure may refer to 5G New Radio (NR), the present disclosure may be applicable to other similar fields, such as LTE, LTE-advanced (LTE-a), code Division Multiple Access (CDMA), global system for mobile communications (GSM), or other wireless/radio access technologies.
Fig. 2 is a block diagram of communications between a base station 210 and a UE 250 in an access network. In the downlink, IP packets from EPC 160 may be provided to controller/processor 275. Controller/processor 275 implements layer 3 and layer 2 functions. Layer 3 includes a Radio Resource Control (RRC) layer, and layer 2 includes a Packet Data Convergence Protocol (PDCP) layer, a Radio Link Control (RLC) layer, and a Medium Access Control (MAC) layer. The controller/processor 275 provides RRC layer functions related to broadcast system information (e.g., MIB, SIB), RRC connection control (e.g., RRC connection paging, RRC connection setup, RRC connection modification, and RRC connection release), mobility of different Radio Access Technologies (RATs), and measurement configuration for UE measurement reporting; PDCP layer functions related to header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification) and handover support functions; RLC layer functions related to transmission of upper layer Packet Data Units (PDUs), error correction by ARQ, concatenation, segmentation and reassembly of RLC Service Data Units (SDUs), re-segmentation of RLC data PDUs, and re-ordering of RLC data PDUs; and MAC layer functions related to mapping between logical channels and transport channels, multiplexing of MAC SDUs to Transport Blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction by HARQ, priority handling and logical channel priority.
The Transmit (TX) processor 216 and the Receive (RX) processor 270 implement Layer 1 functionality associated with various signal processing functions. Layer 1, which includes a Physical (PHY) layer, may include error detection on the transport channels, Forward Error Correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The TX processor 216 handles mapping to signal constellations based on various modulation schemes (e.g., Binary Phase-Shift Keying (BPSK), Quadrature Phase-Shift Keying (QPSK), M-Phase-Shift Keying (M-PSK), M-Quadrature Amplitude Modulation (M-QAM)). The coded and modulated symbols may be split into parallel streams. Each stream may be mapped to Orthogonal Frequency Division Multiplexing (OFDM) subcarriers, multiplexed with a reference signal (e.g., pilot signal) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time-domain OFDM symbol stream. The OFDM stream is spatially precoded to produce multiple spatial streams. Channel estimates from a channel estimator 274 may be used to determine the coding and modulation scheme, as well as for spatial processing. The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE 250. Each spatial stream may then be provided to a different antenna 220 via a separate transmitter 218TX. Each transmitter 218TX may modulate an RF carrier with a respective spatial stream for transmission.
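The constellation-mapping/IFFT chain described above can be illustrated with a toy baseband example in NumPy. The FFT size, cyclic-prefix length, and QPSK bit mapping below are illustrative assumptions, not the disclosed implementation (which also involves reference-signal multiplexing and spatial precoding omitted here):

```python
import numpy as np

def ofdm_symbol(bits, n_fft=64, cp_len=16):
    """Build one baseband OFDM symbol from bits (illustrative sketch).

    QPSK-map the bits, load them onto subcarriers, convert to the time
    domain with an IFFT, then prepend a cyclic prefix.
    """
    assert len(bits) == 2 * n_fft  # QPSK carries 2 bits per subcarrier
    b = np.asarray(bits).reshape(-1, 2)
    # QPSK constellation with unit average power (assumed mapping).
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    time_domain = np.fft.ifft(symbols, n_fft)  # subcarriers -> time samples
    cp = time_domain[-cp_len:]                 # cyclic prefix = last samples
    return np.concatenate([cp, time_domain])

def ofdm_demod(samples, n_fft=64, cp_len=16):
    """Receiver side: drop the cyclic prefix and FFT back to subcarriers."""
    return np.fft.fft(samples[cp_len:], n_fft)
```

The `ofdm_demod` helper mirrors the FFT-based receiver processing described for the UE below: dropping the cyclic prefix and applying an FFT recovers the transmitted constellation points exactly over an ideal channel.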
At the UE 250, each receiver 254RX receives a signal through its respective antenna 252. Each receiver 254RX recovers information modulated onto an RF carrier and provides the information to the Receive (RX) processor 256. The TX processor 268 and the RX processor 256 implement Layer 1 functionality associated with various signal processing functions. The RX processor 256 may perform spatial processing on the information to recover any spatial streams destined for the UE 250. If multiple spatial streams are destined for the UE 250, they may be combined by the RX processor 256 into a single OFDM symbol stream. The RX processor 256 then converts the OFDM symbol stream from the time domain to the frequency domain using a Fast Fourier Transform (FFT). The frequency-domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The symbols on each subcarrier, as well as the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station 210. These soft decisions may be based on channel estimates computed by a channel estimator 258. The soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station 210 on the physical channel. The data and control signals are then provided to the controller/processor 259, which implements Layer 3 and Layer 2 functionality.
The controller/processor 259 can be associated with a memory 260 that stores program codes and data. Memory 260 may be referred to as a computer-readable medium. In the uplink, controller/processor 259 provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from EPC 160. The controller/processor 259 is also responsible for error detection using an Acknowledgement (ACK) and/or Negative Acknowledgement (NACK) protocol to support HARQ operations.
Similar to the functionality related to downlink transmissions of base station 210, controller/processor 259 provides RRC layer functions related to system information (e.g., MIB, SIB) acquisition, RRC connection, and measurement reporting; PDCP layer functions related to header compression/decompression and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functions related to transmission of upper layer PDUs, error correction by ARQ, concatenation, segmentation and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and re-ordering of RLC data PDUs; and MAC layer functions related to mapping between logical channels and transport channels, multiplexing of MAC SDUs to TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction by HARQ, priority handling and logical channel priority.
Channel estimates derived by the channel estimator 258 from a reference signal or feedback transmitted by the base station 210 may be used by the TX processor 268 to select the appropriate coding and modulation schemes, as well as to facilitate spatial processing. The spatial streams generated by the TX processor 268 may be provided to different antennas 252 via separate transmitters 254TX. Each transmitter 254TX may modulate an RF carrier with a respective spatial stream for transmission. The UL transmission is processed at the base station 210 in a manner similar to that described in connection with the receiver function at the UE 250. Each receiver 218RX receives a signal through its respective antenna 220. Each receiver 218RX recovers information modulated onto an RF carrier and provides the information to the RX processor 270.
The controller/processor 275 may be associated with a memory 276 that stores program codes and data. Memory 276 may be referred to as a computer-readable medium. In the UL, the controller/processor 275 provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the UE 250. IP packets from controller/processor 275 may be provided to EPC 160. The controller/processor 275 is also responsible for error detection using an Acknowledgement (ACK) and/or Negative Acknowledgement (NACK) protocol to support HARQ operations.
New Radio (NR) may refer to radios configured to operate according to a new air interface (e.g., other than Orthogonal Frequency Division Multiple Access (OFDMA)-based air interfaces) or a fixed transport layer (e.g., other than Internet Protocol (IP)). NR may utilize OFDM with a Cyclic Prefix (CP) on the uplink and downlink, and may include support for half-duplex operation using Time Division Duplexing (TDD). NR may include enhanced mobile broadband (eMBB) services targeting wide bandwidths (e.g., 80 MHz and above), millimeter wave (mmW) targeting high carrier frequencies (e.g., 60 GHz), massive MTC (mMTC) targeting non-backward-compatible MTC techniques, and/or mission-critical services targeting Ultra-Reliable Low-Latency Communication (URLLC).
A single component carrier bandwidth of 100 MHz may be supported. In one example, an NR Resource Block (RB) may span 12 subcarriers with a subcarrier bandwidth of 60 kHz over a duration of 0.25 ms, or with a subcarrier bandwidth of 30 kHz over a duration of 0.5 ms (similarly, a duration of 1 ms for a 15 kHz subcarrier spacing (SCS) with 50 MHz bandwidth). Each radio frame may consist of 10 subframes (10, 20, 40, or 80 NR slots, depending on the subcarrier spacing) with a length of 10 ms. Each slot may indicate a link direction (i.e., DL or UL) for data transmission, and the link direction for each slot may be dynamically switched. Each slot may include DL/UL data as well as DL/UL control data. The UL and DL slots for NR may be described in more detail below with respect to Figs. 5 and 6.
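The subcarrier spacings, slot durations, and slots-per-frame counts quoted above are related by the NR numerology rule that the subcarrier spacing scales as 15 kHz · 2^μ while the slot duration scales as 1 ms / 2^μ. A sketch (the function name is ours):

```python
def nr_numerology(mu: int):
    """NR numerology mu: SCS = 15 kHz * 2**mu, slot = 1 ms / 2**mu,
    slots per 10 ms radio frame = 10 * 2**mu."""
    scs_khz = 15 * 2**mu
    slot_ms = 1.0 / 2**mu
    slots_per_frame = 10 * 2**mu
    return scs_khz, slot_ms, slots_per_frame

# mu=0 -> (15 kHz, 1.0 ms, 10 slots); mu=1 -> (30 kHz, 0.5 ms, 20 slots);
# mu=2 -> (60 kHz, 0.25 ms, 40 slots); mu=3 -> (120 kHz, 0.125 ms, 80 slots)
```

These values reproduce the durations in the text: 60 kHz SCS gives 0.25 ms slots, 30 kHz gives 0.5 ms, and 15 kHz gives 1 ms, with 10, 20, 40, or 80 slots per 10 ms radio frame.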
The NR RAN may include a Central Unit (CU) and Distributed Units (DUs). An NR BS (e.g., a gNB, a 5G Node B, a Transmission Reception Point (TRP), an Access Point (AP)) may correspond to one or multiple BSs. NR cells may be configured as access cells (ACells) or data-only cells (DCells). For example, the RAN (e.g., a central unit or a distributed unit) may configure the cells. DCells may be cells used for carrier aggregation or dual connectivity, but not used for initial access, cell selection/reselection, or handover. In some cases, a DCell may not transmit a Synchronization Signal (SS); in other cases, a DCell may transmit an SS. An NR BS may transmit a downlink signal to a UE indicating the cell type. Based on the indicated cell type, the UE may communicate with the NR BS. For example, the UE may determine the NR BSs considered for cell selection, access, handover, and/or measurement based on the indicated cell type.
Fig. 3 illustrates an example logical architecture of a distributed RAN 300 in accordance with aspects of the present disclosure. The 5G access node 306 may include an Access Node Controller (ANC) 302. The ANC may be a Central Unit (CU) of the distributed RAN. The backhaul interface to the next generation core network (NG-CN) 304 may terminate at the ANC. The backhaul interface to neighboring next generation access nodes (NG-ANs) 310 may terminate at the ANC. The ANC may include one or more TRPs 308 (which may also be referred to as BSs, NR BSs, Node Bs, 5G NBs, APs, or some other terminology). As described above, TRP may be used interchangeably with "cell".
TRP 308 may be a Distributed Unit (DU). A TRP may be connected to one ANC (ANC 302) or to multiple ANCs (not shown). For example, for RAN sharing, radio as a service (RaaS), and service-specific ANC deployments, a TRP may be connected to multiple ANCs. A TRP may include one or more antenna ports. TRPs may be configured to serve UE traffic individually (e.g., dynamic selection) or jointly (e.g., joint transmission).
The logical architecture of the distributed RAN 300 may be used to illustrate fronthaul definitions. An architecture may be defined that supports fronthaul solutions across different deployment types. For example, the architecture may be based on transport network capabilities (e.g., bandwidth, delay, and/or jitter). The architecture may share features and/or components with LTE. According to these aspects, the next generation AN (NG-AN) 310 may support dual connectivity with NR. The NG-AN may share a common fronthaul for LTE and NR.
The architecture may enable cooperation between TRPs 308. For example, cooperation may be preconfigured within a TRP and/or across TRPs via ANC 302. According to these aspects, an inter-TRP interface may not be needed/present.
According to these aspects, dynamic configuration of split logical functions may be present in the architecture of the distributed RAN 300. The Packet Data Convergence Protocol (PDCP), Radio Link Control (RLC), and Medium Access Control (MAC) protocols may be adaptively placed at the ANC or the TRP.
Fig. 4 illustrates an example physical architecture of a distributed RAN 400 in accordance with aspects of the present disclosure. A centralized core network unit (C-CU) 402 may host core network functions. The C-CU may be deployed centrally. C-CU functionality may be offloaded (e.g., to Advanced Wireless Services (AWS)) to handle peak capacity. A centralized RAN unit (C-RU) 404 may host one or more ANC functions. Optionally, the C-RU may host core network functions locally. The C-RU may have a distributed deployment. The C-RU may be closer to the network edge. A Distributed Unit (DU) 406 may host one or more TRPs. The DU may be located at the edge of the network with Radio Frequency (RF) functionality.
Fig. 5 shows an example of a DL-centric time slot. The DL-centric time slot may comprise a control portion 502. The control portion 502 may exist in an initial or beginning portion of the DL-centric time slot. The control portion 502 may include various scheduling information and/or control information corresponding to various portions of the DL-centric time slot. In some configurations, the control portion 502 may be a Physical DL Control Channel (PDCCH), as shown in fig. 5. The DL-centric time slot may also include a DL data portion 504. The DL data portion 504 may also sometimes be referred to as the payload of the DL-centric time slot. The DL data portion 504 may include communication resources for communicating DL data from a scheduling entity (e.g., UE or BS) to a subordinate entity (e.g., UE). In some configurations, the DL data portion 504 may be a Physical DL Shared Channel (PDSCH).
DL-centric time slots may also include a common UL portion 506. The common UL portion 506 may also sometimes be referred to as a UL burst, a common UL burst, and/or various other suitable terms. The common UL portion 506 may include feedback information corresponding to various other portions of the DL-centric time slot. For example, the common UL portion 506 may include feedback information corresponding to the control portion 502. Non-limiting examples of feedback information may include an ACK signal, a NACK signal, a HARQ indicator, and/or various other suitable types of information. The common UL portion 506 may include additional or alternative information, such as information related to a Random Access Channel (RACH) procedure, a Scheduling Request (SR), and the like.
As shown in fig. 5, the end of the DL data portion 504 may be separated in time from the beginning of the common UL portion 506. This time separation is sometimes referred to as a gap, a guard period, a guard interval, and/or various other suitable terms. The separation provides time for switching from DL communication (e.g., a receiving operation of the subordinate entity (e.g., UE)) to UL communication (e.g., a transmitting operation of the subordinate entity (e.g., UE)). Those skilled in the art will appreciate that the above is just one example of a DL-centric time slot, and that alternative structures with similar features may exist without necessarily departing from the aspects described herein.
Fig. 6 is a diagram showing one example of a UL-centric time slot. The UL-centric time slot may comprise a control portion 602. The control portion 602 may be present in an initial or beginning portion of the UL-centric time slot. The control portion 602 in fig. 6 may be similar to the control portion 502 described with reference to fig. 5. The UL-centric time slot may also include a UL data portion 604. The UL data portion 604 may also sometimes be referred to as the payload of the UL-centric time slot. The UL data portion may include communication resources for communicating UL data from a subordinate entity (e.g., UE) to a scheduling entity (e.g., UE or BS). In some configurations, the control portion 602 may be a Physical DL Control Channel (PDCCH).
As shown in fig. 6, the end of the control portion 602 may be separated in time from the beginning of the UL data portion 604. This time separation is sometimes referred to as a gap, a guard period, a guard interval, and/or various other suitable terms. The separation provides time for switching from DL communication (e.g., a receiving operation of the scheduling entity) to UL communication (e.g., a transmitting operation of the scheduling entity). The UL-centric time slot may also include a common UL portion 606. The common UL portion 606 in fig. 6 may be similar to the common UL portion 506 described with reference to fig. 5. The common UL portion 606 may additionally or alternatively include information related to Channel Quality Indicators (CQIs), Sounding Reference Signals (SRSs), and the like. Those skilled in the art will appreciate that the above is merely one example of a UL-centric time slot, and that alternative structures with similar features may exist without necessarily departing from the aspects described herein.
In some cases, two or more subordinate entities (e.g., UEs) may communicate with each other using sidelink signals. Practical applications of such sidelink communications may include public safety, proximity services, UE-to-network relaying, vehicle-to-vehicle (V2V) communications, Internet of Everything (IoE) communications, Internet of Things (IoT) communications, mission-critical mesh networks, and/or various other suitable applications. In general, a sidelink signal may refer to a signal communicated from one subordinate entity (e.g., UE 1) to another subordinate entity (e.g., UE 2) without relaying the communication through a scheduling entity (e.g., UE or BS), even though the scheduling entity may be used for scheduling and/or control purposes. In some examples, sidelink signals may be communicated using a licensed spectrum (as opposed to a wireless local area network, which typically uses an unlicensed spectrum).
Fig. 7 is a diagram 700 schematically illustrating an AI/ML model for spatial and temporal beam prediction. In this example, base station 702 transmits beams 711-734 simultaneously in all directions over channel 780. After identifying the incoming beams, the UE 704 may calculate the Layer 1 Reference Signal Received Power (L1-RSRP) for each beam. L1-RSRP is the physical-layer average received signal power per resource element, measured on the resource elements carrying the secondary synchronization signal or the channel state information reference signal (CSI-RS).
Machine learning algorithms are used to analyze signal strength histories from a subset of beams and attempt to find patterns or trends in the data. This helps predict the signal strength of the remaining unmeasured beams. By identifying historical data patterns from a subset of beams, the algorithm can predict the signal strengths of other beams even if the user device is moving.
In this example, base station 702 is equipped with multiple antennas capable of simultaneously transmitting 24 different beams 711-734 in different directions. The user equipment 704, which is constantly moving and equipped with its own antenna, periodically measures channel metrics such as RSRP for a strategically selected 4 of the 24 beams (e.g., beams 715, 716, 729, and 730) transmitted by the base station 702. The set of beams measured as artificial intelligence/machine learning inputs (the sensing beams, e.g., beams 715, 716, 729, and 730) is referred to as the Set B beams. The set of beams predicted as artificial intelligence/machine learning outputs (typically the communication beams, e.g., all 24 beams) is referred to as the Set A beams.
The measurement data for these 4 beams is saved over time as historical data. The historical data captures channel metrics for a subset of the beams over time, depicting a dynamic picture of the interaction between the user device 704 and the beams. The user device 704 may be configured with a historical data time window 760 during which measurements are stored at the user device 704. In an example, the current time is t_0, and the historical data time window 760 spans from time t_(-3) to time t_0. The measurement data for beams 715, 716, 729, and 730 obtained during the historical data time window 760 is stored at the user device 704 and used as input to the artificial intelligence/machine learning model 750 to predict the measurement data for beams that were not measured at the current time t_0, as well as the measurement data for all beams at future times t_1, t_2, and so on.
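The history-window behavior described above can be sketched as a rolling buffer: only the sensing beams are measured at each step, and samples older than the configured window are discarded. All names and values below are illustrative assumptions, not from the patent:

```python
from collections import deque

# Hypothetical sketch of the historical data time window: 4 sensing beams
# (715, 716, 729, 730) measured per step, with a 4-step window (t_-3 .. t_0).
WINDOW_STEPS = 4
SET_B = (715, 716, 729, 730)

class BeamHistory:
    def __init__(self, window: int = WINDOW_STEPS):
        # deque with maxlen drops the oldest sample automatically
        self.samples = deque(maxlen=window)

    def record(self, rsrp_by_beam: dict) -> None:
        """Store one time step of sensing-beam L1-RSRP measurements (dBm)."""
        self.samples.append({b: rsrp_by_beam[b] for b in SET_B})

    def model_input(self) -> list:
        """Flatten the window into the feature vector fed to the AI/ML model."""
        return [s[b] for s in self.samples for b in SET_B]

hist = BeamHistory()
for t in range(6):                       # 6 steps recorded, only last 4 kept
    hist.record({b: -80.0 - t for b in SET_B})
print(len(hist.model_input()))           # 16 = 4 steps x 4 beams
```

The oldest two steps fall out of the window, mirroring how measurements before t_(-3) are no longer used as model input.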
The historical data is provided as input to a machine learning algorithm to predict channel metrics for unmeasured beams, directing the user device 704 to which beam should be focused when communication with the base station 702 is desired. Thus, the algorithm may predict channel metrics for all beams based on analyzing patterns and trends in the historical data from the subset of beams.
Another subset of beams (e.g., beams 711, 721, 724, and 734), which are not typically measured, may be periodically sampled and recorded with their channel metrics. This helps verify the predictions of the algorithm compared to real world performance, while also updating the model. Periodic measurements help to perfect the algorithm by updating its weights and parameters. As machine learning algorithms mature, their predictions of the best beam become more and more accurate. When the user device 704 initiates communication, it may select a user device transmit or receive beam 770 (e.g., a best beam) that may produce a better signal quality based on the predictions.
The method involves predicting the top k best beams that are likely to have the highest channel metrics, rather than identifying a single best beam. Focusing on the top k beams provides better accuracy in many cases. The prediction of the top k beams is obtained by estimating the channel metric values and selecting the k highest. This aligns well with real-world communication requirements and improves system performance.
The main outputs of the classification-based AI/ML model include the Identifiers (IDs) of the top k best beams predicted for communication, together with a corresponding prediction confidence score or predicted RSRP for each beam. These beams are determined to be the best beams for communication based on their expected signal strength and reliability. For example, if k is set to 5, the model may predict that the best five communication beams in the entire beam set are beams numbered 732, 730, 734, 728, and 733. This predictive ranking enables the UE 704 to make informed decisions about which beams the base station 702 and UE 704 can use to communicate with each other at any given time, thereby optimizing performance in terms of signal strength and likelihood of successful data transmission.
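The top-k output described above amounts to a simple ranking over the model's per-beam scores. The confidence values in this sketch are made up, chosen only to reproduce the k = 5 example from the text:

```python
# Illustrative top-k selection from a classification-style model output:
# beam IDs ranked by predicted confidence score (values are invented).
confidence = {732: 0.97, 730: 0.95, 734: 0.91, 728: 0.88, 733: 0.85, 711: 0.40}

def top_k_beams(scores: dict, k: int) -> list:
    """Return the k beam IDs with the highest predicted score."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_k_beams(confidence, 5))   # [732, 730, 734, 728, 733]
```

The same helper works unchanged for a regression-style output by passing predicted RSRP values instead of confidence scores.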
Further, the output of the regression-based artificial intelligence/machine learning model includes a predicted RSRP value for each of the a-set communication beams. This predictive output directly estimates the expected signal strength of each individual beam in the communication set, allowing the user equipment 704 to more finely select a beam based on the predicted RSRP value.
Fig. 8 is a diagram 800 schematically illustrating AI/ML model monitoring. After the AI/ML model 750 has been trained and deployed on the base station 702 and/or the UE 704 for inference, a mechanism is needed to identify whether the predictions of the AI/ML model 750 are no longer accurate, and to monitor and ensure the prediction accuracy of the AI/ML model 750. In one embodiment, the AI/ML model may be deployed on the UE 704. Accuracy may degrade, for example, when a new situation that is not contained in the training data set occurs between the base station 702 and the UE 704, or when one or more sensing beams (e.g., beams 715, 716, 729, and 730) are blocked by an obstacle, resulting in very low RSRP measurements.
The input 810 of the model 750 is a measurement of the selected beams (e.g., beams 715, 716, 729, and 730). For the regression model, the output 820 is the predicted RSRP for all communication beams. Base station 702 can configure a subset 830 of the communication beams for measurement by user device 704. Subset 830 indicates which beams need to be measured and compared. The prediction 840 may be derived by filtering the output 820 using a filter 832 to include only the predicted RSRP of the beams in the measurement subset 830. RSRP prediction accuracy 860 may be calculated by comparing the predictions 840 to the actual RSRP measurements 850 of the beams of the subset 830.
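A minimal sketch of the filtering step (filter 832), assuming the model output and subset are represented as plain Python structures: only the predicted RSRP values for beams in subset 830 are kept, so the predictions 840 align one-to-one with the measurements 850. Beam IDs and RSRP values are illustrative:

```python
# Hypothetical regression output 820: predicted RSRP (dBm) for every
# communication beam. The subset 830 is configured by the base station.
predicted_all = {711: -95.0, 715: -82.0, 729: -80.5, 730: -79.0, 734: -88.0}
subset_830 = (715, 730)

def filter_to_subset(predicted: dict, subset) -> dict:
    """Filter 832: keep only predictions for the configured subset of beams."""
    return {beam: predicted[beam] for beam in subset}

print(filter_to_subset(predicted_all, subset_830))  # {715: -82.0, 730: -79.0}
```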
Statistical measures, such as Mean Square Error (MSE) or variance, may be used to evaluate the prediction accuracy 860 of the predictions 840 against the RSRP measurements 850. For example, MSE evaluates overall prediction accuracy, while variance measures the generalization ability of the model. Lower MSE values indicate better accuracy, while lower variance values indicate better generalization. The MSE evaluates how well the model's predictions match the actual data, while the variance checks the stability and consistency of the model's performance across different data sets.
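The two statistics can be sketched as follows, using toy RSRP values rather than real measurements: MSE of the prediction error for overall accuracy, and the variance of the prediction error as a rough consistency check:

```python
# Sketch of MSE and error variance over the monitored subset (toy data).

def prediction_errors(predicted, measured):
    return [p - m for p, m in zip(predicted, measured)]

def mse(predicted, measured):
    """Mean squared prediction error; lower means better accuracy."""
    errs = prediction_errors(predicted, measured)
    return sum(e * e for e in errs) / len(errs)

def error_variance(predicted, measured):
    """Variance of the prediction error; lower means more consistent."""
    errs = prediction_errors(predicted, measured)
    mean = sum(errs) / len(errs)
    return sum((e - mean) ** 2 for e in errs) / len(errs)

pred = [-80.0, -85.0, -90.0, -95.0]   # predicted RSRP (dBm) for subset 830
meas = [-81.0, -84.0, -90.0, -96.0]   # measured RSRP for the same beams
print(mse(pred, meas))                # 0.75
print(error_variance(pred, meas))     # 0.6875
```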
For the classification model, the output 822 of the AI/ML model 750 is the top K beams among all communication beams with the highest prediction confidence or highest predicted RSRP. The base station 702 may configure the communication beams of the subset 830 for the UE 704 to measure. The subset 830 indicates which beams need to be measured and compared. The predictions 842 are derived by filtering the output 822 to include only the top-ranked beams that fall within the measurement subset 830, together with their prediction confidence scores or predicted RSRP values.
Beam prediction ranking accuracy 862 is calculated by comparing the predictions 842 with the measurements 852, i.e., the beams in the subset 830 ranked by their actual measured RSRP values. Accuracy may be expressed as TRUE/FALSE (whether the two rank sequences match) or as a rank correlation metric, such as Spearman correlation or Kendall correlation.
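Both rank-comparison options can be sketched directly from the rank positions, using the standard Spearman formula for rankings without ties, rho = 1 − 6·Σd²/(n(n² − 1)). The beam IDs and orderings below are illustrative:

```python
# Sketch of the two ranking-accuracy options named above: an exact TRUE/FALSE
# match of the rank sequences, and a Spearman rank correlation.

def ranks(order):
    """Map beam ID -> rank position (0 = best) for one ranking."""
    return {beam: i for i, beam in enumerate(order)}

def exact_match(predicted_order, measured_order) -> bool:
    return list(predicted_order) == list(measured_order)

def spearman(predicted_order, measured_order) -> float:
    """Spearman rho for two tie-free rankings of the same beams."""
    rp, rm = ranks(predicted_order), ranks(measured_order)
    n = len(rp)
    d2 = sum((rp[b] - rm[b]) ** 2 for b in rp)
    return 1 - 6 * d2 / (n * (n * n - 1))

pred = [732, 730, 734, 728, 733]    # ranking from predicted confidence
meas = [732, 734, 730, 728, 733]    # ranking from measured RSRP
print(exact_match(pred, meas))      # False
print(spearman(pred, meas))         # 0.9
```

A perfect match yields rho = 1.0; a single adjacent swap, as here, only slightly lowers the correlation, which is why rank correlation is a softer metric than the binary match.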
In one full-scale approach, the AI/ML model provides the RSRP values for all beams in set A. The accuracy or performance of the model is assessed by comparing the predicted RSRP values with the actual RSRP values, the so-called ground truth. This comparison involves measuring the difference between the two sets of RSRP values over set A to determine the performance of the model. The accuracy or performance of the model can also be assessed by comparing the top K beams with the highest prediction confidence scores or predicted RSRP against the top K beams with the highest actual RSRP values.
The full-scale approach may incur significant beam management overhead, essentially requiring the same effort as the traditional non-AI beam management approach. If the beams of the full set a are used for model monitoring, a large reporting overhead will result, since the user equipment has to report all these RSRP measurements to the network base station.
To alleviate this disadvantage, in the low-overhead approach the user equipment is configured to measure only a portion (e.g., half) of the set A beams, rather than the entire set. By using the measurements of this subset, the performance of the model can still be evaluated; while less accurate than measuring all beams (because only a portion is measured, and some monitoring errors or omissions may occur), this significantly reduces the measurement and reporting overhead.
In the full-scale approach, the user device measures all beams of set a and takes set B as input to the model. The output of the model is then compared to the measurements of set a to evaluate its performance. In the low overhead approach, the user equipment measures only a portion (e.g., half) of the set a beams for performance comparison. This will reduce measurement requirements and reporting overhead compared to the full-scale approach, although some loss is accepted in the accuracy of the model predictive assessment.
Fig. 9 is a diagram 900 schematically illustrating an example of jointly using different AI/ML model monitoring methods with customizable monitoring periods. This example features the concurrent application of three different model monitoring methods. The first is a full-scale monitoring method 910 with a configurable period 912 that requires measuring all 24 communication beams. The second is a low-overhead monitoring method 920 with a configurable period 922 that involves measuring half of the communication beams, e.g., 12 beams. The third is another low-overhead monitoring method 930 with a configurable period 932 that measures an even smaller subset of 6 beams.
Periods 912, 922, and 932 are adjustable according to network management requirements. The selection of the monitoring frequency is affected by the performance comparison or the level of predictive accuracy required for network operation. For example, if the monitoring method 930 (least overhead method) detects a degradation in the AI/ML model, it may trigger a reduction in the period 922 of the low overhead monitoring method 920 and the period 912 of the full-scale monitoring method 910 to improve the performance of the AI/ML model output. This adaptive approach ensures that the beam management process remains efficient, balancing the trade-off between accuracy and resource usage.
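The adaptive trigger described above might look like the following sketch. The halving policy, the floor, and the period values are assumptions for illustration, not taken from the patent:

```python
# Hypothetical sketch: when the cheapest monitor (930) flags degradation,
# the heavier monitors' periods are shortened so they run more often.
# Periods are in slots; the halving rule and floor are invented.

periods = {"full_910": 640, "half_920": 320, "small_930": 160}

def on_degradation_detected(periods: dict, floor: int = 40) -> dict:
    """Halve the heavier monitors' periods, never below a floor."""
    out = dict(periods)
    for key in ("full_910", "half_920"):
        out[key] = max(floor, out[key] // 2)
    return out

print(on_degradation_detected(periods))
# {'full_910': 320, 'half_920': 160, 'small_930': 160}
```

This captures the trade-off in the text: the small-subset monitor stays cheap and frequent, while the accurate full-set monitor only speeds up when evidence of degradation appears.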
Furthermore, if a certain scenario requires a high prediction accuracy of the AI/ML model, the network may adjust the period 912 of the full-scale monitoring method 910 more frequently, thereby obtaining finer data to optimize the model prediction.
Graph 900 provides a visual representation of how these monitoring methods are deployed, each with a different measurement range and update rate. The flexibility of this monitoring strategy enables the network to maintain a high level of quality of service by identifying and correcting any potential model predictive performance degradation in time.
Fig. 10 is a diagram 1000 schematically illustrating a scenario in which a User Equipment (UE) reports its monitoring and reporting capabilities based on artificial intelligence/machine learning (AI/ML) beam management functions. The user equipment 1004 communicates with the base station 1002 wherein the user equipment 1004 is capable of transmitting a capability report summarizing its ability to monitor and report AI/ML beam management models or functions.
The capability report includes several components describing the capabilities of the UE 1004:
1. Support monitoring reports: this Boolean parameter indicates whether the UE 1004 is able to perform reporting functions for AI/ML-based beam management model or function monitoring. If the UE 1004 is capable, the value is true; otherwise, the value is false.
2. Supported performance metrics: this component specifies the performance metric types that the UE 1004 can support. These metrics include:
beam prediction accuracy: it refers to the UE's capability to derive and report the accuracy of its best beam for predictive communication. This metric relates to the probability that the UE accurately predicts the best beam or beams for communication.
Beam prediction ranking accuracy: this metric involves ranking a set of beams based on their measured RSRP and the predicted confidence of the model output. The UE evaluates whether the two ranking sequences match, the accuracy may be expressed in terms of a true/false binary result or using a spearman or kendel like ranking correlation method.
RSRP/RSRQ/SINR prediction accuracy: these metrics may be expressed as statistical scores, such as Mean Square Error (MSE), normalized MSE (NMSE), root Mean Square Error (RMSE), variance, etc., indicating the accuracy of the UE predictions.
Confidence score for the a-set beam: based on the model output, the UE may quantify a confidence level of the likelihood of each predicted beam being the best beam for communication.
Measurement: this simply represents the ability of the UE to report the measured signal strength of each beam.
The value of the supported performance metrics is a sequence of identifiers for the supported metrics, each identifier corresponding to a particular accuracy measurement or predictive modeling method. For example, the value may be SEQUENCE OF {"beam prediction accuracy::Top-1", "beam prediction accuracy::Top-2", …, "beam prediction accuracy::L1-RSRP difference", …, "beam prediction ranking accuracy::binary", "beam prediction ranking accuracy::spearman_corr", …, "RSRP prediction accuracy::NMSE", …, "confidence_scores", …, "RSRP Measurement", "RSRQ Measurement"}. By presenting the list of performance metrics in a SEQUENCE structure, the user equipment can report its supported performance metrics, informing the base station of the full range of available monitoring capabilities and enabling an accurate and flexible beam management strategy.
3. Supported container methods: this represents the reporting formats (container methods) supported by the user equipment for communicating the monitored performance metrics. This includes various formats for reporting the different types of metrics described above, such as beam prediction accuracy, RSRP/RSRQ/SINR prediction accuracy, signal strength measurements, etc. The value of the supported container methods is a sequence of identifiers for the reporting formats or methods supported by the user equipment. For example, the value may be SEQUENCE OF {Container_method1, Container_method2, Container_method3, Container_method4, …}.
4. Number of supported resources for performance metrics: this parameter indicates the maximum number of resources the user equipment can use to calculate a performance metric from its measurements. The value may be SEQUENCE OF ENUMERATED {n1, n2, n3, n4, …, nK}, representing the ability of the user equipment to calculate a metric using n1, n2, n3, n4, …, or nK resources or samples.
5. Supported monitoring frequencies: this component represents the range of monitoring frequencies at which the user equipment can collect data and report beam management system performance. It indicates how often the user equipment can perform measurement and monitoring tasks according to network requirements. The value may be SEQUENCE OF ENUMERATED {n1, n2, n3, n4, …, nK}, referring to a list of predefined frequency levels that the user equipment supports for monitoring purposes.
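For illustration only, the capability report fields above could be encoded as a plain dictionary; actual signaling would use ASN.1/RRC structures, so the field names and values here are just readable stand-ins:

```python
# Hypothetical encoding of the five capability-report components listed above.
capability_report = {
    "support_monitoring_report": True,                  # component 1 (Boolean)
    "supported_performance_metrics": [                  # component 2
        "beam prediction accuracy::Top-1",
        "beam prediction ranking accuracy::spearman_corr",
        "RSRP prediction accuracy::NMSE",
        "RSRP Measurement",
    ],
    "supported_container_methods": [                    # component 3
        "Container_method1", "Container_method2",
    ],
    "supported_resource_counts": [4, 8, 12, 24],        # component 4 (n1..nK)
    "supported_monitoring_freqs": [10, 20, 40, 80],     # component 5 (e.g. slots)
}

def supports_metric(report: dict, metric: str) -> bool:
    """Check whether the UE advertised a given performance metric."""
    return metric in report["supported_performance_metrics"]

print(supports_metric(capability_report, "RSRP prediction accuracy::NMSE"))  # True
```

A base station could run such a check before configuring a metric, mirroring how the configured values are required to come from the UE's advertised lists.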
Each capability reported by the user equipment 1004 enables the base station 1002 to perform better network management, allowing it to configure beam management operations best suited for the user equipment capabilities, thereby optimizing communication efficiency and maintaining high quality of service.
For the user equipment side model, although the monitoring method depends on the implementation of the user equipment, there may be a mechanism that allows the Network (NW) to configure the user equipment with cell-specific monitoring conditions to monitor its AI/ML model/functionality. This means that although the user equipment may choose its own monitoring method, the network may still configure some standardized parameters to align the monitoring conditions with the user equipment.
In particular, such a normalization mechanism should allow the network to align the following parameters with the user device so that the user device can monitor its AI/ML model using the corresponding method configuration:
metrics for monitoring: this refers to performance metrics used to evaluate the AI/ML model. Some options include "beam prediction accuracy", "beam prediction ranking accuracy", or "RSRP prediction error". For example, beam prediction ranking accuracy 862 as shown in fig. 8 may use true/false matching or correlation methods to compare ranking accuracy metrics of a ranking sequence. As another example, the RSRP prediction accuracy 860, as shown in FIG. 8, may use an error metric, such as MSE or RMSE, to compare the predicted and measured RSRP values.
Number of resources used to calculate metrics: this refers to the number of Reference Signal (RS) resources or beams that will be used to calculate the performance metric. This may be, for example, the same number of RS resources measured, or the number of beams in a ranking sequence based on the top K beams of the measurement and model outputs, as compared in fig. 8.
Monitoring frequency/periodicity: this parameter specifies the frequency of use of the monitoring method used by the User Equipment (UE) to collect performance data, such as once every X slots.
In this way, the network can normalize and configure key parameters of the UE-side model monitoring to be consistent with the monitoring implementation of the UE itself.
Furthermore, the network may configure additional parameters for the UE to determine whether the performance of the artificial intelligence/machine learning (AI/ML) model performs well or poorly:
Threshold of performance metric: the network may assign a threshold to the monitored performance metric, such as beam prediction accuracy. The threshold is used to determine whether the AI/ML model predictions for a single monitored sample are good or bad. For example, if the threshold for predictive accuracy is set to 90%, then a single monitoring sample up to 95% accuracy would be considered good, while an 85% accuracy would be considered bad.
Statistical threshold of performance metric: in addition to the threshold for a single sample, the network may configure a statistical threshold that is applied to multiple monitoring samples. For example, this may be the percentage of monitoring samples that the metric must exceed a threshold over time. If the statistical threshold is 60% and the threshold for a single sample is 90%, then at least 60% of the monitored samples need to be more than 90% accurate for a single sample to be considered good overall performance.
Sample number: this parameter indicates the minimum number of monitoring samples that must be collected before a statistical threshold is applied. For example, the network may be configured to require at least 100 samples to determine if 60% of them exceed the accuracy level of 90% for a single sample.
These additional parameters allow the network to specify thresholds for individual monitoring samples and overall performance accumulated over time in determining whether the AI/ML model meets desired performance criteria. The number of samples indicates the minimum sample set size for statistical observation.
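The three parameters above combine into a simple good/bad/undecided decision rule, sketched below with the 90% per-sample threshold, 60% statistical threshold, and 100-sample minimum from the text's example:

```python
# Sketch of the per-sample threshold, statistical threshold, and minimum
# sample count described above. Values follow the text's example.

METRIC_THRESHOLD = 0.90       # a single sample is "good" if accuracy >= 90%
STAT_THRESHOLD = 0.60         # >= 60% of samples must be good overall
MIN_SAMPLES = 100             # apply the statistic only after 100 samples

def overall_performance(samples):
    """Return 'good', 'bad', or 'undecided' (too few samples collected)."""
    if len(samples) < MIN_SAMPLES:
        return "undecided"
    good = sum(1 for s in samples if s >= METRIC_THRESHOLD)
    return "good" if good / len(samples) >= STAT_THRESHOLD else "bad"

print(overall_performance([0.95] * 50))                 # undecided
print(overall_performance([0.95] * 70 + [0.85] * 30))   # good (70% good)
print(overall_performance([0.95] * 50 + [0.85] * 50))   # bad  (50% good)
```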
More specifically, in this example, based on the capability report of the user device 1004, the base station 1002 determines that the user device 1004 supports monitoring an artificial intelligence/machine learning (AI/ML) based beam management model and configures a CSI-ReportConfig configuration for the user device 1004, in which an "enableAIreporting" parameter is set to "ON". This instructs the user device to monitor the AI/ML model and report its findings according to the specified parameters.
In addition, a "monitoring_reporting" parameter indicates that the reporting configuration is dedicated to Monitoring and reporting the performance of the AI/ML model functionality. This "monitoring_reporting" parameter essentially serves as a flag to enable the Monitoring function when the network configures CSI reports for the user equipment to monitor the AI/ML model. It tells the user equipment that the subsequent performance metrics, container methods, resources, frequencies, etc. parameters are dedicated to monitoring and reporting the AI/ML model, rather than the normal CSI reporting.
A "Performance_metrics" parameter is used to indicate which performance metrics the user equipment 1004 should use to determine the performance of the model and report to the base station 1002. The parameter may include beam prediction accuracy, beam prediction ranking accuracy, RSRP/RSRQ/SINR prediction accuracy, confidence scores for the a-set beams based on the model output, and measurements, as described above. This may be selected from the "supported performance metrics" of the user equipment. For example, Performance_metrics may be set to "beam prediction ranking accuracy".
One "Container_method" parameter indicates which CSI reporting format (container method) the user equipment 1004 should use to report the monitored performance metrics and/or the model performance decision. This may be a selection from predefined container method formats. For example, container method formats "Container_method1", "Container_method2", etc. may be defined, with the network configuring the user device to report the monitoring data using one of these particular container method formats. This allows different reporting formats to be used for different types of metrics or decisions. The parameter may instruct the user equipment to report metrics or simply report a determination of the model performance. This may also be selected from the "supported container methods" of the user equipment.
One "number_of_resources" parameter indicates the Number of beams in the measurement beam that will be used to calculate the performance metric. This allows the base station 1002 to control the beam measurement cost. The value may come from the "number of resources supported for the performance metric" of the user equipment.
A "monitoring_freq" parameter indicates how often the user equipment 1004 will measure the configured reference signal resources for Monitoring purposes. The monitoring frequency may be from a "supported monitoring frequency" of the user equipment. For example, if monitoring_freq is set to 20 slots by the network configuration, this means that the user equipment requires that the configured reference signal be measured once every 20 slots to check the beam prediction or model output.
Next, the user device 1004 performs monitoring according to the configuration: it measures the configured resource set, runs model inference, calculates the specified performance metrics (e.g., beam prediction ranking accuracy), and reports the results according to the configured container method. This completes one round of model monitoring. The user equipment 1004 repeats this process at intervals specified by the monitoring frequency parameter.
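One monitoring round driven by such a configuration might be sketched as follows; the measurement, inference, metric, and reporting calls are stubbed stand-ins, since real UE behavior is implementation-specific:

```python
# Hypothetical end-to-end sketch of one monitoring round. Configuration field
# names mirror the text; the callables are illustrative stubs.

csi_report_config = {
    "enableAIreporting": "ON",
    "Monitoring_reporting": True,
    "Performance_metrics": "beam prediction ranking accuracy",
    "Container_method": "Container_method1",
    "Number_of_resources": 12,
    "Monitoring_freq": 20,      # measure configured RS once every 20 slots
}

def monitoring_round(config, measure, infer, compute_metric, report):
    """One round: measure -> infer -> compute metric -> report."""
    measured = measure(config["Number_of_resources"])
    predicted = infer(measured)
    value = compute_metric(predicted, measured)
    report(config["Container_method"], config["Performance_metrics"], value)
    return value

# Stubbed dependencies for illustration only:
log = []
value = monitoring_round(
    csi_report_config,
    measure=lambda n: list(range(n)),
    infer=lambda m: m,                                  # perfect "model"
    compute_metric=lambda p, m: float(p == m),          # 1.0 if rankings match
    report=lambda fmt, metric, v: log.append((fmt, metric, v)),
)
print(value, log[0][0])   # 1.0 Container_method1
```

In practice the round would be re-run every Monitoring_freq slots, as the text describes.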
The entire process is iterative and consistent with the user equipment's capability to perform AI/ML model monitoring and reporting, ultimately enabling the base station 1002 to efficiently manage and optimize beam management operations.
Fig. 11 is a diagram 1100 schematically illustrating an embodiment in which a user equipment reports a monitoring event. Similar to the example of fig. 10, base station 1102 configures the "enableAIreporting", "Monitoring_reporting", "Performance_metrics", "Container_method", "Number_of_resources", and "Monitoring_freq" parameters for user equipment 1104.
Further, in this example, base station 1102 can configure UE 1104 to report a monitoring event indicating whether the performance of the AI/ML model is good or bad. The event is defined based on consecutive "occurrence instances" observed at a certain monitoring frequency. An occurrence instance means that, in a single monitoring sample, the monitored performance metric (e.g., MSE of the RSRP prediction) meets the occurrence criterion relative to the desired threshold.
To configure this event report, base station 1102 may send additional parameters to user equipment 1104. A "Metrics_threshold" parameter indicates the threshold value for comparison. For example, if the metric threshold for MSE is 0.8, an occurrence instance arises when the MSE monitored in a single monitoring sample is less than 0.8. One occurrence instance marks one performance problem in a single monitoring sample. A "Number_of_samples" parameter indicates the number of consecutive occurrence instances needed to declare an event. For example, Number_of_samples = 10 means that Performance_metrics < Metrics_threshold for 10 consecutive instances constitutes an event.
If Number_of_samples is 10, then MSE less than 0.8 in 10 consecutive monitoring instances constitutes an event reporting poor performance of the model. The monitoring frequency parameter described above instructs the user equipment how often to collect monitoring metrics when checking for a series of occurrence instances. Samples showing continuous performance degradation at the regular frequency indicate a persistent performance problem and result in reporting an event, i.e., that the model is not operating properly. Typically, once the user equipment detects an event by observing the required consecutive occurrence instances per Number_of_samples, the event is reported and the monitoring is completed. However, if the event criteria are not met within the expected time, the "Monitoring_duration" parameter sets an upper time limit, in slots or milliseconds, for which the user equipment 1104 continues monitoring before ending the monitoring process without detecting an event.
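Under the stated assumptions (an occurrence instance is a sample whose metric falls below Metrics_threshold, and Number_of_samples consecutive occurrences constitute an event, with Monitoring_duration capping how long the UE keeps watching), the event rule can be sketched as:

```python
# Consecutive-occurrence event detection, following the parameters above.
# Returns the index of the sample that completed the event, or None if the
# monitoring duration expires (or the samples end) without an event.
def detect_event(samples, metrics_threshold=0.8, number_of_samples=10,
                 monitoring_duration=None):
    consecutive = 0
    for i, value in enumerate(samples):
        if monitoring_duration is not None and i >= monitoring_duration:
            return None                      # duration expired, no event detected
        consecutive = consecutive + 1 if value < metrics_threshold else 0
        if consecutive >= number_of_samples:
            return i                         # event: model reported as underperforming
    return None

# Ten consecutive sub-threshold samples starting at index 2 -> event at index 11.
samples = [0.9, 0.85] + [0.5] * 10 + [0.9]
print(detect_event(samples))  # 11
```

Note the counter resets whenever a sample satisfies the threshold, so only strictly consecutive degradations trigger the event, matching the text's "consecutive occurrence instances" criterion.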
The user equipment 1104 then operates according to this configuration: it performs monitoring at the specified intervals, calculates the configured performance metrics, determines whether an event has occurred based on the threshold comparison, and reports the event detection status to the base station 1102.
Fig. 12 is a diagram 1200 schematically illustrating an embodiment in which a user equipment reports metric statistics. Similar to the example of fig. 10, the base station 1202 configures the "enableAIreporting", "Monitoring_reporting", "Performance_metrics", "Container_method", "Number_of_resources", and "Monitoring_freq" parameters for the user equipment 1204.
In this example, the base station 1202 may configure the user equipment 1204 to report a monitoring event indicating whether the performance of the AI/ML model is good or bad. The event is defined based on a statistical analysis of multiple monitoring instances over a particular duration. Specifically, the base station sets a "Statistic_threshold" parameter that indicates the percentage of monitoring instances that must meet the occurrence criterion to trigger an event. An "occurrence instance" means that, in a single monitoring instance, the value of the monitored performance metric (e.g., MSE) is less than/greater than the "Metrics_threshold" value. For example, if the metric threshold for MSE is 0.8, an occurrence instance is counted when the MSE monitored in a single monitoring sample is less than 0.8. The user equipment collects multiple monitoring instances at the fixed monitoring frequency over the configured duration. An event is triggered if the percentage of instances meeting the occurrence criterion exceeds the statistical threshold. In this way, the user equipment determines whether the model is underperforming based on whether, over a period of time, enough instances meet the occurrence criterion according to the statistical threshold percentage configured by the base station.
To configure this event report, the base station 1202 may send additional parameters to the user equipment 1204. The "Statistic_threshold" parameter sets the percentage of monitoring instances that must meet the occurrence criterion to trigger an event declaring that the model is not performing well. In particular, the statistical threshold refers to the proportion of "occurrence instances" among the total monitoring instances over the duration. As previously described, an "occurrence instance" refers to the performance metric violating the desired threshold in a single monitoring sample, indicating a performance degradation. If Statistic_threshold is configured to 60%, this means that 60% of the total monitoring instances/samples need to be occurrence instances in order for the UE 1204 to declare and report that the model as a whole is not performing well.
The "Metrics_threshold" parameter indicates the threshold value for the monitored performance metric that is used to identify an occurrence instance. In this example, Metrics_threshold is set to 0.8, and each occurrence instance may be defined as Performance_metrics (e.g., NMSE) less than 0.8.
The "Number_of_samples" parameter indicates the total number of monitoring samples that the user equipment 1204 needs to measure and monitor before the statistical observation of the event can be reported. For example, if Number_of_samples is set to 100, the user equipment 1204 needs to collect 100 monitoring instances to determine whether the event condition defined by Statistic_threshold is satisfied.
In this example, the user equipment 1204 performs the monitoring procedure according to the configuration set by the base station 1202. It measures the configured subset of beams, performs AI/ML model inference, calculates the NMSE of the RSRP prediction error, and evaluates whether each instance is an occurrence instance (e.g., by comparing the NMSE result to the Metrics_threshold of 0.8).
Each completion of this process constitutes one round of model monitoring. In this example, the user equipment 1204 starts the next round of model monitoring after 20 slots, according to the monitoring frequency/periodicity parameter Monitoring_freq. In this example, Number_of_samples is 100. Thus, once 100 rounds of measurement are completed, the user equipment 1204 evaluates whether at least 60% (or another percentage set by Statistic_threshold) of these instances have an RSRP prediction error NMSE below 0.8.
If the condition is met, the user equipment 1204 reports that the AI/ML model performs adequately according to the predefined statistical threshold. Conversely, if the condition is not met, the user equipment 1204 reports that the performance of the model is below expectations. This process ensures that the network can effectively monitor the performance of the AI/ML model over time and make the necessary adjustments to maintain optimal beam management.
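The statistical evaluation in this worked example can be sketched as follows; matching the numbers above, an instance qualifies when its NMSE is below 0.8, and the condition is met when at least the Statistic_threshold fraction of the collected samples qualify. The function name and return labels are illustrative.

```python
# Statistics-based evaluation over Number_of_samples monitoring instances,
# using the example values from the text (Metrics_threshold = 0.8,
# Statistic_threshold = 60%).
def evaluate_statistics(nmse_samples, metrics_threshold=0.8,
                        statistic_threshold=0.60):
    occurrences = sum(1 for v in nmse_samples if v < metrics_threshold)
    ratio = occurrences / len(nmse_samples)
    return "condition_met" if ratio >= statistic_threshold else "below_expected"

# 70 of 100 samples have NMSE below 0.8 -> 70% >= 60%, so the condition is met.
samples = [0.5] * 70 + [0.9] * 30
print(evaluate_statistics(samples))  # condition_met
```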
Fig. 13 is a diagram 1300 schematically illustrating an embodiment in which a user equipment is configured with different resource sets and different monitoring frequencies. The user equipment 1304 transmits a capability report 1308 to the base station 1302. Based on the capability report of the user equipment 1304, the base station 1302 configures monitoring reports in CSI-ReportConfig for the user equipment 1304 by providing the necessary parameters to monitor the AI/ML model.
The base station 1302 may configure the monitoring report 1310 with the following configuration: metric = beam predictive ranking accuracy, monitoring period = 45 slots. This report will be linked to the set of resources measured by the user equipment, which contains all predicted candidate beams.
The base station 1302 may configure the monitoring report 1312 with the following configuration: metric = beam predictive ranking accuracy, monitoring period = 15 slots. This report will be linked to the set of resources measured by the user equipment, which contains the predicted subset of candidate beams (1/2 of the size of the full set of candidate beams).
The base station 1302 may configure the monitoring report 1314 with the following configuration: metric = beam prediction accuracy, monitoring period = 5 slots. This report will be linked to a set of resources measured by the user equipment that contains a smaller subset of predicted candidate beams than the subset used in the monitoring report 1312.
The user equipment 1304 follows the configurations, measures each subset at the configured frequency, and reports the calculated metrics.
The embodiment shown in fig. 13 demonstrates the flexibility of AI/ML model monitoring, wherein the base station 1302 can dynamically adjust reporting frequencies and resource sets based on the capabilities of the user equipment 1304 and network conditions. Such a configuration enables the base station 1302 and user equipment 1304 to coordinate the adjustment of the beam prediction process, optimize network performance, and enhance the user experience by ensuring that the predictions of the AI/ML model are accurate and up-to-date.
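The three reports of fig. 13 amount to periodic tasks with different periodicities and resource sets. A minimal sketch of determining which reports are due in a given slot, with assumed resource-set labels (the periods and relative set sizes follow the text):

```python
# The three monitoring reports from fig. 13: metric, periodicity in slots, and
# the resource set each report is linked to (labels are illustrative).
reports = [
    {"metric": "ranking_accuracy", "period": 45, "resources": "full_set"},
    {"metric": "ranking_accuracy", "period": 15, "resources": "half_set"},
    {"metric": "prediction_accuracy", "period": 5, "resources": "small_set"},
]

def due_reports(slot):
    """Return the resource sets whose reports fall due in this slot."""
    return [r["resources"] for r in reports if slot % r["period"] == 0]

print(due_reports(45))  # ['full_set', 'half_set', 'small_set']
print(due_reports(15))  # ['half_set', 'small_set']
```

Because 45 is a multiple of both 15 and 5, all three reports coincide every 45 slots, while the cheapest measurement (the smallest subset) runs most often, which is the cost/accuracy trade-off the figure illustrates.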
FIG. 14 is a flowchart 1400 illustrating a method (process) for monitoring an AI/ML model. The method may be performed by a UE (e.g., UE 1004). In operation 1401, the UE transmits, to the base station, a capability of the UE indicating support for monitoring the AI/ML model. The capability of the UE includes at least one of: an explicit indication of support for monitoring reports, a list of supported performance metrics, a list of supported reporting formats, a supported number of resources for the first subset, a supported number of resources for the second subset, and a list of supported monitoring frequencies.
In operation 1402, the UE receives, from the base station, a first monitoring configuration for monitoring an AI/ML model managing a set of beams. In some configurations, the first monitoring configuration includes at least one of: one or more performance metrics; a container method parameter specifying a reporting format used when the UE reports the one or more performance metrics or an indication of whether the one or more performance metrics meet one or more metric thresholds; a resource quantity parameter specifying a quantity of resources used to calculate the one or more performance metrics; a monitoring frequency parameter specifying the frequency at which the UE performs monitoring and sample collection; a metric threshold parameter specifying values of the one or more metric thresholds; a statistical threshold parameter specifying the percentage of monitoring instances, over the number of samples collected, that must meet an occurrence criterion to determine model performance; a sample number parameter specifying the number of samples collected before the statistical threshold is applied; a monitoring duration parameter specifying a duration for performing the monitoring; and an explicit indication of a set of resources, the set of resources comprising at least one of the first subset or the second subset.
In operation 1404, the UE measures a first subset of the set of beams. In operation 1406, the UE performs inference using the AI/ML model based on measurements of the first subset to determine predicted values for the beams of a second subset, wherein the second subset is selected based on the first monitoring configuration. In operation 1408, the UE measures the beams of the second subset to determine the measurements of the second subset. In operation 1410, the UE calculates one or more performance metrics based on the predicted values and the measured values of the second subset, wherein the one or more performance metrics are selected based on the first monitoring configuration.
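The sequence of operations 1404-1410 can be sketched end to end as follows; the "mean" model and the NMSE metric are illustrative stand-ins, since the flowchart prescribes the flow but not a particular model or metric formula.

```python
# Operations 1404-1410: measure a first subset, infer predictions for a second
# subset, measure the second subset, and compute a metric over it.
def monitor(measure, model, subset1, subset2):
    meas1 = {b: measure(b) for b in subset1}                   # 1404: measure first subset
    preds = model(meas1, subset2)                              # 1406: inference -> predicted RSRP
    meas2 = {b: measure(b) for b in subset2}                   # 1408: measure second subset
    err = sum((preds[b] - meas2[b]) ** 2 for b in subset2)     # 1410: squared prediction error
    norm = sum(meas2[b] ** 2 for b in subset2)
    return err / norm                                          # NMSE over the second subset

# Toy channel and a stand-in "model" that predicts every second-subset beam
# as the mean of the first-subset measurements (illustrative only).
rsrp = {"b0": -80.0, "b1": -82.0, "b2": -81.0, "b3": -83.0}
mean_model = lambda meas, subset: {b: sum(meas.values()) / len(meas) for b in subset}
nmse = monitor(rsrp.get, mean_model, ["b0", "b1"], ["b2", "b3"])
print(round(nmse, 4))  # 0.0003
```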
In some configurations, the one or more performance metrics include at least one of: a beam prediction accuracy metric, a beam prediction ranking accuracy metric, a Reference Signal Received Power (RSRP) prediction accuracy metric, a Reference Signal Received Quality (RSRQ) prediction accuracy metric, and a signal to interference plus noise ratio (SINR) prediction accuracy metric. In some configurations, to calculate the one or more performance metrics, the UE compares each predicted value to a measured value corresponding to the second subset and determines the one or more performance metrics based on the comparison.
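Two of the listed metrics can be illustrated concretely; the pairwise ranking-accuracy formula and the mean-absolute-error RSRP metric below are example definitions, since the text names the metrics without fixing their formulas.

```python
from itertools import combinations

# Example ranking-accuracy metric: fraction of beam pairs ordered the same way
# by prediction and by measurement (a Kendall-style pairwise agreement).
def ranking_accuracy(pred, meas):
    pairs = list(combinations(pred, 2))
    agree = sum((pred[a] - pred[b]) * (meas[a] - meas[b]) > 0 for a, b in pairs)
    return agree / len(pairs)

# Example RSRP prediction-accuracy metric: mean absolute error in dB.
def rsrp_mae(pred, meas):
    return sum(abs(pred[b] - meas[b]) for b in pred) / len(pred)

pred = {"b0": -79.0, "b1": -74.0, "b2": -91.0}
meas = {"b0": -80.0, "b1": -75.0, "b2": -90.0}
print(ranking_accuracy(pred, meas))  # 1.0: all pairs ordered consistently
print(rsrp_mae(pred, meas))          # 1.0 (dB mean absolute error)
```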
In some configurations, in operation 1412, the UE reports the one or more performance metrics to the base station. In some configurations, in operation 1414, the UE determines whether the one or more performance metrics meet one or more metric thresholds. In operation 1416, the UE reports, to the base station, one or more explicit indications of whether the one or more performance metrics meet the one or more metric thresholds.
In some configurations, the UE receives at least one additional monitoring configuration. Each monitoring configuration is associated with a different monitoring frequency, a different set of resources, and a different performance metric. The UE performs monitoring at different monitoring frequencies by measuring the associated resource sets and calculates a performance metric based on the monitoring.
It should be understood that the specific order or hierarchy in the flows/flowcharts disclosed is an illustration of exemplary approaches. It will be appreciated that the specific order or hierarchy in the flow/flow diagrams may be rearranged according to design preferences. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
The preceding description is provided to enable any person of ordinary skill in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "only one" unless specifically so stated, but rather "one or more". The term "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. The term "some" refers to one or more unless specifically indicated otherwise. Combinations such as "at least one of A, B, or C", "one or more of A, B, or C", "at least one of A, B, and C", "one or more of A, B, and C", and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, such combinations may be A only, B only, C only, A and B, A and C, B and C, or A, B, and C, where any such combination may contain one or more members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Furthermore, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.
The terms "module," "mechanism," "element," "device," and the like are not intended to be a substitute for the term "means". Thus, no claim element is to be construed as a means-plus-function element unless the element is expressly recited using the phrase "means for".

Claims (20)

1. A method of wireless communication for a User Equipment (UE), characterized by comprising:
Receiving, from a base station, a first monitoring configuration for monitoring an artificial intelligence/machine learning (AI/ML) model for a set of beams;
measuring a first subset of the set of beams, wherein the first subset is selected based on the first monitoring configuration;
Performing inference using the AI/ML model based on measurements of the first subset to determine predicted values for a second subset of the set of beams, wherein the second subset is selected based on the first monitoring configuration;
Measuring the beams of the second subset to determine measurements of the second subset; and
Calculating one or more performance metrics from the predicted values and the measured values of the second subset, wherein the one or more performance metrics are selected based on the first monitoring configuration.
2. A method according to claim 1, characterized in that: wherein the one or more performance metrics include at least one of:
A beam prediction accuracy metric, a beam prediction ranking accuracy metric, a Reference Signal Received Power (RSRP) prediction accuracy metric, a Reference Signal Received Quality (RSRQ) prediction accuracy metric, and a signal to interference plus noise ratio (SINR) prediction accuracy metric.
3. A method according to claim 1, characterized in that: wherein calculating the one or more performance metrics comprises: comparing each predicted value to a measured value corresponding to the second subset; and
Determining the one or more performance metrics based on results of the comparison.
4. A method according to claim 1, characterized in that: further comprises:
Reporting the one or more performance metrics to the base station.
5. A method according to claim 1, characterized in that: further comprises:
Determining whether the one or more performance metrics meet one or more metric thresholds; and
Reporting, to the base station, one or more explicit indications of whether the one or more performance metrics meet the one or more metric thresholds.
6. A method according to claim 1, characterized in that: wherein the first monitoring configuration comprises at least one of: the one or more performance metrics;
a container method parameter specifying a reporting format that is used when the UE reports one or more performance metrics or whether the one or more performance metrics meet an indication of one or more metric thresholds;
a resource quantity parameter specifying a quantity of resources used to calculate said one or more performance metrics;
a monitoring frequency parameter specifying a frequency at which the UE performs monitoring and collecting samples;
A metric threshold parameter specifying values of the one or more metric thresholds;
a statistical threshold parameter specifying a percentage of monitored instances that must meet an occurrence criterion over the number of samples collected to determine model performance;
a sample number parameter specifying the number of samples to be collected before applying the statistical threshold;
a monitor duration parameter specifying a duration for performing the monitor; and
An explicit indication of a set of resources, said set of resources comprising at least one of said first subset or said second subset.
7. A method according to claim 1, characterized in that: further comprises:
Transmitting, to the base station, a capability of the UE indicating support for monitoring the AI/ML model.
8. The method according to claim 7, characterized in that: wherein the UE capabilities include at least one of:
An explicit indication of support for monitoring reports;
A list of supported performance metrics;
a list of supported reporting formats;
A supported number of resources for the first subset;
A supported number of resources for the second subset; and
A list of supported monitoring frequencies.
9. A method according to claim 1, characterized in that: further comprises:
Receiving at least one additional monitoring configuration, each monitoring configuration associated with a different monitoring frequency, a different set of resources, and a different performance metric;
performing monitoring at different monitoring frequencies by measuring the associated resource sets; and
Calculating the performance metrics based on the monitoring.
10. An apparatus for wireless communication, characterized in that: the apparatus is a User Equipment (UE), comprising:
A memory; and
At least one processor coupled with the memory and configured to:
Receive, from a base station, a first monitoring configuration for monitoring an artificial intelligence/machine learning (AI/ML) model for a set of beams;
measure a first subset of the set of beams, wherein the first subset is selected based on the first monitoring configuration;
perform inference using the AI/ML model based on measurements of the first subset to determine predicted values for a second subset of the set of beams, wherein the second subset is selected based on the first monitoring configuration;
measure the beams of the second subset to determine measurements of the second subset; and
calculate one or more performance metrics from the predicted values and the measured values of the second subset, wherein the one or more performance metrics are selected based on the first monitoring configuration.
11. The apparatus according to claim 10, characterized in that: wherein the one or more performance metrics include at least one of: a beam prediction accuracy metric, a beam prediction ranking accuracy metric, a Reference Signal Received Power (RSRP) prediction accuracy metric, a Reference Signal Received Quality (RSRQ) prediction accuracy metric, and a signal to interference plus noise ratio (SINR) prediction accuracy metric.
12. The apparatus according to claim 10, characterized in that: wherein, to calculate the one or more performance metrics, the at least one processor is further configured to:
compare each predicted value to a measured value corresponding to the second subset; and
determine the one or more performance metrics based on results of the comparison.
13. The apparatus according to claim 10, characterized in that: wherein the at least one processor is further configured to: report the one or more performance metrics to the base station.
14. The apparatus according to claim 10, characterized in that: wherein the at least one processor is further configured to: determine whether the one or more performance metrics meet one or more metric thresholds; and
report, to the base station, one or more explicit indications of whether the one or more performance metrics meet the one or more metric thresholds.
15. The apparatus according to claim 10, characterized in that: wherein the first monitoring configuration comprises at least one of:
the one or more performance metrics;
a container method parameter specifying a reporting format that is used when the UE reports one or more performance metrics or whether the one or more performance metrics meet an indication of one or more metric thresholds;
a resource quantity parameter specifying a quantity of resources used to calculate said one or more performance metrics;
a monitoring frequency parameter specifying a frequency at which the UE performs monitoring and collecting samples;
A metric threshold parameter specifying values of the one or more metric thresholds;
a statistical threshold parameter specifying a percentage of monitored instances that must meet an occurrence criterion over the number of samples collected to determine model performance;
a sample number parameter specifying the number of samples to be collected before applying the statistical threshold;
a monitor duration parameter specifying a duration for performing the monitor; and
An explicit indication of a set of resources, said set of resources comprising at least one of said first subset or said second subset.
16. The apparatus according to claim 10, characterized in that: wherein the at least one processor is further configured to: transmit, to the base station, a capability of the UE indicating support for monitoring the AI/ML model.
17. The apparatus of claim 16, wherein: wherein the UE capabilities include at least one of:
An explicit indication of support for monitoring reports;
A list of supported performance metrics;
a list of supported reporting formats;
A supported number of resources for the first subset;
A supported number of resources for the second subset; and
A list of supported monitoring frequencies.
18. The apparatus according to claim 10, characterized in that: wherein the at least one processor is further configured to:
Receive at least one additional monitoring configuration, each monitoring configuration associated with a different monitoring frequency, a different set of resources, and a different performance metric;
perform monitoring at the different monitoring frequencies by measuring the associated resource sets; and
calculate the performance metrics based on the monitoring.
19. A computer-readable medium for wireless communication of a User Equipment (UE), characterized in that the computer-readable medium stores computer executable code comprising code for:
Receiving, from a base station, a first monitoring configuration for monitoring an artificial intelligence/machine learning (AI/ML) model for a set of beams;
measuring a first subset of the set of beams, wherein the first subset is selected based on the first monitoring configuration;
Performing inference using the AI/ML model based on measurements of the first subset to determine predicted values for a second subset of the set of beams, wherein the second subset is selected based on the first monitoring configuration;
Measuring the beams of the second subset to determine measurements of the second subset; and
Calculating one or more performance metrics from the predicted values and the measured values of the second subset, wherein the one or more performance metrics are selected based on the first monitoring configuration.
20. The computer readable medium of claim 19, wherein the one or more performance metrics include at least one of: a beam prediction accuracy metric, a beam prediction ranking accuracy metric, a Reference Signal Received Power (RSRP) prediction accuracy metric, a Reference Signal Received Quality (RSRQ) prediction accuracy metric, and a signal to interference plus noise ratio (SINR) prediction accuracy metric.
CN202311670027.7A 2022-12-07 2023-12-07 Artificial intelligence/machine learning method and device based on beam management Pending CN118158700A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/386,330 2022-12-07
US18/529,092 2023-12-05
US18/529,092 US20240196242A1 (en) 2022-12-07 2023-12-05 Method and apparatus for ai/ml based beam management

Publications (1)

Publication Number Publication Date
CN118158700A true CN118158700A (en) 2024-06-07

Family

ID=91289452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311670027.7A Pending CN118158700A (en) 2022-12-07 2023-12-07 Artificial intelligence/machine learning method and device based on beam management

Country Status (1)

Country Link
CN (1) CN118158700A (en)

