WO2024150208A1 - Improving Accuracy of Artificial Intelligence/Machine Learning (AI/ML) Based Channel State Information (CSI) Feedback - Google Patents



Publication number
WO2024150208A1
Authority
WO
WIPO (PCT)
Application number
PCT/IB2024/053118
Other languages
French (fr)
Inventor
Venkata Srinivas Kothapalli
Ahmed Hindy
Vahid Pourahmadi
Vijay Nangia
Original Assignee
Lenovo (Singapore) Pte Limited
Application filed by Lenovo (Singapore) Pte Limited
Publication of WO2024150208A1


Abstract

Various aspects of the present disclosure relate to systems and methods for reporting (e.g., transmitting, outputting, signaling) channel state information (CSI) feedback between network nodes using artificial intelligence/machine learning (AI/ML) models. The systems and methods may transmit channel state information (CSI) and a set of side information associated with the CSI. Using the set of side information, the systems and methods (e.g., a decoder) may detect errors when reconstructing the CSI and correct these errors to output a matrix that is the same as an initial matrix input into an encoder before transmission of the CSI.

Description

Attorney Docket No. 793MS0085PC
Lenovo Docket No. SMM920220317-WO-PCT

IMPROVING ACCURACY OF ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML) BASED CHANNEL STATE INFORMATION (CSI) FEEDBACK

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 63/493,573, filed on March 31, 2023, entitled IMPROVING ACCURACY OF ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML) CHANNEL STATE INFORMATION (CSI) FEEDBACK, which is incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates to wireless communications, and more specifically to artificial intelligence/machine learning (AI/ML) based channel state information (CSI) feedback.

BACKGROUND

[0003] A wireless communications system may include one or multiple network communication devices, such as base stations, which may be otherwise known as an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology. Each network communication device, such as a base station, may support wireless communications for one or multiple user communication devices, which may be otherwise known as user equipment (UE), or other suitable terminology. The wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communications system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)). Additionally, the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G)).
[0004] The wireless communications may include a UE equipped with N_r antennas and a base station (e.g., gNB) equipped with N_t antennas. A channel between the UE and the base station may have a total of N_r × N_t paths. For downlink communication, in which the base station transmits information to the UE, a discrete-time channel can be represented as an N_r × N_t dimensional complex-valued matrix H, with element h_ij of H denoting a complex-valued channel gain between an i-th receive antenna and a j-th transmit antenna, 1 ≤ i ≤ N_r, 1 ≤ j ≤ N_t. The channel matrix H may depend on a physical propagation medium (e.g., a wireless channel), and, due to the dynamic nature of the physical propagation medium, the wireless channel may be a time-varying channel. Further, channel gains depend on the frequency of operation. For example, with a multicarrier waveform, such as Orthogonal Frequency-Division Multiplexing (OFDM), the channel matrix H may take different values at different sub-carriers (e.g., frequencies) at the same instant of time. The channel matrix H may be stochastic in nature, and can vary across time, frequency, and spatial dimensions. Thus, by adapting the transmission method to the channel realization and/or pre-processing the information signal to be transmitted according to the current channel realization, the wireless communication system can achieve high throughput over a communication link, while making the communication link more reliable.

[0005] A network communication device (e.g., a gNB) may use CSI to support adaptive transmission and/or to implement pre-processing at the network communication device. Thus, the network communication device may have knowledge of the channel matrix H over an entire frequency range of operation (e.g., sub-carriers associated with OFDM/multi-carrier waveforms) when the wireless channel changes.
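For illustration, the per-sub-carrier channel matrix described above can be sketched as follows; the antenna counts, sub-carrier count, and i.i.d. complex Gaussian fading model are illustrative assumptions rather than anything specified in the disclosure.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
Nr, Nt, n_subcarriers = 4, 8, 64  # illustrative antenna counts and sub-carrier count

# H[k] is the Nr x Nt complex-valued channel matrix at sub-carrier k; element
# H[k][i, j] is the complex gain between receive antenna i and transmit antenna j.
H = (rng.standard_normal((n_subcarriers, Nr, Nt))
     + 1j * rng.standard_normal((n_subcarriers, Nr, Nt))) / np.sqrt(2)
```

Because the channel varies across frequency, each of the 64 sub-carriers carries its own 4 × 8 matrix, which is why CSI feedback over the whole band is costly enough to motivate compression.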
A user communication device (e.g., a UE) can estimate the wireless channel based at least in part on downlink reference/pilot signals transmitted by the network communication device. For example, the user communication device may determine (e.g., measure, obtain, acquire) CSI and transmit the determined CSI to the network communication device (e.g., as CSI feedback). Thus, during wireless communication (e.g., downlink communication from the network communication device to the user communication device), the user communication device may estimate CSI (e.g., a channel matrix or a channel covariance matrix) based on the downlink pilot/reference signals transmitted by the network communication device and report the estimated CSI to the network device. Additionally, or alternatively, during wireless communication (e.g., uplink communication from the user communication device to the network communication device), the network communication device may estimate CSI based on uplink reference/pilot signals received from the user communication device and report the CSI to the user communication device.

[0006] In some cases, reporting of CSI feedback may add overhead to the wireless communication system. Thus, the wireless communication system, including one or more of the network communication device and/or the user communication device, may attempt to manage (e.g., balance, control) the overhead (e.g., the CSI transmitted by the user communication device) when measuring a channel, while the network communication device seeks to improve communications over the wireless channel using the acquired CSI.

SUMMARY

[0007] The present disclosure relates to methods, apparatuses, and systems that support reporting (e.g., transmitting, outputting, signaling) CSI feedback between network nodes using AI/ML models.
The systems and methods may transmit CSI and a set of side information associated with the CSI, which may be encoded/compressed. Using the set of side information, the systems and methods (e.g., a decoder) may detect errors when reconstructing the CSI and correct these errors, to output a matrix that is the same as an initial matrix input into an encoder before transmission of the CSI.

[0008] Some implementations of the method and apparatuses described herein may further include a network entity, comprising at least one memory, and at least one processor coupled with the at least one memory and configured to cause the network entity to: receive, from a UE, a first set of information comprising a representation of a matrix, and a set of side information, generate a second set of information based at least in part on the first set of information, and generate a third set of information using the set of side information, wherein the third set of information corresponds to a corrective version of the second set of information.

[0009] In some implementations of the method and apparatuses described herein, to generate the third set of information, the processor is configured to cause the network entity to perform an error detection operation on the second set of information using the set of side information and perform an error correction operation on the second set of information using the set of side information.

[0010] In some implementations of the method and apparatuses described herein, the first set of information comprises CSI, wherein the CSI comprises one or more of a channel matrix of the CSI, a channel covariance matrix of the CSI, or a precoding matrix based on the CSI.
[0011] In some implementations of the method and apparatuses described herein, the set of side information comprises one or more eigenvalues of the matrix, or wherein the set of side information comprises one or more null vectors of the matrix, or wherein the set of side information comprises one or more eigenvalues of the matrix and one or more null vectors of the matrix.

[0012] In some implementations of the method and apparatuses described herein, the processor is configured to cause the network entity to receive, from the UE, a first set of parameters comprising information identifying generation of the first set of information by the UE, or information identifying encoding of the first set of information by the UE, or both.

[0013] In some implementations of the method and apparatuses described herein, the information identifying the encoding of the first set of information by the UE includes information identifying a channel matrix of the first set of information, a channel covariance matrix of the first set of information, or a precoding matrix of the first set of information, or a combination thereof.

[0014] In some implementations of the method and apparatuses described herein, the processor is configured to cause the network entity to determine the first set of parameters based at least in part on one or more predefined values or based on an indication received from the UE or another network device.

[0015] In some implementations of the method and apparatuses described herein, the processor is configured to cause the network entity to generate the second set of information as a second matrix based at least in part on the first set of parameters and the first set of information.
[0016] In some implementations of the method and apparatuses described herein, the processor is configured to cause the network entity to generate the second set of information using a machine learning model, wherein the machine learning model is based at least in part on a second set of parameters for the machine learning model.

[0017] In some implementations of the method and apparatuses described herein, the processor is configured to cause the network entity to determine that the second set of parameters includes machine learning model information based at least in part on an indication from the UE or another network entity.

[0018] In some implementations of the method and apparatuses described herein, the set of side information includes an eigenvalue, and to perform the error detection operation, the processor is configured to cause the network entity to compare the eigenvalue associated with the set of side information to an eigenvalue associated with the second matrix, and determine an absence of one or more errors in the second matrix based at least in part on the eigenvalue associated with the set of side information being equal to the eigenvalue associated with the second matrix, or determine a presence of one or more errors in the second matrix based at least in part on the eigenvalue associated with the set of side information being unequal to the eigenvalue associated with the second matrix.

[0019] In some implementations of the method and apparatuses described herein, to perform the error correction operation, the processor is configured to cause the network entity to perform the error correction operation on the second set of information based at least in part on the determination that the second set of information includes one or more errors.
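For illustration, the eigenvalue-based error detection operation described above can be sketched as follows; checking the side-information eigenvalue against the full eigenvalue spectrum of the second matrix, and treating "equal" as closeness within a numerical tolerance, are implementation assumptions, as is the function name.

```python
import numpy as np

def has_errors(reconstructed, side_eigenvalue, tol=1e-9):
    """Detect errors in a reconstructed (square) matrix by checking whether the
    side-information eigenvalue matches an eigenvalue of the matrix.

    Returns False (absence of errors) when an equal eigenvalue is found, and
    True (presence of errors) otherwise."""
    eigvals = np.linalg.eigvals(reconstructed)
    return not np.any(np.isclose(eigvals, side_eigenvalue, atol=tol))
```

For example, `has_errors(np.diag([2.0, 1.0]), 1.0)` returns `False` (the side eigenvalue appears in the matrix's spectrum), while `has_errors(np.diag([2.0, 1.0]), 7.0)` returns `True`.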
[0020] In some implementations of the method and apparatuses described herein, the side information includes an eigenvalue, and to perform the error correction operation, the processor is configured to cause the network entity to decompose the second matrix into a product of a first matrix U, a second matrix S, and a third matrix V based at least in part on a singular value decomposition operation, construct a matrix U1 by keeping a first N columns of the matrix U and discarding any remaining columns, where N is equal to one less than a rank of the second matrix of the second set of information, construct a matrix S1 by keeping a first N columns and first N rows of the matrix S and discarding any remaining columns and rows, where N is equal to one less than a rank of the second matrix of the second set of information, construct a matrix V1 by keeping a first N rows of the matrix V and discarding any remaining rows, where N is equal to one less than a rank of the second matrix of the second set of information, compute a matrix by a product of the three matrices U1, S1, and V1, and generate a matrix by adding the computed matrix to a matrix obtained by multiplying an eigenvalue derived from the set of side information with an identity matrix, where a size of the identity matrix is equal to a size of the second matrix, wherein the generated matrix is the third set of information.
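For illustration, the SVD-based correction steps enumerated above can be sketched as follows, assuming a square second matrix so that the identity term is conformable; the function name and the use of numpy are illustrative assumptions.

```python
import numpy as np

def correct_matrix(M, side_eigenvalue):
    """Sketch of the eigenvalue-based error correction described above:
    truncate the SVD of the reconstructed matrix M to rank(M) - 1 components,
    then add the side-information eigenvalue times the identity."""
    U, s, Vh = np.linalg.svd(M)
    N = np.linalg.matrix_rank(M) - 1  # one less than the rank of M
    U1 = U[:, :N]                     # first N columns of U
    S1 = np.diag(s[:N])               # first N rows and columns of S
    V1 = Vh[:N, :]                    # first N rows of V
    low_rank = U1 @ S1 @ V1           # product of the three truncated matrices
    return low_rank + side_eigenvalue * np.eye(M.shape[0])
```

For example, with `M = np.diag([5.0, 3.0, 1.0])` and side eigenvalue `1.0`, the truncation keeps the two largest singular values, and adding the eigenvalue times the identity yields `np.diag([6.0, 4.0, 1.0])`.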
[0021] In some implementations of the method and apparatuses described herein, the side information includes a column vector, and to perform the error detection operation, the processor is configured to cause the network entity to compute a vector by post-multiplying the second matrix with the column vector, and when each element of the computed vector is equal to zero, determine that the second matrix is error free, or when any element of the computed vector is not equal to zero, determine that the second matrix has one or more errors.

[0022] In some implementations of the method and apparatuses described herein, in response to determining that the second matrix has one or more errors, the processor is further configured to cause the network entity to determine a dual vector to the column vector, generate a vector by post-multiplying a negative of the second matrix with the column vector, compute a matrix by post-multiplying the generated vector with a conjugate transpose of the dual vector, and generate a matrix by adding the computed matrix to the second matrix, wherein the generated matrix is the third set of information.

[0023] Some implementations of the method and apparatuses described herein may further include a method performed by a network entity, comprising receiving, from a UE, a first set of information comprising a representation of a matrix, and a set of side information, generating a second set of information based at least in part on the first set of information, and generating a third set of information using the set of side information, wherein the third set of information corresponds to a corrective version of the second set of information.
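For illustration, the null-vector detection and rank-one correction operations described above can be sketched as follows. Choosing the dual vector as the column vector divided by its squared norm (so that the conjugate transpose of the dual vector times the column vector equals one) is an assumption, since the description does not fix a particular dual vector; the function name is likewise illustrative.

```python
import numpy as np

def correct_with_null_vector(M, v, tol=1e-9):
    """If the side-information column vector v is not a null vector of the
    reconstructed matrix M, apply the rank-one update M + (-M v) u^H, where u
    is a dual vector of v (u^H v = 1), so that v lies in the null space of the
    corrected matrix."""
    w = M @ v                          # detection: any nonzero element => errors
    if np.allclose(w, 0.0, atol=tol):
        return M                       # error free, no correction needed
    u = v / np.vdot(v, v)              # dual vector with u^H v = 1 (assumption)
    correction = (-w)[:, None] @ np.conj(u)[None, :]
    return M + correction
```

By construction, `(M + correction) @ v = M v - (M v)(u^H v) = 0`, so the corrected matrix has the side-information vector in its null space, as the description requires.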
[0024] In some implementations of the method and apparatuses described herein, generating the third set of information comprises performing an error detection operation on the second set of information using the set of side information, and performing an error correction operation on the second set of information using the set of side information.

[0025] In some implementations of the method and apparatuses described herein, the first set of information comprises CSI, and wherein the CSI comprises one or more of a channel matrix of the CSI, a channel covariance matrix of the CSI, or a precoding matrix based on the CSI.

[0026] In some implementations of the method and apparatuses described herein, the set of side information comprises one or more eigenvalues of the matrix, or wherein the set of side information comprises one or more null vectors of the matrix, or wherein the set of side information comprises one or more eigenvalues of the matrix and one or more null vectors of the matrix.

[0027] In some implementations of the method and apparatuses described herein, the method further comprises receiving, from the UE, a first set of parameters comprising information identifying generation of the first set of information by the UE, or information identifying encoding of the first set of information by the UE, or both.

[0028] In some implementations of the method and apparatuses described herein, the information identifying the encoding of the first set of information by the UE includes information identifying a channel matrix of the first set of information, a channel covariance matrix of the first set of information, or a precoding matrix of the first set of information, or a combination thereof.

[0029] In some implementations of the method and apparatuses described herein, the method further comprises determining the first set of parameters based at least in part on
one or more predefined values or based on an indication received from the UE or another network device.

[0030] In some implementations of the method and apparatuses described herein, the method further comprises generating the second set of information as a second matrix based at least in part on the first set of parameters and the first set of information.

[0031] In some implementations of the method and apparatuses described herein, the method further comprises generating the second set of information using a machine learning model, wherein the machine learning model is based at least in part on a second set of parameters for the machine learning model.

[0032] In some implementations of the method and apparatuses described herein, the method further comprises determining that the second set of parameters includes machine learning model information based at least in part on an indication from the UE or another network entity.

[0033] In some implementations of the method and apparatuses described herein, the set of side information includes an eigenvalue, and the error detection operation comprises comparing the eigenvalue associated with the set of side information to an eigenvalue associated with the second matrix and determining an absence of one or more errors in the second matrix based at least in part on the eigenvalue associated with the set of side information being equal to the eigenvalue associated with the second matrix, or determining a presence of one or more errors in the second matrix based at least in part on the eigenvalue associated with the set of side information being unequal to the eigenvalue associated with the second matrix.

[0034] In some implementations of the method and apparatuses described herein, the error correction operation comprises performing the error correction operation on the second set of information based at least in part on the determination that the second set of information includes one or more errors.
[0035] Some implementations of the method and apparatuses described herein may further include a UE, comprising at least one memory, and at least one processor coupled with the at least one memory and configured to cause the UE to: transmit, to a network entity, a first set of information that includes a representation of a matrix, and transmit, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information.

[0036] In some implementations of the method and apparatuses described herein, the first set of information includes CSI.

[0037] In some implementations of the method and apparatuses described herein, the first set of information includes a channel matrix of CSI, a channel covariance matrix of the CSI, or a precoding matrix based on the CSI.

[0038] In some implementations of the method and apparatuses described herein, the set of side information includes one or more eigenvalues derived from the CSI.

[0039] In some implementations of the method and apparatuses described herein, the set of side information includes one or more column vectors derived from the CSI.

[0040] Some implementations of the method and apparatuses described herein may further include a method performed by a UE, comprising transmitting, to a network entity, a first set of information that includes a representation of a matrix, and transmitting, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information.

[0041] In some implementations of the method and apparatuses described herein, the first set of information includes CSI.

[0042] In some implementations of the method and apparatuses described herein, the first set of information includes a channel matrix of CSI, a channel covariance matrix of the CSI, or a precoding matrix based on the CSI.
[0043] In some implementations of the method and apparatuses described herein, the set of side information includes one or more eigenvalues derived from the CSI.

[0044] In some implementations of the method and apparatuses described herein, the set of side information includes one or more column vectors derived from the CSI.

[0045] Some implementations of the method and apparatuses described herein may further include a processor for wireless communication, comprising at least one controller coupled with at least one memory and configured to cause the processor to transmit, to a network entity, a first set of information that includes a representation of a matrix, and transmit, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information.

BRIEF DESCRIPTION OF THE DRAWINGS

[0046] FIG.1 illustrates an example of a wireless communications system that supports enhanced AI/ML based CSI feedback in accordance with aspects of the present disclosure.

[0047] FIG.2 illustrates an example of a wireless communications system that supports a two-sided model for AI/ML based CSI feedback in accordance with aspects of the present disclosure.

[0048] FIG.3 illustrates an example of a wireless communications system that supports utilizing side information during AI/ML based CSI feedback in accordance with aspects of the present disclosure.

[0049] FIG.4 illustrates an example of a block diagram of a device that supports AI/ML based CSI feedback in accordance with aspects of the present disclosure.

[0050] FIG.5 illustrates a flowchart of a method that supports reconstructing AI/ML based CSI using side information in accordance with aspects of the present disclosure.

[0051] FIG.6 illustrates a flowchart of a method that supports transmitting AI/ML based CSI feedback and side information in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION

[0052] A two-sided AI/ML model facilitates the compression and transmission of CSI feedback (e.g., a channel matrix, a channel covariance matrix, or a precoding matrix based on the channel matrix or CSI) from one network node (or entity) to another network node (e.g., a first node to a second node). For example, the first node (e.g., a UE) compresses a matrix of CSI using an encoder (e.g., an encoder neural network, or NN) and sends the output of the encoder to the second node.

[0053] The second node receives the encoded matrix and, using a decoder (e.g., a decoder NN), reconstructs the matrix. In some cases, the reconstructed matrix is not the same as the original matrix input into the encoder, and thus, the reconstructed matrix can include one or more errors. These errors can be caused by (1) the AI/ML model, (2) the wireless transmission of the encoded matrix from the first node to the second node, or (3) a mismatch between the encoder NN and the decoder NN, among other causes.

[0054] For example, the AI/ML model can introduce errors because it can be a deep learning model (e.g., a deep neural network, or DNN), which has a non-zero probability of error (e.g., a non-zero chance that its output is different from its input).

[0055] As another example, errors can occur during the transmission of the data between nodes. The encoder output is a feature vector, which contains multiple floating-point values. These values are quantized, mapped into bits and then to channel symbols, and sent between the nodes. In some cases, the channel may corrupt/distort the symbols and/or the decoder can introduce additive noise. Thus, the decoder may receive a noisy/corrupted version of the feature vector due to quantization of the feature vector and possible distortions caused by transmitting the quantized feature vector over the wireless channel, among other errors.
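For illustration, the quantize-and-map step described above for the encoder's feature vector can be sketched with a uniform scalar quantizer; the bit width, clipping range, and function names are illustrative assumptions rather than anything specified in the disclosure.

```python
import numpy as np

def quantize(features, n_bits=4, lo=-1.0, hi=1.0):
    """Map floating-point feature values to integer indices of a uniform
    scalar quantizer with 2**n_bits levels over [lo, hi]."""
    levels = 2 ** n_bits
    step = (hi - lo) / (levels - 1)
    idx = np.round((np.clip(features, lo, hi) - lo) / step)
    return idx.astype(int)

def dequantize(indices, n_bits=4, lo=-1.0, hi=1.0):
    """Reconstruct (noisy) feature values from quantizer indices."""
    step = (hi - lo) / (2 ** n_bits - 1)
    return lo + indices * step
```

Round-tripping a feature vector through `quantize` and `dequantize` reproduces it only up to the quantization step; that residual, plus any symbol corruption on the channel, is the reconstruction noise discussed above.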
[0056] Further, as another example, the encoder NN and the decoder NN may not match. For example, the wireless communications system may include many user devices (e.g., UEs) served by one gNB or access point. Further, the UEs may move between gNBs at different points in time. Due to such movement, it is not practical to jointly train or develop the UE encoders with the gNB decoders, which can introduce differences between the encoders and decoders, and, therefore, errors when reconstructing matrices that pass through encoders and decoders that are separately trained and developed.

[0057] Thus, even if additional resources were employed or adapted to enhance the models and/or the communication resources, errors may still be introduced, and the fidelity or accuracy of CSI feedback may be compromised or otherwise unreliable for use in optimizing a network channel.

[0058] The systems and methods described herein seek to alleviate such problems associated with the use of a two-sided AI/ML model, regardless of whether the data transmitted within the model is CSI feedback between network nodes, or other types of data that can be compressed/encoded during transmission and later reconstructed after transmission.

[0059] In some embodiments, the systems and methods transmit, along with the encoded CSI, a set of side information that is associated with the encoded CSI. Using the set of side information, the decoder can detect any errors in a reconstructed matrix and correct the errors to realize a matrix (e.g., a second matrix) that is the same as the initial matrix input into the encoder before transmission.

[0060] In some cases, such a type of error detection can be effective when there are few differences or errors between the input CSI and the reconstructed CSI (e.g., the second matrix is not very distant from the input matrix).
For example, when an output from the decoder deviates from a correct output within a “minimum distance,” where the distance is measured in a Frobenius norm or L2 norm, the technology described herein can effectively correct any errors introduced and/or reflected in the reconstructed output.

[0061] Thus, the utilization of side information, which may introduce a small amount of overhead to a channel, enables a network to utilize a two-sided AI/ML model for transmitting CSI feedback without the drawbacks of errors that can be introduced during CSI transmission. While errors may occur during the reconstruction of data/CSI, the technology, using side information, can detect and correct the errors in an efficient and effective manner.

[0062] Such a solution can be flexible in implementation. For example, any network node can be a transmitter and/or a receiver, and, thus, the model can be implemented with either a UE or gNB as the encoder or the decoder. Thus, the AI/ML model and the transmission of side information can solve problems associated with errors in the reconstruction of CSI during downlink or uplink communications.

[0063] Further, while the technology is applicable to a network employing two-sided AI/ML models for CSI feedback transmission, such features can be applied to other implementations, such as one-sided AI/ML models, CSI feedback transmission using other models, and so on. Thus, any model or configuration that compresses/encodes/quantizes CSI or other information, such as into smaller sizes of communicated information (e.g., fewer bits), and then reconstructs/decompresses/decodes the information upon reception, can utilize associated side information to detect and/or correct errors that may be introduced during the operation, among other benefits.

[0064] Aspects of the present disclosure are described in the context of a wireless communications system.
Aspects of the present disclosure are further illustrated and described with reference to device diagrams and flowcharts.

[0065] FIG.1 illustrates an example of a wireless communications system 100 that supports enhanced AI/ML based CSI feedback in accordance with aspects of the present disclosure. The wireless communications system 100 may include one or more network entities 102, one or more UEs 104, a core network 106, and a packet data network 108. The wireless communications system 100 may support various radio access technologies. In some implementations, the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-Advanced (LTE-A) network. In some other implementations, the wireless communications system 100 may be a 5G network, such as an NR network. In other implementations, the wireless communications system 100 may be a combination of a 4G network and a 5G network, or another suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), or IEEE 802.20. The wireless communications system 100 may support radio access technologies beyond 5G. Additionally, the wireless communications system 100 may support technologies such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA).

[0066] The one or more network entities 102 may be dispersed throughout a geographic region to form the wireless communications system 100. One or more of the network entities 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a radio access network (RAN), a base transceiver station, an access point, a NodeB, an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology.
A network entity 102 and a UE 104 may communicate via a communication link 110, which may be a wireless or wired connection. For example, a network entity 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.

[0067] A network entity 102 may provide a geographic coverage area 112 for which the network entity 102 may support services (e.g., voice, video, packet data, messaging, broadcast, etc.) for one or more UEs 104 within the geographic coverage area 112. For example, a network entity 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies. In some implementations, a network entity 102 may be moveable, for example, a satellite associated with a non-terrestrial network. In some implementations, different geographic coverage areas 112 associated with the same or different radio access technologies may overlap, but the different geographic coverage areas 112 may be associated with different network entities 102. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0068] The one or more UEs 104 may be dispersed throughout a geographic region of the wireless communications system 100. A UE 104 may include or may be referred to as a mobile device, a wireless device, a remote device, a remote unit, a handheld device, or a subscriber device, or some other suitable terminology. In some implementations, the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples.
Additionally, or alternatively, the UE 104 may be referred to as an Internet-of-Things (IoT) device, an Internet-of-Everything (IoE) device, or a machine-type communication (MTC) device, among other examples. In some implementations, a UE 104 may be stationary in the wireless communications system 100. In some other implementations, a UE 104 may be mobile in the wireless communications system 100. [0069] The one or more UEs 104 may be devices in different forms or having different capabilities. Some examples of UEs 104 are illustrated in FIG.1. A UE 104 may be capable of communicating with various types of devices, such as the network entities 102, other UEs 104, or network equipment (e.g., the core network 106, the packet data network 108, a relay device, an integrated access and backhaul (IAB) node, or another network equipment), as shown in FIG.1. Additionally, or alternatively, a UE 104 may support communication with other network entities 102 or UEs 104, which may act as relays in the wireless communications system 100. [0070] A UE 104 may also be able to support wireless communication directly with other UEs 104 over a communication link 114. For example, a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link. In some implementations, such as vehicle-to-vehicle (V2V) deployments, vehicle-to-everything (V2X) deployments, or cellular-V2X deployments, the communication link 114 may be referred to as a sidelink. For example, a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface. [0071] A network entity 102 may support communications with the core network 106, or with another network entity 102, or both. For example, a network entity 102 may interface with the core network 106 through one or more backhaul links 116 (e.g., via an S1, N2, or another network interface).
The network entities 102 may communicate with each other over the backhaul links 116 (e.g., via an X2, Xn, or another network interface). In some implementations, the network entities 102 may communicate with each other directly (e.g., between the network entities 102). In some other implementations, the network entities 102 may communicate with each other indirectly (e.g., via the core network 106). In some implementations, one or more network entities 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC). An ANC may communicate with the one or more UEs 104 through one or more other access network entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs). [0072] In some implementations, a network entity 102 may be configured in a disaggregated architecture, which may be configured to utilize a protocol stack physically or logically distributed among two or more network entities 102, such as an integrated access backhaul (IAB) network, an open RAN (O-RAN) (e.g., a network configuration sponsored by the O-RAN Alliance), or a virtualized RAN (vRAN) (e.g., a cloud RAN (C-RAN)). For example, a network entity 102 may include one or more of a central unit (CU), a distributed unit (DU), a radio unit (RU), a RAN Intelligent Controller (RIC) (e.g., a Near-Real Time RIC (Near-RT RIC), a Non-Real Time RIC (Non-RT RIC)), a Service Management and Orchestration (SMO) system, or any combination thereof. [0073] An RU may also be referred to as a radio head, a smart radio head, a remote radio head (RRH), a remote radio unit (RRU), or a transmission reception point (TRP).
One or more components of the network entities 102 in a disaggregated RAN architecture may be co-located, or one or more components of the network entities 102 may be located in distributed locations (e.g., separate physical locations). In some implementations, one or more network entities 102 of a disaggregated RAN architecture may be implemented as virtual units (e.g., a virtual CU (VCU), a virtual DU (VDU), a virtual RU (VRU)). [0074] Split of functionality between a CU, a DU, and an RU may be flexible and may support different functionalities depending upon which functions (e.g., network layer functions, protocol layer functions, baseband functions, radio frequency functions, and any combinations thereof) are performed at a CU, a DU, or an RU. For example, a functional split of a protocol stack may be employed between a CU and a DU such that the CU may support one or more layers of the protocol stack and the DU may support one or more different layers of the protocol stack. In some implementations, the CU may host upper protocol layer (e.g., a layer 3 (L3), a layer 2 (L2)) functionality and signaling (e.g., Radio Resource Control (RRC), service data adaption protocol (SDAP), Packet Data Convergence Protocol (PDCP)). The CU may be connected to one or more DUs or RUs, and the one or more DUs or RUs may host lower protocol layers, such as a layer 1 (L1) (e.g., physical (PHY) layer) or an L2 (e.g., radio link control (RLC) layer, medium access control (MAC) layer) functionality and signaling, and be at least partially controlled by the CU. [0075] Additionally, or alternatively, a functional split of the protocol stack may be employed between a DU and an RU such that the DU may support one or more layers of the protocol stack and the RU may support one or more different layers of the protocol stack. The DU may support one or multiple different cells (e.g., via one or more RUs).
In some implementations, a functional split between a CU and a DU, or between a DU and an RU, may be within a protocol layer (e.g., some functions for a protocol layer may be performed by one of a CU, a DU, or an RU, while other functions of the protocol layer are performed by a different one of the CU, the DU, or the RU). [0076] A CU may be functionally split further into CU control plane (CU-CP) and CU user plane (CU-UP) functions. A CU may be connected to one or more DUs via a midhaul communication link (e.g., F1, F1-c, F1-u), and a DU may be connected to one or more RUs via a fronthaul communication link (e.g., open fronthaul (FH) interface). In some implementations, a midhaul communication link or a fronthaul communication link may be implemented in accordance with an interface (e.g., a channel) between layers of a protocol stack supported by respective network entities 102 that are in communication via such communication links. [0077] The core network 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions. The core network 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). In some implementations, the control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more network entities 102 associated with the core network 106.
[0078] The core network 106 may communicate with the packet data network 108 over one or more backhaul links 116 (e.g., via an S1, N2, or another network interface). The packet data network 108 may include an application server 118. In some implementations, one or more UEs 104 may communicate with the application server 118. A UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the core network 106 via a network entity 102. The core network 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server 118 using the established session (e.g., the established PDU session). The PDU session may be an example of a logical connection between the UE 104 and the core network 106 (e.g., one or more network functions of the core network 106). [0079] In the wireless communications system 100, the network entities 102 and the UEs 104 may use resources of the wireless communication system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications). In some implementations, the network entities 102 and the UEs 104 may support different resource structures. For example, the network entities 102 and the UEs 104 may support different frame structures. In some implementations, such as in 4G, the network entities 102 and the UEs 104 may support a single frame structure. In some other implementations, such as in 5G and among other suitable radio access technologies, the network entities 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures). The network entities 102 and the UEs 104 may support various frame structures based on one or more numerologies.
[0080] One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix. A first numerology (e.g., μ=0) may be associated with a first subcarrier spacing (e.g., 15 kHz) and a normal cyclic prefix. In some implementations, the first numerology (e.g., μ=0) associated with the first subcarrier spacing (e.g., 15 kHz) may utilize one slot per subframe. A second numerology (e.g., μ=1) may be associated with a second subcarrier spacing (e.g., 30 kHz) and a normal cyclic prefix. A third numerology (e.g., μ=2) may be associated with a third subcarrier spacing (e.g., 60 kHz) and a normal cyclic prefix or an extended cyclic prefix. A fourth numerology (e.g., μ=3) may be associated with a fourth subcarrier spacing (e.g., 120 kHz) and a normal cyclic prefix. A fifth numerology (e.g., μ=4) may be associated with a fifth subcarrier spacing (e.g., 240 kHz) and a normal cyclic prefix. [0081] A time interval of a resource (e.g., a communication resource) may be organized according to frames (also referred to as radio frames). Each frame may have a duration, for example, a 10 millisecond (ms) duration. In some implementations, each frame may include multiple subframes. For example, each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration. In some implementations, each frame may have the same duration. In some implementations, each subframe of a frame may have the same duration. [0082] Additionally or alternatively, a time interval of a resource (e.g., a communication resource) may be organized according to slots. For example, a subframe may include a number (e.g., quantity) of slots. The number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
For instance, the first, second, third, fourth, and fifth numerologies (i.e., μ=0, μ=1, μ=2, μ=3, μ=4) associated with respective subcarrier spacings of 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may utilize a single slot per subframe, two slots per subframe, four slots per subframe, eight slots per subframe, and 16 slots per subframe, respectively. Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols). In some implementations, the number (e.g., quantity) of slots for a subframe may depend on a numerology. For a normal cyclic prefix, a slot may include 14 symbols. For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing), a slot may include 12 symbols. The relationship between the number of symbols per slot, the number of slots per subframe, and the number of slots per frame for a normal cyclic prefix and an extended cyclic prefix may depend on a numerology. It should be understood that reference to a first numerology (e.g., μ=0) associated with a first subcarrier spacing (e.g., 15 kHz) may be used interchangeably between subframes and slots. [0083] In the wireless communications system 100, an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc. By way of example, the wireless communications system 100 may support one or multiple operating bands, such as frequency range designations FR1 (410 MHz – 7.125 GHz), FR2 (24.25 GHz – 52.6 GHz), FR3 (7.125 GHz – 24.25 GHz), FR4 (52.6 GHz – 114.25 GHz), FR4a or FR4-1 (52.6 GHz – 71 GHz), and FR5 (114.25 GHz – 300 GHz). In some implementations, the network entities 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
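The numerology relationships described above — subcarrier spacing and slots per subframe both doubling with each increment of the numerology index μ — can be captured in a short sketch. The function names below are illustrative only, not taken from the disclosure:

```python
# Numerology scaling sketch: subcarrier spacing and slots per subframe
# both scale as 2**mu with the numerology index mu (mu = 0..4 above).

def subcarrier_spacing_khz(mu: int) -> int:
    return 15 * (2 ** mu)          # 15 kHz at mu=0, doubling per step

def slots_per_subframe(mu: int) -> int:
    return 2 ** mu                 # one slot per subframe at mu=0

def slots_per_frame(mu: int) -> int:
    return 10 * slots_per_subframe(mu)   # a frame has 10 subframes of 1 ms

def symbols_per_slot(extended_cp: bool = False) -> int:
    return 12 if extended_cp else 14     # extended CP applies at 60 kHz
```

For example, μ=3 (120 kHz subcarrier spacing) yields eight slots per subframe and 80 slots per 10 ms frame.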
In some implementations, FR1 may be used by the network entities 102 and the UEs 104, among other equipment or devices, for cellular communications traffic (e.g., control information, data). In some implementations, FR2 may be used by the network entities 102 and the UEs 104, among other equipment or devices, for short-range, high data rate capabilities. [0084] FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies). For example, FR1 may be associated with a first numerology (e.g., μ=0), which includes 15 kHz subcarrier spacing; a second numerology (e.g., μ=1), which includes 30 kHz subcarrier spacing; and a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing. FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies). For example, FR2 may be associated with a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing; and a fourth numerology (e.g., μ=3), which includes 120 kHz subcarrier spacing. [0085] As described herein, in some embodiments, a wireless network can utilize side information (or additional information) when transmitting CSI between nodes of the network. The side information, in some cases, can be small and/or take on various forms, such as one real-valued number and/or one vector having a length equal to a column vector of a matrix representing the CSI. Thus, the side information may only minimally impact network resources, while enhancing the use of AI/ML models for CSI feedback, among other benefits. [0086] Further, while many implementations are described using a two-sided AI/ML model, the technology can optimize or enhance other AI/ML models, such as one-sided models. [0087] For example, during operation, a UE can send CSI to a gNB.
The UE encodes/compresses the CSI corresponding to singular/eigen vectors of the communication channel or a channel covariance matrix without employing an AI/ML model during encoding/compression. The gNB may then employ an AI/ML model to decode/decompress the CSI. Such a scenario, where a one-sided model is utilized, can result in errors like those in a two-sided model, and the technology described herein can mitigate any introduced errors. Thus, the technology can enhance or optimize transmission across any wireless MIMO (multiple-input, multiple-output) channel with various statistical characteristics. [0088] In some embodiments, after CSI is measured by a network node (e.g., the UE or a receiver node), the network node employs an autoencoder (AE) to compress/encode the CSI for use within the two-sided AI/ML model. [0089] An autoencoder is a deep neural network that can be used for dimensionality reduction. An AE can include two parts: an encoder f_θ, which can be a DNN with learnable/trainable parameters denoted by θ, and a decoder g_φ, another DNN with φ as its set of trainable/learnable parameters. The encoder learns a representation of the input signal/data (in other words, encodes the input signal/data), such that the key attributes of the input signal/data are captured as low-dimensional feature vector(s). The decoder validates the encoding and helps the encoder to refine its encoding by regenerating the input signal/data from the feature vectors generated by the encoder. Thus, the encoder and the decoder can be trained and developed together, such that the signal/data at the input to the encoder is reconstructed, as accurately as possible, at the output of the decoder. Thus, in some cases, the two neural networks, or the two models f_θ and g_φ together, are an autoencoder.
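As a toy illustration of this encoder/decoder structure, the following sketch uses random linear maps in place of trained DNNs; θ and φ here are simply weight matrices, an assumption made for brevity, whereas a real f_θ and g_φ would be trained neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the encoder f_theta and decoder g_phi.
N, K = 8, 2                           # input dimension N, latent dimension K < N
theta = rng.standard_normal((K, N))   # "encoder parameters" (illustrative)
phi = np.linalg.pinv(theta)           # an idealized linear "decoder"

def f_theta(x):
    """Encode: map a length-N input to a length-K feature vector."""
    return theta @ x

def g_phi(z):
    """Decode: reconstruct a length-N signal from the length-K features."""
    return phi @ z

# An input lying in the row space of theta is reconstructed exactly;
# general inputs are only approximated, which is the price of compression.
x = theta.T @ rng.standard_normal(K)
assert np.allclose(g_phi(f_theta(x)), x)
```

In the two-sided model, f_theta would run at the UE and g_phi at the network entity, with only the length-K feature vector actually transmitted over the air.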
[0090] As described herein, the wireless network can utilize an autoencoder for CSI compression and transmission, such as during downlink transmission of CSI from a base station (e.g., gNB) to a UE. The AE is trained to efficiently encode/decode channel matrices, where the training data set includes a large number of wireless channel matrices (e.g., collected from the field or generated through simulations). The AE is trained such that the encoder generates a lower-dimensional representation of the input channel matrix and the decoder reconstructs the channel matrix from the latent representation generated by the encoder. After training, the encoder part of the AE, f_θ, is deployed at the UE (e.g., the UE 104) and the decoder part of the AE, g_φ, is deployed at the base station (e.g., network entity 102). [0091] The UE estimates the channel matrix using reference/pilot signals received from the base station, encodes the channel matrix using the encoder f_θ, and transmits the encoded output (e.g., feature vectors/latent representation of the channel matrix) computed
by the encoder over the wireless channel towards the base station. The base station, using the decoder g_φ, decodes/reconstructs the channel matrix from the feature vectors received from the UE. Thus, the AE can achieve a high amount of dimensionality reduction, and can therefore achieve a high and useful amount of compression of the CSI information/data transmitted over the channel, where the compressed CSI data at the output of the encoder is the “features/feature vectors” computed by the encoder. [0092] FIG.2 illustrates an example of a wireless communications system 200 that supports a two-sided model for AI/ML based CSI feedback in accordance with aspects of the present disclosure. The wireless communications system 200 may implement or be implemented by aspects of the wireless communications system 100 as described in FIG.1. For example, the wireless communications system 200 may include a base station 102 and a UE 104, which may be examples of base stations and UEs 104 as described with reference to FIG.1. [0093] The UE 104 may include components for supporting a two-sided model for AI/ML based CSI feedback, such as an encoder 205. The encoder 205 may be part of an AE implemented by the wireless communication system 200 (e.g., a network). In the example of FIG.2, the UE 104 may receive, at the encoder 205, a CSI 210. The CSI 210 may be input data for the encoder 205. The encoder 205 may encode the CSI 210, which may result in a CSI representation 220 (e.g., a feature vector or a latent representation of the CSI 210). The base station 102 may include components for supporting a two-sided model for AI/ML based CSI feedback, such as a decoder 215. The decoder 215 may be part of an AE implemented by the wireless communication system 200 (e.g., a network). [0094] The UE 104, including the encoder 205, may output the CSI representation 220 of the CSI 210 to the base station 102, including the decoder 215.
For example, the UE 104 may transmit (e.g., output) the CSI representation 220 of the CSI 210 to the base station 102 over a wireless channel (e.g., a physical uplink control channel (PUCCH), a physical uplink shared channel (PUSCH), or the like). The base station 102 may, via the decoder 215, decode and reconstruct the CSI 210 and output the reconstructed CSI 230. [0095] In some cases, the CSI 210 inputted to the encoder 205 of the UE 104 may include information derived from a channel matrix, such as one or more singular/eigen vectors of the channel matrix obtained through singular value decomposition/eigenvalue decomposition. In some other cases, the CSI 210 inputted to the encoder 205 of the UE 104 may include information derived from a channel covariance matrix. In other cases, the CSI 210 inputted to the encoder 205 of the UE 104 may include information derived from a precoder matrix derived from the channel matrix. Thus, input data to the encoder 205 of the UE 104 may include: a channel matrix H; a covariance matrix HH^H or H^H H of the channel; a precoding matrix W, derived from H and used by the base station 102 for beamforming and wireless communication of information; and so on. [0096] As described herein, in some embodiments, the wireless communications system 200 (e.g., a network) may facilitate and/or employ transmission of side information between the encoder 205 and the decoder 215 (e.g., along with the encoded CSI 210). [0097] FIG.3 illustrates an example of a wireless communications system 300 that supports utilizing side information during AI/ML based CSI feedback in accordance with aspects of the present disclosure. The wireless communications system 300 may implement or be implemented by aspects of the wireless communications systems 100 and 200 as described in FIGs.1 and 2.
For example, the wireless communications system 300 may include a base station 102 and a UE 104, which may be examples of base stations and UEs 104 as described with reference to FIGs.1 and 2. [0098] As shown, the encoder 205 transmits side information 310 to the decoder 215, such as along with or in addition to the CSI representation 220. The decoder 215 receives the side information 310 and the CSI representation 220. Using the side information 310, the decoder 215 may detect whether a reconstructed matrix (e.g., the reconstructed CSI 230) is error free or includes errors. When one or more errors are detected, the decoder 215, using the side information 310, may correct the errors and output corrected CSI 320 (e.g., a corrected reconstructed matrix). [0099] In some embodiments, the side information 310 may be defined as information that supports or is additional to main information (e.g., the CSI 210). The side information 310 may provide functionality to various reconstruction processes, such as error detection and/or error correction operations. In some cases, the side information 310 may be an eigenvalue derived from the main information (e.g., the CSI 210). In other cases, the side information 310 may be a null vector derived from the main information (e.g., the CSI 210). [0100] The following notation is employed when describing various techniques for enhancing the accuracy of CSI feedback, as described herein. For a length-N column vector a ∈ ℝ^(N×1), or a row vector a ∈ ℝ^(1×N), a_i denotes the i-th element of the vector. For a matrix A ∈ ℝ^(M×N), A_ij, A_(i,1:N), and A_(1:M,j) denote the (i,j)-th element, the i-th row, and the j-th column of matrix A, respectively. A block of matrix A including elements in rows i to i′ and columns j to j′ is denoted by A_(i:i′,j:j′).
Superscript H and superscript ∗ denote the conjugate transpose (also known as the Hermitian transpose) of a vector or a matrix. Thus, for a vector a, a^T denotes the transpose of a, and a^∗ denotes the conjugate transpose (or, Hermitian) of a, and the same holds true for a matrix A. For a real-valued vector a, the conjugate transpose is equivalent to the transpose, e.g., a^∗ = a^T, and similarly, for a real-valued matrix A, A^∗ = A^T. Further, ‖a‖ denotes the Euclidean length, or, equivalently, the Euclidean norm, of vector a, and ‖A‖_F denotes the Frobenius norm of matrix A. [0101] In some embodiments, the side information 310 can be an eigenvalue of a matrix being sent as CSI (e.g., where the matrix is a channel matrix H, the channel covariance matrix (either HH^H or H^H H), and/or a precoder matrix W). [0102] For example, the CSI can be a matrix M ∈ ℂ^(N×N), N > 1, and the side information is an eigenvalue of the matrix M, denoted by λ. The process of CSI feedback is as follows: [0103] The UE constructs a matrix M after estimating the channel matrix H. The matrix M can be (1) equal to H, (2) equal to either HH^H or H^H H, or (3) equal to a precoding matrix W derived from H. [0104] The UE computes λ, a non-zero eigenvalue of matrix M, where λ can be any non-zero eigenvalue of matrix M. [0105] The UE compresses matrix M using its encoder NN (e.g., the encoder 205), generating a latent representation/feature vector (e.g., the CSI representation 220) as an output of its encoder NN. [0106] The UE transmits the latent representation/feature vector computed by the encoder NN corresponding to M, and the eigenvalue λ (e.g., the side information 310), towards the gNB. [0107] The gNB supplies the received latent representation/feature vector and provides the vector as input to its decoder NN (e.g., the decoder 215).
The decoder outputs a matrix M̂ ∈ ℂ^(N×N) (e.g., the output CSI 230). [0108] In some cases, when generating the matrix M̂ using the received latent representation/feature vector, the gNB may know the details of the decoder NN (such as its detailed structure, neural layers, neuron weights, hyper-parameters, and so on). Such information may be referred to as a “set of parameters,” which the gNB may use to reconstruct the CSI from the received low-dimensional/compressed/encoded information of the CSI. The “set of parameters” can be available at the gNB through signaling from another node (either the UE or some other network entity), or the gNB can learn the parameters during training of the decoder NN. Further, the gNB can include a specific set of pre-defined parameter values and make use of these pre-defined values of the parameters to implement the decoder NN and generate matrix M̂. [0109] As described herein, the techniques can be implemented for any transmission of encoded/compressed information and are not limited to AI/ML based methods of compression/decompression. For example, a network can employ a compression/encoding method based on a non-AI/ML procedure/algorithm, which has a corresponding non-AI/ML procedure/algorithm for reconstructing the matrix from the compressed information. In such implementations, the “set of parameters” described herein may correspond to details or values associated with the non-AI/ML procedure/algorithm implemented by the gNB for generating matrix M̂. [0110] When there are no errors, M̂ = M, which results in accurate CSI feedback and enables a gNB to have accurate channel knowledge. However, as described herein, due to various factors (e.g., quantization of the feature vector, channel distortions, imperfect AI/ML models, mismatches between the encoder NN and the decoder NN, and so on), there is a non-zero probability that M̂ ≠ M.
In such cases, the decoder NN can perform error detection operations and error correction operations based on λ, the eigenvalue of M transmitted as side information, as follows. [0111] For example, the reconstructed CSI (e.g., the decoder NN output M̂ ∈ ℂ^(N×N)) and the side information λ are received as input. The error detection operation performs the following steps: [0112] Determine the highest eigenvalue of matrix M̂; [0113] If the determined highest eigenvalue of M̂ is equal to λ, declare that there is no error in the reconstructed CSI, or, equivalently, in the decoder NN output, and stop; and/or [0114] If the determined highest eigenvalue of M̂ is NOT equal to λ, determine that the reconstructed CSI, or the decoder NN output, is erroneous (e.g., has one or more errors) and proceed to an error correction operation. [0115] The error correction operation performs the following steps: [0116] Let Rank(M̂ − λI) = r, and compute the Singular Value Decomposition (SVD) of the matrix (M̂ − λI): SVD(M̂ − λI) = UΣV^∗, [0117] where U ∈ ℂ^(N×N) and V ∈ ℂ^(N×N) are unitary matrices and Σ is an N × N matrix with its elements having the following property: Σ_ij = 0 for i ≠ j, 1 ≤ i ≤ N, 1 ≤ j ≤ N, and Σ_ii = σ_i, i = 1, …, N, represents the i-th singular value of (M̂ − λI), with the singular values arranged in decreasing order (σ_1 ≥ σ_2 ≥ … ≥ σ_N). [0118] Compute the matrix M̃ ∈ ℂ^(N×N), where M̃ = ∑_(i=1)^(r−1) σ_i u_i v_i^∗. [0119] In the above, u_i denotes the i-th column of matrix U, v_i^∗ ∈ ℂ^(1×N) denotes the i-th row of matrix V^∗, and σ_i denotes the i-th diagonal element of matrix Σ. [0120] Equivalently, compute M̃ as M̃ = U_(1:N,1:r−1) Σ_(1:r−1,1:r−1) V^∗_(1:r−1,1:N), where U_(1:N,1:r−1) ∈ ℂ^(N×(r−1)) is the matrix containing the first r−1 columns of matrix U, V^∗_(1:r−1,1:N) ∈ ℂ^((r−1)×N) is the matrix containing the first r−1 rows of matrix V^∗, and Σ_(1:r−1,1:r−1) ∈ ℂ^((r−1)×(r−1)) is the matrix containing the first r−1 rows and first r−1 columns of matrix Σ. [0121] Determine that the corrected CSI matrix is the matrix (M̃ + λI) ∈ ℂ^(N×N), where I is an identity matrix of size N × N. Equivalently, replace the decoder NN output matrix M̂ with the matrix (M̃ + λI) ∈ ℂ^(N×N), where I is an identity matrix of size N × N. [0122] In some cases, the error correction is based on computing the nearest matrix to M̂ having λ as one of its eigenvalues. However, as eigen decomposition, and hence eigenvalues, can be determined only for square matrices, in some cases, the error correction operation is applicable when the matrix is a square matrix. [0123] In some embodiments, the error correction operation can be extended to multiple eigenvalues. For example, the side information can include multiple eigenvalues, which are utilized by the error correction operation and/or the error detection operation. For example, the operations can be performed iteratively using the multiple eigenvalues or simultaneously using the multiple eigenvalues at the same time. [0124] In some embodiments, the side information 310 can be a null vector of a matrix being sent as CSI (e.g., where the matrix is a channel matrix H, the channel covariance matrix (either HH^H or H^H H), and/or a precoder matrix W). [0125] For example, the CSI information can be a matrix M ∈ ℂ^(N1×N2), N1 ≥ 1, N2 ≥ 1, and the side information is a null vector of the matrix M, denoted by v ∈ ℂ^(N2×1), where v ≠ 0 and M × v = 0, and boldface zero (0) denotes an all-zero vector (e.g., a vector having all its elements equal to zero). The process of CSI feedback is as follows: [0126] The UE constructs a matrix M ∈ ℂ^(N1×N2) after estimating the channel matrix H. The matrix M can be (1) equal to H, (2) equal to either HH^H or H^H H, or (3) equal to a precoding matrix W derived from H.
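The eigenvalue-based detection and correction of paragraphs [0111]–[0121] can be sketched with numpy as follows. This is a minimal illustration under two assumptions: the detection step checks whether λ appears among the eigenvalues of the decoder output, and the correction computes the nearest matrix to the decoder output having λ as an eigenvalue by zeroing the smallest singular value of the λ-shifted matrix:

```python
import numpy as np

def detect_and_correct(M_hat: np.ndarray, lam: complex, tol: float = 1e-9) -> np.ndarray:
    """Illustrative eigenvalue-based error detection/correction for a
    reconstructed square CSI matrix M_hat with side information lam."""
    # Error detection: is lam (the eigenvalue sent as side information)
    # an eigenvalue of the reconstructed matrix?
    if np.min(np.abs(np.linalg.eigvals(M_hat) - lam)) < tol:
        return M_hat                      # no error detected
    # Error correction: make (M_hat - lam*I) singular by zeroing its
    # smallest singular value, then shift back by lam*I; the result has
    # lam as an eigenvalue by construction.
    n = M_hat.shape[0]
    U, s, Vh = np.linalg.svd(M_hat - lam * np.eye(n))  # s is descending
    s[-1] = 0.0                           # drop the smallest singular value
    return (U * s) @ Vh + lam * np.eye(n)
```

For instance, with M_hat = [[2, 1], [0, 3]] and side information λ = 5 (not an eigenvalue of M_hat), the returned matrix has 5 among its eigenvalues.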
[0127] The UE computes v, a null vector of the matrix G, where v ∈ ℂ^(N×1) can be any null vector of the matrix G.

[0128] The UE then compresses the matrix G using its encoder NN (e.g., the encoder 205), generating a latent representation/feature vector (e.g., the CSI representation 220) as the output of its encoder NN.

[0129] The UE transmits the latent representation/feature vector computed by the encoder NN corresponding to G, and the null vector v (e.g., the side information 310), to the gNB.

[0130] The gNB supplies the received latent representation/feature vector as input to its decoder NN. The decoder outputs a matrix Ĝ ∈ ℂ^(M×N).

[0131] When there are no errors, Ĝ = G, which results in accurate CSI feedback and enables the gNB to have accurate channel knowledge. However, as described herein, due to various factors (e.g., quantization of the feature vector, channel distortions, imperfect AI/ML models, mismatches between the encoder NN and the decoder NN, and so on), there is a non-zero probability that Ĝ ≠ G. In such cases, the decoder NN can perform error detection operations and error correction operations based on v, the null vector of G, transmitted as side information, as follows.

[0132] For example, the reconstructed CSI (e.g., the decoder NN output Ĝ ∈ ℂ^(M×N)) and the side information v ∈ ℂ^(N×1) are received as input. The error detection operation performs the following steps:

[0133] Compute Ĝ × v;

[0134] If Ĝ × v is equal to an all-zero vector, e.g., if Ĝ × v = 0, determine that there is no error in the reconstructed CSI (e.g., the decoder NN output), and stop;

[0135] If Ĝ × v is NOT equal to an all-zero vector, e.g., if Ĝ × v ≠ 0, determine that the reconstructed CSI, or the decoder NN output, is erroneous (e.g., has one or more errors) and proceed to an error correction operation.
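The null-vector detection test above, together with the dual-vector correction described in the following paragraphs, can be sketched in NumPy. This is an illustration only: the function name, the numerical tolerance, and the choice of the Euclidean dual vector u = v/‖v‖² (which satisfies u^H v = 1) are assumptions, not limitations of the disclosure.

```python
import numpy as np

def detect_and_correct(G_hat: np.ndarray, v: np.ndarray, tol: float = 1e-9):
    """Null-vector side-information check. Detection: an error-free
    reconstruction satisfies G_hat @ v == 0. Correction: with a dual
    vector u such that u^H v = 1 (here the Euclidean dual v / ||v||^2),
    G_tilde = G_hat - (G_hat @ v) u^H satisfies G_tilde @ v = 0."""
    r = G_hat @ v                             # detection statistic
    if np.linalg.norm(r) <= tol:              # all-zero vector: no error detected
        return G_hat, False
    u = v / (np.linalg.norm(v) ** 2)          # Euclidean dual vector of v
    G_tilde = G_hat - np.outer(r, u.conj())   # subtract (G_hat @ v) u^H
    return G_tilde, True
```

By construction, the corrected matrix annihilates the reported null vector: G̃v = Ĝv − (Ĝv)(u^H v) = 0.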
[0136] The error correction operation performs the following steps:

[0137] Find a vector u which is a dual vector to the vector v.

[0138] Compute G̃ ∈ ℂ^(M×N), where G̃ = Ĝ + (−Ĝv)u^H.

[0139] Determine that G̃ is the corrected CSI matrix (e.g., the corrected CSI 320), and/or replace the decoder NN output matrix Ĝ with the matrix G̃.

[0140] In some cases, a dual vector to a given vector is found by choosing an operator norm associated with a vector norm. Finding a dual vector to a given vector v can comprise finding a vector u such that u^H v = ‖u‖_D ⋅ ‖v‖ = 1, where ‖u‖_D = max over x ≠ 0 of |u^H x| / ‖x‖ denotes the dual (operator) norm and ‖⋅‖ denotes a vector norm (such as the Euclidean norm) of v. For the Euclidean norm, one such dual vector is u = v/‖v‖².

[0141] In some embodiments, the error correction operation can be extended to multiple null vectors. For example, the side information can include multiple null vectors, which are utilized by the error correction operation and/or the error detection operation. The operations can be performed iteratively over the multiple null vectors or simultaneously using all of the multiple null vectors.

[0142] Further, in some embodiments, the error detection operations and/or error correction operations can utilize both eigenvalues and null vectors as the side information 310. Thus, the decoder 215 can perform one or more operations using eigenvalues as side information 310, one or more operations using null vectors as side information 310, or each operation with both eigenvalues and null vectors as the side information 310.

[0143] Thus, in some embodiments, the technology described herein can be implemented across any variation of network channel statistics. The technology can enhance, improve, or optimize CSI feedback systems that utilize AI/ML autoencoder models, non-AI/ML models, one-sided models, two-sided models, and/or any AI/ML models (e.g., trained or off the shelf), because the technology is applied to the input/output of the models, and not the models themselves.

[0144] FIG.4 illustrates an example of a block diagram 400 of a device 402 that supports AI/ML based CSI feedback in accordance with aspects of the present disclosure. The device 402 may be an example of a network entity 102 or a UE 104 as described herein. The device 402 may support wireless communication with one or more network entities 102, UEs 104, or any combination thereof. The device 402 may include components for bi-directional communications, including components for transmitting and receiving communications, such as a processor 404, a memory 406, a transceiver 408, and an I/O controller 410.
These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses). [0145] The processor 404, the memory 406, the transceiver 408, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. For example, the processor 404, the memory 406, the transceiver 408, or various combinations or components thereof may support a method for performing one or more of the operations described herein. [0146] In some implementations, the processor 404, the memory 406, the transceiver 408, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some implementations, the processor 404 and the memory 406 coupled with the processor 404 may be configured to perform one or more of the functions described herein (e.g., executing, by the processor 404, instructions stored in the memory 406). [0147] For example, the processor 404 may support wireless communication at the device 402 in accordance with examples as disclosed herein. 
The processor 404 may be configured as or otherwise support a means for receiving, from a UE, a first set of information comprising a representation of a matrix, and a set of side information, generating a second set of information based at least in part on the first set of information, and generating a third set of information using the set of side information, wherein the third set of information corresponds to a corrective version of the second set of information. [0148] As another example, the processor 404 may support wireless communication at the device 402 in accordance with examples as disclosed herein. The processor 404 may be configured as or otherwise support a means for transmitting, to a network entity, a first set of information that includes a representation of a matrix, and transmitting, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information. [0149] The processor 404 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some implementations, the processor 404 may be configured to operate a memory array using a memory controller. In some other implementations, a memory controller may be integrated into the processor 404. The processor 404 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 406) to cause the device 402 to perform various functions of the present disclosure. [0150] The memory 406 may include random access memory (RAM) and read-only memory (ROM). The memory 406 may store computer-readable, computer-executable 
code including instructions that, when executed by the processor 404, cause the device 402 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some implementations, the code may not be directly executable by the processor 404 but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some implementations, the memory 406 may include, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. [0151] The I/O controller 410 may manage input and output signals for the device 402. The I/O controller 410 may also manage peripherals not integrated into the device 402. In some implementations, the I/O controller 410 may represent a physical connection or port to an external peripheral. In some implementations, the I/O controller 410 may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In some implementations, the I/O controller 410 may be implemented as part of a processor, such as the processor 404. In some implementations, a user may interact with the device 402 via the I/O controller 410 or via hardware components controlled by the I/O controller 410. [0152] In some implementations, the device 402 may include a single antenna 412. However, in some other implementations, the device 402 may have more than one antenna 412 (i.e., multiple antennas), including multiple antenna panels or antenna arrays, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver 408 may communicate bi-directionally, via the one or more antennas 412, wired, or wireless links as described herein. 
For example, the transceiver 408 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver 408 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 412 for transmission, and to demodulate packets received from the one or more antennas 412. [0153] FIG.5 illustrates a flowchart of a method 500 that supports reconstructing AI/ML based CSI using side information in accordance with aspects of the present disclosure. The operations of the method 500 may be implemented by a device or its components as described herein. For example, the operations of the method 500 may be performed by the network entity 102 (e.g., the gNB) as described with reference to FIGs.1 through 3. In some implementations, the device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware. [0154] At 505, the method 500 may include receiving, from a UE, a first set of information comprising a representation of a matrix, and a set of side information. The operations of 505 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 505 may be performed by a device as described with reference to FIG.1. [0155] At 510, the method 500 may include generating a second set of information based at least in part on the first set of information. The operations of 510 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 510 may be performed by a device as described with reference to FIG.1. 
[0156] At 515, the method 500 may include generating a third set of information using the set of side information, wherein the third set of information corresponds to a corrective version of the second set of information. The operations of 515 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 515 may be performed by a device as described with reference to FIG.1. [0157] FIG.6 illustrates a flowchart of a method 600 that supports transmitting AI/ML based CSI feedback and side information in accordance with aspects of the present disclosure. The operations of the method 600 may be implemented by a device or its components as described herein. For example, the operations of the method 600 may be performed by the UE 104 as described with reference to FIGs.1 through 3. In some implementations, the device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware. [0158] At 605, the method 600 may include transmitting, to a network entity, a first set of information that includes a representation of a matrix. The operations of 605 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 605 may be performed by a device as described with reference to FIG.1. [0159] At 610, the method 600 may include transmitting, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information. The operations of 610 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 610 may be performed by a device as described with reference to FIG.1. 
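The two flowcharts (the UE-side method 600 and the network-side method 500) can be exercised end to end in a toy NumPy sketch. This is illustrative only: coarse quantization stands in for the encoder/decoder NN pair, the matrix shapes are assumptions, and the Euclidean dual vector is one possible choice.

```python
import numpy as np

def ue_report(G: np.ndarray, step: float = 0.5):
    """Method 600 (UE side), toy stand-in: coarse quantization plays the
    role of the encoder NN, and a null vector of G is the side information."""
    rep = np.round(G / step) * step          # lossy "compression" of G
    _, _, Vh = np.linalg.svd(G)
    v = Vh[-1].conj()                        # a null vector of rank-deficient G
    return rep, v

def gnb_reconstruct(rep: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Method 500 (gNB side): decode (identity here, in place of the decoder
    NN), detect errors via rep @ v, and correct with the Euclidean dual."""
    G_hat = rep
    r = G_hat @ v
    if np.linalg.norm(r) > 1e-9:             # error detected
        u = v / (np.linalg.norm(v) ** 2)     # dual vector, u^H v = 1
        G_hat = G_hat - np.outer(r, u.conj())
    return G_hat
```

Running the two halves on a rank-deficient CSI matrix shows that the quantization error is detected and the corrected output again annihilates the reported null vector.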
[0160] It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. [0161] The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). [0162] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. 
[0163] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. [0164] Any connection may be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. 
[0165] As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of” or “one or both of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Further, as used herein, including in the claims, a “set” may include one or more elements. [0166] The terms “transmitting,” “receiving,” or “communicating,” when referring to a network entity, may refer to any portion of a network entity (e.g., a base station, a CU, a DU, a RU) of a RAN communicating with another device (e.g., directly or via one or more other network entities). [0167] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. 
[0168] The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.

Claims (1)

What is claimed is:

1. A network entity, comprising:
   at least one memory; and
   at least one processor coupled with the at least one memory and configured to cause the network entity to:
      receive, from user equipment (UE), a first set of information comprising a representation of a matrix, and a set of side information;
      generate a second set of information based at least in part on the first set of information; and
      generate a third set of information using the set of side information, wherein the third set of information corresponds to a corrective version of the second set of information.

2. The network entity of claim 1, wherein, to generate the third set of information, the at least one processor is configured to cause the network entity to:
   perform an error detection operation on the second set of information using the set of side information; and
   perform an error correction operation on the second set of information using the set of side information.

3. The network entity of claim 1, wherein the first set of information comprises channel state information (CSI), and wherein the CSI comprises one or more of a channel matrix of the CSI, a channel covariance matrix of the CSI, or a precoding matrix based on the CSI.

4. The network entity of claim 1, wherein the set of side information comprises one or more eigenvalues of the matrix, or wherein the set of side information comprises one or more null vectors of the matrix, or the set of side information comprises one or more eigenvalues of the matrix and one or more null vectors of the matrix.

5. 
The network entity of claim 1, wherein the at least one processor is configured to cause the network entity to:
   receive, from the UE, a first set of parameters comprising information identifying generation of the first set of information by the UE, or information identifying encoding of the first set of information by the UE, or both.

6. The network entity of claim 5, wherein the information identifying the encoding of the first set of information by the UE includes information identifying a channel matrix of the first set of information, a channel covariance matrix of the first set of information, or a precoding matrix of the first set of information, or a combination thereof.

7. The network entity of claim 5, wherein the at least one processor is configured to cause the network entity to:
   determine the first set of parameters based at least in part on one or more predefined values or based on an indication received from the UE or another network device.

8. The network entity of claim 2, wherein the at least one processor is configured to cause the network entity to:
   generate the second set of information as a second matrix based at least in part on the first set of parameters and the first set of information.

9. The network entity of claim 8, wherein the at least one processor is configured to cause the network entity to:
   generate the second set of information using a machine learning model, wherein the machine learning model is based at least in part on a second set of parameters for the machine learning model.

10. The network entity of claim 9, wherein the at least one processor is configured to cause the network entity to:
   determine that the second set of parameters includes machine learning model information based at least in part on an indication from the UE or another network entity.

11. 
The network entity of claim 8, wherein the set of side information includes an eigenvalue, and wherein to perform the error detection operation, the at least one processor is configured to cause the network entity to:
   compare the eigenvalue associated with the set of side information to an eigenvalue associated with the second matrix; and
   determine an absence of one or more errors in the second matrix based at least in part on the eigenvalue associated with the set of side information being equal to the eigenvalue associated with the second matrix, or
   determine a presence of one or more errors in the second matrix based at least in part on the eigenvalue associated with the set of side information being unequal to the eigenvalue associated with the second matrix.

12. The network entity of claim 2, wherein to perform the error correction operation, the at least one processor is configured to cause the network entity to:
   perform the error correction operation on the second set of information based at least in part on the determination that the second set of information includes one or more errors.

13. The network entity of claim 12, wherein the side information includes an eigenvalue, and wherein to perform the error correction operation, the at least one processor is configured to cause the network entity to:
   decompose the second matrix into a product of a first matrix U, a second matrix S, and a third matrix V based at least in part on a singular value decomposition operation;
   construct a matrix U1 by keeping the first N columns of the matrix U and discarding any remaining columns, where N is equal to one less than a rank of the second matrix of the second set of information;
   construct a matrix S1 by keeping the first N columns and first N rows of the matrix S and discarding any remaining columns and rows, where N is equal to one less than a rank of the second matrix of the second set of information;
   construct a matrix V1 by keeping the first N rows of the matrix V and discarding any remaining rows, where N is equal to one less than a rank of the second matrix of the second set of information;
   compute a matrix as a product of the three matrices U1, S1, and V1; and
   generate a matrix by adding the computed matrix to a matrix obtained by multiplying an eigenvalue derived from the set of side information with an identity matrix, where a size of the identity matrix is equal to a size of the second matrix, wherein the generated matrix is the third set of information.

14. The network entity of claim 12, wherein the side information includes a column vector, and wherein to perform the error detection operation, the at least one processor is configured to cause the network entity to:
   compute a vector by post-multiplying the second matrix with the column vector; and
   when each element of the computed vector is equal to zero, determine that the second matrix is error free, or
   when any element of the computed vector is not equal to zero, determine that the second matrix has one or more errors.

15. The network entity of claim 12, wherein, in response to determining that the second matrix has one or more errors, the at least one processor is further configured to cause the network entity to:
   determine a dual vector to the column vector;
   generate a vector by post-multiplying a negative of the second matrix with the column vector;
   compute a matrix by post-multiplying the generated vector with a conjugate transpose of the dual vector; and
   generate a matrix by adding the computed matrix to the second matrix, wherein the generated matrix is the third set of information.

16. A method performed by a network entity, the method comprising:
   receiving, from user equipment (UE), a first set of information comprising a representation of a matrix, and a set of side information;
   generating a second set of information based at least in part on the first set of information; and
   generating a third set of information using the set of side information, wherein the third set of information corresponds to a corrective version of the second set of information.

17. A user equipment (UE), comprising:
   at least one memory; and
   at least one processor coupled with the at least one memory and configured to cause the UE to:
      transmit, to a network entity, a first set of information that includes a representation of a matrix; and
      transmit, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information.

18. The UE of claim 17, wherein the first set of information includes channel state information (CSI).

19. The UE of claim 17, wherein the set of side information includes one or more eigenvalues derived from the CSI or one or more column vectors derived from the CSI.

20. 
A processor for wireless communication, comprising:
   at least one controller coupled with at least one memory and configured to cause the processor to:
      transmit, to a network entity, a first set of information that includes a representation of a matrix; and
      transmit, to the network entity, a set of side information that is derived from the representation of the matrix included in the first set of information.
PCT/IB2024/053118 2023-03-31 2024-03-29 Improving accuracy of artificial intelligence/machine learning (ai/ml) based channel state information (csi) feedback WO2024150208A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US63/493,573 2023-03-31

Publications (1)

Publication Number Publication Date
WO2024150208A1 true WO2024150208A1 (en) 2024-07-18

