WO2023208474A1 - First wireless node, operator node and methods in a wireless communications network - Google Patents
First wireless node, operator node and methods in a wireless communications network
- Publication number: WO2023208474A1 (application PCT/EP2023/056957)
- Authority: WIPO (PCT)
Classifications
- H04B7/0413 — MIMO systems
- H04B7/0456 — Selection of precoding matrices or codebooks, e.g. using matrices antenna weighting
- H04B7/0619 — Transmission of weighted versions of the same signal using feedback from the receiving side
- H04B7/0658 — Feedback reduction
- G06N3/088 — Non-supervised learning, e.g. competitive learning
- G06N3/0455 — Auto-encoder networks; Encoder-decoder networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
Definitions
- Embodiments herein relate to a first wireless node, an operator node and methods therein. In some aspects, they relate to determining one or more precoders and/or training a training model for providing one or more out of one or more preferred precoders maximizing a Signal-to-Noise Ratio received by a second wireless node, one or more reconstructed estimated channels and one or more reconstructed estimated channel features.
- Wireless devices, also known as wireless communication devices, mobile stations, stations (STA) and/or User Equipments (UEs), communicate via a Wide Area Network or a Local Area Network such as a Wi-Fi network or a cellular network comprising a Radio Access Network (RAN) part and a Core Network (CN) part.
- The RAN covers a geographical area which is divided into service areas or cell areas, which may also be referred to as a beam or a beam group, with each service area or cell area being served by a radio network node such as a radio access node, e.g., a Wi-Fi access point or a radio base station (RBS), which in some networks may also be denoted, for example, a NodeB, eNodeB (eNB), or gNB as denoted in Fifth Generation (5G) telecommunications.
- A service area or cell area is a geographical area where radio coverage is provided by the radio network node.
- The radio network node communicates over an air interface operating on radio frequencies with the wireless device within range of the radio network node.
- 3GPP is the standardization body specifying the standards for the cellular system evolution, e.g., including 3G, 4G, 5G and future evolutions.
- Frequency bands for 5G NR are being separated into two different frequency ranges, Frequency Range 1 (FR1) and Frequency Range 2 (FR2).
- FR1 comprises sub-6 GHz frequency bands. Some of these bands are bands traditionally used by legacy standards but have been extended to cover potential new spectrum offerings from 410 MHz to 7125 MHz.
- FR2 comprises frequency bands from 24.25 GHz to 52.6 GHz. Bands in this millimeter wave range have shorter range but higher available bandwidth than bands in the FR1.
- Multi-antenna techniques may significantly increase the data rates and reliability of a wireless communication system.
- For a wireless connection between a single user, such as a UE, and a base station, the performance is in particular improved if both the transmitter and the receiver are equipped with multiple antennas, which results in a Multiple-Input Multiple-Output (MIMO) communication channel.
- Multi-User MIMO (MU-MIMO) enables the users to communicate with the base station simultaneously using the same time-frequency resources by spatially separating the users, which further increases the cell capacity.
- MU-MIMO may be beneficial even when each UE has only one antenna.
- Such systems and/or related techniques are commonly referred to as MIMO.
- The 5th generation mobile wireless communication system uses OFDM with configurable bandwidths and subcarrier spacing to efficiently support a diverse set of use cases and deployment scenarios.
- NR improves deployment flexibility, user throughputs, latency, and reliability.
- The throughput performance gains are enabled, in part, by enhanced support for Multi-User MIMO (MU-MIMO) transmission strategies, where two or more UEs receive data on the same OFDM time-frequency resources, i.e., spatially separated transmissions.
- The MU-MIMO transmission strategy is illustrated in Figure 1.
- A multi-antenna base station with $N_{TX}$ antenna ports transmits information to several UEs, with sequence $s^{(j)}$ intended for the j-th UE.
- Each UE demodulates its received signal and combines receiver antenna signals to obtain an estimate $\hat{s}^{(j)}$ of the corresponding transmitted sequence. This estimate $\hat{s}^{(j)}$ for the j-th UE can be expressed as
  $$\hat{s}^{(j)} = W^{(j)} H^{(j)} P^{(j)} s^{(j)} + W^{(j)} H^{(j)} \sum_{i \neq j} P^{(i)} s^{(i)} + W^{(j)} n^{(j)},$$
  where $W^{(j)}$ denotes the UE's receiver combining weights, $H^{(j)}$ its downlink channel, $P^{(j)}$ the precoder and $n^{(j)}$ the noise.
- The first term is the desired signal, the second term represents the spatial multiplexing interference (due to MU-MIMO transmission) seen by UE(j) and the third term represents other interference and noise sources.
- Channel State Information (CSI) reporting in NR
- In time division duplex (TDD), the radio access network can estimate the uplink channel from uplink Sounding Reference Signals (SRS) and, by reciprocity, obtain the downlink channel $H^{(j)}$.
- The radio access network configures a UE to report CSI in a certain way.
- The radio access network node transmits CSI reference signals (CSI-RS).
- The UE estimates the downlink channel (or important features thereof) from the transmitted CSI-RS.
- The UE reports CSI over an uplink control and/or data channel.
- The radio access network uses the UE’s feedback for downlink user scheduling and precoding.
- Important features of the channel may refer to a Gram matrix of the channel, one or more eigenvectors that correspond to the largest eigenvalues of an estimated channel covariance matrix, approximations of such eigenvectors, one or more DFT base vectors or orthogonal vectors from any other suitable and defined vector space that best correlate with an estimated channel matrix or an estimated channel covariance matrix, or the channel delay profile.
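As an illustration of one such channel feature, the following minimal sketch (illustrative only, not part of the embodiments; the function name is hypothetical) extracts the eigenvectors corresponding to the largest eigenvalues of an estimated channel covariance matrix:

```python
import numpy as np

def dominant_eigenvectors(H, num_vectors=1):
    """Return the eigenvectors of the estimated channel covariance
    matrix R = H^H H that correspond to the largest eigenvalues.

    H: complex channel matrix of shape (n_rx, n_tx)."""
    R = H.conj().T @ H                    # estimated channel covariance (n_tx x n_tx)
    eigvals, eigvecs = np.linalg.eigh(R)  # eigh: R is Hermitian, eigenvalues ascending
    order = np.argsort(eigvals)[::-1]     # reorder descending
    return eigvecs[:, order[:num_vectors]]
```

For a rank-1 channel $H = g p^H$, the covariance is $\|g\|^2 p p^H$, so the dominant eigenvector is proportional to $p$, i.e., the direction in which the channel is strongest.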
- A UE can be configured to report CSI Type I and CSI Type II, where the CSI Type II reporting protocol has been specifically designed to enable MU-MIMO operations from uplink UE reports.
- The CSI Type II can be configured in a normal reporting mode or in a port selection reporting mode.
- The CSI Type II normal reporting mode is based on the specification of sets of Discrete Fourier Transform (DFT) basis functions in a precoder codebook.
- The UE selects and reports the L DFT vectors from the codebook that best match its channel conditions (like the classical codebook precoding matrix indicator (PMI) from earlier 3GPP releases).
- The number of DFT vectors L is typically 2 or 4 and it is configurable by the NW.
- The UE reports how the L DFT vectors should be combined in terms of relative amplitude scaling and co-phasing.
- The term DFT beams is used interchangeably with DFT vectors. This slight abuse of terminology is appropriate whenever the base station has a uniform planar array with antenna elements separated by half of the carrier wavelength.
- The CSI type II normal reporting mode is illustrated in Figure 2; see also technical specification [1].
- The selection and reporting of the L DFT vectors $b_n$ and their relative amplitudes $a_n$ is done in a wideband manner; that is, the same beams are used for both polarizations over the entire transmission band.
- The selection and reporting of the DFT vector co-phasing coefficients are done in a subband manner; that is, DFT vector co-phasing parameters are determined for each of multiple subsets of contiguous subcarriers.
- The co-phasing parameters are quantized such that $e^{j\theta_n}$ is taken from either a QPSK or 8PSK signal constellation.
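As a toy illustration of this quantization step (a sketch with a hypothetical function name, not taken from the specification), a co-phasing angle can be mapped to the nearest QPSK or 8PSK constellation point as follows:

```python
import numpy as np

def quantize_cophase(theta, bits=2):
    """Map a co-phasing angle theta (radians) to the nearest constellation
    point exp(j*2*pi*k/2^bits): QPSK for bits=2, 8PSK for bits=3.
    Returns the index k and the quantized complex coefficient."""
    n = 2 ** bits
    k = int(np.round(theta * n / (2 * np.pi))) % n
    return k, np.exp(1j * 2 * np.pi * k / n)
```

The UE would then feed back only the index k per beam and subband, which is what keeps the co-phasing overhead low.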
- The NW transmits Channel State Information reference signals (CSI-RS) over the downlink using N ports.
- The UE estimates the downlink channel (or important features thereof) for each of the N ports from the transmitted CSI-RS.
- The UE reports CSI (e.g., channel quality index (CQI), precoding matrix indicator (PMI), rank indicator (RI)) to the NW over an uplink control and/or data channel.
- The NW uses the UE’s feedback for downlink user scheduling and MIMO precoding.
- Both Type I and Type II reporting are configurable, where the CSI Type II reporting protocol has been specifically designed to enable MU-MIMO operations from uplink UE reports.
- The precoder $W_v[k]$ reported by the UE to the NW can be expressed as a normalized linear combination of the L selected DFT beams over the two polarizations, with wideband amplitudes and per-subband co-phasing:
  $$W_v[k] = \frac{1}{\gamma[k]} \begin{bmatrix} \sum_{n=1}^{L} a_n e^{j\theta_n[k]} b_n \\ \sum_{n=1}^{L} a_{n+L} e^{j\theta_{n+L}[k]} b_n \end{bmatrix},$$
  where $\gamma[k]$ is a normalization factor.
- The Type II CSI report can be used by the NW to co-schedule multiple UEs on the same OFDM time-frequency resources. For example, the NW can select UEs that have reported different sets of DFT vectors with weak correlations.
- The CSI Type II report enables the UE to report a precoder hypothesis that trades CSI resolution against uplink transmission overhead.
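To make the beam-combination structure concrete, here is a hedged numerical sketch (illustrative only; the names, dimensions, and amplitude values are assumptions, and the exact codebook normalization of [1] is not reproduced) of combining L DFT beams with wideband amplitudes and per-subband co-phasing into a rank-1 precoder over two polarizations:

```python
import numpy as np

def dft_beam(n_ports, m):
    """The m-th DFT vector ("beam") over n_ports antenna ports, unit norm."""
    return np.exp(2j * np.pi * m * np.arange(n_ports) / n_ports) / np.sqrt(n_ports)

def type2_precoder(beams, amps, thetas):
    """Rank-1 Type-II-style precoder for one subband.

    beams:  (n_ports, L) selected DFT beams (shared by both polarizations)
    amps:   (2, L) wideband amplitude factors, one row per polarization
    thetas: (2, L) per-subband co-phasing angles, one row per polarization
    """
    coeffs = amps * np.exp(1j * thetas)          # combination weights per beam
    w = np.concatenate([beams @ coeffs[0],       # first polarization
                        beams @ coeffs[1]])      # second polarization
    return w / np.linalg.norm(w)                 # normalize total power

# Example: 8 ports per polarization, L = 2 beams, one subband.
beams = np.stack([dft_beam(8, 0), dft_beam(8, 1)], axis=1)
w = type2_precoder(beams,
                   amps=np.array([[1.0, 0.5], [1.0, 0.25]]),
                   thetas=np.array([[0.0, np.pi / 2], [0.0, np.pi]]))
```

A full report would repeat the co-phasing (thetas) per subband while reusing the same beams and amplitudes across the whole band.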
- NR 3GPP Release 15 supports Type II CSI feedback using port selection mode, in addition to the above normal reporting mode. In this case,
- The base station transmits a CSI-RS port in each one of the beam directions.
- The UE does not use a codebook to select a DFT vector (a beam); instead, the UE selects one or multiple antenna ports from the CSI-RS resource of multiple ports.
- Type II CSI feedback using port selection gives the base station some flexibility to use non-standardized precoders that are transparent to the UE.
- The precoder reported by the UE can be described analogously to the normal reporting mode, with port-selection vectors in place of the DFT beams.
- The vector e is a unit vector with only one non-zero feature, also referred to as element, which can be viewed as a selection vector that selects a port from the set of ports in the measured CSI-RS resource. The UE thus feeds back which ports it has selected, the amplitude factors and the co-phasing factors.
- The CSI type II reporting as described above falls into a category of CSI-reporting frameworks that can be called precoding-vector feedback.
- In precoding-vector feedback, the UE reports suggested precoding vectors to the NW in different ways and at different frequency granularity.
- Another category of CSI reporting that can be considered, especially with the development of new powerful compression algorithms (e.g., based on AEs as described below), is full-channel feedback.
- In full-channel feedback, the UE reports a compression or representation of the whole observed/estimated channel, and possibly also noise covariance estimates, in the feedback.
- Neural network based autoencoders (AEs) may be used for CSI compression, where a UE provides CSI feedback to a radio access network node by sending a CSI report that includes a compressed and encoded version of the estimated downlink channel, or of important features thereof.
- A summary of recent academic work on this topic can be found in [3].
- 3GPP decided to start a study item for Rel-18 that includes the use case of AI-based CSI reporting, in which AEs will play a central part of the study [2, 4].
- An AE is a neural network, i.e., a type of machine learning algorithm, that has been partitioned into one encoder and one decoder. This partitioning is illustrated in Figure 3 by considering a simple NN example with fully connected layers (a.k.a. dense NN).
- The encoder and decoder are separated by a bottleneck layer that holds a compressed representation, Y in Figure 3, of the input data X.
- The variable Y is sometimes called the latent representation of the input X. More specifically, the size of the bottleneck (latent representation) Y is significantly smaller than the size of the input data X.
- The AE encoder thus compresses the input features X to Y.
- The decoder part of the AE tries to invert the encoder’s compression and reconstruct X with minimal error, according to some predefined loss function.
- The terms latent representation, latent vector, and encoder output are used interchangeably.
- The terms latent space and output space are used interchangeably and refer to the space of all possible latent vectors, for a given architecture.
- The input space is the space of all possible inputs, for a given architecture.
- The word space can be understood as, e.g., a linear vector space, in the mathematical sense.
- AEs can have different architectures.
- AEs can be based on dense NNs (like Figure 3), multi-dimensional convolution NNs, recurrent NNs, transformer NNs, or any combination thereof.
- All AE architectures possess an encoder-bottleneck-decoder structure.
- a characteristic of AEs is that they can be used to compress and decompress data in an unsupervised manner.
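The encoder-bottleneck-decoder structure can be sketched in a few lines (a toy, untrained example with random weights and hypothetical dimensions; a real CSI AE would be far larger and would be trained before use):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class DenseAutoencoder:
    """Toy fully connected AE: input X -> latent Y (bottleneck) -> reconstruction."""

    def __init__(self, n_in, n_latent):
        # Random, untrained weights; training would tune these.
        self.W_enc = 0.1 * rng.standard_normal((n_latent, n_in))
        self.W_dec = 0.1 * rng.standard_normal((n_in, n_latent))

    def encode(self, x):
        return relu(self.W_enc @ x)   # compressed latent representation Y

    def decode(self, y):
        return self.W_dec @ y         # attempted reconstruction of X

    def forward(self, x):
        return self.decode(self.encode(x))

ae = DenseAutoencoder(n_in=32, n_latent=4)   # 8x compression at the bottleneck
```

In a CSI application, the encoder would run in the UE and the decoder in the radio access network node, with the latent Y (after quantization to bits) carried in the uplink report.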
- FIG. 3 An illustration of a fully connected autoencoder (AE).
- Figure 4 illustrates how an AE might be used for Al-enhanced CSI reporting in NR.
- The UE estimates the downlink channel (or important features thereof) from downlink reference signal(s), e.g., CSI-RS.
- The UE estimates the downlink channel as a 3D complex-valued tensor, with dimensions defined by the radio access network node CSI-RS antenna ports, the UE’s Rx antenna ports, and frequency (the granularity of which is configurable, e.g., subcarrier or subband).
- The UE uses a trained AE encoder to compress the estimated channel [features] down to a binary codeword.
- The binary codeword is reported to the radio access network over an uplink control and/or data channel.
- This codeword will likely form one part of a channel state information (CSI) report that might also include rank, channel quality, and interference information.
- The radio access network node uses a trained AE decoder to reconstruct the estimated channel [features].
- The decompressed output of the AE decoder is used by the radio access network in, for example, MIMO precoding, scheduling, and link adaptation.
- Figure 4 Using Autoencoder (AE) for CSI Compression (inference phase).
- The architecture of an AE (e.g., structure, number of layers, nodes per layer, activation function, etc.) will need to be tailored for each particular use case. For example, properties of the data (e.g., CSI-RS channel estimates), the channel size, uplink feedback rate, and hardware limitations of the encoder and decoder all need to be considered when designing the AE’s architecture.
- After the AE’s architecture is fixed, it needs to be trained on one or more datasets. To achieve good performance during live operation (the so-called inference phase), the training datasets need to be representative of the actual data the AE will encounter during live operation.
- The training process involves numerically tuning the AE’s trainable parameters (e.g., the weights and biases of the underlying NN) to minimize a loss function on the training datasets.
- The loss function could be, for example, the MSE loss calculated as the average of the squared error between the UE’s downlink channel estimate $H$ and the NN’s reconstruction $\hat{H}$, i.e., $\|H - \hat{H}\|^2$.
- The purpose of the loss function is to meaningfully quantify the reconstruction error for the particular use case at hand.
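A hedged sketch of such a loss (here the normalized variant, NMSE, averaged over a batch; the function name is an assumption):

```python
import numpy as np

def nmse_loss(H, H_hat):
    """Normalized MSE ||H - H_hat||_F^2 / ||H||_F^2, averaged over a batch.

    H, H_hat: arrays of shape (batch, n_rx, n_tx), possibly complex."""
    err = np.linalg.norm(H - H_hat, axis=(-2, -1)) ** 2   # Frobenius norm per matrix
    ref = np.linalg.norm(H, axis=(-2, -1)) ** 2
    return float(np.mean(err / ref))
```

Normalizing by the channel energy makes the loss comparable across UEs with very different path losses, which is one reason NMSE is often preferred over plain MSE for CSI reconstruction.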
- The training process is typically based on some variant of the mini-batch gradient descent algorithm, which, at its core, has three components: a feedforward step, a back propagation step, and a parameter optimization step.
- Feedforward: A batch of training data, such as a mini-batch (e.g., several downlink channel estimates), is pushed through the AE, from the input to the output.
- The loss function is used to compute the reconstruction loss for all training samples in the batch.
- The reconstruction loss may refer to an average reconstruction loss for all training samples in the batch.
- Back propagation (BP): The gradients (partial derivatives of the loss function, L, with respect to each trainable parameter in the AE) are computed.
- The back propagation algorithm sequentially works backwards from the AE output, layer-by-layer, back through the AE to the input.
- The back propagation algorithm is built around the chain rule for differentiation: when computing the gradients for layer n in the AE, it uses the gradients for layer n+1.
- Parameter optimization: The gradients computed in the back propagation step are used to update the AE’s trainable parameters using a gradient descent method with a learning rate hyperparameter that scales the gradients.
- The core idea is to make small adjustments to each parameter so that the average loss over the training batch decreases.
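The three steps can be sketched end-to-end for a toy linear AE with hand-derived gradients (an illustrative sketch under simplifying assumptions — linear layers, synthetic low-rank data, hypothetical sizes and learning rate — not the training procedure of any particular embodiment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data lying near a 4-dimensional subspace of R^16.
n_in, n_latent, n_samples, batch, lr = 16, 4, 256, 32, 1e-3
basis = rng.standard_normal((n_in, n_latent))
X = (basis @ rng.standard_normal((n_latent, n_samples))).T   # (n_samples, n_in)

E = 0.1 * rng.standard_normal((n_latent, n_in))   # encoder weights
D = 0.1 * rng.standard_normal((n_in, n_latent))   # decoder weights

losses = []
for step in range(300):
    # Feedforward: push a mini-batch through encoder and decoder.
    Xb = X[rng.choice(n_samples, batch, replace=False)]
    Y = Xb @ E.T                   # latent codes
    R = Y @ D.T - Xb               # reconstruction residuals
    losses.append(np.mean(np.sum(R ** 2, axis=1)))   # mean squared error
    # Back propagation: gradients of the loss w.r.t. D and E (chain rule).
    gD = 2.0 * R.T @ Y / batch
    gE = 2.0 * (R @ D).T @ Xb / batch
    # Parameter optimization: plain gradient descent scaled by lr.
    D -= lr * gD
    E -= lr * gE
```

The recorded loss trends downwards as the AE learns to reconstruct the data subspace; a real AE would use an optimizer such as Adam and automatic differentiation rather than hand-coded gradients.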
- An acceptable level of performance may refer to the AE achieving a pre-defined average reconstruction error over the training dataset (e.g., normalized MSE of the reconstruction error over the training dataset is less than, say, 0.1).
- Alternatively, it may refer to the AE achieving a pre-defined user data throughput gain with respect to a baseline CSI reporting method (e.g., a MIMO precoding method is selected, and user throughputs are separately estimated for the baseline and the AE CSI reporting methods).
- The AE decoder in the gNB should output a precoder that maximizes the received SNR.
- The SNR should be maximized with respect to the UE’s receiver weights $w$ and the precoder $p$ that is output by the AE, namely, $\max_{w,p} \mathrm{SNR}$.
- The SNR can be expressed as a function of the channel $H$ and the noise covariance matrix $R$ as follows:
  $$\mathrm{SNR}(H, R) = \frac{|w^H H p|^2}{w^H R w}.$$
- The maximization of $\mathrm{SNR}(H, R)$ is with respect to both $w$ and $p$.
- The maximizer with respect to $w$ can be formulated as a function of $p$. Namely, for a given $p$, the maximum is attained by an eigenvector corresponding to the largest eigenvalue of the generalized eigenvalue problem $H p p^H H^H w = \lambda R w$, and is then a function of $p$.
- The left-hand side is a rank-1 matrix, and by using that $p^H H^H w$ is a scalar and that the normalization is not unique for an eigenvector, we can conclude that the optimal UE receiver weights are $w = R^{-1} H p$, up to some normalization.
- The noise covariance $R$ is Hermitian.
- Substituting the optimal $w$ back, the maximizing $p$ is given by an eigenvector corresponding to the largest eigenvalue of the eigenvalue problem $H^H R^{-1} H p = \mu p$, where $\mu$ is a constant, namely the eigenvalue.
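The closed-form solution above can be checked numerically (an illustrative sketch; the function names are hypothetical):

```python
import numpy as np

def snr(H, R, w, p):
    """SNR(H, R) = |w^H H p|^2 / (w^H R w)."""
    return abs(np.vdot(w, H @ p)) ** 2 / np.real(np.vdot(w, R @ w))

def optimal_precoder_and_weights(H, R):
    """p: dominant eigenvector of the Hermitian matrix H^H R^{-1} H;
    w = R^{-1} H p: the optimal receiver weights, up to normalization."""
    Rinv = np.linalg.inv(R)
    M = H.conj().T @ Rinv @ H
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    p = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
    return p, Rinv @ H @ p
```

With these choices the achieved SNR equals the largest eigenvalue of $H^H R^{-1} H$, and no other unit-norm precoder can do better.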
- Common choices of loss function and measure of performance include the mean-square error (MSE), the normalized mean-square error (NMSE), and the cosine similarity or generalized cosine similarity. See, e.g., [15], [16], [17], [18], and [19].

SUMMARY
- An object of embodiments herein is to improve the performance of a wireless communications network by using loss functions when training AI/ML models for CSI feedback.
- the object is achieved by a method performed by a first wireless node for determining one or more preferred precoders in a wireless communications network.
- the one or more precoders are maximizing a Signal- to-Noise Ratio, SNR, received by a second wireless node.
- the first wireless node obtains a training model.
- the training model has been trained, by minimizing a loss function indicative of a reconstruction loss of one or more reconstructed second channels and/or one or more reconstructed second channel features, to provide an output comprising any one or more out of:
- the first wireless node obtains a first compressed channel feature codeword from the second wireless node.
- the first channel codeword is indicative of one or more first channels and/or one or more first channel features estimated by the second wireless node.
- the first wireless node determines, based on the obtained first compressed channel feature codeword and the obtained training model, the one or more preferred precoders maximizing the SNR received by the second wireless node.
- the object is achieved by a method performed by an operator node for training a training model to provide any one or more out of: One or more preferred precoders maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node, one or more reconstructed estimated channels, and one or more reconstructed estimated channel features.
- the operator node obtains one or more third compressed channel feature codewords.
- the third channel codeword is indicative of one or more third estimated channels and/or third estimated channel features.
- the operator node reconstructs the one or more third estimated channels and/or one or more third estimated channel features by using the training model.
- the operator node calculates based on the reconstructed one or more third channels and/or reconstructed one or more third estimated channel features and/or an output of the training model, a reconstruction loss using the loss function.
- the operator node trains the training model to provide any one or more out of: The one or more preferred precoders maximizing the SNR received by the second wireless node, the one or more reconstructed estimated channels, and the one or more reconstructed estimated channel features, based on the calculated reconstruction loss, to minimize the loss function.
- the object is achieved by a first wireless node configured to determine one or more preferred precoders in a wireless communications network.
- the one or more precoders are maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node.
- the first wireless node is further configured to:
- Obtain a training model wherein the training model is adapted to have been trained, by minimizing a loss function indicative of a reconstruction loss of one or more reconstructed second channels and/or one or more reconstructed second channel features, to provide an output comprising any one or more out of:
- the first wireless node is further configured to obtain, from the second wireless node, a first compressed channel feature codeword indicative of one or more first channels and/or one or more first channel features estimated by the second wireless node, and determine, based on the obtained first compressed channel feature codeword and the obtained training model, the one or more preferred precoders maximizing the SNR received by the second wireless node.
- the object is achieved by an operator node configured to train a training model to provide any one or more out of: One or more preferred precoders maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node, one or more reconstructed estimated channels and one or more reconstructed estimated channel features.
- the operator node is further configured to:
- Embodiments herein target to determine one or more precoders, e.g., preferred precoders, that maximize an SNR received by a second wireless node.
- the first wireless node determines the one or more preferred precoders that maximize the SNR received by the second wireless node. This is based on the first channel codeword and the training model.
- Embodiments bring the advantage of an efficient mechanism improving the performance in the wireless communications network. This is achieved by using a training model and determining one or more preferred precoders that maximize the SNR received by a second wireless node based at least on the training model. This, e.g., leads to an increased downlink throughput and results in improved performance in the wireless communications network.
- Figure 1 illustrates an example of a MIMO transmission strategy according to prior art.
- Figure 2 illustrates an example of CSI reporting according to prior art.
- Figure 3 illustrates an example of an NN according to prior art.
- Figure 4 illustrates an example of an AE for CSI reporting.
- Figure 5 is a schematic block diagram illustrating embodiments of a wireless communications network.
- Figure 6 is a flowchart depicting embodiments of a method in a first wireless node.
- Figure 7 is a flowchart depicting embodiments of a method in an operator node.
- Figures 8 a and b are schematic block diagrams illustrating embodiments of a first wireless node.
- Figures 9 a and b are schematic block diagrams illustrating embodiments of an operator node.
- Figure 10 schematically illustrates a telecommunication network connected via an intermediate network to a host computer.
- Figure 11 is a generalized block diagram of a host computer communicating via a base station with a user equipment over a partially wireless connection.
- Figures 12 to 15 are flowcharts illustrating methods implemented in a communication system including a host computer, a base station and a user equipment.
- Cosine similarity and generalized cosine similarity are not a direct proxy for DL throughput.
- the best and second-best precoding vectors are orthogonal.
- the received power of the best precoding vector at a UE is slightly larger than that of the second best.
- reconstructing/using the second-best precoding vector, or any vector that is close in the sense of cosine similarity to the second-best precoding vector, will give good performance in terms of DL throughput.
- however, when measuring the quality of the reconstruction with a cosine similarity to the optimal precoding vector, such a reconstruction will be evaluated as a poor reconstruction/choice.
- Using MSE or NMSE to evaluate the feedback against the full channel, regardless of whether full-channel feedback or precoding vector feedback is used to construct approximations of the full channel, has the drawback that it might push the AI/ML model to try to reconstruct parts of the channel that are not relevant for the precoding. For example, instead of using bits to represent directions resulting in bad SNR and/or SINR at the UE, these might be better used to represent the strong directions from which precoding vectors will be chosen.
- An object of embodiments herein is to improve the performance of a wireless communications network using loss functions when training AI/ML models for CSI feedback.
- Some embodiments herein may provide loss functions that may be used for training a training model, such as an AI/ML model, for CSI feedback.
- the loss functions which may also be referred to as custom loss functions, may be designed, such as calculated or generated, for serving as improved proxies for downlink throughput.
- the loss functions may be related to received SNR, received SINR and/or instantaneous per-layer, e.g., MIMO-layer, mutual information in the form of log(1+SNR) and/or log(1+SINR).
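As a non-normative illustration, such a mutual-information-related loss may be sketched as the negative of log2(1 + SNR) for one layer, so that minimizing the loss maximizes the per-layer mutual information. The SNR model |h^H p|^2 / sigma^2 and all names below are assumptions made for this sketch, not taken from the specification:

```python
import numpy as np

def per_layer_mi_loss(h, p, noise_var=1.0):
    """Negative instantaneous per-layer mutual information -log2(1 + SNR),
    so that minimizing the loss maximizes the mutual information.
    Hypothetical sketch: SNR modeled as |h^H p|^2 / sigma^2 for a channel
    vector h and a unit-norm precoding vector p."""
    snr = np.abs(np.vdot(h, p)) ** 2 / noise_var
    return -np.log2(1.0 + snr)
```

A matched precoder (p aligned with h) yields the smallest, i.e., most negative, loss.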
- Embodiments herein may further provide methods for training a model, such as a training model which may also be referred to as an Al and/or ML model, e.g., using one or more of the provided loss functions.
- Embodiments herein may further provide methods for determining one or more precoders, e.g., by using the trained training model.
- Embodiments herein provide advantages such as e.g., loss functions that are better aligned with existing domain knowledge and the end-goal for what the AI/ML model output will be used for, e.g., maximizing downlink throughput.
- Training AEs with one or more of the provided loss functions to compress and reconstruct the UE’s CSI-RS based downlink channel estimate may provide an improved performance e.g., in terms of downlink throughput.
- the loss functions may also take higher-layer transmissions into account in a balanced way, something that is not possible with existing solutions.
- FIG. 5 is a schematic overview depicting a wireless communications network 100 wherein embodiments herein may be implemented.
- the wireless communications network 100 comprises one or more RANs and one or more CNs.
- the wireless communications network 100 may use 5G NR but may further use a number of other different technologies, such as Wi-Fi, Long Term Evolution (LTE), LTE-Advanced, Wideband Code Division Multiple Access (WCDMA), Global System for Mobile communications/enhanced Data rate for GSM Evolution (GSM/EDGE), or Ultra Mobile Broadband (UMB), just to mention a few possible implementations.
- Network nodes such as a first wireless node 110, 120, operate in the wireless communications network 100.
- the first wireless node 110, 120 may, e.g., each provide a number of cells, and may use these cells for communicating with UEs, e.g. a second wireless node 110, 120.
- the first wireless node 110, 120 may respectively be a transmission and reception point e.g. a radio access network node such as a base station, e.g.
- a radio base station such as a NodeB, an evolved Node B (eNB, eNodeB, eNode B), an NR Node B (gNB), a base transceiver station, a radio remote unit, an Access Point Base Station, a base station router, a transmission arrangement of a radio base station, a stand-alone access point, a Wireless Local Area Network (WLAN) access point, an Access Point Station (AP STA), an access controller, a UE acting as an access point or a peer in a Device to Device (D2D) communication, or any other suitable networking unit.
- the first wireless node 110, 120 may respectively e.g. further be able to communicate with each other via one or more CN nodes in the wireless communications network 100.
- the second wireless node 110, 120 may e.g. be an NR device, a mobile station, a wireless terminal, an NB-IoT device, an eMTC device, an NR RedCap device, a CAT-M device, a Wi-Fi device, an LTE device, or a non-access point (non-AP) STA, i.e., a STA that communicates via a base station such as e.g. the network node 110.
- the term UE is non-limiting and means any UE, terminal, wireless communication terminal, user equipment, Device-to-Device (D2D) terminal, or node, e.g. smart phone, laptop, mobile phone, sensor, relay, mobile tablet, or even a small base station communicating within a cell.
- the communications network further comprises an operator node 130.
- the operator node may e.g., provide a training model to the first wireless node 110, 120 and/or train the training model.
- Methods herein may be performed by the first wireless node 110 and the operator node 130.
- a Distributed Node (DN) and functionality e.g. comprised in a cloud 140 as shown in Figure 5, may be used for performing or partly performing the methods of embodiments herein.
- Embodiments herein may provide a set of loss functions that align with domain knowledge, e.g., in terms of received SNR and/or SINR at the UE and resulting instantaneous per-layer mutual information.
- Figure 6 shows an example method performed by the first wireless node 110.
- the method is e.g., for determining one or more preferred precoders in the wireless communications network 100.
- the one or more preferred precoders may e.g., maximize an SNR received by the second wireless node 120.
- the first wireless node 110 may e.g. be a network node 110 or a UE 110.
- the second wireless node 120 may e.g. be a network node 120 or a UE 120.
- In some embodiments, the first wireless node 110 is a network node 110 and the second wireless node 120 is a UE 120. In other embodiments, the second wireless node 120 is a network node 120.
- the method may comprise any one or more out of the following actions.
- the actions may be executed in any suitable order.
- the first wireless node 110 obtains a training model.
- the training model has been trained to provide, e.g., an output comprising, any one or more out of:
- the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node 120, one or more reconstructed estimated channels and one or more reconstructed estimated channel features.
- the training model has been trained by minimizing a loss function indicative of a reconstruction loss of one or more reconstructed estimated second channels and/or reconstructed estimated second channel features.
- the loss function may e.g., be defined according to any of the Some First to Twelfth Embodiments mentioned below.
- Obtaining the training model may e.g., comprise obtaining, such as receiving, the training model from another node in the wireless communications network 100.
- the other node may e.g., be the operator node 130.
- the training model may e.g., be an AE as described above.
- the training model may be pre-trained by e.g., the operator node 130 or the cloud 140, alternatively, the first wireless node 110 trains the training model.
- obtaining the training model comprises that the first wireless node 110 obtains one or more second compressed channel feature codewords.
- the one or more second compressed channel feature codewords are indicative of one or more estimated second channels and/or one or more estimated second channel features.
- the first wireless node 110 reconstructs the one or more estimated second channels and/or one or more estimated second channel features, e.g., by using the training model, and calculates a reconstruction loss using the loss function. The calculation may e.g., be based on the reconstructed one or more estimated second channels and/or reconstructed one or more estimated second channel features and/or an output of the training model.
- the wireless node 110 trains the training model to provide any one or more out of: the one or more preferred precoders, e.g., maximizing the SNR received by the second wireless node 120, one or more reconstructed estimated channels and one or more reconstructed estimated channel features.
- the training is performed by using machine learning, based on the calculated reconstruction loss, to minimize the loss function. Minimizing the loss function may comprise that an acceptable performance is achieved. This may mean that the above explained example is performed iteratively, such as several times, until e.g., the calculated reconstruction loss is below a threshold.
- the above explained example is performed iteratively until a pre-defined data throughput gain is achieved, e.g., in respect to a baseline method.
- the baseline method may be any other method used for similar purposes.
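The iterative train-until-acceptable procedure described above may be sketched as follows. This is a toy linear autoencoder trained by plain gradient descent, with MSE standing in for the reconstruction loss and a 1-D codeword standing in for the compressed channel feature codeword; all shapes, names, and hyperparameters are illustrative assumptions, not from the specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set standing in for estimated channel features: 2-D samples
# lying close to a 1-D subspace, so a 1-unit bottleneck can reconstruct them.
X = np.outer(rng.standard_normal(64), np.array([3.0, 1.0]))
X += 0.01 * rng.standard_normal(X.shape)

W_enc = rng.standard_normal((1, 2)) * 0.1   # encoder: compress to a 1-D codeword
W_dec = rng.standard_normal((2, 1)) * 0.1   # decoder: reconstruct the feature

threshold, lr = 0.05, 0.02
for step in range(20000):
    Z = X @ W_enc.T                 # compressed codewords
    X_hat = Z @ W_dec.T             # reconstructed estimated features
    err = X_hat - X
    loss = np.mean(err ** 2)        # reconstruction loss (MSE stand-in)
    if loss < threshold:            # stop once acceptable performance is reached
        break
    # gradients of the MSE through the linear encoder/decoder
    G_dec = 2.0 * err.T @ Z / err.size
    G_enc = 2.0 * (err @ W_dec).T @ X / err.size
    W_dec -= lr * G_dec
    W_enc -= lr * G_enc
```

The loop realizes "performed iteratively until the calculated reconstruction loss is below a threshold"; a throughput-gain stopping criterion would replace the `loss < threshold` test.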
- the loss function may e.g., be defined according to any of the Some First to Twelfth Embodiments mentioned below.
- the one or more second compressed channel code words may e.g., be a training dataset, such as estimates of the one or more second channels and/or channel features.
- the estimates of the one or more second channels and/or channel features may be estimates obtained, or measured, in a live network, or it may be constructed, or generated, to simulate the conditions in a live network.
- the reconstruction loss may be any one or more out of: A sum of a plurality of reconstruction losses calculated based on a plurality of sub-bands, a sum of reconstruction losses calculated based on a plurality of transmission layers, a weighted sum of reconstruction losses calculated based on inner products between precoders and eigenvectors, a weighted sum being calculated by calculating a set of values based on similarities between multiple precoders and multiple eigenvectors, wherein the weights are based on norms of precoders and/or eigenvalues associated to the multiple eigenvectors.
- the weights may be any one or more out of: The square of eigenvalues of a covariance matrix related to the one or more estimated channels, and the norms of the plurality of precoders and/or eigenvalues associated to the multiple eigenvectors.
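As a non-normative illustration of the weighted-sum variant above, the following sketch computes, per sub-band, a sum over transmission layers of cosine-similarity-style mismatch terms between reconstructed precoders and eigenvectors, weighted by the squared eigenvalues of the channel covariance matrix. The exact mismatch term 1 - |<p, v>|^2 is an assumption for this sketch:

```python
import numpy as np

def weighted_eigen_loss(precoders, eigvecs, eigvals):
    """Weighted sum of per-layer losses: each term is 1 - |<v_l, p_l>|^2
    (a cosine-similarity-style mismatch between a reconstructed precoder
    and an eigenvector, both unit-norm), weighted by the squared eigenvalue
    of the covariance matrix related to the estimated channel."""
    weights = eigvals ** 2
    terms = [w * (1.0 - np.abs(np.vdot(v, p)) ** 2)
             for p, v, w in zip(precoders, eigvecs, weights)]
    return float(sum(terms))

def total_loss(sb_precoders, sb_eigvecs, sb_eigvals):
    """Sum of the weighted losses over all sub-bands (one precoder per
    transmission layer inside each sub-band)."""
    return sum(weighted_eigen_loss(P, V, lam)
               for P, V, lam in zip(sb_precoders, sb_eigvecs, sb_eigvals))
```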
- the loss function used for training the training model is e.g., a strictly increasing function of a second loss function.
- Alternatively, the function is a non-decreasing function of the second loss function.
- the loss function may e.g., be the logarithm of the sum of 1 and the second loss function.
- the second loss function may e.g., be defined according to any of the Some First to Twelfth Embodiments mentioned below.
- the loss function comprises a penalizing term, e.g., as defined in the Some First Embodiments below, in order to provide orthogonal preferred precoders.
- the penalizing term may be added by the wireless node 110 when training the training model.
- the first wireless node 110 obtains, a first compressed channel feature codeword from the second wireless node 120.
- the first compressed channel feature codeword is indicative of one or more first channels and/or channel features estimated by the second wireless node 120.
- the first compressed control channel feature codeword is indicative of the full channel, such as all estimated aspects of the channel, or only certain features of the channel such as e.g., one or more eigenvectors related to the Tx-Tx covariance matrix, wherein the eigenvectors can be the true eigenvectors or approximations thereof and computed in a subband manner, in a wideband manner, or a combination thereof; one or more suggested precoding vectors, wherein the precoding vectors are computed in a subband manner, in a wideband manner, or a combination thereof; or updates (deltas) of aforementioned full channel, eigenvectors, and/or precoding vectors in relation to previously transmitted information.
- the first compressed channel feature codeword may be related to a single layer transmission or a multi-layer transmission.
- This may e.g., depend on configuration in the second wireless node 120 that may, e.g., be done by the first wireless node 110.
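The Tx-Tx covariance eigenvector feature mentioned above may be sketched as follows, per sub-band; the function name and the use of the dominant eigenvector only are illustrative assumptions (a wideband variant would average the covariances over sub-bands first):

```python
import numpy as np

def dominant_eigvec_per_subband(H_subbands):
    """For each sub-band channel matrix H (n_rx x n_tx), form the Tx-Tx
    covariance H^H H and return its dominant eigenvector, i.e. one channel
    feature a compressed codeword may represent."""
    out = []
    for H in H_subbands:
        R = H.conj().T @ H              # Tx-Tx covariance matrix
        w, V = np.linalg.eigh(R)        # Hermitian eigendecomposition, ascending
        out.append(V[:, np.argmax(w)])  # eigenvector of the largest eigenvalue
    return out
```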
- the first wireless node 110 determines the one or more preferred precoders, e.g., maximizing the SNR received by the second wireless node 120.
- the one or more preferred precoders are determined based on the obtained first compressed channel feature codeword and the obtained training model.
- the wireless node 110 may determine the one or more preferred precoders e.g., based on feeding the training model with an input comprising the obtained first compressed channel feature codeword and receiving an output from the training model comprising any one or more out of the one or more preferred precoders and the one or more estimated first channels and/or channel features.
- the wireless node 110 may determine the one or more preferred precoders by selecting one or more precoders from the received output.
- the wireless node 110 may calculate, such as e.g., generate or estimate, the one or more preferred precoders from the output, e.g., when the output comprises the one or more estimated first channels and/or channel features.
- the first wireless node 110 determines the one or more precoders.
- the first wireless node 110 may further receive any one or more out of: The one or more precoders, e.g., maximizing the SNR received by the second wireless node 120, one or more reconstructed estimated first channels and one or more reconstructed estimated first channel features, as output from the training model.
- the first wireless node 110 determines the one or more preferred precoders based on the output from the training model.
- the more than one preferred precoders are orthogonal.
- the more than one preferred precoders may be e.g., approximatively orthogonal.
- Orthogonal when used herein may mean having inner product equal to zero.
- the first wireless node 110 performs computational postprocessing on the output of the training model to orthogonalize the more than one precoders.
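One possible realization of the computational post-processing above is Gram-Schmidt orthogonalization via a thin QR decomposition of the stacked model outputs; the specification does not mandate QR, so this is an illustrative sketch:

```python
import numpy as np

def orthogonalize(precoders):
    """Post-process the training-model output: stack the precoding vectors
    as columns and apply a thin QR decomposition, returning unit-norm,
    mutually orthogonal precoders (inner product equal to zero)."""
    P = np.column_stack(precoders)
    Q, _ = np.linalg.qr(P)
    return [Q[:, i] for i in range(P.shape[1])]
```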
- Figure 7 shows an example method performed by the operator node 130.
- the method is e.g., for training a training model to provide any one or more out of:
- One or more preferred precoders e.g., maximizing a Signal-to-Noise Ratio, SNR, received by the second wireless node 120, one or more reconstructed estimated channels and one or more reconstructed estimated channel features, in the wireless communications network 100.
- the one or more preferred precoders may e.g., maximize an SNR received by the second wireless node 120.
- the second wireless node 120 may e.g., be a network node 120 or a UE 120.
- the method may comprise any one or more out of the following actions.
- the actions may be executed in any suitable order.
- the operator node 130 obtains one or more third compressed channel feature codewords, indicative of one or more third estimated channels and/or one or more third estimated channel features.
- the third compressed control channel feature codeword is indicative of the full channel, such as all estimated aspects of the channel, or only certain features of the channel such as e.g., one or more eigenvectors related to the Tx-Tx covariance matrix, wherein the eigenvectors can be the true eigenvectors or approximations thereof and computed in a subband manner, in a wideband manner, or a combination thereof; one or more suggested precoding vectors, wherein the precoding vectors are computed in a subband manner, in a wideband manner, or a combination thereof; or updates (deltas) of aforementioned full channel, eigenvectors, and/or precoding vectors in relation to previously transmitted information.
- the third compressed channel feature codeword may be related to a single layer transmission or a multi-layer transmission.
- the one or more third compressed channel code words may e.g., be a training dataset, such as estimates of the one or more third channels and/or channel features.
- the estimates of the one or more third channels and/or channel features may be estimates obtained, or measured, in a live network, or it may be constructed, or generated, to simulate the conditions in a live network.
- the operator node 130 reconstructs the one or more third estimated channels and/or one or more third estimated channel features, e.g., by using the training model.
- the reconstructing may e.g., comprise decompressing the one or more third compressed channel feature codeword.
- the operator node 130 calculates, e.g., based on the reconstructed one or more third channels and/or one or more third estimated channel features and/or an output of the training model, a reconstruction loss using the loss function.
- the loss function may e.g., be defined according to any of the Some First to Twelfth Embodiments mentioned below.
- the operator node 130 trains the training model to provide any one or more out of:
- the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node 120, one or more reconstructed estimated channels and/or reconstructed estimated channel features.
- the training is performed by using machine learning, based on the calculated reconstruction loss, to minimize the loss function. Minimizing the loss function may comprise that an acceptable performance is achieved. This may mean that the above explained example is performed iteratively, such as several times, until e.g., the calculated reconstruction loss is below a threshold. Alternatively, the above explained example is performed iteratively until a pre-defined data throughput gain is achieved, e.g., in respect to a baseline method. The baseline method may be any other method used for similar purposes.
- the loss function may e.g., be defined according to any of the Some First to Twelfth Embodiments mentioned below.
- the reconstruction loss may be any one or more out of: A sum of a plurality of reconstruction losses calculated based on a plurality of sub-bands, a sum of reconstruction losses calculated based on a plurality of transmission layers, a weighted sum of reconstruction losses calculated based on inner products between precoders and eigenvectors, a weighted sum being calculated by calculating a set of values based on similarities between multiple precoders and multiple eigenvectors, wherein the weights are based on norms of precoders and/or eigenvalues associated to the multiple eigenvectors.
- the weights may be any one or more out of: The square of eigenvalues of a covariance matrix related to the one or more estimated channels, and the norms of the plurality of precoders and/or eigenvalues associated to the multiple eigenvectors.
- the loss function used for training the training model is e.g., a strictly increasing function of a second loss function.
- Alternatively, the function is a non-decreasing function of the second loss function.
- the loss function may e.g., be the logarithm of the sum of 1 and the second loss function.
- the second loss function may e.g., be defined according to any of the Some First to Twelfth Embodiments mentioned below.
- the loss function comprises a penalizing term, e.g., as defined in the Some First Embodiments below, in order to provide orthogonal preferred precoders.
- the penalizing term may be added by the operator node 130 when training the training model.
- the training model may also be referred to as the AE.
- the loss functions are described as per-sample loss functions. When training is done on a batch of training data, the loss function observed over the batch is a function of the per-sample losses, e.g., a sum or an average of all the per-sample losses computed over the samples of data.
- an output precoding vector p and an output L-tuple of precoding vectors may be computed in a wideband or subband manner.
- the presented loss functions will result in a different loss for each subband.
- the per-sample loss function can then be defined as a function of the per-subband losses, e.g., a sum or an average of all the per-subband losses computed over all subbands.
- the outputted precoding vectors for different layers are orthogonal to each other, on a per subband basis.
- the discussion on the below Some Second to Twelfth Embodiments assumes that this is, e.g., approximatively, the case.
- the orthogonality may be achieved in different ways.
- precoding-vector feedback there are different ways to, e.g., approximately, achieve this, e.g., any one out of:
- λ_{ℓ,k} is a parameter that determines the penalization (or cost), i.e., the increase in the value of the loss function, for the two vectors p_ℓ and p_k not being orthogonal.
- the penalization ranges between approximately zero, when the two vectors are approximately orthogonal, and up to λ_{ℓ,k}.
- the parameter λ_{ℓ,k} can depend on both ℓ and k, on one of them, or it can be constant for all ℓ and k.
- the parameter is real-valued, non-negative, and should, e.g., be at least the same size as the largest singular value. This makes it dependent on the training batch and the subband processed for training, both of which can be avoided by considering the maximum over all the training data, possibly with some extra margin.
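The penalizing term above may be sketched as follows. The quadratic form λ · |p_ℓ^H p_k|^2, summed over vector pairs, is an assumption chosen to match the stated range (about zero for approximately orthogonal unit-norm vectors, up to λ per pair when they coincide); the specification leaves the exact functional form open:

```python
import numpy as np

def orthogonality_penalty(precoders, lam):
    """Penalty lam * |p_l^H p_k|^2 summed over all pairs l < k of unit-norm
    precoding vectors: ~0 when the vectors are (approximately) orthogonal,
    reaching lam per pair when two vectors coincide. Added to the training
    loss to push the model toward orthogonal precoders."""
    total = 0.0
    L = len(precoders)
    for l in range(L):
        for k in range(l + 1, L):
            total += lam * np.abs(np.vdot(precoders[l], precoders[k])) ** 2
    return total
```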
- the precoding vector may be computed in a wideband or subband manner. In the latter case, the above formula will result in a different loss for each subband. These subband losses may, e.g, simply be summed together to obtain a single loss for training the training model.
- the numerator is equal to the squared Euclidean norm of Hp, i.e., ‖Hp‖².
- the training model may be trained to output an L-tuple of precoding vectors (p₁, p₂, …, p_L) that maximizes the receive power of the second wireless node 120, e.g., summed over the L precoding vectors, subject to an orthogonality constraint, e.g., that the precoding vectors form an orthogonal basis.
- the following loss function may achieve this aim:
- the loss function may be rewritten as
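The stated aim may be sketched as a negative total receive power, so that minimizing the loss maximizes the power summed over the L precoding vectors. This is an illustration of the aim only, not the elided formula; the orthogonality constraint would be handled separately, e.g., by a penalty or by post-processing:

```python
import numpy as np

def receive_power_loss(H, precoders):
    """Negative total receive power -sum_l ||H p_l||^2 over the L output
    precoding vectors; minimizing it maximizes the summed receive power."""
    return -sum(float(np.linalg.norm(H @ p) ** 2) for p in precoders)
```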
- the training model may be trained to output an L-tuple of precoding vectors (p₁, p₂, …, p_L) which maximizes the receive power of the second wireless node 120, e.g., summed over the L precoding vectors, subject to an orthogonality constraint, e.g., the vectors form an orthogonal basis.
- the loss function reduces to g(σ₁²) times the cosine similarity between the output precoding vector and an eigenvector corresponding to the largest eigenvalue, i.e.,
- This loss function may be rewritten as: which reconnects to the Some Second Embodiments, but with the smaller singular values truncated (not considered).
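The truncated, single-layer form above may be sketched as follows: only the dominant singular direction of the channel is kept, weighted by g(σ₁²). Taking g as the identity and the negative sign (so minimizing drives the precoder toward the dominant eigenvector) are assumptions for this sketch:

```python
import numpy as np

def truncated_cosine_loss(p, H, g=lambda s: s):
    """Loss -g(sigma_1^2) * |<v_1, p>|^2, where v_1 is the right singular
    vector of H with the largest singular value sigma_1 (eigenvector of
    H^H H with the largest eigenvalue) and g is a weighting function.
    Smaller singular values are truncated (not considered)."""
    _, s, Vh = np.linalg.svd(H)
    v1 = Vh[0].conj()                  # dominant right singular vector
    return -g(s[0] ** 2) * np.abs(np.vdot(v1, p)) ** 2
```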
- the training model may be trained to output an L-tuple of precoding vectors (p₁, p₂, …, p_L) which maximizes the receive power of the second wireless node 120, e.g., summed over the L precoding vectors, subject to an orthogonality constraint, e.g., the vectors form an orthogonal basis.
- the loss may then be given by:
- the loss function may be determined by first computing:
- the loss may then be given by:
- the training model may be trained to output an L-tuple of precoding vectors (p₁, p₂, …, p_L) which maximizes the receive power of the second wireless node 120, e.g., summed over the L precoding vectors, subject to an orthogonality constraint, e.g., the vectors form an orthogonal basis.
- the Some Seventh Embodiments consider a closed-loop, e.g., water filling, computation with a per-antenna-port transmit-power constraint.
- the loss function may be determined by first computing:
- the loss may then be given by:
- the loss function may be determined by first computing:
- the loss function may then be given by:
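For background, the classic water-filling power allocation referenced above may be sketched as follows: power p_l = max(μ - 1/g_l, 0) per layer, with the water level μ chosen so the allocations meet the total power budget. This simple sketch uses only a total-power constraint; the per-antenna-port transmit-power constraint of the Some Seventh Embodiments is omitted:

```python
import numpy as np

def water_filling(gains, total_power):
    """Water-filling over per-layer channel gains g_l (e.g. squared singular
    values over noise power). Returns the per-layer power allocation (in the
    input order) and the water level mu."""
    g = np.asarray(gains, dtype=float)
    order = np.argsort(g)[::-1]                      # strongest layers first
    alloc = np.zeros(len(g))
    for k in range(len(g), 0, -1):                   # try the k strongest layers
        idx = order[:k]
        mu = (total_power + np.sum(1.0 / g[idx])) / k   # candidate water level
        p = mu - 1.0 / g[idx]
        if p.min() > 0:                              # all k allocations positive
            alloc[idx] = p
            return alloc, mu
    raise ValueError("no feasible allocation")
```

Stronger layers receive more power, as expected from the water-filling principle.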
- any monotonically increasing function f of the loss function ℒ(p) will have the same optimum p as the loss function itself for a single sample, and may hence for an individual sample be used as a proxy or surrogate for the per-sample loss function.
- the function f may affect the results if f is not linear.
- the used loss function ℒ′(p₁, p₂, …, p_L) is a strictly increasing, or non-decreasing, function f of any of the loss functions in the Some Embodiments presented above, i.e.
- f may be chosen to be the logarithmic function, meaning the averaging takes place in the logarithmic domain, such as dB scale instead of linear SNR scale. More generally, the choice of f may be used to control how the neural network during training balances further reduced loss for samples with already small loss vs samples with fairly large loss.
- the function f takes one or more additional arguments, reflecting e.g. the ground truth optimal precoder v_ℓ or some other property of the precoders or the channel for the sample.
- an additional argument may be the overall magnitude of the channel/precoder coefficients/elements (the ground truth v_ℓ and/or the estimate p_ℓ).
- the function f may e.g., be designed to balance optimization towards high-quality or low-quality channels.
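The effect of wrapping per-sample losses in a strictly increasing f before batch averaging may be sketched as follows; taking f = log(1 + x) is an assumption used to illustrate averaging in the logarithmic domain:

```python
import numpy as np

def batch_loss(per_sample_losses, f=np.log1p):
    """Batch loss as the average of f applied per sample, with f a strictly
    increasing function (here log(1 + x)). The per-sample optimum is
    unchanged, but a compressive f de-emphasizes samples with already large
    loss, rebalancing how training weighs small-loss vs large-loss samples."""
    x = np.asarray(per_sample_losses, dtype=float)
    return float(np.mean(f(x)))
```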
- the loss function reflects the loss in link capacity relative to the link capacity of the sample with the optimal precoder.
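A loss reflecting the link-capacity gap relative to the optimal precoder for the sample may be sketched as follows; taking the matched filter h/‖h‖ as the single-layer optimal precoder is an assumption for this sketch:

```python
import numpy as np

def capacity_loss(h, p, noise_var=1.0):
    """Link-capacity gap log2(1 + SNR_opt) - log2(1 + SNR(p)) for a sample
    with channel vector h, where the optimal single-layer precoder is
    modeled as the matched filter h/||h||. Zero only when p achieves the
    capacity of the optimal precoder."""
    snr = lambda q: np.abs(np.vdot(h, q)) ** 2 / noise_var
    p_opt = h / np.linalg.norm(h)
    return np.log2(1.0 + snr(p_opt)) - np.log2(1.0 + snr(p))
```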
- Figures 8a and 8b show an example of an arrangement in the first wireless node 110.
- the first wireless node 110 may comprise an input and output interface configured to communicate with other networking entities in the wireless communications network 100, e.g. the second wireless node 120 and the operator node 130.
- the input and output interface may comprise a receiver, e.g. wired and/or wireless, (not shown) and a transmitter, e.g. wired and/or wireless, (not shown).
- the first wireless node 110 may comprise any one or more out of: An obtaining unit, and a determining unit to perform the method actions as described herein.
- the embodiments herein may be implemented through a processor or one or more processors, such as at least one processor of a processing circuitry in the first wireless node 110 depicted in Figure 8a, together with computer program code for performing the functions and actions of the embodiments herein.
- the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the first wireless node 110.
- One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
- the computer program code may furthermore be provided as pure program code on a server and downloaded to the first wireless node 110.
- the first wireless node 110 may further comprise a respective memory comprising one or more memory units.
- the memory comprises instructions executable by the processor in the first wireless node 110.
- the memory is arranged to be used to store instructions, data, configurations, and applications to perform the methods herein when being executed in the first wireless node 110.
- a computer program comprises instructions, which when executed by the at least one processor, cause the at least one processor of the first wireless node 110 to perform the actions above.
- a respective carrier comprises the respective computer program, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
- the functional modules in the first wireless node 110 may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in the first wireless node 110, that when executed by the respective one or more processors such as the at least one processor described above cause the respective at least one processor to perform actions according to any of the actions above.
- processors as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuitry (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
- Figures 9a and 9b show an example of an arrangement in the operator node 130.
- the operator node 130 may comprise an input and output interface configured to communicate with other networking entities in the wireless communications network 100, e.g. the first wireless node 110 and the second wireless node 120.
- the input and output interface may comprise a receiver, e.g. wired and/or wireless, (not shown) and a transmitter, e.g. wired and/or wireless, (not shown).
- the operator node 130 may comprise any one or more out of: An obtaining unit, a reconstructing unit, a calculating unit and a training unit to perform the method actions as described herein.
- the embodiments herein may be implemented through a processor or one or more processors, such as at least one processor of a processing circuitry in the operator node 130 depicted in Figure 9a, together with computer program code for performing the functions and actions of the embodiments herein.
- the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the operator node 130.
- One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
- the computer program code may furthermore be provided as pure program code on a server and downloaded to the operator node 130.
- the operator node 130 may further comprise a respective memory comprising one or more memory units.
- the memory comprises instructions executable by the processor in the operator node 130.
- the memory is arranged to be used to store instructions, data, configurations, and applications to perform the methods herein when being executed in the operator node 130.
- a computer program comprises instructions, which when executed by the at least one processor, cause the at least one processor of the operator node 130 to perform the actions above.
- a respective carrier comprises the respective computer program, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
- the functional modules in the operator node 130 may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in the operator node 130, that when executed by the respective one or more processors such as the at least one processor described above cause the respective at least one processor to perform actions according to any of the actions above.
- processors as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
- ASIC Application-Specific Integrated Circuit
- SoC system-on-a-chip
- Embodiments 1-36 are shortly described. See e.g. Figures 5, 6, 7, 8a, 8b, 9a and 9b.
- Embodiment 1 A method performed by a first wireless node (110) e.g., for determining one or more preferred precoders in a wireless communications network (100), wherein the one or more precoders e.g., are maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node (120), the method comprising any one or more out of: obtaining (601) a training model, wherein the training model has been trained, by minimizing a loss function indicative of a reconstruction loss of one or more reconstructed second channels and/or one or more reconstructed second channel features, to provide, e.g., an output comprising, any one or more out of:
- the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node (120)
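For a single-layer transmission over an estimated channel matrix H, the precoder maximizing the received SNR is the principal eigenvector of H^H H (equivalently, the dominant right singular vector of H). A minimal numpy sketch of this relationship; the channel values below are arbitrary illustrations, not taken from the embodiments:

```python
import numpy as np

rng = np.random.default_rng(0)
# Example estimated channel: 4 receive antennas, 8 transmit antennas.
H = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))

# The single-layer precoder maximizing receive SNR is the principal
# eigenvector of the transmit-side covariance H^H H.
eigvals, eigvecs = np.linalg.eigh(H.conj().T @ H)  # eigh sorts ascending
w = eigvecs[:, -1]
w = w / np.linalg.norm(w)  # unit transmit power

gain = np.linalg.norm(H @ w) ** 2  # equals the largest eigenvalue

# No other unit-norm precoder achieves a larger gain.
other = rng.standard_normal(8) + 1j * rng.standard_normal(8)
other = other / np.linalg.norm(other)
assert gain >= np.linalg.norm(H @ other) ** 2 - 1e-9
```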
- Embodiment 2 The method according to Embodiment 1, wherein determining (603) the one or more preferred precoders comprises: providing the first compressed channel feature codeword as input to the obtained training model, which training model has been trained by minimizing the loss function, receiving any one or more out of: the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node (120), one or more reconstructed estimated first channels and one or more reconstructed estimated first channel features, as output from the training model, and determining the one or more preferred precoders based on the output from the training model.
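The flow of Embodiment 2 — provide the obtained codeword as model input, receive the model output, and determine the precoder from it — can be sketched as follows. The linear decoder `W` and the unit-norm post-processing are illustrative assumptions standing in for the trained model; the embodiments do not specify this structure:

```python
import numpy as np

N_TX, CODE_DIM = 8, 4  # illustrative antenna and codeword dimensions

# Hypothetical decoder weights; in a real system these come from the
# training procedure (they are NOT specified by the embodiments).
rng = np.random.default_rng(1)
W = rng.standard_normal((N_TX, CODE_DIM))

def determine_precoder(codeword):
    """Feed the compressed codeword to the (stand-in) trained model and
    post-process its output into a unit-norm precoder."""
    raw = W @ codeword                # model output
    return raw / np.linalg.norm(raw)  # normalize to unit transmit power

codeword = rng.standard_normal(CODE_DIM)  # as obtained from the second node
precoder = determine_precoder(codeword)
```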
- Embodiment 3 The method according to any of Embodiments 1-2, wherein obtaining (601) the training model comprises: obtaining one or more second compressed channel feature codeword, indicative of one or more estimated second channels and/or one or more estimated second channel features, reconstructing the one or more estimated second channels and/or one or more estimated second channel features, e.g., by using the training model, calculating, e.g., based on the reconstructed one or more estimated second channels and/or reconstructed one or more estimated second channel features and/or an output of the training model, a reconstruction loss using the loss function, and training the model to provide any one or more out of: the one or more preferred precoders, e.g., maximizing the SNR received by the second wireless node (120), one or more reconstructed estimated channels and one or more reconstructed estimated channel features, by using machine learning, e.g., based on the calculated reconstruction loss, to minimize the loss function.
- Embodiment 4 The method according to Embodiment 3, wherein the reconstruction loss is any one or more out of:
- Embodiment 5 The method according to Embodiment 4, wherein when the reconstruction loss is the weighted sum of reconstruction losses calculated based on inner products between precoders and eigenvectors, the weights are any one or more out of:
- Embodiment 6 The method according to any of Embodiments 1-5, wherein, when more than one preferred precoders are determined, the more than one preferred precoders are, e.g., approximately, orthogonal.
- Embodiment 7 The method according to Embodiment 6, wherein determining the more than one orthogonal preferred precoders comprises any one or more out of:
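The individual options of Embodiment 7 are not listed in this excerpt; one common way to obtain, e.g., approximately orthogonal precoders from raw model outputs is a Gram-Schmidt style post-processing, here via a QR factorization. This is a hedged sketch of one plausible option, not the claimed method:

```python
import numpy as np

def orthogonalize(raw_precoders):
    """Post-process raw model outputs (columns) into orthonormal precoders
    spanning the same subspace, via a QR (Gram-Schmidt) factorization."""
    q, _ = np.linalg.qr(raw_precoders)
    return q

rng = np.random.default_rng(2)
raw = rng.standard_normal((8, 2))  # two raw, generally non-orthogonal outputs
precoders = orthogonalize(raw)

# Columns are now orthonormal: P^H P = I.
assert np.allclose(precoders.T.conj() @ precoders, np.eye(2))
```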
- Embodiment 8 The method according to any of Embodiments 1-7, wherein the loss function used for training the training model is e.g., a strictly increasing or nondecreasing function of a second loss function.
- Embodiment 9 The method according to Embodiment 8, wherein the loss function is the logarithm of the sum of 1 and the second loss function.
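Since log(1 + x) is strictly increasing for x >= 0, the loss of Embodiment 9 preserves the minimizer of the second loss function while compressing large values, which can stabilize training. A one-function sketch using numpy's log1p:

```python
import numpy as np

def wrapped_loss(second_loss):
    """log(1 + L): a strictly increasing transform of a loss L >= 0,
    so minimizing it is equivalent to minimizing L itself."""
    return np.log1p(second_loss)

L = np.array([0.0, 0.5, 2.0, 10.0])
wrapped = wrapped_loss(L)
assert np.all(np.diff(wrapped) > 0)  # ordering of losses is preserved
assert wrapped[0] == 0.0             # zero loss maps to zero loss
```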
- Embodiment 10 A computer program comprising instructions, which when executed by a processor, causes the processor to perform actions according to any of the Embodiments 1-9.
- Embodiment 11 A carrier comprising the computer program of Embodiment 10, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
- Embodiment 12 A method, e.g., performed by an operator node (130), for training a training model to provide any one or more out of: one or more preferred precoders e.g., maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node (120), one or more reconstructed estimated channels and one or more reconstructed estimated channel features, the method comprising any one or more out of: obtaining (701) one or more third compressed channel feature codeword, indicative of one or more third estimated channels and/or third estimated channel features, reconstructing (702) the one or more third estimated channels and/or one or more third estimated channel features, e.g., by using the training model, calculating (703), e.g., based on the reconstructed one or more third channels and/or reconstructed one or more third estimated channel features and/or an output of the training model, a reconstruction loss using the loss function, and training (704) the training model to provide any one or more out of:
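Actions 701-704 form a standard reconstruction-training loop: obtain a codeword, reconstruct, score with the loss function, and update the model to decrease it. The sketch below uses an illustrative fixed linear encoder `E` and a linear decoder `W` trained by stochastic gradient descent on a squared reconstruction loss; these concrete choices are assumptions for illustration, not the claimed model:

```python
import numpy as np

rng = np.random.default_rng(3)
CH_DIM, CODE_DIM = 16, 4  # illustrative channel and codeword sizes

# Fixed illustrative encoder producing compressed codewords.
E = rng.standard_normal((CODE_DIM, CH_DIM)) / np.sqrt(CH_DIM)
W = np.zeros((CH_DIM, CODE_DIM))  # the "training model": a linear decoder

lr = 0.02
for _ in range(1000):
    h = rng.standard_normal(CH_DIM)       # an estimated channel realization
    c = E @ h                             # 701: obtain a compressed codeword
    h_hat = W @ c                         # 702: reconstruct the channel
    loss = np.sum((h_hat - h) ** 2)       # 703: reconstruction loss
    W -= lr * 2 * np.outer(h_hat - h, c)  # 704: gradient step on the loss
```

After training, the decoder reconstructs fresh channels with lower average loss than the untrained (all-zero) decoder, even though the 16-to-4 compression makes perfect reconstruction impossible.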
- Embodiment 13 The method according to Embodiment 12, wherein the reconstruction loss is any one or more out of:
- weighted sum being calculated by calculating a set of values based on similarities between multiple precoders and multiple eigenvectors, wherein the weights are based on norms of precoders and/or eigenvalues associated to the multiple eigenvectors.
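One plausible realization of such a weighted similarity loss (an assumption — the exact expression is not given in this excerpt) weights, per layer, one minus the squared normalized inner product between each reconstructed precoder and its target eigenvector by the normalized associated eigenvalue:

```python
import numpy as np

def weighted_reconstruction_loss(precoders, eigvecs, eigvals):
    """Weighted sum over layers of (1 - squared normalized inner product)
    between each reconstructed precoder and its target eigenvector.
    Weights are the associated eigenvalues, normalized to sum to one."""
    w = np.asarray(eigvals, dtype=float)
    w = w / w.sum()
    total = 0.0
    for wi, p, v in zip(w, precoders.T, eigvecs.T):
        sim = np.abs(np.vdot(p, v)) ** 2 / (
            np.linalg.norm(p) ** 2 * np.linalg.norm(v) ** 2)
        total += wi * (1.0 - sim)  # 0 when p equals v up to scale and phase
    return total

rng = np.random.default_rng(4)
V = np.linalg.qr(rng.standard_normal((8, 2)))[0]  # target eigenvectors
lam = np.array([3.0, 1.0])                        # associated eigenvalues

# Perfect reconstruction (up to scale) gives zero loss.
assert abs(weighted_reconstruction_loss(2.0 * V, V, lam)) < 1e-12
```

The normalized-inner-product form makes the loss invariant to scaling and phase rotation of each precoder, which matches the intuition that a precoder is only meaningful up to such factors.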
- Embodiment 14 The method according to Embodiment 13, wherein when the reconstruction loss is the weighted sum of reconstruction losses calculated based on inner products between precoders and eigenvectors, the weights are any one or more out of:
- Embodiment 15 The method according to any of Embodiments 12-14, wherein, when more than one preferred precoders are determined, the more than one preferred precoders are, e.g., approximately, orthogonal.
- Embodiment 16 The method according to Embodiment 15, wherein determining the more than one orthogonal preferred precoders comprises any one or more out of:
- Embodiment 17 The method according to any of Embodiments 12-16, wherein the loss function used for training the training model is e.g., a strictly increasing or nondecreasing function of a second loss function.
- Embodiment 18 The method according to Embodiment 17, wherein the loss function is the logarithm of the sum of 1 and the second loss function.
- Embodiment 19 A computer program comprising instructions, which when executed by a processor, causes the processor to perform actions according to any of the Embodiments 12-18.
- Embodiment 20 A carrier comprising the computer program of Embodiment 19, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
- a first wireless node (110) e.g., configured to determine one or more preferred precoders in a wireless communications network (100), wherein the one or more precoders e.g., are maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node (120), the first wireless node (110) further being configured to any one or more out of: obtain a training model, wherein the training model is adapted to have been trained, by minimizing a loss function indicative of a reconstruction loss of one or more reconstructed second channels and/or one or more reconstructed second channel features, to provide, e.g., an output comprising, any one or more out of:
- the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node (120)
- obtain, from the second wireless node (120), a first compressed channel feature codeword indicative of one or more first channels and/or one or more first channel features estimated by the second wireless node (120), and determine, based on the obtained first compressed channel feature codeword and the obtained training model, the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node (120).
- Embodiment 22 The first wireless node (110) according to Embodiment 21, wherein the first wireless node (110) is configured to determine the one or more preferred precoders by further being configured to: provide the first compressed channel feature codeword as input to the obtained training model, which training model is adapted to have been trained by minimizing the loss function, receive any one or more out of: the one or more preferred precoders e.g., maximizing the SNR received by the second wireless node (120), one or more reconstructed estimated first channels and one or more reconstructed estimated first channel features, as output from the training model, and determine the one or more preferred precoders based on the output from the training model.
- Embodiment 23 The first wireless node (110) according to any of Embodiments 21-22, wherein the first wireless node (110) is further configured to obtain the training model by further being configured to: obtain one or more second compressed channel feature codeword, indicative of one or more estimated second channels and/or one or more estimated second channel features, reconstruct the one or more estimated second channels and/or one or more estimated second channel features, e.g., by using the training model, calculate, e.g., based on the reconstructed one or more estimated second channels and/or reconstructed one or more estimated second channel features and/or an output of the training model, a reconstruction loss using the loss function, and train the model to provide any one or more out of: the one or more preferred precoders, e.g., maximizing the SNR received by the second wireless node (120), one or more reconstructed estimated channels and/or one or more reconstructed estimated channel features, by using machine learning, e.g., based on the calculated reconstruction loss, to minimize the loss function.
- Embodiment 24 The first wireless node (110) according to Embodiment 23, wherein the reconstruction loss is adapted to be any one or more out of:
- weighted sum being calculated by calculating a set of values based on similarities between multiple precoders and multiple eigenvectors, wherein the weights are based on norms of precoders and/or eigenvalues associated to the multiple eigenvectors.
- Embodiment 25 The first wireless node (110) according to Embodiment 24, wherein when the reconstruction loss is the weighted sum of reconstruction losses calculated based on inner products between precoders and eigenvectors, the weights are adapted to be any one or more out of:
- Embodiment 26 The first wireless node (110) according to any of Embodiments 21-25, wherein, when more than one preferred precoders are determined, the more than one preferred precoders are adapted to be, e.g., approximately, orthogonal.
- Embodiment 27 The first wireless node (110) according to Embodiment 26, wherein the first wireless node (110) is configured to determine the more than one orthogonal preferred precoders by further being configured to any one or more out of:
- Embodiment 28 The first wireless node (110) according to any of Embodiments 21-27, wherein the loss function used for training the training model is adapted to be e.g., a strictly increasing or non-decreasing function of a second loss function.
- Embodiment 29 The first wireless node (110) according to Embodiment 28, wherein the loss function is adapted to be the logarithm of the sum of 1 and the second loss function.
- Embodiment 30 An operator node (130) e.g., configured to train a training model to provide any one or more out of: one or more preferred precoders, e.g., maximizing a Signal-to-Noise Ratio, SNR, received by a second wireless node (120), one or more reconstructed estimated channels and one or more reconstructed estimated channel features, the operator node (130) further being configured to any one or more out of: obtain one or more third compressed channel feature codeword, indicative of one or more third estimated channels and/or one or more third estimated channel features, reconstruct the one or more third estimated channels and/or one or more third estimated channel features, e.g., by using the training model, calculate, e.g., based on the reconstructed one or more third channels and/or reconstructed one or more third channel features and/or an output of the training model, a reconstruction loss using the loss function, and train the training model to provide any one or more out of: the one or more preferred precoders, e.g
- Embodiment 31 The operator node (130) according to Embodiment 30, wherein the reconstruction loss is adapted to be any one or more out of:
- weighted sum being calculated by calculating a set of values based on similarities between multiple precoders and multiple eigenvectors, wherein the weights are based on norms of precoders and/or eigenvalues associated to the multiple eigenvectors.
- Embodiment 32 The operator node (130) according to Embodiment 31, wherein when the reconstruction loss is adapted to be the weighted sum of reconstruction losses calculated based on inner products between precoders and eigenvectors, the weights are adapted to be any one or more out of:
- Embodiment 33 The operator node (130) according to any of Embodiments 30-32, wherein, when more than one preferred precoders are determined, the more than one preferred precoders are adapted to be, e.g., approximately, orthogonal.
- Embodiment 34 The operator node (130) according to Embodiment 33, wherein the operator node (130) is configured to determine the more than one orthogonal preferred precoders by further being configured to any one or more out of:
- Embodiment 35 The operator node (130) according to any of Embodiments 30-34, wherein the loss function used for training the training model is adapted to be e.g., a strictly increasing or non-decreasing function of a second loss function.
- Embodiment 36 The operator node (130) according to Embodiment 35, wherein the loss function is adapted to be the logarithm of the sum of 1 and the second loss function.
- a communication system includes a telecommunication network 3210 such as the wireless communications network 100, e.g. an IoT network, or a WLAN, such as a 3GPP-type cellular network, which comprises an access network 3211, such as a radio access network, and a core network 3214.
- the access network 3211 comprises a plurality of base stations 3212a, 3212b, 3212c, such as the network node 110, access nodes, AP STAs, NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area 3213a, 3213b, 3213c.
- Each base station 3212a, 3212b, 3212c is connectable to the core network 3214 over a wired or wireless connection 3215.
- a first user equipment (UE) e.g. the UE 120 such as a Non-AP STA 3291 located in coverage area 3213c is configured to wirelessly connect to, or be paged by, the corresponding base station 3212c.
- a second UE 3292 e.g. the UE 120 such as a Non-AP STA in coverage area 3213a is wirelessly connectable to the corresponding base station 3212a. While a plurality of UEs 3291, 3292 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station 3212.
- the telecommunication network 3210 is itself connected to a host computer 3230, which may be embodied in the hardware and/or software of a standalone server, a cloud- implemented server, a distributed server or as processing resources in a server farm.
- the host computer 3230 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider.
- the connections 3221, 3222 between the telecommunication network 3210 and the host computer 3230 may extend directly from the core network 3214 to the host computer 3230 or may go via an optional intermediate network 3220.
- the intermediate network 3220 may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network 3220, if any, may be a backbone network or the Internet; in particular, the intermediate network 3220 may comprise two or more sub-networks (not shown).
- the communication system of Figure 10 as a whole enables connectivity between one of the connected UEs 3291, 3292 and the host computer 3230.
- the connectivity may be described as an over-the-top (OTT) connection 3250.
- the host computer 3230 and the connected UEs 3291, 3292 are configured to communicate data and/or signaling via the OTT connection 3250, using the access network 3211, the core network 3214, any intermediate network 3220 and possible further infrastructure (not shown) as intermediaries.
- the OTT connection 3250 may be transparent in the sense that the participating communication devices through which the OTT connection 3250 passes are unaware of routing of uplink and downlink communications.
- a base station 3212 need not be informed about the past routing of an incoming downlink communication with data originating from a host computer 3230 to be forwarded e.g., handed over to a connected UE 3291. Similarly, the base station 3212 need not be aware of the future routing of an outgoing uplink communication originating from the UE 3291 towards the host computer 3230.
- a host computer 3310 comprises hardware 3315 including a communication interface 3316 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system 3300.
- the host computer 3310 further comprises processing circuitry 3318, which may have storage and/or processing capabilities.
- the processing circuitry 3318 may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
- the host computer 3310 further comprises software 3311, which is stored in or accessible by the host computer 3310 and executable by the processing circuitry 3318.
- the software 3311 includes a host application 3312.
- the host application 3312 may be operable to provide a service to a remote user, such as a UE 3330 connecting via an OTT connection 3350 terminating at the UE 3330 and the host computer 3310. In providing the service to the remote user, the host application 3312 may provide user data which is transmitted using the OTT connection 3350.
- the communication system 3300 further includes a base station 3320 provided in a telecommunication system and comprising hardware 3325 enabling it to communicate with the host computer 3310 and with the UE 3330.
- the hardware 3325 may include a communication interface 3326 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system 3300, as well as a radio interface 3327 for setting up and maintaining at least a wireless connection 3370 with a UE 3330 located in a coverage area (not shown) served by the base station 3320.
- the communication interface 3326 may be configured to facilitate a connection 3360 to the host computer 3310.
- connection 3360 may be direct or it may pass through a core network (not shown) in Figure 11 of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system.
- the hardware 3325 of the base station 3320 further includes processing circuitry 3328, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
- the base station 3320 further has software 3321 stored internally or accessible via an external connection.
- the communication system 3300 further includes the UE 3330 already referred to.
- Its hardware 3335 may include a radio interface 3337 configured to set up and maintain a wireless connection 3370 with a base station serving a coverage area in which the UE 3330 is currently located.
- the hardware 3335 of the UE 3330 further includes processing circuitry 3338, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
- the UE 3330 further comprises software 3331, which is stored in or accessible by the UE 3330 and executable by the processing circuitry 3338.
- the software 3331 includes a client application 3332.
- the client application 3332 may be operable to provide a service to a human or non-human user via the UE 3330, with the support of the host computer 3310.
- an executing host application 3312 may communicate with the executing client application 3332 via the OTT connection 3350 terminating at the UE 3330 and the host computer 3310.
- the client application 3332 may receive request data from the host application 3312 and provide user data in response to the request data.
- the OTT connection 3350 may transfer both the request data and the user data.
- the client application 3332 may interact with the user to generate the user data that it provides.
- the host computer 3310, base station 3320 and UE 3330 illustrated in Figure 11 may be identical to the host computer 3230, one of the base stations 3212a, 3212b, 3212c and one of the UEs 3291, 3292 of Figure 10, respectively.
- the inner workings of these entities may be as shown in Figure 11 and independently, the surrounding network topology may be that of Figure 10.
- the OTT connection 3350 has been drawn abstractly to illustrate the communication between the host computer 3310 and the user equipment 3330 via the base station 3320, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
- Network infrastructure may determine the routing, which it may be configured to hide from the UE 3330 or from the service provider operating the host computer 3310, or both. While the OTT connection 3350 is active, the network infrastructure may further take decisions by which it dynamically changes the routing e.g., on the basis of load balancing consideration or reconfiguration of the network.
- the wireless connection 3370 between the UE 3330 and the base station 3320 is in accordance with the teachings of the embodiments described throughout this disclosure.
- One or more of the various embodiments improve the performance of OTT services provided to the UE 3330 using the OTT connection 3350, in which the wireless connection 3370 forms the last segment. More precisely, the teachings of these embodiments may improve applicable RAN metrics such as data rate, latency, and power consumption, and thereby provide benefits on the OTT service such as reduced user waiting time, relaxed restrictions on file size, better responsiveness, and extended battery lifetime.
- a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
- the measurement procedure and/or the network functionality for reconfiguring the OTT connection 3350 may be implemented in the software 3311 of the host computer 3310 or in the software 3331 of the UE 3330, or both.
- sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection 3350 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software 3311, 3331 may compute or estimate the monitored quantities.
- the reconfiguring of the OTT connection 3350 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not affect the base station 3320, and it may be unknown or imperceptible to the base station 3320. Such procedures and functionalities may be known and practiced in the art.
- measurements may involve proprietary UE signaling facilitating the host computer’s 3310 measurements of throughput, propagation times, latency and the like.
- the measurements may be implemented in that the software 3311, 3331 causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 3350 while it monitors propagation times, errors etc.
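The dummy-message monitoring described above can be sketched as a round-trip-time probe. The UDP loopback echo below stands in for the remote endpoint (an illustrative assumption); the software 3311, 3331 would instead send such probes over the OTT connection 3350:

```python
import socket
import threading
import time

def echo_server(sock):
    """Stand-in for the peer endpoint: echo each dummy message back."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            return
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port on loopback
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5.0)
rtts = []
for _ in range(5):
    t0 = time.perf_counter()
    client.sendto(b"dummy", ("127.0.0.1", port))  # the 'dummy' probe message
    client.recvfrom(1024)                          # wait for the echo
    rtts.append(time.perf_counter() - t0)

client.sendto(b"stop", ("127.0.0.1", port))  # shut the stand-in peer down
avg_rtt = sum(rtts) / len(rtts)
```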
- Figure 12 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
- the communication system includes a host computer, a base station such as the network node 110, and a UE such as the UE 120, which may be those described with reference to Figure 10 and Figure 11. For simplicity of the present disclosure, only drawing references to Figure 12 will be included in this section.
- the host computer provides user data.
- the host computer provides the user data by executing a host application.
- the host computer initiates a transmission carrying the user data to the UE.
- the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
- the UE executes a client application associated with the host application executed by the host computer.
- Figure 13 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
- the communication system includes a host computer, a base station such as an AP STA, and a UE such as a Non-AP STA which may be those described with reference to Figure 10 and Figure 11. For simplicity of the present disclosure, only drawing references to Figure 13 will be included in this section.
- the host computer provides user data.
- the host computer provides the user data by executing a host application.
- the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure.
- the UE receives the user data carried in the transmission.
- Figure 14 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
- the communication system includes a host computer, a base station such as an AP STA, and a UE such as a Non-AP STA which may be those described with reference to Figure 10 and Figure 11.
- In a first action 3610 of the method, the UE receives input data provided by the host computer.
- the UE provides user data.
- the UE provides the user data by executing a client application.
- the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer.
- the executed client application may further consider user input received from the user.
- the UE initiates, in an optional third sub action 3630, transmission of the user data to the host computer.
- the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
- Figure 15 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
- the communication system includes a host computer, a base station such as an AP STA, and a UE such as a Non-AP STA which may be those described with reference to Figure 10 and Figure 11.
- In a first action 3710 of the method, in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE.
- the base station initiates transmission of the received user data to the host computer.
- the host computer receives the user data carried in the transmission initiated by the base station.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The invention relates to a method performed by a first wireless node for determining one or more preferred precoders in a wireless communications network. The one or more precoders maximize a Signal-to-Noise Ratio, SNR, received by a second wireless node. The first wireless node obtains (601) a training model. The training model has been trained to provide an output comprising any one or more out of: the one or more preferred precoders maximizing the SNR received by the second wireless node, one or more reconstructed estimated channels and one or more reconstructed estimated channel features. The first wireless node obtains (602) a first compressed channel feature codeword from the second wireless node. The first wireless node determines (603), based on the obtained first compressed channel feature codeword and the obtained training model, the one or more preferred precoders maximizing the SNR received by the second wireless node.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263363811P | 2022-04-29 | 2022-04-29 | |
US63/363,811 | 2022-04-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023208474A1 (fr) | 2023-11-02 |
Family
ID=85772819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2023/056957 WO2023208474A1 (fr) | 2022-04-29 | 2023-03-17 | Premier noeud sans fil, noeud d'opérateur et procédés dans un réseau de communication sans fil |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023208474A1 (fr) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021217519A1 (fr) * | 2020-04-29 | 2021-11-04 | Huawei Technologies Co., Ltd. | Neural network adjustment method and apparatus |
2023
- 2023-03-17: WO application PCT/EP2023/056957, publication WO2023208474A1 (fr), status unknown
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021217519A1 (fr) * | 2020-04-29 | 2021-11-04 | Huawei Technologies Co., Ltd. | Neural network adjustment method and apparatus |
US20230118031A1 (en) * | 2020-04-29 | 2023-04-20 | Huawei Technologies Co., Ltd. | Neural network adjustment method and apparatus |
Non-Patent Citations (16)
Title |
---|
"Physical layer procedures for data (Release 16)", 3GPP TS 38.214 |
A. GALANTAI, CS. J. HEGEDUS: "Jordan's principal angles in complex vector spaces", NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, 2006 |
C. TRABELSI, O. BILANIUK, Y. ZHANG, D. SERDYUK, S. SUBRAMANIAN, J. F. SANTOS, S. MEHRI, N. ROSTAMZADEH, Y. BENGIO, C. J. PAL: "Deep complex networks", ARXIV 1705.09792, 2018 |
D. KINGMA, J. BA: "A method for stochastic optimization", ARXIV, 1412.6980, December 2014 (2014-12-01) |
ERICSSON: "Discussions on AI-CSI", vol. RAN WG1, no. Online; 20220516 - 20220527, 29 April 2022 (2022-04-29), XP052152910, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_109-e/Docs/R1-2203282.zip R1-2203282 Discussions on AI-CSI.docx> [retrieved on 20220429] * |
J. DUCHI, E. HAZAN, Y. SINGER: "Adaptive subgradient methods for online learning and stochastic optimization", JOURNAL OF MACHINE LEARNING RESEARCH, 2010 |
K. SCHARNHORST: "Angles in Complex Vector Spaces", ARXIV: 9904077, 1999 |
KUMAR PRATIK, RANA ALI AMJAD, ARASH BEHBOODI, JOSEPH B. SORIAGA, MAX WELLING: "Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking", ARXIV: 2109.12561, 2021 |
LIU WENDONG ET AL: "EVCsiNet: Eigenvector-Based CSI Feedback Under 3GPP Link-Level Channels", IEEE WIRELESS COMMUNICATIONS LETTERS, IEEE, PISCATAWAY, NJ, USA, vol. 10, no. 12, 15 September 2021 (2021-09-15), pages 2688 - 2692, XP011892287, ISSN: 2162-2337, [retrieved on 20211207], DOI: 10.1109/LWC.2021.3112747 * |
MUHAN CHEN ET AL: "Deep Learning-based Implicit CSI Feedback in Massive MIMO", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 21 May 2021 (2021-05-21), XP081967393 * |
MUHAN CHEN, JIAJIA GUO, CHAO-KAI WEN, SHI JIN, GEOFFREY YE LI, ANG YANG: "Deep Learning-based Implicit CSI Feedback in Massive MIMO", ARXIV: 2105.10100, 2021 |
NELSON COSTA, SIMON HAYKIN: "Multiple-Input Multiple-Output Channel Models: Theory and Practice", JOHN WILEY & SONS, 2010 |
PRANAV MADADI, JEONGHO JEON, JOONYOUNG CHO, CALEB LO, JUHO LEE, JIANZHONG ZHANG: "PolarDenseNet: A Deep Learning Model for CSI Feedback in MIMO Systems", ARXIV: 2202.01246, 2022 |
S. IOFFE, C. SZEGEDY: "Batch normalization: Accelerating deep network training by reducing internal covariate shift", ARXIV 1502.03167, March 2015 (2015-03-01) |
ZHILIN LU, JINTAO WANG, JIAN SONG: "Multi-resolution CSI Feedback with deep learning in Massive MIMO System", ARXIV: 1910.14322, 2019 |
ZHILIN LU, XUDONG ZHANG, HONGYI HE, JINTAO WANG, JIAN SONG: "Binarized Aggregated Network with Quantization: Flexible Deep Learning Deployment for CSI Feedback in Massive MIMO System", ARXIV, 2105.00354, vol. 1, May 2021 (2021-05-01) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240007164A1 (en) | Methods for reducing overhead of nr type ii channel state information feedback using angle and delay reciprocity | |
US12009888B2 (en) | Network nodes and methods performed in a wireless communication network | |
US20230412430A1 (en) | Information reporting method and apparatus, first device, and second device | |
WO2023158360A1 (fr) | Evaluation of the performance of an ae encoder | |
CN115173909A (zh) | Codebook subset restriction for frequency-parameterized linear combination codebooks |
WO2023208781A1 (fr) | User equipment and method in a wireless communications network | |
WO2023170655A1 (fr) | Type ii precoder matrix indicator (pmi) enhancement for coherent joint transmission (cjt) | |
WO2023208474A1 (fr) | First wireless node, operator node and methods in a wireless communication network | |
CN117811627A (zh) | Communication method and communication apparatus |
Nezafati et al. | MSE minimized joint transmission in coordinated multipoint systems with sparse feedback and constrained backhaul requirements | |
EP4264849A1 (fr) | Procédé d'émission-réception entre un récepteur (rx) et un émetteur (tx) dans un canal de communication surchargé | |
TW202345556A (zh) | Nodes and methods for enhanced ml-based csi reporting |
WO2023211346A1 (fr) | Node and methods for training a neural network encoder for machine learning-based csi | |
US20240356607A1 (en) | Channel state information omission for type ii channel state information | |
WO2023224533A1 (fr) | Nodes and methods for machine learning (ml)-based csi reporting | |
WO2023113677A1 (fr) | Nodes and methods for proprietary machine learning-based csi reporting | |
US20230087742A1 (en) | Beamforming technique using approximate channel decomposition | |
US20240195472A1 (en) | Baseband unit, radio unit and methods in a wireless communications network | |
WO2023195891A1 (fr) | Methods for dynamic channel state information feedback reconfiguration | |
WO2024055250A1 (fr) | Base station, user equipment and methods in a wireless communication network | |
WO2023113668A1 (fr) | Communication nodes and methods for machine learning-based proprietary csi signaling | |
WO2024172732A1 (fr) | Channel state information (csi)-based target channel quality indicator (cqi) | |
WO2024172716A1 (fr) | Configuring a wireless device with a csi payload size | |
WO2023140772A1 (fr) | Hybrid model training solution for csi reporting | |
WO2024172744A1 (fr) | Method and apparatus for reporting channel state information using an autoencoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23713337; Country of ref document: EP; Kind code of ref document: A1 |