CN110166089B - Superposition coding CSI feedback method based on deep learning - Google Patents


Publication number
CN110166089B
CN110166089B (application CN201910442100.2A)
Authority
CN
China
Prior art keywords: csi, net1, det, sequence, net2
Prior art date
Legal status: Active
Application number
CN201910442100.2A
Other languages
Chinese (zh)
Other versions
CN110166089A (en)
Inventor
卿朝进
蔡斌
阳庆瑶
万东琴
张岷涛
郭奕
Current Assignee
Xihua University
Original Assignee
Xihua University
Priority date
Filing date
Publication date
Application filed by Xihua University
Priority to CN201910442100.2A
Publication of CN110166089A
Application granted
Publication of CN110166089B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 7/00: Radio transmission systems, i.e. using radiation field
    • H04B 7/02: Diversity systems; multi-antenna systems
    • H04B 7/04: using two or more spaced independent antennas
    • H04B 7/0413: MIMO systems
    • H04B 7/0417: Feedback systems
    • H04B 7/06: using two or more spaced independent antennas at the transmitting station
    • H04B 7/0613: using simultaneous transmission
    • H04B 7/0615: using simultaneous transmission of weighted versions of the same signal
    • H04B 7/0619: using feedback from the receiving side
    • H04B 7/0621: Feedback content
    • H04B 7/0626: Channel coefficients, e.g. channel state information [CSI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a deep-learning-based superposition coding CSI feedback method, which comprises the following steps. A. User side: (a1) the user side reads the channel state information and the uplink user sequence; (a2) the channel state information is spread to obtain a spreading sequence; (a3) the spreading sequence is weighted and superposed with the uplink user sequence to obtain a superposed sequence; (a4) the user side transmits the superposed sequence. B. Base station side: (b1) the base station receives a received sequence; (b2) the base station recovers the channel state information and the uplink user sequence using a trained multi-task neural network. Compared with non-superposition-coding CSI feedback, the method completely avoids occupying uplink bandwidth resources; compared with existing superposition-coding CSI feedback, it improves the CSI recovery accuracy and the detection performance of the uplink user sequence; meanwhile, the invention simplifies the system architecture and reduces the processing complexity of the system.

Description

Superposition coding CSI feedback method based on deep learning
Technical Field
The invention relates to the technical field of superposition feedback in large-scale (massive) MIMO (Multiple-Input Multiple-Output) systems, and in particular to a deep-learning-based superposition coding CSI (Channel State Information) feedback method.
Background
As a key technology for meeting the high spectral efficiency and energy efficiency requirements of future 5G (fifth-generation) networks, a large-scale MIMO system can provide wireless data services for more users, without increasing transmit power or system bandwidth, through the hundreds of antennas deployed at the base station. Meanwhile, many operations that bring performance gains in a large-scale MIMO system (such as multi-user scheduling, rate allocation, and transmitter precoding) depend on the acquisition of accurate downlink Channel State Information (CSI). In Frequency Division Duplex (FDD) large-scale MIMO systems, uplink/downlink channel reciprocity no longer holds, so the CSI can only be fed back from the user side to the base station.
The traditional codebook-based CSI feedback scheme is difficult to apply because the large number of antennas makes the dimension of the required codebook huge; Compressed Sensing (CS) feedback techniques that exploit signal sparsity can reduce the feedback overhead of the system to some extent, but still occupy a certain amount of spectrum resources during the feedback process.
In recent years, the superposition coding (SC) technique has been widely used in many areas of wireless communication owing to its efficient use of spectrum resources. Meanwhile, deep learning has attracted wide attention due to its advantages of high accuracy and fast computation. The invention combines the advantages of the SC and CS techniques, superimposes the weighted CSI on the information sequence for feedback to the base station, and recovers the CSI at the receiving end using deep learning. Thus, without increasing the spectrum overhead, the reconstruction accuracy and reconstruction rate of the CSI are improved, bringing more practicable schemes to channel feedback research, which is of great significance.
Disclosure of Invention
The invention aims to provide a deep-learning-based superposition coding CSI feedback method which, compared with non-superposition-coding CSI feedback, completely avoids occupying uplink bandwidth resources; compared with existing superposition-coding CSI feedback, it improves the CSI recovery accuracy and the detection performance of the uplink user sequence; meanwhile, the invention simplifies the system architecture and reduces the processing complexity of the system.
The specific technical scheme is as follows:
the deep-learning-based superposition coding CSI feedback method comprises the following steps:
a, a user side:
(a1) the user side reads the channel state information H with length N and the uplink user sequence D with length M;
(a2) the channel state information H is spread to obtain a spreading sequence H_spread of length M, namely H_spread = P·H;
the matrix P is an M×N Walsh spreading matrix satisfying PᵀP = M·I_N, where the superscript "T" denotes the transpose operation and I_N represents the N-order identity matrix;
(a3) the spreading sequence H_spread is weighted and superposed with the uplink user sequence D to obtain a superposed sequence;
(a4) the user side transmits the superposition sequence;
b: a base station end:
(b1) a base station receives and obtains a receiving sequence R;
(b2) the base station recovers the channel state information H and the uplink user sequence D by utilizing a multi-task neural network obtained by training based on a sub-network-by-sub-network training method;
wherein the weighted superposition in step (a3) can be expressed as:
S = √(ρ·E_K/N)·H_spread + √((1-ρ)·E_K)·D
where ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and S represents the superposed sequence;
wherein the "multi-task neural network" in step (b2) comprises four sub-networks: CSI-NET1, CSI-NET2, DET-NET1 and DET-NET2;
the sub-networks CSI-NET1, CSI-NET2, DET-NET1 and DET-NET2 are connected in cascade;
each of the sub-networks comprises only an input layer, a hidden layer and an output layer;
the inputs of the sub-networks are normalized;
the hidden-layer activation functions of the sub-networks all adopt the swish function;
the numbers of nodes of the input, hidden and output layers of the CSI-NET1 and CSI-NET2 sub-networks are 2N, 16N and 2N, respectively;
the numbers of nodes of the input, hidden and output layers of the DET-NET1 and DET-NET2 sub-networks are 2M, 16M and 2M, respectively;
between the sub-networks CSI-NET1 and DET-NET1, the superimposed interference of the channel state information is reduced as follows:
R̃_D1 = R - √(ρ·E_K/N)·P·Ĥ1
where Ĥ1 represents the output of CSI-NET1, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and R represents the received sequence; R̃_D1 represents the output after the superimposed channel state information interference is eliminated and serves as the input of DET-NET1;
between the sub-networks DET-NET1 and CSI-NET2, the superimposed interference of the uplink user sequence is reduced as follows:
R̃_H2 = R - √((1-ρ)·E_K)·D̂1
where D̂1 represents the output of DET-NET1, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, and R represents the received sequence; R̃_H2 represents the output after the superimposed uplink user sequence interference is eliminated and serves as the input of CSI-NET2;
between the sub-networks CSI-NET2 and DET-NET2, the superimposed interference of the channel state information is reduced as follows:
R̃_D2 = R - √(ρ·E_K/N)·P·Ĥ2
where Ĥ2 represents the output of CSI-NET2, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and R represents the received sequence; R̃_D2 represents the output after the superimposed channel state information interference is eliminated and serves as the input of DET-NET2;
the normalization formula is norm(x) = (x - μ)/σ, where x represents the vector to be normalized, μ represents the mean of x, and σ represents the standard deviation of x;
wherein the "sub-network-by-sub-network training method" described in step (b2) proceeds as follows:
b21, train the CSI-NET1 network parameters (W_h11, b_h11, W_h12, b_h12);
b22, keeping the CSI-NET1 network parameters (W_h11, b_h11, W_h12, b_h12) unchanged, train the DET-NET1 network parameters (W_d11, b_d11, W_d12, b_d12);
b23, keeping the CSI-NET1 and DET-NET1 network parameters (W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12) unchanged, train the CSI-NET2 network parameters (W_h21, b_h21, W_h22, b_h22);
b24, keeping the CSI-NET1, DET-NET1 and CSI-NET2 network parameters (W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12, W_h21, b_h21, W_h22, b_h22) unchanged, train the DET-NET2 network parameters (W_d21, b_d21, W_d22, b_d22);
b25, save the network parameters of each layer (W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12, W_h21, b_h21, W_h22, b_h22, W_d21, b_d21, W_d22, b_d22);
where W_hij and W_dij (i = 1, 2; j = 1, 2) represent weighting matrices, and b_hij and b_dij (i = 1, 2; j = 1, 2) represent bias vectors.
The invention has the beneficial effects that:
compared with non-superposition-coding CSI feedback, the method completely avoids occupying uplink bandwidth resources; compared with existing superposition-coding CSI feedback, it improves the CSI recovery accuracy and the detection performance of the uplink user sequence; meanwhile, the invention simplifies the system architecture and reduces the processing complexity of the system.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic structural diagram of the "multitask neural network" according to the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in fig. 1, the method for feedback of superposition coding CSI based on deep learning includes:
a, a user side:
(a1) the user side reads the channel state information H with length N and the uplink user sequence D with length M;
(a2) the channel state information H is spread to obtain a spreading sequence H_spread of length M, namely H_spread = P·H;
the matrix P is an M×N Walsh spreading matrix satisfying PᵀP = M·I_N, where the superscript "T" denotes the transpose operation and I_N represents the N-order identity matrix;
in this embodiment, step (a2) is exemplified as follows:
suppose N = 4, M = 8, H = (0.2+0.3j, 0.4+0.5j, 0.6+0.7j, 0.8+0.9j), and P is the 8×4 Walsh spreading matrix given in the original formula figure (not reproduced here);
the spreading sequence H_spread = P·H is then obtained (its values are given in the original formula figure, not reproduced here).
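The spreading step (a2) can be sketched in Python. Since the original Walsh matrix figure is not reproduced here, this sketch assumes P is built from the first N columns of an M×M Sylvester-Hadamard matrix (Walsh codes are a row reordering of Hadamard codes), which still satisfies PᵀP = M·I_N:

```python
import numpy as np

def walsh_matrix(M, N):
    """M x N spreading matrix from the first N columns of an M x M
    Sylvester-Hadamard matrix; columns are orthogonal, so P^T P = M * I_N."""
    assert M >= N and (M & (M - 1)) == 0, "M must be a power of two"
    H = np.array([[1]])
    while H.shape[0] < M:              # Sylvester construction: H -> [[H, H], [H, -H]]
        H = np.block([[H, H], [H, -H]])
    return H[:, :N].astype(float)

def spread(h, M):
    """Step (a2): spread the length-N CSI vector h into H_spread = P @ h (length M)."""
    return walsh_matrix(M, len(h)) @ h

H = np.array([0.2 + 0.3j, 0.4 + 0.5j, 0.6 + 0.7j, 0.8 + 0.9j])   # N = 4
H_spread = spread(H, 8)                                           # M = 8
P = walsh_matrix(8, 4)
print(H_spread.shape, np.allclose(P.T @ P, 8 * np.eye(4)))        # (8,) True
```

The orthogonality check confirms that any such P can be inverted at the base station by Ĥ = (1/M)·Pᵀ·H_spread.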
(a3) the spreading sequence H_spread is weighted and superposed with the uplink user sequence D to obtain a superposed sequence;
the weighted superposition in step (a3) can be expressed as:
S = √(ρ·E_K/N)·H_spread + √((1-ρ)·E_K)·D
where ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and S represents the superposed sequence;
in this embodiment, step (a3) is exemplified as follows:
suppose ρ = 0.2, E_K = 100, N = 4, the uplink user sequence D = (1-1j, -1+1j, 1+1j, -1-1j),
and the spreading sequence H_spread = (0.2+0.3j, 0.4+0.5j, 0.6+0.7j, 0.8+0.9j);
according to the weighted superposition formula
S = √(ρ·E_K/N)·H_spread + √((1-ρ)·E_K)·D
the superposed sequence can be calculated as:
S = (9.3915-8.2735j, -8.0498+10.0623j, 10.2859+10.5095j, -7.1554-6.9318j).
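The arithmetic of this example can be reproduced with a few lines of Python (a sketch of the weighted superposition formula above, not the patented implementation itself):

```python
import numpy as np

# Example values from step (a3)
rho, E_K, N = 0.2, 100.0, 4
D = np.array([1 - 1j, -1 + 1j, 1 + 1j, -1 - 1j])
H_spread = np.array([0.2 + 0.3j, 0.4 + 0.5j, 0.6 + 0.7j, 0.8 + 0.9j])

# S = sqrt(rho * E_K / N) * H_spread + sqrt((1 - rho) * E_K) * D
S = np.sqrt(rho * E_K / N) * H_spread + np.sqrt((1 - rho) * E_K) * D
print(np.round(S, 4))
# S ≈ (9.3915-8.2735j, -8.0498+10.0623j, 10.2859+10.5095j, -7.1554-6.9318j)
```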
(a4) the user side transmits the superposition sequence;
b: a base station end:
(b1) a base station receives and obtains a receiving sequence R;
(b2) the base station recovers the channel state information H and the uplink user sequence D by utilizing a multi-task neural network obtained by training based on a sub-network-by-sub-network training method;
as shown in fig. 2, in this embodiment, the "multi-task neural network" described in step (b2) is as follows:
the multi-task neural network comprises four sub-networks: CSI-NET1, CSI-NET2, DET-NET1 and DET-NET2;
the sub-networks CSI-NET1, CSI-NET2, DET-NET1 and DET-NET2 are connected in cascade;
each of the sub-networks comprises only an input layer, a hidden layer and an output layer;
the inputs of the sub-networks are normalized;
the hidden-layer activation functions of the sub-networks all adopt the swish function;
the numbers of nodes of the input, hidden and output layers of the CSI-NET1 and CSI-NET2 sub-networks are 2N, 16N and 2N, respectively;
the numbers of nodes of the input, hidden and output layers of the DET-NET1 and DET-NET2 sub-networks are 2M, 16M and 2M, respectively;
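A single sub-network of the sizes just described can be sketched as a minimal NumPy forward pass. The random initialization is illustrative only; in the invention the parameters (W, b) come from the training procedure of step (b2):

```python
import numpy as np

def swish(x):
    """Hidden-layer activation: swish(x) = x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def normalize(x):
    """Input normalization: norm(x) = (x - mu) / sigma."""
    return (x - x.mean()) / x.std()

class SubNet:
    """One sub-network with input/hidden/output = 2L / 16L / 2L nodes.
    L = N for CSI-NET1/2, L = M for DET-NET1/2; the factor of 2 comes from
    stacking real and imaginary parts of the complex sequences."""
    def __init__(self, L, rng):
        self.W1 = 0.1 * rng.standard_normal((16 * L, 2 * L))
        self.b1 = np.zeros(16 * L)
        self.W2 = 0.1 * rng.standard_normal((2 * L, 16 * L))
        self.b2 = np.zeros(2 * L)

    def forward(self, x):
        h = swish(self.W1 @ normalize(x) + self.b1)   # swish hidden layer
        return self.W2 @ h + self.b2                  # linear output layer

rng = np.random.default_rng(0)
N = 4
csi_net1 = SubNet(N, rng)
x = rng.standard_normal(2 * N)        # real/imag-stacked input of length 2N
y = csi_net1.forward(x)
print(y.shape)                        # (8,)
```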
between the sub-networks CSI-NET1 and DET-NET1, the superimposed interference of the channel state information is reduced as follows:
R̃_D1 = R - √(ρ·E_K/N)·P·Ĥ1
where Ĥ1 represents the output of CSI-NET1, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and R represents the received sequence; R̃_D1 represents the output after the superimposed channel state information interference is eliminated and serves as the input of DET-NET1;
between the sub-networks DET-NET1 and CSI-NET2, the superimposed interference of the uplink user sequence is reduced as follows:
R̃_H2 = R - √((1-ρ)·E_K)·D̂1
where D̂1 represents the output of DET-NET1, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, and R represents the received sequence; R̃_H2 represents the output after the superimposed uplink user sequence interference is eliminated and serves as the input of CSI-NET2;
between the sub-networks CSI-NET2 and DET-NET2, the superimposed interference of the channel state information is reduced as follows:
R̃_D2 = R - √(ρ·E_K/N)·P·Ĥ2
where Ĥ2 represents the output of CSI-NET2, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and R represents the received sequence; R̃_D2 represents the output after the superimposed channel state information interference is eliminated and serves as the input of DET-NET2;
the normalization formula is norm(x) = (x - μ)/σ, where x represents the vector to be normalized, μ represents the mean of x, and σ represents the standard deviation of x;
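The three interference-cancellation steps above share one pattern and can be sketched as below. The sanity check reuses the step (a3) example but takes P as the identity matrix (an assumption made only to keep the check short); with perfect network outputs and no noise, each subtraction leaves exactly the other superimposed component:

```python
import numpy as np

def cancel_csi(R, H_hat, P, rho, E_K, N):
    """Subtract the superimposed CSI component: R - sqrt(rho*E_K/N) * P @ H_hat."""
    return R - np.sqrt(rho * E_K / N) * (P @ H_hat)

def cancel_data(R, D_hat, rho, E_K):
    """Subtract the superimposed uplink-sequence component: R - sqrt((1-rho)*E_K) * D_hat."""
    return R - np.sqrt((1 - rho) * E_K) * D_hat

rho, E_K, N = 0.2, 100.0, 4
H = np.array([0.2 + 0.3j, 0.4 + 0.5j, 0.6 + 0.7j, 0.8 + 0.9j])
D = np.array([1 - 1j, -1 + 1j, 1 + 1j, -1 - 1j])
P = np.eye(N)                         # identity spreading, for this check only
R = np.sqrt(rho * E_K / N) * (P @ H) + np.sqrt((1 - rho) * E_K) * D  # noise-free

# CSI-NET1 -> DET-NET1: remove the CSI interference, leaving the data component
data_part = cancel_csi(R, H, P, rho, E_K, N) / np.sqrt((1 - rho) * E_K)
# DET-NET1 -> CSI-NET2: remove the data interference, leaving the CSI component
csi_part = cancel_data(R, D, rho, E_K) / np.sqrt(rho * E_K / N)
print(np.allclose(data_part, D), np.allclose(csi_part, P @ H))   # True True
```

The CSI-NET2 to DET-NET2 step repeats `cancel_csi` with the refined estimate Ĥ2 in place of Ĥ1.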
the "sub-network-by-sub-network training method" in step (b2) proceeds as follows:
b21, train the CSI-NET1 network parameters (W_h11, b_h11, W_h12, b_h12);
b22, keeping the CSI-NET1 network parameters (W_h11, b_h11, W_h12, b_h12) unchanged, train the DET-NET1 network parameters (W_d11, b_d11, W_d12, b_d12);
b23, keeping the CSI-NET1 and DET-NET1 network parameters (W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12) unchanged, train the CSI-NET2 network parameters (W_h21, b_h21, W_h22, b_h22);
b24, keeping the CSI-NET1, DET-NET1 and CSI-NET2 network parameters (W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12, W_h21, b_h21, W_h22, b_h22) unchanged, train the DET-NET2 network parameters (W_d21, b_d21, W_d22, b_d22);
b25, save the network parameters of each layer (W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12, W_h21, b_h21, W_h22, b_h22, W_d21, b_d21, W_d22, b_d22);
where W_hij and W_dij (i = 1, 2; j = 1, 2) represent weighting matrices, and b_hij and b_dij (i = 1, 2; j = 1, 2) represent bias vectors.
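The schedule b21 to b25 can be sketched as a freezing loop. The `train_stage` update below is a placeholder standing in for gradient descent on each sub-network's own loss (an assumption; the patent does not fix the optimizer), and only serves to show that at each stage exactly one sub-network's parameters change:

```python
import numpy as np

rng = np.random.default_rng(0)
ORDER = ["CSI-NET1", "DET-NET1", "CSI-NET2", "DET-NET2"]   # b21 -> b24
params = {name: {"W1": 0.1 * rng.standard_normal((4, 2)), "b1": np.zeros(4),
                 "W2": 0.1 * rng.standard_normal((2, 4)), "b2": np.zeros(2)}
          for name in ORDER}

def train_stage(name, steps=5, lr=0.1):
    """Placeholder update touching only `name`'s parameters; every other
    sub-network is left untouched, i.e. frozen."""
    for _ in range(steps):
        for key in params[name]:
            params[name][key] = (1.0 - lr) * params[name][key]

snapshot = {n: {k: v.copy() for k, v in d.items()} for n, d in params.items()}

train_stage("CSI-NET1")               # b21
# b22 precondition: every later sub-network is still exactly at its snapshot
frozen_ok = all(np.allclose(params[n]["W1"], snapshot[n]["W1"]) for n in ORDER[1:])
for name in ORDER[1:]:                # b22 - b24, one sub-network at a time
    train_stage(name)
# b25: `params` now holds all trained (W, b) pairs and would be saved
print(frozen_ok)                      # True
```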
It is to be understood that the embodiments described herein are for the purpose of assisting the reader in understanding the manner of practicing the invention and are not to be construed as limiting the scope of the invention to such particular statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (1)

1. The superposition coding CSI feedback method based on deep learning is characterized by comprising the following steps:
a, a user side:
(a1) the user side reads the channel state information H with length N and the uplink user sequence D with length M;
(a2) the channel state information H is spread to obtain a spreading sequence H_spread of length M, namely H_spread = P·H;
the matrix P is an M×N Walsh spreading matrix satisfying PᵀP = M·I_N, where the superscript "T" denotes the transpose operation and I_N represents the N-order identity matrix;
(a3) the spreading sequence H_spread is weighted and superposed with the uplink user sequence D to obtain a superposed sequence;
the weighted superposition of step (a3):
S = √(ρ·E_K/N)·H_spread + √((1-ρ)·E_K)·D
where ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and S represents the superposed sequence;
(a4) the user side transmits the superposition sequence;
b: a base station end:
(b1) a base station receives and obtains a receiving sequence R;
(b2) the base station recovers the channel state information H and the uplink user sequence D by utilizing a multi-task neural network obtained by training based on a sub-network-by-sub-network training method;
the "multi-task neural network" in step (b2) comprises four sub-networks: CSI-NET1, CSI-NET2, DET-NET1 and DET-NET2;
the sub-networks CSI-NET1, CSI-NET2, DET-NET1 and DET-NET2 are connected in cascade;
each of the sub-networks comprises only an input layer, a hidden layer and an output layer;
the inputs of the sub-networks are normalized;
the hidden-layer activation functions of the sub-networks all adopt the swish function;
the numbers of nodes of the input, hidden and output layers of the CSI-NET1 and CSI-NET2 sub-networks are 2N, 16N and 2N, respectively;
the numbers of nodes of the input, hidden and output layers of the DET-NET1 and DET-NET2 sub-networks are 2M, 16M and 2M, respectively;
between the sub-networks CSI-NET1 and DET-NET1, the superimposed interference of the channel state information is reduced as follows:
R̃_D1 = R - √(ρ·E_K/N)·P·Ĥ1
where Ĥ1 represents the output of CSI-NET1, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and R represents the received sequence; R̃_D1 represents the output after the superimposed channel state information interference is eliminated and serves as the input of DET-NET1;
between the sub-networks DET-NET1 and CSI-NET2, the superimposed interference of the uplink user sequence is reduced as follows:
R̃_H2 = R - √((1-ρ)·E_K)·D̂1
where D̂1 represents the output of DET-NET1, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, and R represents the received sequence; R̃_H2 represents the output after the superimposed uplink user sequence interference is eliminated and serves as the input of CSI-NET2;
between the sub-networks CSI-NET2 and DET-NET2, the superimposed interference of the channel state information is reduced as follows:
R̃_D2 = R - √(ρ·E_K/N)·P·Ĥ2
where Ĥ2 represents the output of CSI-NET2, ρ ∈ [0,1] denotes the superposition factor, E_K represents the user transmit power, N represents the channel state information frame length, and R represents the received sequence; R̃_D2 represents the output after the superimposed channel state information interference is eliminated and serves as the input of DET-NET2;
the normalization formula is norm(x) = (x - μ)/σ, where x represents the vector to be normalized, μ represents the mean of x, and σ represents the standard deviation of x;
the sub-network-by-sub-network training method comprises the following steps:
b21, train the CSI-NET1 network parameters W_h11, b_h11, W_h12, b_h12;
b22, keeping the CSI-NET1 network parameters W_h11, b_h11, W_h12, b_h12 unchanged, train the DET-NET1 network parameters W_d11, b_d11, W_d12, b_d12;
b23, keeping the CSI-NET1 and DET-NET1 network parameters W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12 unchanged, train the CSI-NET2 network parameters W_h21, b_h21, W_h22, b_h22;
b24, keeping the CSI-NET1, DET-NET1 and CSI-NET2 network parameters W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12, W_h21, b_h21, W_h22, b_h22 unchanged, train the DET-NET2 network parameters W_d21, b_d21, W_d22, b_d22;
b25, save the network parameters of each layer W_h11, b_h11, W_h12, b_h12, W_d11, b_d11, W_d12, b_d12, W_h21, b_h21, W_h22, b_h22, W_d21, b_d21, W_d22, b_d22;
where W_hij and W_dij (i = 1, 2; j = 1, 2) represent weighting matrices, and b_hij and b_dij (i = 1, 2; j = 1, 2) represent bias vectors.
CN201910442100.2A 2019-05-24 2019-05-24 Superposition coding CSI feedback method based on deep learning Active CN110166089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910442100.2A CN110166089B (en) 2019-05-24 2019-05-24 Superposition coding CSI feedback method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910442100.2A CN110166089B (en) 2019-05-24 2019-05-24 Superposition coding CSI feedback method based on deep learning

Publications (2)

Publication Number Publication Date
CN110166089A CN110166089A (en) 2019-08-23
CN110166089B true CN110166089B (en) 2021-06-04

Family

ID=67632589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910442100.2A Active CN110166089B (en) 2019-05-24 2019-05-24 Superposition coding CSI feedback method based on deep learning

Country Status (1)

Country Link
CN (1) CN110166089B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193535B (en) * 2020-01-14 2022-05-31 西华大学 Feedback method based on ELM superposition CSI in FDD large-scale MIMO system
CN111597877A (en) * 2020-04-02 2020-08-28 浙江工业大学 Fall detection method based on wireless signals
CN113765830B (en) * 2020-06-03 2022-12-27 华为技术有限公司 Method for acquiring channel information and communication device
CN112564757A (en) * 2020-12-03 2021-03-26 西华大学 Deep learning 1-bit compression superposition channel state information feedback method
CN113472412B (en) * 2021-07-13 2023-10-24 西华大学 Enhanced ELM-based superposition CSI feedback method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107070588A (en) * 2016-12-28 2017-08-18 深圳清华大学研究院 A kind of multiple access of simplification accesses the receiver and method of reseptance of Transmission system
CN108390706A (en) * 2018-01-30 2018-08-10 东南大学 A kind of extensive mimo channel state information feedback method based on deep learning
CN108737032A (en) * 2018-05-22 2018-11-02 西华大学 A kind of compression superposition sequence C SI feedback methods
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018204917A1 (en) * 2017-05-05 2018-11-08 Ball Aerospace & Technologies Corp. Spectral sensing and allocation using deep machine learning


Also Published As

Publication number Publication date
CN110166089A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110166089B (en) Superposition coding CSI feedback method based on deep learning
CN109687897B (en) Superposition CSI feedback method based on deep learning large-scale MIMO system
Liao et al. CSI feedback based on deep learning for massive MIMO systems
Jang et al. Deep autoencoder based CSI feedback with feedback errors and feedback delay in FDD massive MIMO systems
Perlaza et al. From spectrum pooling to space pooling: Opportunistic interference alignment in MIMO cognitive networks
CN103117970B (en) The system of selection of full-duplex antenna in mimo system
CN103209051B (en) The two step method for precoding of a kind of coordinate multipoint joint transmission system under multi-user scene
CN105337651A (en) User selection method of non-orthogonal multiple access system downlink under limited feedback
CN109714091B (en) Iterative hybrid precoding method based on hierarchical design in millimeter wave MIMO system
CN107086886B (en) Double-layer precoding design for large-scale MIMO system fusion zero forcing and Taylor series expansion
CN103220024A (en) Beam forming algorithm of multi-user pairing virtual multi-input multi-output (MIMO) system
Elbir et al. Federated learning for physical layer design
CN110289898A (en) A kind of channel feedback method based on the perception of 1 bit compression in extensive mimo system
CN112564757A (en) Deep learning 1-bit compression superposition channel state information feedback method
CN109167618B (en) FDD large-scale MIMO downlink channel reconstruction and multi-user transmission method
Boukhedimi et al. LMMSE receivers in uplink massive MIMO systems with correlated Rician fading
CN114219354A (en) Resource allocation optimization method and system based on federal learning
CN110808764A (en) Joint information estimation method in large-scale MIMO relay system
Xiao et al. AI enlightens wireless communication: Analyses, solutions and opportunities on CSI feedback
CN102404031A (en) Self-adaptive user scheduling method based on maximum throughput
CN113726376B (en) 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion
CN111193535B (en) Feedback method based on ELM superposition CSI in FDD large-scale MIMO system
CN104158575A (en) Method of user scheduling of multi-cell MIMO (Multiple Input Multiple Output) system under ZF (Zero Frequency) pre-coding strategy
CN111865844B (en) Channel estimation method and device for large-scale MIMO full-duplex relay system
CN109039402B (en) MIMO topological interference alignment method based on user compression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190823

Assignee: Chengdu Tiantongrui Computer Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000028

Denomination of invention: Stacked Encoding CSI Feedback Method Based on Deep Learning

Granted publication date: 20210604

License type: Common License

Record date: 20231124