CN113726376A - 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion - Google Patents

1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion

Info

Publication number
CN113726376A
CN113726376A (application CN202111011688.XA)
Authority
CN
China
Prior art keywords
csi
vector
amplitude
downlink csi
downlink
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111011688.XA
Other languages
Chinese (zh)
Other versions
CN113726376B (en)
Inventor
卿朝进
叶青
刘文慧
黄小莉
曹太强
黄永茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN202111011688.XA priority Critical patent/CN113726376B/en
Publication of CN113726376A publication Critical patent/CN113726376A/en
Application granted granted Critical
Publication of CN113726376B publication Critical patent/CN113726376B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413 MIMO systems
    • H04B7/0417 Feedback systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00 Arrangements affording multiple use of the transmission path
    • H04L5/14 Two-way operation using the same type of signal, i.e. duplex
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a 1-bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion, which comprises the following steps: obtaining the learning amplitude of the corresponding downlink CSI through a first neural network according to the amplitude of the uplink CSI estimation vector; carrying out CSI (channel state information) feature extraction with expert knowledge according to the recovered feedback vector obtained at the base station, and recovering the feature amplitude and the feature angle of the downlink CSI; obtaining the fusion amplitude of the downlink CSI through a second neural network according to the downlink CSI splicing amplitude obtained by splicing the feature amplitude of the downlink CSI with the learning amplitude of the downlink CSI; and recovering the downlink CSI reconstruction vector according to the fusion amplitude and the feature angle of the downlink CSI. Compared with single-bit CS superposition CSI feedback, the method can recover the downlink CSI amplitude discarded by the single-bit CS according to the bidirectional reciprocity of the uplink and downlink channels, greatly improving the CSI reconstruction accuracy while also markedly improving the CSI reconstruction efficiency.

Description

1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion
Technical Field
The invention relates to the technical field of superposition feedback for FDD (frequency division duplex) massive MIMO (multiple-input multiple-output) systems, and in particular to a 1-bit compression superposition channel state information (CSI) feedback method based on feature extraction and mutual-difference fusion.
Background
As a key technology for achieving the high spectrum efficiency and energy efficiency required by future 5G (fifth-generation wireless communication) networks, the FDD massive MIMO system can provide wireless data services for more users without increasing the transmission power or system bandwidth, through hundreds of antennas deployed at the base station. Meanwhile, many operations that bring performance gains in an FDD massive MIMO system (such as multi-user scheduling, rate allocation and transmit-end precoding) depend on the acquisition of accurate downlink channel state information (CSI). In frequency division duplex (FDD) massive MIMO systems there is only weak reciprocity between the uplink and downlink channels, so the downlink CSI can only be obtained through feedback from the user end to the base station.
Although the traditional compressed sensing (CS) feedback technique, which exploits signal sparsity, can reduce the system feedback overhead to a certain extent, it incurs a large computational overhead in the reconstruction process; feedback techniques based on deep learning (DL) have drawn attention for their simple structure and fast training, but their feedback process still occupies dedicated spectrum resources.
In recent years, superposition coding (SC) technology has been widely used in various fields of wireless communication because it makes efficient use of spectrum resources. Superposition, however, introduces mutual interference between the CSI and the user data; this interference can be reduced by discarding the CSI amplitude information with single-bit (1-bit) CS, but the loss of the amplitude information in turn lowers the accuracy of CSI reconstruction.
Disclosure of Invention
The invention provides a 1-bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion. Compared with single-bit CS (compressed sensing) superposed CSI feedback, the method recovers the downlink CSI amplitude discarded by the single-bit CS according to the bidirectional reciprocity of the uplink and downlink channels, greatly improving the CSI reconstruction accuracy, and it improves the CSI reconstruction efficiency by reconstructing the CSI with a shallow neural network and a simplified CSI reconstruction method.
The technical scheme of the invention is as follows:
the 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion comprises the following steps:
according to the amplitude |ĝ| of the uplink CSI estimation vector ĝ, obtaining the learning amplitude â_learn of the corresponding downlink CSI through a first neural network;

according to the recovered feedback vector ŵ, carrying out CSI feature extraction by using expert knowledge, and recovering the feature amplitude â_feat and the feature angle θ̂_feat of the downlink CSI, wherein the recovered feedback vector ŵ is the corresponding feedback vector obtained at the base station receiving end after recovery according to the feedback vector w sent by the transmitting end;

according to the downlink CSI splicing amplitude â_cat obtained by splicing the feature amplitude â_feat of the downlink CSI with the learning amplitude â_learn of the downlink CSI, obtaining the fusion amplitude â_fuse of the downlink CSI through a second neural network;

according to the fusion amplitude â_fuse of the downlink CSI and the feature angle θ̂_feat, recovering the downlink CSI reconstruction vector ĥ.
In some embodiments, the uplink CSI estimation vector amplitude |ĝ| is obtained by the following model:

ĝ = [ĝ_1, ĝ_2, …, ĝ_N]^T,  |ĝ| = [|ĝ_1|, |ĝ_2|, …, |ĝ_N|]^T;

wherein ĝ_n represents the n-th element of the uplink CSI estimation vector ĝ, |·| represents the modulus operation on a complex number, and the superscript T represents the transpose operation.
in some embodiments, the uplink CSI estimation vector
Figure BDA00032391949900000220
Receiving sequences by base station for channel estimation
Figure BDA00032391949900000221
And performing uplink CSI estimation, wherein the estimation method is selected from one or more of LS estimation, MMSE estimation, ML estimation, MAP estimation and pilot auxiliary estimation.
Preferably, the estimation method is LS estimation, and the uplink CSI estimation vector ĝ satisfies:

ĝ = Y_g · s^†;

wherein s represents the base-station-known signal sequence transmitted by the user terminal, (·)^† represents the Moore-Penrose pseudo-inverse operation, and Y_g represents the base-station receiving sequence used for channel estimation, which satisfies:

Y_g = g·s + N;

where N denotes the channel noise and g denotes the actual uplink CSI vector, i.e., the uplink CSI estimation vector ĝ is an estimate of the vector g.
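As a concrete illustration of the LS model above, the following numpy sketch estimates ĝ from a synthetic pilot observation; the dimensions and random values are assumptions for illustration only, not values taken from this patent.

```python
# Minimal sketch of the LS uplink-CSI estimation step (assumed dimensions and data).
import numpy as np

def ls_uplink_csi_estimate(Y_g: np.ndarray, s: np.ndarray) -> np.ndarray:
    """LS estimate g_hat = Y_g @ pinv(s), with Y_g (N x P) the received pilot
    sequence and s (1 x P) the signal known at the base station."""
    return Y_g @ np.linalg.pinv(s)          # Moore-Penrose pseudo-inverse

# Toy usage with assumed sizes N = 2 antennas, P = 4 pilot symbols
N, P = 2, 4
g = (np.random.randn(N, 1) + 1j * np.random.randn(N, 1)) / np.sqrt(2)   # true uplink CSI
s = (np.random.randn(1, P) + 1j * np.random.randn(1, P)) / np.sqrt(2)   # known pilot
noise = 0.01 * (np.random.randn(N, P) + 1j * np.random.randn(N, P))
Y_g = g @ s + noise                          # Y_g = g s + N
g_hat = ls_uplink_csi_estimate(Y_g, s)       # estimate of the uplink CSI vector g
```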
In some preferred embodiments, the first neural network comprises:
an input layer containing a linear activation function, a hidden layer containing a LeakyReLU activation function, and an output layer containing a linear activation function; the number of nodes of the input layer, the hidden layer and the output layer is N, mN and N respectively, and m represents a hidden layer node coefficient determined according to engineering presetting.
In a specific application, the input of the first neural network is the uplink CSI estimation vector amplitude |ĝ|, and the output is the learning amplitude â_learn of the downlink CSI with length N.
More preferably, the training loss function of the first neural network is a mean square error loss function.
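A minimal sketch of one way to realize the first (amplitude-learning) neural network described above, written here in PyTorch; N, m, the learning rate and the toy training data are assumed values, and the layer layout reflects one common reading of the three-layer (linear / LeakyReLU / linear) description.

```python
# Sketch of the amplitude-learning (first) network: N-node linear input,
# m*N-node LeakyReLU hidden layer, N-node linear output, MSE training loss.
import torch
import torch.nn as nn

def build_amplitude_learning_net(N: int, m: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(N, m * N),   # input layer -> hidden layer
        nn.LeakyReLU(),        # hidden-layer activation
        nn.Linear(m * N, N),   # hidden layer -> linear output layer
    )

N, m = 32, 4                                   # assumed sizes
net = build_amplitude_learning_net(N, m)
criterion = nn.MSELoss()                       # mean square error training loss
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

uplink_amp = torch.rand(16, N)                 # batch of |g_hat| magnitudes (toy data)
target_dl_amp = torch.rand(16, N)              # corresponding downlink CSI amplitudes (toy data)
optimizer.zero_grad()
loss = criterion(net(uplink_amp), target_dl_amp)
loss.backward()
optimizer.step()
```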
In a specific application, the expert knowledge is an existing reconstruction method based on compressed sensing.
In some embodiments, the existing compressed-sensing-based reconstruction methods include, for example, L1-norm minimization, the basis pursuit algorithm, the matching pursuit algorithm, the orthogonal matching pursuit algorithm, the BIHT algorithm, the SCA-BIHT algorithm, and the like, one of which is sketched below.
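For orientation, the following is a hedged sketch of the plain, real-valued BIHT (binary iterative hard thresholding) algorithm, one of the compressed-sensing reconstruction methods listed above; the SCA-BIHT variant used later in this document is not reproduced here, and the step size, sparsity level and iteration count are assumed tuning parameters.

```python
# Textbook-style BIHT sketch: recover a K-sparse unit-norm vector from 1-bit
# measurements y_sign = sign(Phi @ x). Real-valued for simplicity.
import numpy as np

def biht(y_sign: np.ndarray, Phi: np.ndarray, K: int,
         n_iter: int = 50, tau: float = 1.0) -> np.ndarray:
    M, N = Phi.shape
    x = np.zeros(N)
    for _ in range(n_iter):
        residual = y_sign - np.sign(Phi @ x)          # sign-consistency residual
        a = x + (tau / (2 * M)) * (Phi.T @ residual)  # gradient-like update
        idx = np.argsort(np.abs(a))[:-K]              # indices outside the K largest
        a[idx] = 0.0                                  # hard thresholding to K terms
        x = a
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x                # 1-bit CS loses the overall scale
```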
In some embodiments, the recovered feedback vector ŵ is the feedback vector recovered at the base station from the 1-bit compressed superposition vector x, and:

the 1-bit compressed superposition vector x is obtained at the transmitting end by the following model:

x = sqrt(ρE/L)·Q·r + sqrt((1−ρ)E)·d;

wherein d represents an uplink user data sequence of length P, E represents the user transmission power, ρ ∈ [0,1] represents the superposition factor which can be set according to engineering experience, Q represents a spreading matrix of dimension P × L satisfying Q^T·Q = P·I_L, the superscript T denotes the transpose operation, L denotes the modulated signal length, I_L represents the L-dimensional identity matrix, and r represents the modulated signal sequence of length L, with:

r = f_mod(w);

wherein w represents the transmitting-end feedback vector of length K, the recovered feedback vector ŵ is the estimate of the feedback vector w at the base station, K = 2L, and:

w = [p_real, p_imag, z];

wherein z ∈ {0,1}^{1×N} represents the length-N support-set vector of the downlink CSI vector h, and p_real and p_imag respectively represent the real part and the imaginary part, each of length M, of the downlink CSI after compression and quantization by the 1-bit compressed sensing technique.

Preferably, p_real and p_imag satisfy:

p_real = sign(Re(Φ·h)),  p_imag = sign(Im(Φ·h));

wherein h represents the downlink CSI vector, the downlink CSI reconstruction vector ĥ represents the downlink CSI reconstructed at the base station, Φ represents the compression matrix of the 1-bit compressed sensing technique, and sign(·), Re(·) and Im(·) respectively represent the sign, real-part and imaginary-part operations.
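A minimal numpy sketch of forming the transmit-side feedback vector w = [p_real, p_imag, z] from a sparse downlink CSI vector h according to the 1-bit quantization above; the dimensions, the compression matrix Φ and the support-set construction are illustrative assumptions.

```python
# Sketch of 1-bit CS quantization and feedback-vector construction (assumed sizes).
import numpy as np

def build_feedback_vector(h: np.ndarray, Phi: np.ndarray) -> np.ndarray:
    """h: length-N complex downlink CSI, Phi: M x N compression matrix."""
    proj = Phi @ h                                   # compressed-sensing projection
    p_real = np.sign(np.real(proj))                  # length-M real-part signs
    p_imag = np.sign(np.imag(proj))                  # length-M imaginary-part signs
    z = (np.abs(h) > 0).astype(float)                # length-N support-set indicator (assumed)
    return np.concatenate([p_real, p_imag, z])       # w = [p_real, p_imag, z]

N, M = 8, 4                                          # assumed dimensions
Phi = np.random.randn(M, N)
h = np.zeros(N, dtype=complex)
h[[1, 5]] = np.random.randn(2) + 1j * np.random.randn(2)   # sparse downlink CSI (toy values)
w = build_feedback_vector(h, Phi)                    # length 2M + N
```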
In some preferred embodiments, based on the above application, obtaining the recovered feedback vector ŵ further comprises: de-spreading the receiving sequence Y, and obtaining the recovered feedback vector ŵ through MMSE detection and interference cancellation. For example, this comprises:

obtaining the receiving sequence Y through the following channel model:

Y = g·x + N;

wherein Y represents the base-station receiving sequence, x represents the 1-bit compressed superposition vector sent by the user end, N represents the channel noise, and g represents the uplink channel vector;
obtaining the despread signal Ỹ through the following despreading processing model:

Ỹ = (1/P)·Y·Q;

obtaining the detection signal r̃ through the following MMSE detection model:

r̃ = dec( (g^H·g + σ_g²)^{-1} · g^H · Ỹ );

where dec(·) denotes the hard-decision operation, (·)^{-1} represents the matrix inverse operation, (·)^H represents the conjugate-transpose operation of a matrix, and σ_g² represents the variance of the uplink channel;

obtaining the de-interfered data sequence d̃ through the following interference cancellation model:

d̃ = Y − sqrt(ρE/L)·g·r̃·Q^T;

obtaining the recovered feedback vector ŵ through the following feedback vector recovery model:

ŵ = f_demo(r̃);
wherein f_demo(·) denotes the demodulation processing, and the obtained recovered feedback vector ŵ is:

ŵ = [p̂_real, p̂_imag, ẑ];

wherein ẑ represents the recovered length-N support set of the downlink CSI vector h, and p̂_real and p̂_imag respectively represent the recovered real part and imaginary part, each of length M, of the downlink CSI vector h compressed and quantized by the 1-bit compressed sensing technique.
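Conversely, the recovered feedback vector ŵ can be split back into its three parts at the base station; the following short sketch assumes the same layout and lengths as above.

```python
# Sketch of splitting w_hat = [p_real, p_imag, z] into its parts (assumed layout).
import numpy as np

def split_recovered_feedback(w_hat: np.ndarray, M: int, N: int):
    """Return (p_real_hat, p_imag_hat, z_hat) from the length 2M+N vector."""
    assert w_hat.size == 2 * M + N
    p_real_hat = w_hat[:M]            # recovered real-part signs
    p_imag_hat = w_hat[M:2 * M]       # recovered imaginary-part signs
    z_hat = w_hat[2 * M:]             # recovered support-set vector
    return p_real_hat, p_imag_hat, z_hat
```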
In some preferred embodiments, the process of recovering the feature amplitude â_feat and the feature angle θ̂_feat of the downlink CSI from the recovered feedback vector ŵ obtained at the base station comprises:

taking the obtained parameters p̂_real, p̂_imag and ẑ as the input of the SCA-BIHT algorithm, wherein the SCA-BIHT algorithm serves as the expert knowledge;

outputting the downlink CSI feature value ĥ_feat after the SCA-BIHT algorithm iterates n times, wherein n can be preset according to engineering experience;

obtaining the downlink CSI feature amplitude â_feat and feature angle θ̂_feat from the downlink CSI feature value ĥ_feat through the following formulas:

â_feat = f_amp(ĥ_feat),  θ̂_feat = f_ang(ĥ_feat);

wherein f_amp(·) denotes taking the amplitude of a complex number and f_ang(·) denotes taking the angle of a complex number.
In some preferred embodiments, the second neural network comprises:
an input layer containing a linear activation function, a hidden layer containing a LeakyReLU activation function, and an output layer containing a linear activation function; the number of nodes of the input layer, the hidden layer and the output layer is 2N, kN and N respectively, and k represents a hidden layer node coefficient determined according to engineering presetting.
In a specific application, the input of the second neural network is the downlink CSI splicing amplitude â_cat, and the output is the fusion amplitude â_fuse of the downlink CSI with length N, wherein the downlink CSI splicing amplitude â_cat is obtained by the following model:

â_cat = [â_feat, â_learn];

wherein [·] denotes the splicing (concatenation) operation of vectors.
More preferably, the training loss function of the second neural network is a mean square error loss function.
In some preferred embodiments, the downlink CSI reconstruction vector ĥ is recovered by the following model:

ĥ = â_fuse ⊙ e^{j·θ̂_feat};

wherein ⊙ represents the Hadamard (element-wise) product, e represents the natural exponential base, and j represents the imaginary unit.
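A short numpy sketch of this reconstruction step, together with the f_amp / f_ang operations used earlier; the function names are assumptions for illustration.

```python
# Sketch of amplitude/angle extraction and the final Hadamard-product reconstruction.
import numpy as np

def amplitude_and_angle(h_feat: np.ndarray):
    """f_amp and f_ang: element-wise modulus and phase of a complex vector."""
    return np.abs(h_feat), np.angle(h_feat)

def reconstruct_downlink_csi(a_fused: np.ndarray, theta_feat: np.ndarray) -> np.ndarray:
    """h_hat = a_fused ⊙ exp(j * theta_feat)."""
    return a_fused * np.exp(1j * theta_feat)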
The invention utilizes the bidirectional reciprocity of the uplink and downlink channels to recover the amplitude of the downlink CSI through a shallow amplitude-learning network, and combines the technical advantages of single-bit CS, SC and DL: the downlink CSI processed by single-bit CS is superimposed onto the uplink user sequence and fed back to the base station; at the base station, the downlink CSI is recovered using conventional uplink user-signal detection and a simplified version of the conventional downlink CSI reconstruction method; the downlink CSI amplitudes obtained by the conventional method and by the amplitude-learning network are then fused through a shallow amplitude-fusion network. This improves the accuracy of the downlink CSI amplitude and effectively reduces the mutual interference caused by superposition coding, guaranteeing the CSI reconstruction accuracy without increasing the spectrum overhead.
Compared with single-bit CS superposed CSI feedback, the method recovers the downlink CSI amplitude discarded by the single-bit CS according to the bidirectional reciprocity of the uplink and downlink channels, greatly improving the CSI reconstruction accuracy, and it improves the CSI reconstruction efficiency by using shallow neural networks and a simplified version of the conventional downlink CSI reconstruction method.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic diagram of a first neural network structure according to the present invention;
FIG. 3 is a structural diagram of a second neural network according to the present invention.
Detailed Description
The present invention is described in detail below with reference to the following embodiments and the attached drawings, but it should be understood that the embodiments and the attached drawings are only used for the illustrative description of the present invention and do not limit the protection scope of the present invention in any way. All reasonable variations and combinations that fall within the spirit of the invention are intended to be within the scope of the invention.
Referring to fig. 1, a specific 1-bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion includes:
a1) According to the amplitude |ĝ| of the uplink CSI estimation vector ĝ, the learning amplitude â_learn of the corresponding downlink CSI is obtained through the first neural network.
More specific embodiments are as follows:
the uplink CSI estimation vector
Figure BDA0003239194990000072
Receiving sequences by base station for channel estimation
Figure BDA0003239194990000073
And performing uplink CSI estimation.
The estimation of the uplink CSI may be further implemented by using uplink CSI estimation methods in the prior art, such as LS estimation, MMSE estimation, ML estimation, MAP estimation, pilot-assisted estimation, and the like, for example, in a specific embodiment, the uplink CSI estimation performed by LS estimation is as follows:
Figure BDA0003239194990000074
wherein,
Figure BDA0003239194990000075
indicating the received estimation sequence at the base station end, s indicating the base station known signal transmitted by the user end,
Figure BDA0003239194990000076
represents the Moore-Penrose pseudo-inverse operation.
Referring to fig. 2, the first neural network includes the following neural network structure:
the device comprises an input layer, a hidden layer and an output layer, wherein the input layer adopts a linear activation function, the hidden layer adopts an activation function LeakyReLU, and the output layer adopts a linear activation function.
The number of nodes of an input layer, a hidden layer and an output layer of the first neural network is N, mN and N respectively, m represents a node coefficient of the hidden layer and can be obtained according to engineering presetting.
The process of obtaining the learning amplitude â_learn of the downlink CSI through the first neural network comprises:

obtaining the uplink CSI amplitude |ĝ| from the uplink CSI estimation vector ĝ as follows:

|ĝ| = [ |ĝ_1|, |ĝ_2|, …, |ĝ_N| ]^T;

wherein ĝ_n represents the n-th element of the vector ĝ and |·| represents the modulus operation on a complex number;

inputting the obtained uplink CSI estimation vector amplitude |ĝ| into the first neural network through the input layer, and outputting the learning amplitude â_learn of the downlink CSI with length N.

The training loss function of the first neural network adopts the mean square error loss function.
a2) From the recovered feedback vector ŵ obtained at the base station, CSI feature extraction is carried out using expert knowledge, and the feature amplitude â_feat and the feature angle θ̂_feat of the downlink CSI are recovered.

a3) According to the downlink CSI splicing amplitude â_cat obtained by splicing the feature amplitude â_feat of the downlink CSI with the learning amplitude â_learn of the downlink CSI, the fusion amplitude â_fuse of the downlink CSI is obtained through the second neural network.
More specific embodiments are as follows:
referring to fig. 3, the second neural network is a neural network structure including:
the device comprises an input layer, a hidden layer and an output layer, wherein the input layer adopts a linear activation function, the hidden layer adopts a LeakyReLU activation function, and the output layer adopts a linear activation function.
The number of nodes of an input layer, a hidden layer and an output layer of the second neural network is 2N, kN and N respectively, k represents a node coefficient of the hidden layer and can be obtained according to engineering presetting.
The process of obtaining the fusion amplitude â_fuse of the downlink CSI through the second neural network comprises:

obtaining the downlink CSI splicing amplitude â_cat from the feature amplitude â_feat of the downlink CSI and the learning amplitude â_learn of the downlink CSI:

â_cat = [â_feat, â_learn] ∈ R^{1×2N};

wherein [·] represents the splicing (concatenation) operation of vectors and R^{1×2N} represents the set of real vectors of dimension 1 × 2N;

inputting the obtained downlink CSI splicing amplitude â_cat into the second neural network through the input layer, and outputting the fusion amplitude â_fuse of the downlink CSI with length N.

The training loss function of the amplitude-fusion network adopts the mean square error loss function.
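A minimal PyTorch sketch of the second (amplitude-fusion) neural network and the splicing operation described above; N, k and the toy inputs are assumed values.

```python
# Sketch of the amplitude-fusion (second) network: 2N-node input, k*N-node
# LeakyReLU hidden layer, N-node linear output.
import torch
import torch.nn as nn

N, k = 32, 4
fusion_net = nn.Sequential(
    nn.Linear(2 * N, k * N),   # spliced amplitude (length 2N) -> hidden layer
    nn.LeakyReLU(),
    nn.Linear(k * N, N),       # linear output: fused downlink CSI amplitude
)

a_feat = torch.rand(1, N)                       # feature amplitude from expert knowledge (toy)
a_learn = torch.rand(1, N)                      # learned amplitude from the first network (toy)
a_cat = torch.cat([a_feat, a_learn], dim=1)     # splicing: [a_feat, a_learn]
a_fused = fusion_net(a_cat)                     # length-N fusion amplitude
```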
a4) According to the fusion amplitude â_fuse of the downlink CSI and the feature angle θ̂_feat, the downlink CSI reconstruction vector ĥ is recovered.

More specific embodiments are as follows:

The downlink CSI reconstruction vector ĥ is recovered as:

ĥ = â_fuse ⊙ e^{j·θ̂_feat};

wherein ⊙ represents the Hadamard product, e represents the natural exponential base, and j represents the imaginary unit.
Example 1
In step a1), a specific example of obtaining the uplink CSI estimation vector ĝ through LS estimation is as follows:

Assume N = 2 and P = 4, and that the base-station receiving sequence Y_g used for channel estimation is a given 2 × 4 complex matrix. The base-station-known signal s sent by the user side is:

s = [0.7528-0.6083i  -0.1666-0.1308i  0.9869+0.4514i  0.4556+0.2695i];

The pseudo-inverse s^† of the base-station-known signal s transmitted by the user terminal is the corresponding 4 × 1 vector, and according to the LS estimation processing formula ĝ = Y_g · s^†, the uplink CSI estimation vector ĝ (a 2 × 1 vector) is calculated.
example 2
In step a2), a specific example of obtaining, from the recovered feedback vector ŵ, the recovered real part p̂_real and imaginary part p̂_imag (each of length M) of the downlink CSI vector h compressed and quantized by the 1-bit compressed sensing technique, together with the recovered length-N support set ẑ of the downlink CSI vector h, is as follows:

Assume N = 2 and M = 3, and that the recovered feedback vector ŵ of length 2M + N = 8 is given. According to the layout ŵ = [p̂_real, p̂_imag, ẑ], the first M elements of ŵ give the recovered real part p̂_real, the next M elements give the recovered imaginary part p̂_imag, and the last N elements give the recovered support set ẑ of the downlink CSI vector h.
example 3
In step a1), a specific example of obtaining the uplink CSI estimation vector amplitude |ĝ| from the uplink CSI estimation vector ĝ is as follows:

For the CSI estimation vector ĝ obtained in Example 1, the input of the amplitude-learning network is computed according to the formula |ĝ| = [|ĝ_1|, |ĝ_2|]^T, i.e., by taking the modulus of each complex element of ĝ.
example 4
In step a3), a specific example of obtaining the downlink CSI splicing amplitude â_cat is as follows:

Assume N = 2, with a given feature amplitude â_feat and learning amplitude â_learn of the downlink CSI, each of length 2. According to the model â_cat = [â_feat, â_learn], the input of the amplitude-fusion network, i.e., the downlink CSI splicing amplitude â_cat of length 2N = 4, is obtained by concatenating the two vectors.
example 5
In step a4), a specific example of recovering the downlink CSI reconstruction vector ĥ is as follows:

Assume N = 2, with a given fusion amplitude â_fuse and feature angle θ̂_feat of the downlink CSI, each of length 2. According to the model ĥ = â_fuse ⊙ e^{j·θ̂_feat}, the recovered downlink CSI reconstruction vector ĥ is calculated element by element.
the above examples are merely preferred embodiments of the present invention, and the scope of the present invention is not limited to the above examples. All technical schemes belonging to the idea of the invention belong to the protection scope of the invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention, and such modifications and embellishments should also be considered as within the scope of the invention.

Claims (10)

1. The 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion is characterized by comprising the following steps:
according to the amplitude |ĝ| of the uplink CSI estimation vector ĝ, obtaining the learning amplitude â_learn of the corresponding downlink CSI through a first neural network;

according to the recovered feedback vector ŵ, carrying out CSI feature extraction by using expert knowledge, and recovering the feature amplitude â_feat and the feature angle θ̂_feat of the downlink CSI, wherein the recovered feedback vector ŵ is the corresponding feedback vector obtained at the base station receiving end after recovery according to the feedback vector w sent by the transmitting end;

according to the downlink CSI splicing amplitude â_cat obtained by splicing the feature amplitude â_feat of the downlink CSI with the learning amplitude â_learn of the downlink CSI, obtaining the fusion amplitude â_fuse of the downlink CSI through a second neural network;

according to the fusion amplitude â_fuse of the downlink CSI and the feature angle θ̂_feat, recovering the downlink CSI reconstruction vector ĥ.
2. The method of claim 1, wherein the uplink CSI estimation vector amplitude |ĝ| is obtained by the following model:

ĝ = [ĝ_1, ĝ_2, …, ĝ_N]^T,  |ĝ| = [|ĝ_1|, |ĝ_2|, …, |ĝ_N|]^T;

wherein ĝ_n represents the n-th element of the uplink CSI estimation vector ĝ, |·| represents the modulus operation on a complex number, and the superscript T represents the transpose operation.
3. The method of claim 2, wherein the uplink CSI estimation vector ĝ is obtained by the following model:

ĝ = Y_g · s^†;

wherein s represents the base-station-known signal sequence transmitted by the user terminal, (·)^† represents the Moore-Penrose pseudo-inverse operation, and Y_g represents the base-station receiving sequence used for channel estimation, which satisfies:

Y_g = g·s + N;

where N denotes the channel noise and g denotes the actual uplink CSI vector.
4. The method of claim 2, wherein the first neural network comprises:
an input layer containing a linear activation function, a hidden layer containing a Leaky ReLU activation function, and an output layer containing a linear activation function; the number of nodes of the input layer, the hidden layer and the output layer is N, mN and N respectively, and m represents a hidden layer node coefficient determined according to engineering presetting.
5. The method of claim 1, wherein the recovered feedback vector ŵ is the feedback vector recovered at the base station from the 1-bit compressed superposition vector x, and:

the 1-bit compressed superposition vector x is obtained at the transmitting end by the following model:

x = sqrt(ρE/L)·Q·r + sqrt((1−ρ)E)·d;

wherein d represents an uplink user data sequence of length P, E represents the user transmission power, ρ ∈ [0,1] represents the superposition factor which can be set according to engineering experience, Q represents a spreading matrix of dimension P × L satisfying Q^T·Q = P·I_L, the superscript T denotes the transpose operation, L denotes the modulated signal length, I_L represents the L-dimensional identity matrix, and r represents the modulated signal sequence of length L.
6. The method of claim 5, wherein the process of obtaining the recovered feedback vector ŵ comprises:

obtaining the despread signal Ỹ through the following despreading processing model:

Ỹ = (1/P)·Y·Q;

wherein Y represents the N × P receiving sequence at the base station end;

obtaining the detection signal r̃ through the following MMSE detection model:

r̃ = dec( (g^H·g + σ_g²)^{-1} · g^H · Ỹ );

where dec(·) denotes the hard-decision operation, g denotes the uplink channel vector, (·)^{-1} represents the matrix inverse operation, (·)^H represents the conjugate-transpose operation of a matrix, and σ_g² represents the variance of the uplink channel;

obtaining the de-interfered data sequence d̃ through the following interference cancellation model:

d̃ = Y − sqrt(ρE/L)·g·r̃·Q^T;

obtaining the recovered feedback vector ŵ through the following feedback vector recovery model:

ŵ = f_demo(r̃);

wherein f_demo(·) denotes the demodulation processing, and the obtained recovered feedback vector ŵ is:

ŵ = [p̂_real, p̂_imag, ẑ];

wherein ẑ represents the recovered length-N support set of the downlink CSI vector h, and p̂_real and p̂_imag respectively represent the recovered real part and imaginary part, each of length M, of the downlink CSI vector h compressed and quantized by the 1-bit compressed sensing technique.
7. The method of claim 6, wherein the process of recovering the feature amplitude â_feat and the feature angle θ̂_feat of the downlink CSI comprises:

taking the obtained parameters p̂_real, p̂_imag and ẑ as the input of the SCA-BIHT algorithm, wherein the SCA-BIHT algorithm serves as the expert knowledge;

outputting the downlink CSI feature value ĥ_feat after the SCA-BIHT algorithm iterates n times, wherein n can be preset according to engineering experience;

obtaining the downlink CSI feature amplitude â_feat and feature angle θ̂_feat from the downlink CSI feature value ĥ_feat through the following formulas:

â_feat = f_amp(ĥ_feat),  θ̂_feat = f_ang(ĥ_feat);

wherein f_amp(·) denotes taking the amplitude of a complex number and f_ang(·) denotes taking the angle of a complex number.
8. The method of claim 2, wherein the second neural network comprises:
an input layer containing a linear activation function, a hidden layer containing a Leaky ReLU activation function, and an output layer containing a linear activation function; the number of nodes of the input layer, the hidden layer and the output layer is 2N, kN and N respectively, and k represents a hidden layer node coefficient determined according to engineering presetting.
9. The method of claim 1, wherein the training loss function of the first neural network and/or the second neural network is a mean square error loss function.
10. The method of claim 1, wherein the downlink CSI reconstruction vector ĥ is recovered by the following model:

ĥ = â_fuse ⊙ e^{j·θ̂_feat};

wherein ⊙ represents the Hadamard product, e represents the natural exponential base, and j represents the imaginary unit.
CN202111011688.XA 2021-08-31 2021-08-31 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion Active CN113726376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111011688.XA CN113726376B (en) 2021-08-31 2021-08-31 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111011688.XA CN113726376B (en) 2021-08-31 2021-08-31 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion

Publications (2)

Publication Number Publication Date
CN113726376A true CN113726376A (en) 2021-11-30
CN113726376B CN113726376B (en) 2022-05-20

Family

ID=78679649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111011688.XA Active CN113726376B (en) 2021-08-31 2021-08-31 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion

Country Status (1)

Country Link
CN (1) CN113726376B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114826343A (en) * 2022-04-26 2022-07-29 西华大学 Superimposed channel state information feedback method and device for AI enabling data emptying
CN117807383A (en) * 2024-03-01 2024-04-02 深圳市大数据研究院 Channel state information recovery method and device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737032A (en) * 2018-05-22 2018-11-02 西华大学 A kind of compression superposition sequence C SI feedback methods
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning
CN110289898A (en) * 2019-07-18 2019-09-27 中国人民解放军空军预警学院 A kind of channel feedback method based on the perception of 1 bit compression in extensive mimo system
US20200220593A1 (en) * 2019-01-04 2020-07-09 Industrial Technology Research Institute Communication system and codec method based on deep learning and known channel state information
CN112564757A (en) * 2020-12-03 2021-03-26 西华大学 Deep learning 1-bit compression superposition channel state information feedback method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737032A (en) * 2018-05-22 2018-11-02 西华大学 A kind of compression superposition sequence C SI feedback methods
US20200220593A1 (en) * 2019-01-04 2020-07-09 Industrial Technology Research Institute Communication system and codec method based on deep learning and known channel state information
CN109687897A (en) * 2019-02-25 2019-04-26 西华大学 Superposition CSI feedback method based on the extensive mimo system of deep learning
CN110289898A (en) * 2019-07-18 2019-09-27 中国人民解放军空军预警学院 A kind of channel feedback method based on the perception of 1 bit compression in extensive mimo system
CN112564757A (en) * 2020-12-03 2021-03-26 西华大学 Deep learning 1-bit compression superposition channel state information feedback method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
卿朝进 (QING Chaojin) et al.: "Partial-support-set-assisted compressed sensing CSI feedback method" (部分支撑集辅助的压缩感知CSI反馈方法), Acta Electronica Sinica (《电子学报》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114826343A (en) * 2022-04-26 2022-07-29 西华大学 Superimposed channel state information feedback method and device for AI enabling data emptying
CN117807383A (en) * 2024-03-01 2024-04-02 深圳市大数据研究院 Channel state information recovery method and device, equipment and storage medium

Also Published As

Publication number Publication date
CN113726376B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN109687897B (en) Superposition CSI feedback method based on deep learning large-scale MIMO system
CN113726376B (en) 1bit compression superposition CSI feedback method based on feature extraction and mutual-difference fusion
JP4966190B2 (en) Method and apparatus for transmitting a signal in a multi-antenna system, signal and method for estimating a corresponding transmission channel
CN102843222B (en) Broadband analog channel information feedback method
CN112564757A (en) Deep learning 1-bit compression superposition channel state information feedback method
CN102474314B (en) Precoding method and device
CN103297111A (en) Multiple input multiple output (MIMO) uplink multi-user signal detection method, detection device and receiving system
CN110166089B (en) Superposition coding CSI feedback method based on deep learning
CN101207600A (en) Method, system and apparatus for MIMO transmission of multi transmitting antennas
CN109818645B (en) Superposition CSI feedback method based on signal detection and support set assistance
CN104158573A (en) Precoding method and precoding system for eliminating interference
CN104065462B (en) There is the process of signal transmission method of diversity gain under relaying interference channel
Oyerinde Comparative study of Overloaded and Underloaded NOMA schemes with two Multiuser Detectors
CN111193535B (en) Feedback method based on ELM superposition CSI in FDD large-scale MIMO system
CN105610484A (en) Large-scale MIMO iterative receiving method with low complexity
CN106230754B (en) A kind of interference elimination-matched filtering channel estimation methods of extensive mimo system
CN102075220B (en) Channel estimating device and method based on time domain noise reduction
TW201944745A (en) Feedback method for use as a channel information based on deep learning
Peng LLL aided MIMO detection algorithm based on BP neural network optimization
Liu et al. Robust and Energy Efficient Sparse-Coded OFDM-DCSK System via Matrix Recovery
CN100544327C (en) A kind of detector for serial interference deletion in minimum mean square error of low complex degree
CN101286805B (en) Detecting method and apparatus for multiple transmitted signal
Wei et al. Optimal binary/quaternary adaptive signature design for code-division multiplexing
Sreesudha et al. An efficient channel estimation for BER improvement of MC CDMA system using KGMO algorithm
Zhang et al. Continuous Online Learning-based CSI Feedback in Massive MIMO Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211130

Assignee: Chengdu Suyouyun Information Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000030

Denomination of invention: A 1-bit compressed overlay CSI feedback method based on feature extraction and mutual anisotropy fusion

Granted publication date: 20220520

License type: Common License

Record date: 20231201

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211130

Assignee: Chengdu Yingling Feifan Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000032

Denomination of invention: A 1-bit compressed overlay CSI feedback method based on feature extraction and mutual anisotropy fusion

Granted publication date: 20220520

License type: Common License

Record date: 20231212

Application publication date: 20211130

Assignee: Sichuan Shenglongxing Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000031

Denomination of invention: A 1-bit compressed overlay CSI feedback method based on feature extraction and mutual anisotropy fusion

Granted publication date: 20220520

License type: Common License

Record date: 20231211
