US20170163445A1 - Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding


Info

Publication number
US20170163445A1
Authority
US
United States
Prior art keywords
channel estimates
channel
processor
decimated
estimates
Prior art date
Legal status
Granted
Application number
US15/010,486
Other versions
US9692616B1
Inventor
Raul H. Etkin
Bhaskar Nallapureddy
Jungwon Lee
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US15/010,486 (US9692616B1)
Assigned to SAMSUNG ELECTRONICS CO., LTD. Assignors: LEE, JUNGWON; ETKIN, RAUL; NALLAPUREDDY, BHASKAR
Priority to KR1020160054099A (KR102481008B1)
Priority to TW105121836A (TWI704778B)
Priority to CN201611033869.1A (CN106817325B)
Publication of US20170163445A1
Application granted
Publication of US9692616B1
Legal status: Active

Classifications

    • H04L25/0232: Channel estimation using sounding signals, with direct estimation from sounding signals, with extension to other symbols by interpolation between sounding signals
    • H04L25/0222: Estimation of channel variability, e.g. coherence bandwidth, coherence time, fading frequency
    • H04L25/0202: Channel estimation
    • H04L1/0042: Encoding specially adapted to other signal generation operations, e.g. in order to reduce transmit distortions or jitter, or to improve signal shape
    • H04L1/0057: Block codes
    • H04L27/26: Modulated-carrier systems using multi-frequency codes
    • H04L65/60: Network streaming of media packets

Definitions

  • the present disclosure relates generally to an apparatus for and a method of channel estimation, and more particularly, to an apparatus for and a method of compressing channel estimates to reduce buffer size.
  • a typical communication receiver using coherent detection has a channel estimation (CE) block that estimates various channel gains affecting a signal as it travels from a transmitter to a receiver.
  • the coherent receiver uses CE for data demodulation.
  • the channel estimates produced by this block are often stored in a CE output buffer to be used by other blocks (e.g. a symbol detector) in the receiver chain.
  • the number of stored channel estimates increases as communication protocols require support for multiple antennas and modulation carriers, such as increased bandwidth (e.g. multiple frequency carriers) and increased number of transmit and receive antennas. This causes the CE output buffer size to increase and occupy a significant portion of an integrated circuit (IC).
  • an apparatus is provided that includes a receiver configured to receive a signal over channels; a channel estimator configured to estimate the channels as first channel estimates; and a processor configured to compress the first channel estimates to reduce buffer size.
  • a method includes receiving, by a receiver, a signal over channels; estimating, by a channel estimator, the channels as first channel estimates; and compressing, by a processor, the first channel estimates to reduce buffer size.
  • a method includes receiving a signal over channels by a receiver; estimating the channels as first channel estimates by a processor; selecting a predetermined subset of the first channel estimates as decimated channel estimates by the processor; generating second channel estimates as predicted channel estimates of the first channel estimates by the processor using the decimated channel estimates; computing channel estimation errors between the first channel estimates and the predicted channel estimates by the processor; compressing the channel estimation errors by the processor; buffering the decimated channel estimates and the compressed channel estimation errors by the processor; computing the predicted channel estimates by the processor; and approximating the first channel estimates by the processor using the predicted channel estimates and the compressed channel estimation errors.
  • FIG. 1 is a block diagram of an apparatus for channel reception, channel estimation buffer compression via decimation, prediction, error compression, and estimated channel transmission according to an embodiment of the present disclosure
  • FIG. 2 is a block diagram of a processor of FIG. 1 according to an embodiment of the present disclosure
  • FIG. 3 is a block diagram of a processor of FIG. 1 according to an embodiment of the present disclosure
  • FIG. 4 is an illustration of a decimated channel according to an embodiment of the present disclosure
  • FIG. 5 is an illustration of original channel estimates, decimated channel estimates, predicted channel estimates, and errors between original channel estimates and predicted channel estimates according to an embodiment of the present disclosure
  • FIG. 6 is a flowchart of a method of receiving a signal over channels, reducing buffer requirements for storing decimated channel estimates and compressed channel estimation errors according to an embodiment of the present disclosure
  • FIG. 7 is a flowchart of a method of receiving a signal over channels, compressing a channel estimation buffer via decimation, prediction, error compression, approximating original channel estimates from decimated channel estimates and compressed errors according to an embodiment of the present disclosure, and transmitting approximated channel estimates.
  • Although the terms first, second, etc. may be used for describing various elements, the structural elements are not restricted by these terms, which are only used to distinguish one element from another. For example, without departing from the scope of the present disclosure, a first structural element may be referred to as a second structural element, and vice versa. As used herein, the term “and/or” includes any and all combinations of one or more associated items.
  • the channel estimates exhibit various forms of correlation across frequency, time, and space (antennas). These correlations can be exploited to reduce buffering requirements via data compression techniques.
  • a buffer for storing channel estimates takes up a significant portion of an integrated circuit (IC). This will increase as the number of component carriers (CCs) and antennas increase. However, channel estimates exhibit correlation over frequency and time, which may be used to reduce buffering requirements via a compression algorithm.
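  • The buffer-size motivation can be made concrete with a small worked example. All numbers below (allocation size, bit-widths, decimation factor) are illustrative assumptions, not values from the present disclosure; the sketch only shows how storing decimated estimates at full precision plus low-bit-width errors shrinks the buffer.

```python
# Assumed parameters: K subcarriers, complex estimates with 16-bit
# real/imaginary parts, decimation factor DF = 2, 6-bit error components.
K, full_bits, DF, err_bits = 1200, 16, 2, 6

# Uncompressed: every estimate stored at full precision (real + imag).
uncompressed = K * 2 * full_bits

# Compressed: decimated estimates at full precision, plus the extra
# estimate at subcarrier K-1, plus compressed errors for the rest.
n_decimated = K // DF + 1
n_errors = K - n_decimated
compressed = n_decimated * 2 * full_bits + n_errors * 2 * err_bits

print(uncompressed, compressed, round(uncompressed / compressed, 2))
```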
  • Data compression may be suitable for various applications, such as an audio file, an image file, a video file, a text file, and a database.
  • Data compression techniques may be classified in various ways including, but not limited to, lossless vs. lossy, universal vs. non-universal, one dimensional (1D) video vs. two dimensional (2D) video, and online vs. offline.
  • Data compression codes and coding methods include Huffman coding, arithmetic coding, Lempel-Ziv, joint photographic experts group (JPEG), moving picture experts group audio layer 3 (MP3), and International Telecommunication Union (ITU) video compression standard H.264 (also known as moving picture experts group 4 (MPEG-4)).
  • Data compression methods are applicable to different applications based on factors such as statistical properties of the data source to be compressed, compression performance, implementation complexity, reconstruction quality/distortion metrics, and causality.
  • FIG. 1 is a block diagram of an apparatus 100 according to an embodiment of the present disclosure.
  • the apparatus 100 compresses channel estimates to reduce buffer size.
  • the apparatus 100 also compresses channel estimates produced by a channel estimation method, thus decoupling the roles of channel estimation and data compression.
  • the apparatus 100 may be part of, but is not limited to, a long term evolution (LTE) system that uses orthogonal frequency-division multiplexing (OFDM).
  • the apparatus 100 may apply to other communications systems and other multiple access methods.
  • the apparatus 100 includes a receiver 101 , a channel estimator 103 , and a processor 105 .
  • the apparatus 100 provides low implementation complexity, no error propagation, random access for decompression, low distortion, and flexibility.
  • the receiver 101 receives a signal over channels.
  • the channel estimator 103 estimates the channels as first channel estimates.
  • the processor 105 compresses the channel estimates to reduce buffer size.
  • the channel estimator 103 estimates channels a first time (e.g. “first channel estimates”).
  • the set of first channel estimates in OFDM symbol l is denoted by H_l.
  • the first channel estimates may be stored with a finite precision format (e.g. fixed point with a given bit-width), but are not limited thereto.
  • channel estimates may be given as a vector over frequency subcarriers.
  • channel estimates may span several dimensions in addition to frequency, such as time (e.g. OFDM symbol index), and space (e.g. transmit and receive antennas).
  • FIG. 2 is a block diagram of the processor 105 of FIG. 1 according to an embodiment of the present disclosure.
  • the processor 105 is configured to perform channel decimation 203 , channel prediction 205 , error computation 207 , compression 209 , and buffering 211 .
  • the processor 105 provides the low implementation complexity, no error propagation, random access for decompression, low distortion, and flexibility of apparatus 100 of FIG. 1 .
  • Decimation 203 is performed on the result of channel estimation.
  • Decimation 203 is accomplished by selecting a subset of the first channel estimates (e.g. “decimated channel estimates”).
  • the decimated channel estimates may be based on a predetermined representation (e.g. fixed point with a given bit-width).
  • the choice of the decimated channel estimates, which are used for prediction 205, enables the use of prediction methods with low implementation complexity while achieving small prediction error.
  • the choice of decimated channel estimates allows random access during decompression, as described in more detail below with reference to FIG. 3 .
  • The subset selected by decimation 203 is represented in Equation (1) as follows:

    H_l^D = { h_l[k] : k mod DF = 0 } ∪ { h_l[K−1] }    (1)
  • where the decimation factor DF is a constant, and the allocation size K may be, but is not limited to, a multiple of DF.
  • decimation 203 may be accomplished by selecting every other one of the first channel estimates produced by channel estimation (i.e. DF = 2), and selecting the last of the first channel estimates in the allocation, at the subcarrier with index K−1, as illustrated in FIG. 4 and described below.
  • the set H_l^D is stored or buffered 211 as described below.
  • Decimation 203 may produce decimated channel estimates that are uniformly spaced or non-uniformly spaced. In addition, the decimated channel estimates need not be selected at the end of an allocation to avoid extrapolation during prediction.
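  • As a minimal sketch of uniform decimation as described above (every DF-th estimate, always keeping the allocation edge at index K−1), the selection can be written as follows; the function name is illustrative:

```python
def decimate(h, DF):
    """Select every DF-th channel estimate plus the last one (index K-1),
    mirroring the uniform decimation described above."""
    K = len(h)
    idx = list(range(0, K, DF))
    if idx[-1] != K - 1:          # always keep the allocation edge
        idx.append(K - 1)
    return idx, [h[k] for k in idx]

# 12 estimates (indices 0 to 11, as in FIG. 4) with DF = 2.
idx, hd = decimate([complex(k, -k) for k in range(12)], 2)
print(idx)  # [0, 2, 4, 6, 8, 10, 11]
```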
  • Prediction 205 is performed on the result of decimation 203 .
  • Prediction 205 generates second channel estimates (e.g. “predicted channel estimates”) of the first channel estimates using the decimated channel estimates.
  • Predictor 205 may be, but is not limited to, a linear interpolator or a minimum mean square error (MMSE) estimator.
  • The predicted channel estimates are represented by Equation (2) as follows:

    ĥ_l[k] = f_k(H_l^D),  k = 0, 1, …, K−1    (2)

    where f_k(·) denotes the prediction function, e.g. linear interpolation between the decimated estimates nearest to subcarrier k.
  • the predicted channel estimates may be represented with higher precision than the first channel estimates, since they are not buffered 211 as described below.
  • Prediction 205 of FIG. 2 may use any suitable estimation method. For example, if second order statistics of the channel estimates are available, they may be used to derive MMSE estimates. Alternatively, instead of linear (e.g. 1 st order) interpolation, alternative embodiments may use n th order polynomial interpolation. If required, in addition to interpolation, other embodiments may use extrapolation in prediction 205 .
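  • A sketch of prediction 205 as a 1st-order (linear) interpolator follows; the function names are illustrative, and MMSE or higher-order polynomial predictors could be substituted as noted above:

```python
def predict(idx, hd, K):
    """Linearly interpolate predicted channel estimates at every
    subcarrier from the decimated estimates (a 1st-order predictor)."""
    pred = [0j] * K
    for (k0, h0), (k1, h1) in zip(zip(idx, hd), zip(idx[1:], hd[1:])):
        for k in range(k0, k1 + 1):
            t = (k - k0) / (k1 - k0)       # position within the segment
            pred[k] = (1 - t) * h0 + t * h1
    return pred

# A channel that is linear in k is predicted exactly by linear
# interpolation, so the prediction error is zero here.
h = [complex(k, 2 * k) for k in range(12)]
idx = [0, 2, 4, 6, 8, 10, 11]
pred = predict(idx, [h[k] for k in idx], 12)
print(max(abs(p - x) for p, x in zip(pred, h)))
```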
  • Error computation 207 is performed on the results of channel estimation and prediction 205 .
  • Error computation 207 receives the first channel estimates and the predicted channel estimates and computes channel estimation errors that include, but are not limited to, the difference between the first channel estimates and the predicted channel estimates.
  • Error computation 207 generates the difference between the first channel estimates and the predicted channel estimates according to Equation (3) as follows:

    e_l[k] = h_l[k] − ĥ_l[k],  k = 0, 1, …, K−1    (3)
  • the set of errors may be denoted by E_l.
  • These errors are buffered 211, as described below, in a finite precision format, for example fixed point representation with bit-width BitW.
  • the errors may exceed the maximum representation range with bit-width BitW, in which case they may be rounded and saturated.
  • Rounded and saturated errors are denoted by ẽ_l, and the corresponding set by Ẽ_l.
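  • The rounding and saturation of an error component to bit-width BitW can be sketched as follows; the quantization step `scale` is an assumed design parameter, not specified in the disclosure:

```python
def quantize_error(e, bitw, scale=1.0):
    """Round a real error component to a signed fixed-point value with
    bit-width `bitw`, saturating at the representable range."""
    lo, hi = -(2 ** (bitw - 1)), 2 ** (bitw - 1) - 1
    q = round(e / scale)              # rounding to the nearest step
    return max(lo, min(hi, q)) * scale  # saturation to [lo, hi]

# Errors inside the range survive rounding; outliers saturate.
print(quantize_error(3.2, 4))     # 3.0
print(quantize_error(100.0, 4))   # 7.0 (saturated at 2**3 - 1)
print(quantize_error(-100.0, 4))  # -8.0 (saturated at -2**3)
```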
  • Because prediction 205 uses decimated channel estimates instead of reconstructed channel estimates, there is no error propagation. That is, if one predicted channel estimate includes an error, that error does not affect other predicted channel estimates.
  • If prediction 205 generates predicted channel estimates for the first channel estimates not selected as decimated channel estimates, the errors computed by error computation 207 are small and have a smaller dynamic range than the first channel estimates. As a result, the errors can be compressed using fewer bits for a given decompression approximation quality target, as described in more detail below with reference to FIG. 3. This results in compression gains.
  • Compression 209 is performed on the results of error computation 207 .
  • Compression 209 compresses the channel estimation errors.
  • Compression 209 may encode the channel estimation errors using a predetermined representation (e.g., fixed point with a given bit-width, quantization compression, lossless compression, or lossy compression), but is not limited thereto.
  • channel estimation errors may be encoded in pseudo floating point (PFLP) format to obtain a larger dynamic range.
  • Buffering 211 is performed on the results of decimation 203 and compression 209.
  • Buffering 211 stores the decimated channel estimates and the compressed channel estimation errors.
  • the compressed channel estimation errors may be decompressed and used with the decimated channel estimates to approximate the first channel estimates.
  • the first channel estimates may be buffered with the same representation after decimation. In an embodiment of the present disclosure, the first channel estimates may be buffered with a different representation after decimation 203, for example by varying the bit-width or by exchanging fixed point for floating point.
  • the compressed channel estimation errors may be buffered with a different representation (e.g. a different bit-width) depending on the prediction error statistics or distance with respect to the decimated channel estimates. They may also be buffered using floating point representation even in cases where the decimated channel estimates are stored in fixed point representation, or any other format.
  • block pseudo floating point (BPFLP) format may be used to represent the decimated channel estimates and compressed channel estimation errors in buffering 211, to cover a larger dynamic range.
  • a common exponent is used for a block of decimated channel estimates or compressed channel estimation errors.
  • the exponent may also be shared across real and imaginary parts of a complex number.
  • exponent sharing in PFLP is extended to a block of resource elements (REs).
  • a block may be a group of consecutive REs in the frequency domain within an OFDM symbol, in which case the block size B is defined as the number of consecutive REs in the frequency domain.
  • the performance of BPFLP may be good as long as B is smaller than the coherence bandwidth of the channel.
  • the block can also include REs of nearby OFDM symbols.
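  • A sketch of BPFLP encoding as described above (one shared exponent per block of values, fixed-width mantissas) follows; the exact bit layout and the exponent search are assumptions, not the disclosure's precise format:

```python
def bpflp_encode(block, mant_bits):
    """Encode a block of real values with one shared exponent and
    fixed-width mantissas (block pseudo floating point sketch)."""
    peak = max(abs(v) for v in block)
    exp = 0
    # Grow the shared exponent until the peak value fits the mantissa range.
    while peak / (2 ** exp) > 2 ** (mant_bits - 1) - 1:
        exp += 1
    mants = [round(v / 2 ** exp) for v in block]
    return exp, mants

def bpflp_decode(exp, mants):
    """Reconstruct the block from the shared exponent and mantissas."""
    return [m * 2 ** exp for m in mants]

# A block with one large value forces a shared exponent; small values
# in the same block lose some precision in exchange for dynamic range.
exp, mants = bpflp_encode([100.0, -40.0, 3.0, 7.0], 6)
print(exp, bpflp_decode(exp, mants))
```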
  • FIG. 3 is a block diagram of a processor 105 of FIG. 1 according to an embodiment of the present disclosure.
  • the processor 105 of FIG. 3 includes all of the operations of FIG. 2 plus decompression 301 and approximation 303 .
  • the operations in FIG. 3 that are in common with the operations in FIG. 2 operate in the same manner. Thus, descriptions of the operations in FIG. 3 that are in common with operations in FIG. 2 are not repeated below.
  • Decompression 301 is performed on the decimated channel estimates and compressed channel estimation errors stored by buffering 211 for a set of first channel estimates.
  • Decompression 301 uses the decimated channel estimates to compute the predicted channel estimates according to Equation (2) above.
  • Approximation 303 is performed on the result of decompression 301 .
  • Approximation 303 uses the predicted channel estimates generated by decompression 301 with the compressed channel estimation errors Ẽ_l to approximate the first channel estimates according to Equation (4) as follows:

    h̃_l[k] = ĥ_l[k] + ẽ_l[k],  k = 0, 1, …, K−1    (4)
  • the result of approximation 303 includes the approximate channel estimates and the decimated channel estimates H_l^D.
  • h̃_l[k] may differ from h_l[k] due to rounding and saturation of the errors prior to buffering 211.
  • Distortion and compression performance may be traded off by varying the bit-width BitW.
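  • The full compress/decompress path of FIG. 3 can be sketched end to end as follows; the parameter choices (DF = 2, quantization step 0.25) and all helper names are illustrative assumptions:

```python
def interp(idx, hd, K):
    """Linear interpolation of the decimated estimates (Equation (2))."""
    pred = [0j] * K
    for (k0, h0), (k1, h1) in zip(zip(idx, hd), zip(idx[1:], hd[1:])):
        for k in range(k0, k1 + 1):
            t = (k - k0) / (k1 - k0)
            pred[k] = (1 - t) * h0 + t * h1
    return pred

def compress(h, DF, step):
    """Decimate, predict, and quantize the per-subcarrier errors."""
    K = len(h)
    idx = list(range(0, K, DF)) + ([K - 1] if (K - 1) % DF else [])
    hd = [h[k] for k in idx]
    pred = interp(idx, hd, K)
    errs = [complex(round((h[k] - pred[k]).real / step) * step,
                    round((h[k] - pred[k]).imag / step) * step)
            for k in range(K)]
    return idx, hd, errs

def decompress(idx, hd, errs, K):
    """Re-predict from the buffered decimated estimates and add the
    buffered errors (Equation (4))."""
    pred = interp(idx, hd, K)
    return [pred[k] + errs[k] for k in range(K)]

h = [complex(k * k, -k) * 0.1 for k in range(12)]  # a curved test channel
idx, hd, errs = compress(h, 2, 0.25)
approx = decompress(idx, hd, errs, 12)
err = max(abs(a - x) for a, x in zip(approx, h))
print(err <= 0.25)
```

Because prediction always runs from the stored decimated estimates, any block of subcarriers can be decompressed independently, which is the random-access property noted above.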
  • FIG. 4 is an illustration of a decimated channel 401 according to an embodiment of the present disclosure.
  • each box represents a first channel estimate (e.g. 0 to 11 ).
  • the underlined boxes are those selected by decimation 203 in FIG. 2 or FIG. 3.
  • FIG. 5 is an illustration of first channel estimates, decimated channel estimates, predicted channel estimates, and errors between first channel estimates and predicted channel estimates according to an embodiment of the present disclosure.
  • FIG. 6 is a flowchart of a method of receiving a signal over channels, reducing buffer requirements for storing decimated channel estimates and compressed channel estimation errors according to an embodiment of the present disclosure.
  • the method compresses channel estimates to reduce buffer size.
  • the method also compresses channel estimates produced by a channel estimation method, thus decoupling the roles of channel estimation and data compression.
  • the method may be part of, but is not limited to, a long term evolution (LTE) system that uses orthogonal frequency-division multiplexing (OFDM).
  • the method may apply to other communications systems and other multiple access methods.
  • the method provides low implementation complexity, no error propagation, random access for decompression, low distortion, and flexibility.
  • a signal over channels is received by a receiver.
  • channels are estimated (hereinafter “first channel estimates”) by channel estimation (e.g. by a channel estimator 103 of FIG. 1 ).
  • the set of first channel estimates in OFDM symbol l is denoted by H_l.
  • the first channel estimates may be stored with a finite precision format (e.g. fixed point with a given bit-width), but are not limited thereto.
  • a subset of the first channel estimates (hereinafter “decimated channel estimates”) is selected by decimation (e.g. decimation 203 of FIG. 2 ).
  • the decimated channel estimates may be based on a predetermined representation (e.g. fixed point with a given bit-width).
  • the choice of the decimated channel estimates, which are used to predict channel estimates, enables the use of prediction methods with low implementation complexity while achieving small prediction error.
  • the choice of decimated channel estimates allows random access during decompression, as described in more detail above with reference to FIG. 3 .
  • the subset selected by decimation 203 is represented in Equation (1) above.
  • second channel estimates (hereinafter “predicted channel estimates”) of the first channel estimates are predicted using the decimated channel estimates by prediction (e.g. prediction 205 of FIG. 2 ).
  • the method of predicting second channel estimates may be, but is not limited to, a linear interpolation or MMSE estimation.
  • predicted channel estimates are represented by Equation (2) above.
  • the predicted channel estimates may be represented with higher precision than the first channel estimates, since they are not buffered.
  • channel estimation errors are computed by error computation (e.g. error computation 207 of FIG. 2 ) using the first channel estimates and the predicted channel estimates, where the channel estimation errors include, but are not limited to, the difference between the first channel estimates and the predicted channel estimates.
  • the difference between the first channel estimates and the predicted channel estimates is generated in accordance with Equation (3) above.
  • the set of errors may be denoted by E_l.
  • These errors are buffered (e.g. buffering 211 of FIG. 2 ), as described below, in a finite precision format, for example fixed point representation with bit-width BitW.
  • the errors may exceed the maximum representation range with bit-width BitW, in which case they may be rounded and saturated.
  • Rounded and saturated errors are denoted by ẽ_l, and the corresponding set by Ẽ_l.
  • Because second channel estimates are predicted using decimated channel estimates instead of reconstructed channel estimates, there is no error propagation. That is, if one predicted channel estimate includes an error, that error does not affect other predicted channel estimates.
  • When predicted channel estimates are generated for the first channel estimates not selected as decimated channel estimates, the errors computed are small and have a smaller dynamic range as compared to the first channel estimates.
  • the errors may be compressed using fewer bits for a given decompression approximation quality target as described in more detail above with reference to FIG. 3 . This results in compression gains.
  • the channel estimation errors are compressed by compression (e.g. compression 209 of FIG. 2 ).
  • the compression method may be, but is not limited to, encoding the channel estimation errors using a predetermined representation (e.g., fixed point with a given bit-width, quantization compression, lossless compression, or lossy compression).
  • the decimated channel estimates and the compressed channel estimation errors are buffered (e.g. buffering 211 of FIG. 2 ).
  • the compressed channel estimation errors may be decompressed and used with the decimated channel estimates to approximate the first channel estimates.
  • FIG. 7 is a flowchart of a method of receiving a signal over channels, compressing a channel estimation buffer via decimation, prediction, error compression, approximating original channel estimates from decimated channel estimates and compressed errors according to an embodiment of the present disclosure, and transmitting approximated channel estimates.
  • FIG. 7 includes all of the operations of FIG. 6 plus computing predicted channel estimates, approximating original channel estimates, and transmitting the approximated channel estimates.
  • the operations in FIG. 7 that are in common with the operations in FIG. 6 operate in the same manner. Thus, descriptions of the operations in FIG. 7 that are in common with operations in FIG. 6 are not repeated below.
  • predicted channel estimates are computed from buffered decimated channel estimates and compressed channel estimation errors concerning a set of first channel estimates according to Equation (2) above by decompression (e.g. decompression 301 of FIG. 3 ) in 701 .
  • the channel estimates and compressed channel estimation errors may be received after being buffered (e.g. buffering 211 of FIG. 2 ).
  • the first channel estimates are approximated from the predicted channel estimates generated in 701 and the compressed channel estimation errors Ẽ_l according to Equation (4) above, by approximation (e.g. approximation 303 of FIG. 3 ) in 703.
  • the result of 703 includes the approximate channel estimates and the decimated channel estimates H_l^D.
  • h̃_l[k] may differ from h_l[k] due to rounding and saturation of the errors prior to storage. Distortion and compression performance may be traded off by varying the bit-width BitW.
  • the approximated first channel estimates are transmitted by a transmitter of the apparatus 100 of FIG. 1 .


Abstract

An apparatus and a method are provided. The apparatus includes a receiver configured to receive a signal over channels; a channel estimator configured to estimate the channels as first channel estimates; and a processor configured to compress the first channel estimates to reduce buffer size. The method includes receiving, by a receiver, a signal over channels; estimating, by a channel estimator, the channels as first channel estimates; and compressing, by a processor, the first channel estimates to reduce buffer size.

Description

    PRIORITY
  • This application claims priority under 35 U.S.C. §119(e) to a U.S. Provisional Patent Application filed on Dec. 2, 2015 in the United States Patent and Trademark Office and assigned Ser. No. 62/262,072, the entire contents of which are incorporated herein by reference.
  • DETAILED DESCRIPTION OF EMBODIMENTS OF THE PRESENT DISCLOSURE
  • Hereinafter, embodiments of the present disclosure are described in detail with reference to the accompanying drawings. It should be noted that the same elements will be designated by the same reference numerals although they are shown in different drawings. In the following description, specific details such as detailed configurations and components are merely provided to assist the overall understanding of the embodiments of the present disclosure. Therefore, it should be apparent to those skilled in the art that various changes and modifications of the embodiments described herein may be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. The terms described below are terms defined in consideration of the functions in the present disclosure, and may be different according to users, intentions of the users, or customs. Therefore, the definitions of the terms should be determined based on the contents throughout the specification.
  • The present disclosure may have various modifications and various embodiments, among which embodiments are described below in detail with reference to the accompanying drawings. However, it should be understood that the present disclosure is not limited to the embodiments, but includes all modifications, equivalents, and alternatives within the spirit and the scope of the present disclosure.
  • Although terms including an ordinal number such as first, second, etc. may be used for describing various elements, the structural elements are not restricted by the terms. The terms are only used to distinguish one element from another element. For example, without departing from the scope of the present disclosure, a first structural element may be referred to as a second structural element. Similarly, the second structural element may also be referred to as the first structural element. As used herein, the term “and/or” includes any and all combinations of one or more associated items.
  • The terms used herein are merely used to describe various embodiments of the present disclosure but are not intended to limit the present disclosure. Singular forms are intended to include plural forms unless the context clearly indicates otherwise. In the present disclosure, it should be understood that the terms “include” or “have” indicate existence of a feature, a number, a step, an operation, a structural element, parts, or a combination thereof, and do not exclude the existence or probability of additional one or more other features, numerals, steps, operations, structural elements, parts, or combinations thereof.
  • Unless defined differently, all terms used herein have the same meanings as those understood by a person skilled in the art to which the present disclosure belongs. Such terms as those defined in a generally used dictionary are to be interpreted to have the same meanings as the contextual meanings in the relevant field of art, and are not to be interpreted to have ideal or excessively formal meanings unless clearly defined in the present disclosure.
  • The channel estimates exhibit various forms of correlation across frequency, time, or space (antennas). These correlations can be exploited to reduce buffering requirements via data compression techniques.
  • A buffer for storing channel estimates takes up a significant portion of an integrated circuit (IC). This will increase as the number of component carriers (CCs) and antennas increase. However, channel estimates exhibit correlation over frequency and time, which may be used to reduce buffering requirements via a compression algorithm.
  • Data compression may be suitable for various applications, such as an audio file, an image file, a video file, a text file, and a database. Data compression techniques may be classified in various ways including, but not limited to, lossless vs. lossy, universal vs. non-universal, one dimensional (1D) vs. two dimensional (2D), and online vs. offline.
  • Data compression codes and coding methods include Huffman coding, arithmetic coding, Lempel-Ziv, joint photographic experts group (JPEG), moving picture experts group audio layer 3 (MP3), and International Telecommunication Union (ITU) video compression standard H.264 (also known as moving picture experts group 4 (MPEG-4)). Data compression methods are applicable to different applications based on factors such as statistical properties of the data source to be compressed, compression performance, implementation complexity, reconstruction quality/distortion metrics, and causality.
  • Data compression of channel estimates has very specific requirements that limit the applicability of typical methods. While some distortion may be tolerated, the distortion metrics to be used (e.g. frame error rates) differ from those used in audio or video compression.
  • FIG. 1 is a block diagram of an apparatus 100 according to an embodiment of the present disclosure. The apparatus 100 compresses channel estimates to reduce buffer size. The apparatus 100 also compresses channel estimates produced by a channel estimation method, thus decoupling the roles of channel estimation and data compression. The apparatus 100 may be part of, but is not limited to, a long term evolution (LTE) system that uses orthogonal frequency-division multiplexing (OFDM). The apparatus 100 may apply to other communications systems and other multiple access methods.
  • Referring to FIG. 1, the apparatus 100 includes a receiver 101, a channel estimator 103, and a processor 105. The apparatus 100 provides low implementation complexity, no error propagation, random access for decompression, low distortion, and flexibility.
  • The receiver 101 receives a signal over channels. The channel estimator 103 estimates the channels as first channel estimates. The processor 105 compresses the channel estimates to reduce buffer size.
  • The channel estimator 103 estimates channels a first time (e.g. “first channel estimates”). According to an embodiment of the present disclosure, the first channel estimates obtained by the channel estimator 103 may be represented as, but are not limited to, a vector with complex entries h_l[k] for k = 0, . . . , K−1, where k is a frequency subcarrier index and l is an OFDM symbol index. The set of first channel estimates in OFDM symbol l is denoted by H_l. The first channel estimates may be stored in a finite precision format (e.g. fixed point with a given bit-width), but are not limited thereto.
  • According to an embodiment of the present disclosure, channel estimates may be given as a vector over frequency subcarriers. In general, channel estimates may span several dimensions in addition to frequency, such as time (e.g. OFDM symbol index), and space (e.g. transmit and receive antennas).
  • FIG. 2 is a block diagram of the processor 105 of FIG. 1 according to an embodiment of the present disclosure.
  • Referring to FIG. 2, the processor 105 is configured to perform channel decimation 203, channel prediction 205, error computation 207, compression 209, and buffering 211. The processor 105 provides the low implementation complexity, no error propagation, random access for decompression, low distortion, and flexibility of the apparatus 100 of FIG. 1.
  • Decimation 203 is performed on the result of channel estimation. Decimation 203 is accomplished by selecting a subset of the first channel estimates (e.g. “decimated channel estimates”). The decimated channel estimates may be based on a predetermined representation (e.g. fixed point with a given bit-width). For decimation 203, the choice of the decimated channel estimates, which will be used for prediction 205, enables the use of prediction methods with low implementation complexity while achieving small prediction error. In addition, the choice of decimated channel estimates allows random access during decompression, as described in more detail below with reference to FIG. 3.
  • The subset selected by decimation 203 is represented in Equation (1) as follows:

  • H_l^D = { h_l[n·DF] : n = 0, 1, . . . , ⌊(K−1)/DF⌋ } ∪ { h_l[K−DF+1], . . . , h_l[K−1] },  (1)
  • where decimation factor DF is a constant, and allocation size K may be, but is not limited to, a multiple of DF. For example, if DF=2, decimation 203 is accomplished by selecting every other one of the first channel estimates produced by channel estimation, and selecting the last of the first channel estimates in the allocation at a subcarrier with index K−1, as illustrated in FIG. 4 and described below. The set H_l^D is stored or buffered 211 as described below.
  • Decimation 203 may produce decimated channel estimates that are uniformly spaced or not uniformly spaced (e.g. non-uniformly spaced). In addition, the decimated channel estimates need not be selected at the end of an allocation to avoid extrapolation during prediction.
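  • As an illustrative sketch only (not part of the claimed embodiments), the uniform index selection of Equation (1) may be expressed in Python; the function name `decimate` is chosen here for exposition:

```python
def decimate(h, DF):
    """Select the decimated subset H_l^D of Equation (1): every DF-th first
    channel estimate, plus the tail indices K-DF+1 .. K-1 of the allocation."""
    K = len(h)
    idx = list(range(0, K, DF))                               # n * DF for n = 0 .. floor((K-1)/DF)
    idx += [k for k in range(K - DF + 1, K) if k not in idx]  # tail of the allocation
    return idx, [h[k] for k in idx]

# DF = 2, K = 12 reproduces FIG. 4: every other estimate plus index K - 1 = 11.
idx, h_dec = decimate(list(range(12)), 2)
```

  • With DF = 3 and K = 12 the same rule selects indices 0, 3, 6, 9 together with the tail indices 10 and 11, so prediction never needs to extrapolate beyond the last decimated estimate.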
  • Prediction 205 is performed on the result of decimation 203. Prediction 205 generates second channel estimates (e.g. “predicted channel estimates”) of the first channel estimates using the decimated channel estimates. Prediction 205 may use, but is not limited to, a linear interpolator or a minimum mean square error (MMSE) estimator.
  • In an embodiment of the present disclosure where prediction 205 provides linear interpolation as illustrated in FIG. 5 described below, predicted channel estimates are represented by Equation (2) as follows:
  • ĥ_l[k+d] = (1 − d/DF)·h_l[k] + (d/DF)·h_l[k+DF], k = 0, DF, . . . , DF·⌊(K−1)/DF⌋,  (2)
  • where d = 1, 2, . . . , DF−1, and where ⌊x⌋ denotes the floor function of x.
  • While the first channel estimates may be represented with a predetermined finite precision, the predicted channel estimates may be represented with higher precision than the first channel estimates, since they are not buffered 211 as described below.
  • Prediction 205 of FIG. 2 may use any suitable estimation method. For example, if second order statistics of the channel estimates are available, they may be used to derive MMSE estimates. Alternatively, instead of linear (e.g. 1st order) interpolation, alternative embodiments may use nth order polynomial interpolation. If required, in addition to interpolation, other embodiments may use extrapolation in prediction 205.
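  • A minimal Python sketch of the linear interpolation of Equation (2) follows; holding the decimated estimates in a dict keyed by subcarrier index is an expository choice, not a required data layout:

```python
def predict_linear(h_dec, DF, K):
    """Sketch of Equation (2): linearly interpolate each non-decimated estimate
    ĥ_l[k+d] from its neighboring decimated estimates h_l[k] and h_l[k+DF].
    `h_dec` maps decimated subcarrier index -> channel estimate (real or complex).
    The tail indices K-DF+1 .. K-1 are decimated, so they need no prediction."""
    pred = {}
    for k in range(0, K - DF, DF):        # anchors that have a neighbor at k + DF
        for d in range(1, DF):
            pred[k + d] = (1 - d / DF) * h_dec[k] + (d / DF) * h_dec[k + DF]
    return pred

# On a channel that is exactly linear in k, interpolation is error-free:
h = list(range(12))                                  # h[k] = k, K = 12
h_dec = {k: h[k] for k in (0, 2, 4, 6, 8, 10, 11)}   # DF = 2 decimation as in FIG. 4
pred = predict_linear(h_dec, 2, 12)
```

  • For a real channel the interpolation residual at the non-decimated indices is what error computation 207 encodes.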
  • Error computation 207 is performed on the results of channel estimation and prediction 205. Error computation 207 receives the first channel estimates and the predicted channel estimates and computes channel estimation errors, which include, but are not limited to, the differences between the first channel estimates and the predicted channel estimates.
  • Error computation 207 generates the difference between the first channel estimates and the predicted channel estimates according to Equation (3) as follows:

  • e_l[k+d] = h_l[k+d] − ĥ_l[k+d], k = 0, DF, . . . , DF·⌊(K−1)/DF⌋,  (3)
  • where d=1, 2, . . . , DF−1.
  • The set of errors may be denoted by E_l. These errors are buffered 211, as described below, in a finite precision format, for example, fixed point representation with bit-width BitW. The errors may exceed the maximum representation range with bit-width BitW, in which case they may be rounded and saturated. Rounded and saturated errors are denoted by ẽ_l, and the corresponding set by Ẽ_l. After buffering 211 of the sets H_l^D and Ẽ_l, compression is completed.
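  • The error computation of Equation (3) together with the rounding and saturation to bit-width BitW may be sketched as follows; real-valued errors are used for brevity, and a complex implementation would apply the same clamping to the real and imaginary parts separately:

```python
def round_saturate(x, bitw):
    """Round to the nearest integer and clamp to the signed range representable
    with `bitw` bits: a sketch of storing errors in fixed point with bit-width BitW."""
    lo, hi = -(1 << (bitw - 1)), (1 << (bitw - 1)) - 1
    return max(lo, min(hi, round(x)))

def compute_errors(h, pred, bitw):
    """Equation (3): e_l[k] = h_l[k] - ĥ_l[k], then round/saturate before buffering."""
    return {k: round_saturate(h[k] - pred[k], bitw) for k in pred}

# Errors have a small dynamic range, so few bits suffice; out-of-range values saturate.
errs = compute_errors({1: 5.4}, {1: 3.0}, 4)   # 4-bit signed range is [-8, 7]
```

  • Varying `bitw` here is exactly the rate-distortion knob described above: fewer bits shrink the buffer but increase saturation and rounding distortion.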
  • Since prediction 205 uses decimated channel estimates instead of reconstructed channel estimates, there is no error propagation. That is, if one predicted channel estimate includes an error, that error does not affect other predicted channel estimates. By varying the number of bits used to represent channel estimation errors, it is possible to trade off compression performance and decompression quality (rate-distortion tradeoff) as described in more detail below with reference to FIG. 3. This enables flexibility to suit various communication channels, modulation and coding rates, and signal propagation environments.
  • When prediction 205 generates predicted channel estimates for the first channel estimates not selected as decimated channel estimates, the errors computed by error computation 207 are small and have a smaller dynamic range than the first channel estimates. As a result, the errors can be compressed using fewer bits for a given decompression approximation quality target as described in more detail below with reference to FIG. 3. This results in compression gains.
  • Compression 209 is performed on the results of error computation 207. Compression 209 compresses the channel estimation errors. Compression 209 may encode the channel estimation errors using a predetermined representation (e.g., fixed point with a given bit-width, quantization compression, lossless compression, or lossy compression), but is not limited thereto. In an embodiment of the present disclosure, channel estimation errors may be encoded in pseudo floating point (PFLP) format to obtain a larger dynamic range.
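  • A hedged sketch of a pseudo floating point (PFLP) encoding in the spirit described above: the real and imaginary mantissas share one exponent, extending the dynamic range beyond plain fixed point. The exact bit layout and exponent-selection rule here are assumptions for illustration, not the disclosure's specification:

```python
def pflp_encode(c, mant_bits, exp_bits):
    """Encode a complex error with signed real/imaginary mantissas that share a
    single exponent (illustrative PFLP-style format)."""
    m_max = (1 << (mant_bits - 1)) - 1       # largest signed mantissa value
    exp_max = (1 << exp_bits) - 1
    mag = max(abs(c.real), abs(c.imag))
    exp = 0
    while mag / (1 << exp) > m_max and exp < exp_max:
        exp += 1                             # grow shared exponent until both parts fit
    mr = round(c.real / (1 << exp))
    mi = round(c.imag / (1 << exp))
    return mr, mi, exp

def pflp_decode(mr, mi, exp):
    """Invert the encoding by scaling the shared-exponent mantissas back up."""
    return complex(mr * (1 << exp), mi * (1 << exp))

# A value too large for an 8-bit mantissa alone is captured via the exponent:
c_hat = pflp_decode(*pflp_encode(complex(300, 40), 8, 3))
```

  • Small errors keep exponent 0 and behave like plain fixed point, so the format only spends its exponent bits when the dynamic range actually requires it.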
  • Buffering 211 is performed on the result of decimation 203 and compression 209. Buffering 211 stores the decimated channel estimates and the compressed channel estimation errors. As described in more detail below with reference to FIG. 3, the compressed channel estimation errors may be decompressed and used with the decimated channel estimates to approximate the first channel estimates.
  • In an embodiment of the present disclosure, the first channel estimates may be buffered with the same representation after decimation. In an embodiment of the present disclosure, the first channel estimates may be buffered with a different representation after decimation 203, e.g. by varying the bit-width or changing from fixed point to floating point.
  • In addition, the compressed channel estimation errors may be buffered with a different representation (e.g. a different bit-width) depending on the prediction error statistics or distance with respect to the decimated channel estimates. They may also be buffered using floating point representation even in cases where the decimated channel estimates are stored in fixed point representation, or any other format.
  • Furthermore, block pseudo floating point (BPFLP) format may be used in buffering 211 to represent decimated channel estimates and compressed channel estimation errors, covering a larger dynamic range. In this format, a common exponent is used for a block of decimated channel estimates or compressed channel estimation errors. The exponent may also be shared across the real and imaginary parts of a complex number. In the BPFLP representation, the exponent sharing of PFLP is extended to a block of resource elements (REs). For example, a block may be a group of consecutive REs in the frequency domain within an OFDM symbol, in which case the block size B is defined as the number of consecutive REs in the frequency domain. The performance of BPFLP may be good as long as B is smaller than the coherence bandwidth of the channel. For slow fading channels, the block can also include REs of nearby OFDM symbols.
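  • The BPFLP idea of a per-block shared exponent may be sketched as follows; the block is a list of complex values standing in for B consecutive REs, and the power-of-two exponent-selection rule is an illustrative assumption:

```python
import math

def bpflp_encode(block, mant_bits):
    """Sketch of block pseudo floating point (BPFLP): one exponent is shared by a
    whole block of complex values (e.g. B consecutive REs in frequency), chosen
    so the largest real/imaginary component fits the signed mantissa range."""
    m_max = (1 << (mant_bits - 1)) - 1
    peak = max(max(abs(c.real), abs(c.imag)) for c in block)
    exp = math.ceil(math.log2(peak / m_max)) if peak > m_max else 0
    mants = [(round(c.real / (1 << exp)), round(c.imag / (1 << exp))) for c in block]
    return mants, exp

def bpflp_decode(mants, exp):
    """Rescale every mantissa pair in the block by the shared exponent."""
    return [complex(mr * (1 << exp), mi * (1 << exp)) for mr, mi in mants]

# With mant_bits = 8 the peak component 508 forces a shared exponent of 2:
decoded = bpflp_decode(*bpflp_encode([complex(100, 0), complex(508, 4)], 8))
```

  • Because one exponent serves the whole block, quantization error grows for the small values in a block with a large peak, which is why B should stay below the channel's coherence bandwidth, where neighboring estimates have similar magnitude.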
  • FIG. 3 is a block diagram of a processor 105 of FIG. 1 according to an embodiment of the present disclosure. The processor 105 of FIG. 3 includes all of the operations of FIG. 2 plus decompression 301 and approximation 303. The operations in FIG. 3 that are in common with the operations in FIG. 2 operate in the same manner. Thus, descriptions of the operations in FIG. 3 that are in common with operations in FIG. 2 are not repeated below.
  • Referring to FIG. 3, decompression 301 is performed on the decimated channel estimates and compressed channel estimation errors stored by buffering 211 for a set of first channel estimates. Decompression 301 uses the decimated channel estimates to compute the predicted channel estimates according to Equation (2) above.
  • Approximation 303 is performed on the result of decompression 301. Approximation 303 uses the predicted channel estimates generated by decompression 301 with the compressed channel estimation errors Ẽ_l to approximate the first channel estimates according to Equation (4) as follows:

  • h̃_l[k+d] = ĥ_l[k+d] + ẽ_l[k+d], k = 0, DF, . . . , DF·⌊(K−1)/DF⌋,  (4)
  • where d=1, 2, . . . , DF−1.
  • The result of approximation 303 includes the approximate channel estimates and the decimated channel estimates H_l^D. However, h̃_l[k] may differ from h_l[k] due to rounding and saturation of the errors prior to buffering 211. Distortion and compression performance may be traded off by varying the bit-width BitW.
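  • Decompression 301 and approximation 303 may be sketched end to end: the prediction of Equation (2) is recomputed from the buffered decimated estimates, and the buffered errors are added back per Equation (4). Losslessly stored errors are assumed here so the round trip is exact; with rounding and saturation the reconstruction would be approximate:

```python
def reconstruct(h_dec, errs, DF, K):
    """Sketch of Equations (2) and (4): repeat the linear prediction from the
    buffered decimated estimates, then add the buffered errors to approximate
    the first channel estimates."""
    h_approx = dict(h_dec)                 # decimated estimates pass through as-is
    for k in range(0, K - DF, DF):
        for d in range(1, DF):
            pred = (1 - d / DF) * h_dec[k] + (d / DF) * h_dec[k + DF]
            h_approx[k + d] = pred + errs.get(k + d, 0)
    return h_approx

# Round trip on h[k] = k**2 with DF = 2:
h = [k ** 2 for k in range(12)]
h_dec = {k: h[k] for k in (0, 2, 4, 6, 8, 10, 11)}
errs = {k: h[k] - (h[k - 1] + h[k + 1]) / 2 for k in (1, 3, 5, 7, 9)}
h_hat = reconstruct(h_dec, errs, 2, 12)
```

  • Note that each reconstructed estimate depends only on two decimated estimates and its own error, which is the structural reason there is no error propagation and why random access to any subcarrier is possible.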
  • FIG. 4 is an illustration of a decimated channel 401 according to an embodiment of the present disclosure. In FIG. 4, DF=2 and K=12.
  • Referring to FIG. 4, each box represents a first channel estimate (e.g. 0 to 11). The boxes that include an underline are selected by way of decimation 203 in FIG. 2 or 3.
  • FIG. 5 is an illustration of first channel estimates, decimated channel estimates, predicted channel estimates, and errors between first channel estimates and predicted channel estimates according to an embodiment of the present disclosure.
  • Referring to FIG. 5, linear prediction based on decimated channel estimates is illustrated, where DF=3.
  • FIG. 6 is a flowchart of a method of receiving a signal over channels, reducing buffer requirements for storing decimated channel estimates and compressed channel estimation errors according to an embodiment of the present disclosure. The method compresses channel estimates to reduce buffer size. The method also compresses channel estimates produced by a channel estimation method, thus decoupling the roles of channel estimation and data compression. The method may be part of, but is not limited to, a long term evolution (LTE) system that uses orthogonal frequency-division multiplexing (OFDM). The method may apply to other communications systems and other multiple access methods. The method provides low implementation complexity, no error propagation, random access for decompression, low distortion, and flexibility.
  • In 601, a signal over channels is received by a receiver.
  • In 603, channels are estimated (hereinafter “first channel estimates”) by channel estimation (e.g. by the channel estimator 103 of FIG. 1). According to an embodiment of the present disclosure, the first channel estimates obtained by the channel estimator 103 may be represented as, but are not limited to, a vector with complex entries h_l[k] for k = 0, . . . , K−1, where k is a frequency subcarrier index and l is an OFDM symbol index. The set of first channel estimates in OFDM symbol l is denoted by H_l. The first channel estimates may be stored in a finite precision format (e.g. fixed point with a given bit-width), but are not limited thereto.
  • In 605, a subset of the first channel estimates (hereinafter “decimated channel estimates”) is selected by decimation (e.g. decimation 203 of FIG. 2). The decimated channel estimates may be based on a predetermined representation (e.g. fixed point with a given bit-width). For decimation 203, the choice of the decimated channel estimates, which will be used to predict channel estimates, enables the use of prediction methods with low implementation complexity while achieving small prediction error. In addition, the choice of decimated channel estimates allows random access during decompression, as described in more detail above with reference to FIG. 3. The subset selected by decimation 203 is represented in Equation (1) above.
  • In 607, second channel estimates (hereinafter “predicted channel estimates”) of the first channel estimates are predicted using the decimated channel estimates by prediction (e.g. prediction 205 of FIG. 2). The method of predicting second channel estimates may be, but is not limited to, a linear interpolation or MMSE estimation. In an embodiment of the present disclosure where second channel estimates are predicted using linear interpolation as illustrated in FIG. 5 described above, predicted channel estimates are represented by Equation (2) above.
  • While the first channel estimates may be represented with a predetermined finite precision, the predicted channel estimates may be represented with higher precision than the first channel estimates, since they are not buffered.
  • In 609, channel estimation errors are computed by error computation (e.g. error computation 207 of FIG. 2) using the first channel estimates and the predicted channel estimates, where the channel estimation errors include, but are not limited to, the differences between the first channel estimates and the predicted channel estimates. The differences between the first channel estimates and the predicted channel estimates are generated in accordance with Equation (3) above.
  • The set of errors may be denoted by E_l. These errors are buffered (e.g. buffering 211 of FIG. 2), as described below, in a finite precision format, for example, fixed point representation with bit-width BitW. The errors may exceed the maximum representation range with bit-width BitW, in which case they may be rounded and saturated. Rounded and saturated errors are denoted by ẽ_l, and the corresponding set by Ẽ_l. After buffering of the sets H_l^D and Ẽ_l, compression is completed.
  • Since second channel estimates are predicted using decimated channel estimates instead of reconstructed channel estimates, there is no error propagation. That is, if one predicted channel estimate includes an error, that error does not affect other predicted channel estimates. By varying the number of bits used to represent channel estimation errors, it is possible to trade off compression performance and decompression quality (rate-distortion tradeoff) as described in more detail above with reference to FIG. 3. This enables flexibility to suit various communication channels, modulation and coding rates, and signal propagation environments.
  • If the predicted channel estimates are generated for the first channel estimates not selected for the decimated channel estimates, the errors computed are small and have a smaller dynamic range as compared to the first channel estimates. As a result, the errors may be compressed using fewer bits for a given decompression approximation quality target as described in more detail above with reference to FIG. 3. This results in compression gains.
  • In 611, the channel estimation errors are compressed by compression (e.g. compression 209 of FIG. 2). The compression method may be, but is not limited to, encoding the channel estimation errors using a predetermined representation (e.g., fixed point with a given bit-width, quantization compression, lossless compression, or lossy compression).
  • In 613, the decimated channel estimates and the compressed channel estimation errors are buffered (e.g. buffering 211 of FIG. 2). As described in more detail above with reference to FIG. 3, the compressed channel estimation errors may be decompressed and used with the decimated channel estimates to approximate the first channel estimates.
  • FIG. 7 is a flowchart of a method of receiving a signal over channels, compressing a channel estimation buffer via decimation, prediction, error compression, approximating original channel estimates from decimated channel estimates and compressed errors according to an embodiment of the present disclosure, and transmitting approximated channel estimates. FIG. 7 includes all of the operations of FIG. 6 plus computing predicted channel estimates, approximating original channel estimates, and transmitting the approximated channel estimates. The operations in FIG. 7 that are in common with the operations in FIG. 6 operate in the same manner. Thus, descriptions of the operations in FIG. 7 that are in common with operations in FIG. 6 are not repeated below.
  • Referring to FIG. 7, predicted channel estimates are computed from buffered decimated channel estimates and compressed channel estimation errors concerning a set of first channel estimates according to Equation (2) above by decompression (e.g. decompression 301 of FIG. 3) in 701. The decimated channel estimates and compressed channel estimation errors may be received after being buffered (e.g. buffering 211 of FIG. 2).
  • In 703, the first channel estimates are approximated from the predicted channel estimates generated in 701 and the compressed channel estimation errors Ẽ_l according to Equation (4) above, by approximation (e.g. approximation 303 of FIG. 3).
  • The result of 703 includes the approximate channel estimates and the decimated channel estimates H_l^D. However, h̃_l[k] may differ from h_l[k] due to rounding and saturation of the errors prior to storage. Distortion and compression performance may be traded off by varying the bit-width BitW.
  • In 705, the approximated first channel estimates are transmitted by a transmitter.
  • Although certain embodiments of the present disclosure have been described in the detailed description of the present disclosure, the present disclosure may be modified in various forms without departing from the scope of the present disclosure. Thus, the scope of the present disclosure shall not be determined merely based on the described embodiments, but rather determined based on the accompanying claims and equivalents thereto.

Claims (20)

1. An apparatus, comprising:
a receiver configured to receive a signal over channels;
a channel estimator configured to estimate the channels as first channel estimates; and
a processor configured to:
select a predetermined subset of the first channel estimates as decimated channels;
generate second channel estimates as predicted channel estimates of the first channel estimates using the decimated channels;
compute channel estimation errors between the first channel estimates and the predicted channel estimates;
compress the channel estimation errors; and
buffer the decimated channel estimates and the compressed channel estimation errors.
2. (canceled)
3. The apparatus of claim 1, wherein the first channel estimates are represented as a vector spanning several dimensions including frequency, time, and space, wherein frequency includes subcarriers, wherein time includes symbol index, and wherein space includes transmit and receive antennas.
4. The apparatus of claim 1, wherein the decimated channel estimates are based on a predetermined representation including fixed point with a bit-width, pseudo floating point, and block pseudo floating point, and wherein the decimated channel estimates may be uniformly spaced or non-uniformly spaced.
5. The apparatus of claim 1, wherein the processor is further configured to generate second channel estimates as predicted channel estimates of the first channel estimates by linear interpolation, first order linear interpolation, nth order polynomial interpolation, minimum mean square error (MMSE) estimation, or extrapolation.
6. The apparatus of claim 1, wherein the predicted channel estimates are represented with a predetermined finite precision, where the predetermined finite precision may be higher than that of the original channel estimates.
7. The apparatus of claim 1, wherein the processor is further configured to compute channel estimation errors between the first channel estimates and the predicted channel estimates by determining differences between the first channel estimates and the predicted channel estimates.
8. The apparatus of claim 1, wherein the processor is further configured to compute channel estimation errors between the first channel estimates and the predicted channel estimates by rounding and saturating channel estimation errors represented in fixed point with a bit-width that exceeds a maximum representation range with the bit-width.
9. The apparatus of claim 1, wherein the processor is further configured to compress the channel estimation errors by encoding the channel estimation errors using a predetermined representation, wherein the predetermined representation includes fixed point with a bit-width, pseudo floating point with n bits for real and imaginary part mantissas and m bits for an exponent shared between the real and the imaginary mantissas, quantization compression, lossless compression, or lossy compression.
10. The apparatus of claim 1, wherein the processor is further configured to buffer the decimated channel estimates and the compressed channel estimation errors by buffering the decimated channel estimates in a different representation than the original channel estimates by varying bit width or changing fixed/floating point to floating/fixed point, including block pseudo floating point (BPFLP), and the compressed channel estimates may be stored with a different representation depending on prediction error statistics, bit width, floating point, including BPFLP, or distance with respect to the decimated channel estimates.
11. The apparatus of claim 1, wherein the processor is further configured to:
compute the predicted channel estimates; and
approximate the first channel estimates using the predicted channel estimates and the compressed channel estimation errors.
12. A method, comprising:
receiving a signal over channels by a receiver;
estimating, by a channel estimator, the channels as first channel estimates;
compressing, by a processor, the first channel estimates to reduce buffer size;
selecting a predetermined subset of the estimated channels as decimated channels by the processor;
generating second channel estimates as predicted channel estimates of the first channel estimates by the processor using the decimated channels;
computing channel estimation errors between the first channel estimates and the predicted channel estimates by the processor;
compressing the channel estimation errors by the processor; and
buffering the decimated channel estimates and the compressed channel estimation errors by the processor.
13. (canceled)
14. The method of claim 12, wherein the first channel estimates are represented as a vector spanning several dimensions including frequency, time, and space, wherein frequency includes subcarriers, wherein time includes symbol index, and wherein space includes transmit and receive antennas.
15. The method of claim 12, wherein the decimated channel estimates are based on a predetermined representation including fixed point with a bit-width, pseudo floating point, and block pseudo floating point, and wherein the decimated channel estimates may be uniformly spaced or non-uniformly spaced.
16. The method of claim 12, wherein generating the predicted channel estimates comprises generating the predicted channel estimates by linear interpolation, first order linear interpolation, nth order polynomial interpolation, minimum mean square error (MMSE) estimation, or extrapolation.
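Of the predictors recited in claim 16, MMSE estimation exploits second order channel statistics rather than pure geometry. A sketch, assuming the relevant covariances are known (numpy-based; the names are hypothetical):

```python
import numpy as np

def mmse_predict(h_dec, R_fd, R_dd):
    # Linear MMSE prediction of the full channel vector from the
    # decimated subset: h_hat = R_fd @ inv(R_dd) @ h_dec, where
    # R_fd is the cross-covariance between the full and decimated
    # estimates and R_dd is the covariance of the decimated estimates.
    W = R_fd @ np.linalg.inv(R_dd)
    return W @ h_dec
```

With the illustrative covariance R[i, j] = 1 - |i - j|/4 over subcarriers {0, 1, 2} and decimated subcarriers {0, 2}, the MMSE weights for the middle subcarrier reduce to [0.5, 0.5], i.e. plain averaging of its neighbors.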
17. The method of claim 12, wherein the predicted channel estimates are represented with a predetermined finite precision, where the predetermined finite precision may be higher than that of the original channel estimates.
18. The method of claim 12, wherein computing channel estimation errors comprises determining differences between the original channel estimates and the predicted channel estimates.
19. The method of claim 12, wherein computing channel estimation errors comprises rounding and saturating channel estimation errors, represented in fixed point with a bit-width, that exceed the maximum representation range of that bit-width.
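The rounding and saturation of claim 19 can be sketched as below; the default 6-bit width and the two's-complement range are assumptions for illustration.

```python
def quantize_error(e, bits=6):
    # Round to the nearest integer, then saturate to the two's-
    # complement range representable with the given bit-width.
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, int(round(e))))
```

An error of 40.6 exceeds the 6-bit range [-32, 31] and saturates to 31, while -3.2 simply rounds to -3.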
20. A method, comprising:
receiving a signal over channels by a receiver;
estimating the channels as first channel estimates by a processor;
selecting a predetermined subset of the estimated channels as decimated channels by the processor;
generating second channel estimates as predicted channel estimates of the first channel estimates by the processor using the decimated channels;
computing channel estimation errors between the first channel estimates and the predicted channel estimates by the processor;
compressing the channel estimation errors by the processor;
buffering the decimated channel estimates and the compressed channel estimation errors by the processor;
computing the predicted channel estimates by the processor; and
approximating the first channel estimates by the processor using the predicted channel estimates and the compressed channel estimation errors.
US15/010,486 2015-12-02 2016-01-29 Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding Active US9692616B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/010,486 US9692616B1 (en) 2015-12-02 2016-01-29 Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding
KR1020160054099A KR102481008B1 (en) 2015-12-02 2016-05-02 Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding
TW105121836A TWI704778B (en) 2015-12-02 2016-07-12 Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding
CN201611033869.1A CN106817325B (en) 2015-12-02 2016-11-15 Apparatus and method for channel estimation buffer compression via decimation, prediction, and error encoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562262072P 2015-12-02 2015-12-02
US15/010,486 US9692616B1 (en) 2015-12-02 2016-01-29 Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding

Publications (2)

Publication Number Publication Date
US20170163445A1 true US20170163445A1 (en) 2017-06-08
US9692616B1 US9692616B1 (en) 2017-06-27

Family

ID=58798756

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/010,486 Active US9692616B1 (en) 2015-12-02 2016-01-29 Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding

Country Status (4)

Country Link
US (1) US9692616B1 (en)
KR (1) KR102481008B1 (en)
CN (1) CN106817325B (en)
TW (1) TWI704778B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220101109A1 (en) * 2020-09-25 2022-03-31 Samsung Electronics Co., Ltd. Deep learning-based channel buffer compression

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100002791A1 (en) * 2008-07-07 2010-01-07 Samsung Electronics Co. Ltd. Apparatus and method for operating valid bit in a wireless communication system
WO2014180510A1 (en) * 2013-05-10 2014-11-13 Telefonaktiebolaget L M Ericsson (Publ) Channel estimation for a subset of resource elements of a resource block

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4882754A (en) 1987-08-25 1989-11-21 Digideck, Inc. Data compression system and method with buffer control
WO2000062501A2 (en) 1999-04-13 2000-10-19 Broadcom Corporation Gateway with voice
US8611311B2 (en) 2001-06-06 2013-12-17 Qualcomm Incorporated Method and apparatus for canceling pilot interference in a wireless communication system
US9113846B2 (en) 2001-07-26 2015-08-25 Given Imaging Ltd. In-vivo imaging device providing data compression
US7428262B2 (en) 2003-08-13 2008-09-23 Motorola, Inc. Channel estimation in a rake receiver of a CDMA communication system
KR100603572B1 (en) 2004-09-30 2006-07-24 삼성전자주식회사 Appratus and method for detecting pilot signal in mobile communication system
EP2297912A4 (en) 2008-07-01 2016-11-30 Ikanos Communications Inc Reduced memory vectored dsl
CN101645733B (en) * 2008-08-07 2013-01-16 中兴通讯股份有限公司 Rake receiving device of hardware and receiving method thereof
CN101877686A (en) * 2009-04-30 2010-11-03 上海锐合通信技术有限公司 Method and device for processing data by terminal receiver
US8774294B2 (en) * 2010-04-27 2014-07-08 Qualcomm Incorporated Compressed sensing channel estimation in OFDM communication systems
CN103125104B (en) 2010-07-22 2015-10-21 伊卡诺斯通讯公司 For the method for operating vector VDSL sets of lines
CN101895505B (en) * 2010-07-23 2013-01-16 华亚微电子(上海)有限公司 Channel estimation method and device
US8644330B2 (en) 2011-03-29 2014-02-04 Intel Corporation Architecture and method of channel estimation for wireless communication system
US8908587B2 (en) * 2012-03-14 2014-12-09 Xiao-an Wang Channel feedback in OFDM systems
US9094164B2 (en) * 2012-04-17 2015-07-28 Qualcomm Incorporated Methods and apparatus to improve channel estimation in communication systems
US9088350B2 (en) 2012-07-18 2015-07-21 Ikanos Communications, Inc. System and method for selecting parameters for compressing coefficients for nodescale vectoring


Also Published As

Publication number Publication date
TW201722105A (en) 2017-06-16
KR102481008B1 (en) 2022-12-23
CN106817325A (en) 2017-06-09
KR20170064976A (en) 2017-06-12
TWI704778B (en) 2020-09-11
US9692616B1 (en) 2017-06-27
CN106817325B (en) 2019-02-22

Similar Documents

Publication Publication Date Title
US10735736B2 (en) Selective mixing for entropy coding in video compression
US8325622B2 (en) Adaptive, scalable packet loss recovery
KR101483898B1 (en) Transmission resources in time or frequency using analog modulation
US11101867B2 (en) Reducing beamforming feedback size in WLAN communication
US20120188899A1 (en) Method for processing channel state information terminal and base station
US7376192B2 (en) Delta modulation for channel feedback in transmit diversity wireless communication systems
US10869029B2 (en) Hybrid digital-analog coding
US20120219078A1 (en) Subband Indexing Methods and Systems
US9692616B1 (en) Apparatus for and method of channel estimation buffer compression via decimation, prediction, and error encoding
EP2587803A1 (en) Methods for coding and reconstructing a pixel block and corresponding devices.
US8781023B2 (en) Method and apparatus for improving transmission of data on a bandwidth expanded channel
US9271229B2 (en) Methods, systems, and media for partial downloading in wireless distributed networks
US9356627B2 (en) Method and apparatus for improving transmission of data on a bandwidth mismatched channel
CN108599901A (en) It is a kind of based on multiphase like the radio transmitting method of Space Time Coding
Aguerri et al. Distortion exponent in MIMO fading channels with time-varying source side information
Shkel et al. Secure lossless compression
Aguerri et al. Compute-remap-compress-and-forward for limited backhaul uplink multicell processing
US8605805B2 (en) Receiver, channel state information compressing method, and computer program
WO2018228704A1 (en) A network device, a user equipment and a method for wireless data transmission
US11645079B2 (en) Gain control for multiple description coding
US20240364929A1 (en) Communication scheme for distributed video coding
US11582766B2 (en) Wireless communication system, communication method, transmitter and receiver
Aguerri et al. Lossy compression for compute-and-forward in limited backhaul wireless relay networks
EP4047829A1 (en) System and method for uplink coordinated transmission in an open radio access network
CN114915376A (en) Decoding method, encoding method, device, equipment and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ETKIN, RAUL;NALLAPUREDDY, BHASKAR;LEE, JUNGWON;SIGNING DATES FROM 20160128 TO 20160129;REEL/FRAME:038134/0327

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4