WO2017031149A1 - Method and system for compression of radar signals - Google Patents

Method and system for compression of radar signals

Info

Publication number
WO2017031149A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
compression
component
range values
compressed
Prior art date
Application number
PCT/US2016/047254
Other languages
French (fr)
Inventor
Anil MANI
Sandeep Rao
Karthik Ramasubramanian
Original Assignee
Texas Instruments Incorporated
Texas Instruments Japan Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/061,728 external-priority patent/US20170054449A1/en
Application filed by Texas Instruments Incorporated, Texas Instruments Japan Limited filed Critical Texas Instruments Incorporated
Priority to JP2018509614A priority Critical patent/JP7037028B2/en
Priority to CN201680047315.9A priority patent/CN107923971B/en
Priority to EP16837718.2A priority patent/EP3338109B1/en
Publication of WO2017031149A1 publication Critical patent/WO2017031149A1/en
Priority to JP2022002079A priority patent/JP7379546B2/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 - Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06 - Systems determining position data of a target
    • G01S13/08 - Systems for measuring distance only
    • G01S13/32 - Systems for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S13/34 - Systems for measuring distance only using transmission of continuous, frequency-modulated waves while heterodyning the received signal, or a signal derived therefrom, with a locally-generated signal related to the contemporaneously transmitted signal
    • G01S13/343 - Systems for measuring distance only using transmission of continuous, frequency-modulated waves, using sawtooth modulation
    • G01S13/50 - Systems of measurement based on relative movement of target
    • G01S13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/583 - Velocity or trajectory determination systems using transmission of continuous unmodulated waves, amplitude-, frequency-, or phase-modulated waves and based upon the Doppler effect resulting from movement of targets
    • G01S13/584 - Velocity or trajectory determination systems adapted for simultaneous range and velocity measurements
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 - Details of systems according to group G01S13/00
    • G01S7/28 - Details of pulse systems
    • G01S7/285 - Receivers
    • G01S7/295 - Means for transforming co-ordinates or for evaluating data, e.g. using computers
    • G01S7/35 - Details of non-pulse systems
    • G01S7/352 - Receivers
    • G01S7/356 - Receivers involving particularities of FFT processing
    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 - Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/14 - Conversion to or from non-weighted codes
    • H03M7/24 - Conversion to or from floating-point codes
    • H03M7/30 - Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40 - Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4031 - Fixed length to variable length coding

Definitions

  • This relates generally to radar systems, and more particularly to compression of radar signals in radar systems.
  • FMCW radar systems are useful in a number of applications associated with a vehicle such as adaptive cruise control, collision warning, blind spot warning, lane change assist, parking assist and rear collision warning.
  • Processing of radar signals in an FMCW radar system to obtain a three dimensional image (range, velocity, and angle) of objects in the field of view of the radar system includes multi-dimensional Fourier transform processing which requires a significant amount of memory to store the radar data.
  • the amount of on-chip memory on radar transceiver integrated circuits (ICs) used in embedded FMCW radar systems constrains the amount of data that can be stored, and thus limits the capabilities of the radar transceiver ICs. Including larger memory capacity incurs an undesirable increase in both die size and cost of the IC.
  • a radar system includes a compression component configured to compress blocks of range values to generate compressed blocks of range values, and a radar data memory configured to store compressed blocks of range values generated by the compression component.
  • For compression of radar signals in a radar system, an example method includes receiving blocks of range values generated from processing of digitized intermediate frequency (IF) signals, compressing each block of range values to generate a compressed block of range values, the compressing performed by a compression component of the radar system, and storing the compressed blocks of range values in radar data memory.
  • IF intermediate frequency
  • FIG. 1 is an example of block floating point (BFP) compression.
  • FIG. 2 is an example of bit packing (PAC) compression.
  • FIG. 3 is an example of order k exponential Golomb (EG) compression.
  • FIGS. 4 and 5 are block diagrams of an example high-level architecture of a compression management component.
  • FIGS. 6, 7, 8 and 9 are flow diagrams of example methods for determining compression parameters.
  • FIGS. 10 and 11 are block diagrams of an example high-level architecture of a compression management component.
  • FIG. 12 illustrates an example format of a BFP compressed sample block.
  • FIG. 13 illustrates an example format of an EG compressed sample block.
  • FIG. 14 is a flow diagram of an example method for extracting the mantissas of samples for BFP compression.
  • FIG. 15 is a flow diagram of an example method for EG encoding of samples in a block of samples.
  • FIG. 16 illustrates an example format of a variable bit width BFP (VBBFP) compressed sample block.
  • VBBFP variable bit width BFP
  • FIG. 17 is a flow diagram of an example method for determining VBBFP compression parameters.
  • FIG. 18 is a flow diagram of an example method for extracting the mantissas of samples for VBBFP encoding.
  • FIG. 19 is a flow diagram of an example method for decompression of a VBBFP encoded sample block.
  • FIG. 20 is a block diagram of an example frequency modulated continuous wave (FMCW) radar system.
  • FMCW frequency modulated continuous wave
  • FIGS. 21-26 are block diagrams of example direct memory access architectures.
  • FIG. 27 is a flow diagram of a method for compressing radar signals in a radar system.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
  • A frequency modulated continuous wave (FMCW) radar transmits, via one or more transmit antennas, a radio frequency (RF) frequency ramp referred to as a chirp. Further, multiple chirps may be transmitted in a unit referred to as a frame. The transmitted chirps are reflected from any objects in the field of view (FOV) of the radar and are received by one or more receive antennas. The received signal for each receive antenna is down-converted to an intermediate frequency (IF) signal and then digitized. The digitized samples are pre-processed and stored in memory, which is referred to as radar data memory herein. After the data for an entire frame is stored in the radar data memory, the data is post-processed to detect any objects in the FOV and to identify the range, velocity and angle of arrival of detected objects.
  • IF intermediate frequency
  • the pre-processing may include performing a range fast Fourier transform (FFT) on the digitized samples of each reflected chirp to convert the data to the frequency domain.
  • FFT range fast Fourier transform
  • This range FFT may also be referred to as a one-dimensional (1D) FFT. Peak values correspond to ranges (distances) of objects.
  • This processing is usually performed in-line, so the range FFT is performed on the digitized samples of a previous chirp while samples are being collected for the current chirp.
  • The results of the range FFTs (i.e., range values) are saved in the radar data memory for further processing.
  • the results of the range FFTs are stored row-wise in the radar data memory, forming an array of range values.
  • A Doppler FFT is performed over each of the corresponding range values of the chirps in the frame. Accordingly, a Doppler FFT is performed on each column of the array of range values stored in the radar data memory.
  • This Doppler FFT may also be referred to as a two-dimensional (2D) FFT.
  • the peaks in the resulting range-Doppler array correspond to the range and relative speed (velocity) of potential objects.
  • each column of range values is read from the radar data memory and a Doppler FFT is performed on the range values of the column.
  • the column data access may be referred to as transpose access as the column data access is mathematically equivalent to a transpose operation on the data followed by a row access.
  • the Doppler FFT values may be stored back in the same column memory locations.
  • All of the digitized data are required to be in the radar data memory before the post-processing (such as Doppler FFT, angle estimation or object detection) can begin.
  • Resolution expectations include: range resolution, which is controlled by the number of digitized samples per chirp; velocity resolution, which is controlled by the number of chirps per frame; and angle resolution, which is controlled by the number of receive antennas.
  • the current radar data memory size needed to meet resolution expectations is on the order of one to two megabytes (MB) and is expected to increase in coming years as increased resolution is demanded.
  • Embodiments of the disclosure provide memory compression techniques for radar data that permit more radar data to be stored in radar data memory, thus allowing for increased resolution in the same amount of memory. Accordingly, the compression techniques are useful to reduce on-chip memory requirements while maintaining the capabilities of a larger device.
  • the compression techniques are designed for radar signal processing and are performed after the range FFT when the samples output by the range FFT are stored in radar data memory.
  • Block floating point (BFP) compression of radar data is performed after the 1D FFT.
  • Block floating point representations in signal processing increase the dynamic range that can be represented by a limited number of bits. Accordingly, a block floating point representation can cover a wide dynamic range with only a limited loss of accuracy in the signal data.
  • a block of samples is represented as an exponent common to each sample and a mantissa for each sample. The common exponent is determined for the block of samples based on the largest magnitude sample in the block.
  • the mantissa for each sample in the group is represented by the number of bits that accommodates the mantissa of the largest sample.
  • the size of the mantissa is fixed based on the desired accuracy and compression size.
  • the mantissa for each sample is the k most significant bits of each sample beginning with the most significant one bit in the sample, where k is the desired mantissa size.
  • the bits representing the common exponent and the mantissas for the block may be packed consecutively to represent compressed samples for the block.
  • Block floating point representations are useful for signal dynamics where the amplitudes fluctuate over time yet neighboring samples have similar amplitudes in a particular group.
  • The term "common scale factor" is used herein in lieu of "common exponent".
  • The common scale factor is closely related to the common exponent in BFP, with a subtle difference.
  • In BFP, the mantissa is considered to be a fraction between 0 and 1.
  • In BFP, the sample is regenerated by computing mantissa x 2^e, where e is the exponent.
  • With a common scale factor, the mantissa is an integer between 0 and 2^mantissabw - 1, where bw is bit width.
  • A chirp after the 1D FFT may have a dynamic range as high as 90 dB.
  • This high dynamic range is the result of the path loss difference between nearby targets and faraway targets.
  • Such a high dynamic range may not be desirable for BFP representation as a 90 dB dynamic range would require approximately fifteen bits of mantissa as each bit provides approximately 6 dB of dynamic range.
  • the dynamic range across antennas for a single range bin may be relatively small (such as less than 30 dB, which would require approximately five bits of mantissa).
  • The dynamic range for adjacent range bins and for the same range bin across different chirps may also be quite small.
  • A block floating point compression technique is useful to compress samples after the 1D FFT that are either in the same or in adjacent range bins.
  • The output of the 1D FFT is a complex sample of 32 bits: 16 bits for the in-phase (I) channel and 16 bits for the quadrature (Q) channel of each receive channel.
  • The dynamic range possible per bin is approximately 42 dB. Recall that the per-bin dynamic range requirement of 30 dB, due to the dynamic range across antennas for a single range bin, is also met, with approximately 12 dB of margin.
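As a quick check of the bits-to-dynamic-range arithmetic above, a short sketch (the approximately 6 dB per mantissa bit comes from 20*log10(2), about 6.02 dB; the function name is ours):

```python
import math

def dynamic_range_db(mantissa_bits):
    # Each mantissa bit contributes 20*log10(2), approximately 6.02 dB.
    return 20 * math.log10(2 ** mantissa_bits)

print(round(dynamic_range_db(7), 1))   # seven mantissa bits: about 42 dB per bin
print(round(dynamic_range_db(15), 1))  # fifteen mantissa bits: about 90 dB
```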
  • FIG. 1 is an example of BFP compression for a block of eight bit samples [23 127 64 124]. For simplicity, this example uses a sample bit width of eight. A more typical sample bit width in current FMCW radar systems is sixteen, and may be larger in future systems.
  • The sample values in the block are written in binary as [00010101b, 01111111b, 01000000b, 01111100b].
  • the original bit width of the block of samples is 32 bits.
  • the compressed block size should be 16 bits. Of the 16 bits, three bits are allocated for the scale factor as each sample is eight bits and twelve of the remaining thirteen bits are divided among the four mantissas, such that each is allocated three bits. The thirteenth bit is not used.
  • the scale factor is based on the maximum value of the four samples, 127, which is seven bits wide. Therefore, the three bits of the mantissa for each sample will be bits [6, 5, 4], and the common scale factor will be four, because four bits [3, 2, 1, 0] per sample are dropped.
  • The compressed block is then the three bit scale factor 100b followed by the four three bit mantissas, each of which is the three most significant bits (MSBs) of the respective sample.
  • sample values are rounded before truncation to reduce the effect of quantization.
  • The rounding is as follows. If n bits are to be dropped from a sample value, 2^(n-1) is added to the value and the result is truncated by n bits. As explained in more detail herein, in some embodiments, dither may be added rather than 2^(n-1). In the example of FIG. 1, rounding and dither are not used.
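The FIG. 1 walkthrough above can be sketched in a few lines. This is an illustrative model (function names are ours, not the patent's), with rounding and dither omitted as in the figure:

```python
def bfp_compress(samples, mantissa_bits):
    """Block floating point: one common scale factor plus a fixed-width mantissa per sample."""
    max_abs = max(abs(s) for s in samples)
    bw = max_abs.bit_length()                  # bit width of the largest magnitude (7 for 127)
    scale = max(bw - mantissa_bits, 0)         # least significant bits dropped per sample
    mantissas = [s >> scale for s in samples]  # keep the top mantissa_bits bits
    return scale, mantissas

def bfp_decompress(scale, mantissas):
    # Regenerate approximate samples: mantissa shifted back up by the scale factor.
    return [m << scale for m in mantissas]

scale, mants = bfp_compress([23, 127, 64, 124], mantissa_bits=3)
print(scale, mants)                  # 4 [1, 7, 4, 7], matching FIG. 1
print(bfp_decompress(scale, mants))  # [16, 112, 64, 112]
```

The dropped bits are the quantization error; with rounding, 2^(scale-1) would be added to each sample before the shift.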
  • a specialized type of BFP compression referred to as bit packing (PAC) is performed.
  • The input samples are stored using a fixed scale factor and mantissa bit width. Storage of the scale factor is unnecessary because the value is fixed. For example, assuming 32-bit samples, a common scale factor of fourteen, and a mantissa bit width of 18 bits, 32-bit I and 32-bit Q samples can be stored as 18-bit I and 18-bit Q samples.
  • FIG. 2 illustrates PAC using the example of FIG. 1 with a fixed scale factor of four and a mantissa bit width of four bits.
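Using the same FIG. 1 sample values, PAC reduces to a fixed shift and mask, since the scale factor never needs to be stored (a sketch with our own illustrative naming):

```python
def pac_compress(samples, scale=4, mantissa_bits=4):
    # Fixed scale factor: every block is shifted by the same amount,
    # so no per-block exponent needs to be stored.
    mask = (1 << mantissa_bits) - 1
    return [(s >> scale) & mask for s in samples]

print(pac_compress([23, 127, 64, 124]))  # [1, 7, 4, 7]
```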
  • Exponential Golomb (EG) compression is performed after the 1D FFT. Radar data is expected to be sparse in the range dimension, because usually a few large samples correspond to object reflections, and the remaining samples are relatively small. Thus, the average bit width across the range dimension is small. Accordingly, a variable bit width compression technique in which each sample occupies a space proportional to the sample bit width can significantly reduce the average bit width (per sample) needed to store the data.
  • variable bit width technique is order k exponential Golomb (EG) coding.
  • EG order k exponential Golomb
  • a description of such coding is located in "Exponential-Golomb coding," Wikipedia, available at https://en.wikipedia.org/wiki/Exponential-Golomb_coding on January 22, 2016, which is incorporated by reference herein.
  • Order k exponential Golomb codes are parameterized by a value "k", which may be referred to as the Golomb parameter k herein.
  • the Golomb parameter k represents the most common bit width in the input vector and is used to determine the boundary between the variable bit width quotient of the encoded value and the fixed bit width remainder.
  • the value of k is selected by searching a list of possible values, allowing the value to be optimized based on input sample values.
  • the sample value is divided 300 into a quotient and a remainder, the remainder being the least significant k bits of the sample value.
  • A value of 1 is then added 302 to the quotient, which is the equivalent of adding 2^k to sample value x.
  • The bit width n_extra of the incremented quotient is then determined 304 and the compressed sample is constructed 306.
  • the first two bits are the unary representation of n_extra - 1
  • the middle three bits are the binary representation of the incremented quotient
  • the final two bits are the binary representation of the remainder.
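The encoding steps 300-306 above can be sketched as follows. This is textbook order-k exponential Golomb for nonnegative values (the patent's handling of signed samples is omitted), and the k=2, x=21 example values are ours, not taken from FIG. 3:

```python
def eg_encode(x, k):
    """Order-k exponential Golomb code for a nonnegative integer x, as a bit string."""
    q = (x >> k) + 1                                # incremented quotient (adds 2**k to x)
    n_extra = q.bit_length()
    bits = '0' * (n_extra - 1) + format(q, 'b')     # unary length prefix + quotient
    if k:
        bits += format(x & ((1 << k) - 1), '0%db' % k)  # k-bit remainder
    return bits

def eg_decode(bits, k):
    n_extra = bits.index('1') + 1                   # zeros before the first 1, plus the 1
    q = int(bits[n_extra - 1: 2 * n_extra - 1], 2) - 1
    r = int(bits[2 * n_extra - 1:] or '0', 2)
    return (q << k) | r

code = eg_encode(21, 2)
print(code)                # 0011001: two prefix bits, three quotient bits, two remainder bits
print(eg_decode(code, 2))  # 21
```

Small values near 2^k get short codes, which is why k is chosen to match the most common bit width in the block.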
  • a range FFT is first performed on the digitized time domain samples corresponding to a chirp.
  • the range FFT samples are then stored in an array in radar data memory.
  • For ease of description, the storage is assumed to be row-wise.
  • In other embodiments, the data is stored column-wise.
  • the data in this array is accessed column wise, which requires a 'transpose access' operation. Further, when accessing samples in a column, these samples are not contiguous in memory.
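The row-wise storage and column-wise ("transpose") access described above can be illustrated with NumPy; the frame geometry here is an arbitrary stand-in, and NumPy stands in for the hardware FFT and DMA of an embedded system:

```python
import numpy as np

chirps, nbins = 8, 16                            # hypothetical frame geometry
frame = np.zeros((chirps, nbins), dtype=complex)

rng = np.random.default_rng(0)
for c in range(chirps):
    if_samples = rng.standard_normal(nbins)      # stand-in for digitized IF samples
    frame[c, :] = np.fft.fft(if_samples)         # range FFT, stored row-wise

# Doppler FFT: performed down each column, i.e. over the same range bin
# across all chirps -- the transpose access pattern.
doppler = np.fft.fft(frame, axis=0)
print(doppler.shape)                             # (8, 16): a range-Doppler array
```

Note that consecutive samples of one column are nbins elements apart in memory, which is why strided or transpose-capable DMA access matters.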
  • DMA direct memory access
  • Both the BFP and EG schemes compress a fixed number of samples, which may be referred to as a block of samples herein, into a fixed number of bits. This is straightforward when using BFP compression, because the mantissa and common scale factor bit widths are fixed, so the compressed size is constant.
  • the textbook EG encoding is a variable bit width technique with no guarantee of the bit width of the encoded output.
  • quantization is performed if the desired bit width is not achieved using textbook EG encoding. In some embodiments, this quantization takes the form of dropping some of the least significant bits to guarantee the desired bit width.
  • the number of bits to drop is referred to as a scale factor or EG scale factor herein.
  • FIGS. 4, 5, 10 and 11 are block diagrams of an example high-level architecture for a compression management component 400 implementing both BFP compression and order k EG compression.
  • the particular type of compression to be used during operation is user configurable. In some embodiments, only one type of compression is implemented.
  • The compression management component 400 is suitable for use in an embedded radar system and manages the compression and decompression of samples output by the 1D FFT of radar signal processing. As indicated in the block diagram of FIG. 4, the compression management component 400 is designed to interface with a direct memory access (DMA) device.
  • DMA direct memory access device
  • the compression management component 400 ensures that the compressed output size in bits is less than or equal to a desired value to ensure a predictable and known usage of the available memory.
  • the compression management component 400 provides two-pass compression for both BFP compression and EG compression in which the parameters for the compression operation are determined in the first pass and the actual compression is performed in the second pass according to the determined parameters.
  • the first pass determines the common scale factor for the block of samples to be compressed.
  • the first pass determines: the optimal value of the Golomb parameter k for the block of samples to be compressed; and the scale factor to use to guarantee a desired compression ratio. This EG scale factor is also referred to as the number of least significant bits to drop.
  • the compression management component 400 includes a parameter determination engine 402, a compression engine 418, a decompression engine 420, input ping/pong buffers 410, 412, output ping/pong buffers 414, 416, and a linear feedback shift register (LFSR) 408.
  • LFSR linear feedback shift register
  • the LFSR 408 provides a dither signal that is used to add dither to encoded samples.
  • the input ping/pong buffers 410, 412 are coupled: between the DMA and the compression engine 418 to alternately receive blocks of samples to be compressed; and between the DMA and the decompression engine 420 to alternately receive compressed sample blocks to be decompressed.
  • the output ping/pong buffers 414, 416 are coupled: between the compression engine 418 and the DMA to alternately receive compressed sample blocks to be stored by the DMA in the radar data memory; and between the decompression engine and the DMA to alternately receive decompressed blocks of samples to be stored in memory by the DMA.
  • the ping/pong buffer mechanism is such that if the compression engine or the decompression engine is working on the input ping buffer, the DMA has access to the input pong buffer and vice-versa. Similarly, if the compression engine or the decompression engine is working on the output ping buffer, the DMA has access to the output pong buffer and vice-versa.
  • the parameter determination engine 402 implements the first pass of the compression process.
  • the parameter determination engine 402 is coupled to receive a stream of input samples from the DMA as the samples are being stored in the input ping/pong buffers 410, 412.
  • the parameter determination engine 402 includes functionality to compute the parameter values for the BFP compression and for the EG compression. Accordingly, the parameter determination engine 402 includes functionality to determine the common scale factor for a block of samples and functionality to determine the Golomb parameter k and the scale factor for a block of samples.
  • the compression engine 418 implements the second pass of the compression process.
  • the compression engine 418 is coupled to the parameter determination engine 402 to receive the compression parameter or parameters to be used in compressing a block of samples.
  • the compression engine 418 includes functionality to perform BFP compression on a block of samples read from one of the input ping/pong buffers 410, 412 and to store the compressed sample block in one of the output ping/pong buffers 414, 416.
  • the compression engine 418 also includes functionality to perform EG compression on a block of samples read from one of the input ping/pong buffers 410, 412 and to store the compressed sample block in one of the output ping/pong buffers 414, 416.
  • the decompression engine 420 reverses the compression performed by the compression engine 418.
  • the decompression engine 420 includes functionality to perform BFP decompression on a compressed sample block read from one of the input ping/pong buffers 410, 412 and to store the decompressed block of samples in one of the output ping/pong buffers 414, 416.
  • the decompression engine 420 also includes functionality to perform EG decompression on a compressed sample block read from one of the input ping/pong buffers 410, 412 and to store the decompressed block of samples in one of the output ping/pong buffers 414, 416.
  • the parameter determination engine 402 includes a sign extend component 502, a leading bits counter component 504, a BFP parameter determination component 506, and an EG parameter determination component 508.
  • the sign extend component 502 sign extends each sample to 32 bits, if needed.
  • The leading bits counter component 504 includes functionality to determine counts of consecutive leading zero bits and consecutive leading one bits following the leading zero bits, as needed by the BFP parameter determination component 506 and the EG parameter determination component 508. More specifically, for the BFP parameter determination component 506, the leading bits counter component 504 includes functionality to determine the maximum of the absolute values of the samples in a block and to determine the number of consecutive leading zeros N0 in the most significant bits of the maximum.
  • The leading bits counter component 504 determines the maximum by performing OR operations to combine the absolute values of the samples to create a sample with the maximum possible bit width.
  • The leading bits counter component 504 is coupled to the BFP parameter determination component 506 to provide the value of N0.
  • The leading bits counter component 504 is coupled to the EG parameter determination component 508 to provide the values of both N0 and N1 for each sample.
  • the BFP parameter determination component 506 includes functionality to determine the common scale factor for a block of samples.
  • the common scale factor for a block of samples is based on the bit width of the absolute value of the largest sample in the block.
  • The leading bits counter component 504 determines the maximum sample value and the number of consecutive leading zeros N0 in the most significant bits of the maximum.
  • FIG. 6 is a flow diagram of a method for determining the common scale factor that may be performed by the BFP parameter determination component 506 given the maximum sample value and N0.
  • the bit width bw of the maximum sample value is computed 600.
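A sketch of the FIG. 6 computation, combined with the OR trick used by the leading bits counter component. The 8-bit sample width and all function names are illustrative (chosen to match the FIG. 1 values); the hardware described above operates on 32-bit sign-extended samples:

```python
def common_scale_factor(samples, mantissa_bw, sample_bw=8):
    # OR the absolute values together: the result has the maximum possible
    # bit width of any sample, which is all the scale factor depends on.
    combined = 0
    for s in samples:
        combined |= abs(s)
    n0 = sample_bw - combined.bit_length()   # consecutive leading zeros N0
    bw = sample_bw - n0                      # bit width of the largest magnitude
    return max(bw - mantissa_bw, 0)          # bits to drop from each sample

print(common_scale_factor([23, 127, 64, 124], mantissa_bw=3))  # 4, as in FIG. 1
```

The OR reduction avoids a full comparison tree: only the position of the highest set bit matters, not which sample set it.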
  • the EG parameter determination component 508 includes functionality to determine the Golomb parameter k and a scale factor b for a block of samples.
  • the value of the Golomb parameter k is selected from an array of predetermined values. In some such embodiments the values in the array are user-specified. Any suitable number of predetermined values may be in the array. In some embodiments, the number of predetermined values is less than or equal to sixteen.
  • The leading bits counter component 504 determines the number of consecutive leading zeros N0 in the most significant bits and the number of consecutive leading ones N1 following the N0 consecutive leading zeros for each sample.
  • FIGS. 7-9 are flow diagrams of a method for determining the Golomb parameter k and a scale factor b for a block of samples given values of N0 and N1 for each sample and an array of candidate values for k, which may be performed by the EG parameter determination component 508.
  • An encoded block size S_i in bits is computed 700 for each of the candidate Golomb parameter values k_i in the array of candidate values.
  • An example of computation of the encoded block sizes is described below in reference to FIG. 8.
  • The optimal k_i and scale factor b are then determined 702 for the sample block based on the encoded block sizes S_i. Determination of the optimal k_i and scale factor b is described below in reference to FIG. 9.
  • The index i of the optimal k_i and the scale factor b are then output 704 to the compression engine 418.
  • FIG. 8 is a flow diagram of an example method for computation of the encoded block size Si for each candidate Golomb parameter value ki.
  • an encoded bit width is computed for each candidate Golomb parameter ki 804 - 816 and the corresponding encoded block size Si is updated 812.
  • the bit width bw1 of the sample without the leading consecutive zero bits is computed 800.
  • the bit width bw2 of the sample without the N0 leading consecutive zero bits and the N1 following consecutive one bits is also computed 802.
  • the bit width bw of the sum after the addition of the Golomb constant 2^ki is computed 806 - 810 for the initial candidate Golomb parameter value ki.
  • the corresponding block size accumulator Si is then updated 812 with the total encoded bit width, which is given by 2bw - (ki + 1).
  • the steps of computing 806 - 810 the bit width bw and updating 812 the block size accumulator Si with the total encoded bit width are then repeated for the next candidate ki, if any 816.
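The size computation of FIG. 8 can be sketched as follows. This is illustrative only: the hardware's leading-zero/leading-one shortcut is replaced by a direct bit-length computation, and sign bits are counted separately, per the compressed block format described later.

```python
def eg_block_sizes(samples, k_candidates):
    """Encoded block size Si per candidate Golomb parameter ki (FIG. 8 sketch).

    Adding the Golomb constant 2**k to |x| gives a value of bit width bw;
    its order-k exponential Golomb code then occupies 2*bw - (k + 1) bits.
    Sign bits are appended separately in the compressed block format.
    """
    sizes = []
    for k in k_candidates:
        s = 0
        for x in samples:
            bw = (abs(x) + (1 << k)).bit_length()
            s += 2 * bw - (k + 1)       # total encoded bit width per sample
        sizes.append(s)
    return sizes
```

For example, the order-0 code of 5 (i.e., of 5 + 1 = 110b, bw = 3) occupies 2*3 - 1 = 5 bits, matching the classic exp-Golomb code 00110.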
  • FIG. 9 is a flow diagram of an example method for determination of the optimal ki and scale factor b given the encoded block sizes Si.
  • a scale factor bi for each candidate Golomb parameter value ki is computed 900 - 908 based on the corresponding encoded block size Si.
  • a scale factor bi is computed by first computing 902 the difference ei between the encoded block size Si and a desired encoded size. The computed difference ei is then used to compute 904 the number of bits that would need to be dropped to meet the desired size, which yields the scale factor bi.
  • Pseudo code for computing the scale factor bi is shown in Table 1. In this table, log2_nsamps_blk is the bit width of the number of samples in the block.
  • the minimum valid bi is selected 910 as the scale factor b for compressing the sample block, and the corresponding candidate Golomb parameter value ki is selected as the Golomb parameter k.
  • a scale factor bi is valid if bi ≤ ki.
  • the scale factor b and the index i of the corresponding ki are returned 912. If no valid scale factor exists, an error may be signaled.
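Since Table 1 itself is not reproduced in this excerpt, the selection step can only be sketched under an assumption: dropping one LSB from every sample shortens each order-k code by roughly two bits, so the excess is divided by twice the number of samples. The function below is a hypothetical reconstruction, not the patent's exact pseudo code.

```python
import math

def eg_select_parameters(sizes, k_candidates, desired_bits, nsamps_blk):
    """Pick the scale factor b and Golomb parameter index (FIG. 9 sketch).

    ei = Si - desired size is converted into a number of LSBs to drop;
    assuming each dropped LSB saves two bits per encoded sample,
    bi = ceil(ei / (2 * nsamps_blk)). The minimum valid bi (bi <= ki) wins.
    """
    best = None
    for i, (s, k) in enumerate(zip(sizes, k_candidates)):
        e = s - desired_bits                          # step 902
        b = max(0, math.ceil(e / (2 * nsamps_blk)))   # step 904 (assumed rounding)
        if b <= k and (best is None or b < best[1]):
            best = (i, b)                             # (index of ki, scale factor)
    if best is None:
        raise ValueError("no valid scale factor for any candidate k")
    return best
```

With block sizes of 40 and 36 bits for candidates k = 2 and k = 3, a 32-bit budget, and 4 samples, both candidates need one dropped LSB, and the first is selected.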
  • the compression engine 418 includes an input formatting component 1002, a BFP encoder component 1004, an EG encoder component 1006, a bit packing component 1008, and a compression control component 1010.
  • the input to the compression engine 418 is a block of samples and the output of the compression engine 418 is a compressed sample block in one of BFP format or EG format.
  • the desired size of the compressed sample block is user-specified and the compression engine operates to ensure that each compressed sample block fits within the desired size. In some embodiments, the desired size is a multiple of eight bits.
  • FIG. 12 illustrates an example format of a BFP compressed sample block.
  • the compressed block begins with a header containing the scale factor for the samples in the block.
  • the header is followed by a sequence of the mantissas of each sample in twos complement format.
  • the bit width of the scale factor and the bit width of the mantissa are user-specified. Padding may exist at the end of the compressed block if the total of the bit width of the scale factor and the bit widths of the mantissas is less than the desired bit width.
  • the number of padding bits depends on the desired bit width, the specified mantissa bit width, the specified scale factor bit width, and the number of samples per block.
  • FIG. 13 illustrates an example format of an EG compressed sample block.
  • the compressed block begins with a header containing the index of the Golomb parameter k for the compressed block in the Golomb parameter array and the scale factor for the compressed block.
  • the header is followed by the variable bit width EG compressed bit sequences for each sample in the block and the sign bits s for each sample. Because the EG encoding is performed on the absolute value of each sample, the sign of each encoded sample follows the encoded sample in the compressed sample block. Padding may exist at the end of the compressed block if the total of the bit width of the header and the bit widths of the compressed samples with appended sign bits is less than the desired bit width.
  • the input formatting component 1002 sign extends each I and Q sample to 32 bits, if needed.
  • the compression control component 1010 controls the overall operation of the compression engine 418.
  • the compression control component 1010 may include functionality to manage switching between the input ping/pong buffers 410, 412 and output ping/pong buffers 414, 416, to manage the address to which compressed data is written, to reset the compression engine 418 between input blocks, and to manage formatting of the compressed output.
  • the compression control component 1010 is implemented as a state machine.
  • the BFP encoder component 1004 uses the common scale factor b determined by the parameter determination engine 402 to extract a mantissa of the desired bit width from each sample in a sample block.
  • the BFP encoder component 1004 is coupled to the bit packing component 1008 to provide the mantissa bits of each sample.
  • FIG. 14 is a flow diagram of an example method for extracting the mantissa of each sample in a sample block that may be implemented by the BFP encoder component 1004.
  • the steps 1400-1406 are repeated 1408 for each sample in a sample block.
  • the method assumes that dither is added to the samples. In some embodiments, the addition of dither is optional, so it may be turned on or off by a user-specified parameter.
  • dither is added 1400 to the sample to prevent spurs.
  • the dither signal to be added is provided by the LFSR 408. Any suitable number of dither bits may be added.
  • the dither value may vary from sample to sample. Generally, dither is simply noise added before quantization to avoid patterns that could arise due to the quantization. Such patterns can result in spurs.
  • the dither signal is three bits because each bit of dither adds approximately 6 dB to the spur free dynamic range (SFDR) for a total SFDR protection of 18 dB.
  • the detection signal-to-noise ratio after the 2D FFT processing of the radar signal is usually 15 to 18 dB. Thus, the 18 dB SFDR protection may be sufficient to prevent spurs from affecting measurement of the noise floor.
  • dither is added to each sample, even though dither is needed only when samples are to be quantized, i.e., b ≥ 1, to facilitate a simpler hardware design.
  • the sample with dither added is then right shifted 1402 by the sum of the scale factor b and the number of dither bits (such as 3) to generate the mantissa.
  • the value 011101101b is right shifted by 3, resulting in a mantissa of 011101b.
  • the value 011110010b is right shifted by 4, resulting in a mantissa of 01111b.
  • the resulting mantissa value is then saturated 1404 to the desired mantissa bit width if the bit width of the value is greater than the desired bit width.
  • the mantissa is then output 1406 to the bit packing component 1008.
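The per-sample steps of FIG. 14 can be sketched as follows. The dither value is passed in as a parameter rather than drawn from the LFSR 408, and the function name is illustrative.

```python
def bfp_extract_mantissa(x, b, mantissa_bits, dither, dither_bits=3):
    """Dither, shift, and saturate one sample (FIG. 14 sketch).

    The sample with dither added is right shifted by b + dither_bits
    (step 1402) and the result is saturated to the desired
    twos-complement mantissa width (step 1404).
    """
    m = (x + dither) >> (b + dither_bits)
    lo = -(1 << (mantissa_bits - 1))        # twos-complement saturation limits
    hi = (1 << (mantissa_bits - 1)) - 1
    return max(lo, min(hi, m))
```

With zero dither this reproduces the worked examples above: 011101101b shifted by 3 gives 011101b, and 011110010b shifted by 4 (b = 1) gives 01111b.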
  • FIG. 15 is a flow diagram of an example method for EG encoding of each sample of a block of samples that may be implemented by the EG encoder component 1006.
  • the steps 1500 - 1518 are repeated 1520 for each sample of a sample block.
  • the method assumes that dither is added to the samples. In some embodiments, the addition of dither is optional, so it may be turned on or off by a user-specified parameter.
  • the sign s of the sample x is extracted 1500 and x is set 1502 to the absolute value of x, e.g., 00010101b.
  • the Golomb constant 2^k is then added 1504 to x.
  • the bit width bw of x is then computed 1506.
  • Dither is then added 1508 to x.
  • the dither signal to be added is provided by the LFSR 408. The dither considerations described above for the BFP encoding (per-sample variation, the use of three dither bits for 18 dB of SFDR protection, and the unconditional addition of dither for a simpler hardware design) apply here as well.
  • the resulting value is then saturated 1510 to the bit width bw if the bit width of the value is greater than bw. In the example, adding the dither did not increase the bit width so saturation is not needed.
  • the binary part of the encoded sample is also computed 1514.
  • the sample with dither added is right shifted 1514 by b + 3 and the binary part is the bw - b least significant bits of the result.
  • in the example, (x' + dither) >> 4 = 0001111b and the binary part is 1111b.
  • the unary and binary parts are combined and the sign s is appended 1516 to generate the compressed sample and the compressed sample is output 1518 to the bit packing component 1008. Completing the example, the compressed sample is 011110b.
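The encoding steps of FIG. 15 can be sketched as a bit-string function. Two points are assumptions rather than facts from this excerpt: the dither is treated as occupying dither_bits fractional positions below the sample (so the dithered value is (|x| + 2^k) << dither_bits plus the dither), and the unary prefix uses the usual order-k construction of bw - b - k - 1 zero bits, since the unary-part step is not spelled out here.

```python
def eg_encode_sample(x, k, b, dither=0, dither_bits=3):
    """Order-k exp-Golomb encoding of one sample (FIG. 15 sketch)."""
    s = 1 if x < 0 else 0                   # sign bit, appended at the end
    xp = abs(x) + (1 << k)                  # add the Golomb constant 2**k
    bw = xp.bit_length()                    # bit width before dither
    xd = (xp << dither_bits) + dither       # dither in fractional positions
    xd = min(xd, (1 << (bw + dither_bits)) - 1)   # saturate on carry
    binary = (xd >> (b + dither_bits)) & ((1 << (bw - b)) - 1)
    unary = "0" * max(bw - b - k - 1, 0)    # assumed order-k unary prefix
    return unary + format(binary, f"0{bw - b}b") + str(s)
```

With k = 0 and b = 0 this reduces to classic order-0 exp-Golomb plus a trailing sign bit, e.g. 5 encodes as 00110 followed by sign 0.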
  • the bit packing component 1008, under control of the compression control component 1010, packs the bits of the header of a compressed sample block and the bits of the encoded samples received from one of the encoder components 1004, 1006 into output blocks.
  • the bit packing component 1008 packs a set of (variable bit width or fixed bit width) data into known chunks of "memory words" to enable easy storing of the output in memory.
  • the bit packing component 1008 is a shift register that accepts a bit stream, demarcates chunks of bits matching the output memory word size, and writes the bit stream to one of the output ping/pong buffers as chunks are ready.
  • the decompression engine 420 includes a bit unpacking component 1102, a BFP decoder component 1104, an EG decoder component 1106, an output formatting component 1108, and a decompression control component 1110.
  • the output formatting component 1108 sign extends and saturates each decompressed sample to 16 bits or 32 bits as needed.
  • the decompression control component 1110 controls the overall operation of the decompression engine 420.
  • the decompression control component 1110 may include functionality to manage switching between the input ping/pong buffers 410, 412 and output ping/pong buffers 414, 416, to manage the address to which the decompressed data is written, and to reset the decompression engine 420 between input compressed blocks.
  • the decompression control component 1110 is implemented as a state machine.
  • the BFP decoder component 1104 performs BFP decoding of a compressed sample block.
  • the BFP decoder component 1104 is coupled to the output formatting component 1108 to provide the decoded samples.
  • the BFP decoder component 1104 is coupled to the bit unpacking component 1102 to receive the scale factor b for a compressed sample block and the mantissas for each sample in the block. To decode each encoded sample, the BFP decoder component 1104 sign extends the corresponding mantissa to 32 bits and multiplies the result by 2^b to generate the output sample. Each output sample is output to the output formatting component 1108.
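A sketch of the per-sample BFP decode; Python integers stand in for the 32-bit datapath and the function name is illustrative.

```python
def bfp_decode_sample(mantissa, b, mantissa_bits):
    """Sign extend the mantissa and scale by 2**b (BFP decoder 1104 sketch)."""
    if mantissa & (1 << (mantissa_bits - 1)):   # sign bit set: extend
        mantissa -= 1 << mantissa_bits
    return mantissa << b                        # multiply by 2**b
```

For example, the 6-bit mantissa 011101b with b = 3 decodes to 011101000b, i.e., the b dropped LSBs come back as zeros.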
  • the EG decoder component 1106 performs exponential Golomb decoding of a compressed sample block.
  • the EG decoder component 1106 is coupled to the output formatting component 1108 to provide the decoded samples.
  • the EG decoder component 1106 is coupled to bit unpacking component 1102 to receive the index i of the Golomb parameter k for a compressed sample block, the scale factor b for the compressed sample block and each encoded sample in the block.
  • the EG decoder component 1106 communicates the bit width bw to the decompression control component 1110, which causes the bit unpacking component 1102 to provide the bw bits of the sample to the EG decoder component 1106.
  • the EG decoder component 1106 then multiplies the bw bits by 2^b, removes the Golomb constant 2^k from the result, and applies the sign bit to generate the output sample. Each output sample is output to the output formatting component 1108.
  • continuing the example, the binary part of the compressed sample is 1111b, which is multiplied by 2^b to give 11110b.
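The per-sample EG decode can be sketched on a bit string laid out as described above (unary zeros, binary part, trailing sign bit). Treating the b dropped LSBs as a plain shift, with no rounding, is an assumption of this sketch.

```python
def eg_decode_sample(bits, k, b):
    """Decode one order-k exp-Golomb sample (EG decoder 1106 sketch).

    The leading zero count fixes the width of the binary part; the
    decoder multiplies by 2**b, removes the Golomb constant 2**k, and
    applies the trailing sign bit.
    """
    zeros = len(bits) - len(bits.lstrip("0"))     # unary prefix length
    nbits = zeros + k + 1                         # width of the binary part
    binary = int(bits[zeros:zeros + nbits], 2)
    sign = bits[zeros + nbits]                    # trailing sign bit
    val = (binary << b) - (1 << k)                # scale, remove 2**k
    return -val if sign == "1" else val
```

With b = 0 this exactly inverts the order-k construction: "001100" decodes to 5 and "01011" (order 1) decodes to -3.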
  • the bit unpacking component 1102, under control of the decompression control component 1110, operates to read a compressed sample block and unpack the contents for decoding by the BFP decoder component 1104 or the EG decoder component 1106.
  • the bit unpacking component 1102 implements two modes of operation: a leading zero count mode used to count the number of leading zeros in an EG encoded sample and extract the unary portion of the encoded sample, and a regular mode used to extract a specified number of bits from the input compressed sample block.
  • the regular mode is used to extract the bits of the scale factor and the bits of each mantissa and provide these to the BFP decoder component 1104.
  • the parameters determining the size of the block, the bit width of the scale factor, and the bit width of the mantissa are provided by the decompression control component 1110.
  • the regular mode is used to extract the bits of the index of the Golomb parameter k, the bits of the scale factor, and the bits of the binary portion of each EG encoded sample.
  • the parameters determining the size of the block, the bit width of the index, the bit width of the scale factor, and the bit width of the encoded sample are provided by the decompression control component 1110.
  • the bit unpacking component 1102 is a shift register.
  • the compression management component 400 is configurable to provide the PAC compression technique (which is a specialized type of BFP) described hereinabove.
  • the first pass of the BFP compression is skipped, because the scale factor and the mantissa bit width are known. Further, no header is included in the compressed sample block.
  • the compression management component 400 is configurable to use user-specified values for the Golomb parameter k and the scale factor for EG compression.
  • the first pass of the EG compression is skipped, because the parameter values for the EG compression are known.
  • no header is included in the compressed sample block.
  • the user-specified values for the Golomb parameter k may be an array of values. As blocks of samples are compressed by the EG encoder 1006, the values in the array are used in turn to encode a block of samples.
  • the first value is used to encode a block of samples
  • the second value is used to encode the next block of samples
  • the third value is used to encode the next block of samples, etc., until all of the values in the array (such as 32) have been used.
  • the cycle then repeats beginning with the first value in the array.
  • the array may include any suitable number of values.
  • the size of the array is based on the maximum number of sample blocks that may be stored in an input ping/pong buffer.
  • in variable bit width block floating point (VBBFP) compression, a block of samples to be compressed is divided into multiple equal sized sub-blocks of m samples. Any suitable value of m may be used.
  • the block of samples is referred to as a super block herein.
  • the number of samples in a super block and the number of samples m in a sub-block may be determined empirically.
  • the mantissas for samples in a sub-block in a super block are determined using a super block scale factor computed for the super block in addition to a scale factor computed for the sub-block.
  • the VBBFP compression is a two pass process in which the parameters for compression of a super block of samples are determined in the initial pass and the actual compression of the samples is performed in the second pass using the parameters.
  • FIG. 17 is a flow diagram of an example method for determining the VBBFP compression parameters
  • FIG. 18 is a flow diagram of an example method for performing the VBBFP compression using these parameters.
  • FIG. 16 illustrates the format of a VBBFP compressed sample super block assuming the super block includes two sub-blocks.
  • the compressed block begins with a header containing the super block scale factor B for the samples in the super block, followed by the BFP compressed sub-blocks of the super block, each of which has the format of FIG. 12.
  • the bit width bwB of the super block scale factor B and the bit width bw of the common scale factors b1 and b2 may be user-specified.
  • padding may exist at the end of the compressed block, if the total bit width of the compressed super block is less than the desired bit width.
  • the number of padding bits depends on the desired bit width, the specified mantissa bit width, the specified common scale factor bit width, the specified super block scale factor bit width, and the number of samples per super block.
  • FIG. 17 is a flow diagram of an example method for determining the VBBFP compression parameters of a super block of samples.
  • the numbers of bits for the mantissas for any of the sub-blocks is not fixed. Instead, in the first pass, the mantissas are computed assuming no quantization is necessary.
  • a sub-block scale factor is computed 1700 for each sub-block 1702.
  • the maximum of the absolute value for each sample in a sub-block is computed. The bit width of this maximum is the bit width for the mantissa of each sample in the sub-block, so the computed bit width is the sub-block scale factor b.
  • the bit width of the compressed super block is estimated 1704 as bwB + n·bw + m·Σi(bi + 1), where:
  • n is the number of sub-blocks in the super block and m is the number of samples in a sub-block
  • bi is the sub-block scale factor for sub-block i
  • bw is the bit width of each sub-block scale factor
  • bwB is the bit width of the super block scale factor.
  • One is added to each sub-block scale factor to accommodate storage of the sign bit.
  • if the estimated bit width is greater than the desired compressed size, a scale factor B for the super block is determined 1708. Otherwise, the scale factor B is set 1710 to zero.
  • the value of the scale factor B is the number of least significant bits to be dropped from each sample in the super block such that the compressed size of the super block will be less than or equal to the desired size.
  • the values of the sub-block scale factors and the super block scale factor are then output 1712 for use in encoding the super block.
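The first pass of FIG. 17 can be sketched as follows. The header width defaults (bw, bwB) and the rounding used to derive B from the excess (one dropped LSB per sample per unit of B) are assumptions of this sketch.

```python
def vbbfp_parameters(super_block, m, desired_bits, bw=4, bwB=4):
    """Per-sub-block scale factors plus a super block scale factor B
    (FIG. 17 sketch). Names m, bw, bwB follow the text."""
    subs = [super_block[i:i + m] for i in range(0, len(super_block), m)]
    b_sub = [max(abs(s) for s in sb).bit_length() for sb in subs]  # steps 1700-1702
    n = len(subs)
    est = bwB + n * bw + m * sum(bi + 1 for bi in b_sub)  # step 1704; +1: sign bit
    if est > desired_bits:                                # steps 1706-1708
        excess = est - desired_bits
        B = -(-excess // len(super_block))                # ceil: LSBs dropped/sample
    else:
        B = 0                                             # step 1710
    return b_sub, B
```

For instance, the super block [3, -4, 100, 7] split into two sub-blocks of m = 2 gives sub-block scale factors [3, 7]; with a generous budget B stays 0, while a tight 30-bit budget forces B = 2.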
  • FIG. 18 is a flow diagram of an example method for performing the VBBFP compression of each sample in a super block using the sub-block scale factors and the super block scale factor.
  • the sample value is truncated by dropping 1800 a number of least significant bits of the sample value as indicated by the super block scale factor B.
  • the mantissa of the sample is then computed 1802 and output 1806. Computation of the mantissa is similar to steps 1400-1404 of FIG. 14.
  • the super block scale factor is output before the encoded values of the sub-blocks, and the sub-block scale factor for a sub-block is output before the mantissas of the samples in the sub-blocks.
  • FIG. 19 is a flow diagram of an example method for VBBFP decoding of a compressed super block given the super block scale factor B and the sub-block scale factors.
  • for each sample 1906 in each sub-block 1908 of the compressed super block, the mantissa is sign extended 1900 to 32 bits. The bit width of the mantissa is given by the sub-block scale factor b. The result is then multiplied 1902 by 2^B, where B is the super block scale factor, to generate the decoded sample, and the decoded sample is output 1904.
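The per-sample decode of FIG. 19 reduces to the following sketch; Python integers stand in for the 32-bit datapath.

```python
def vbbfp_decode_sample(mantissa, b, B):
    """Sign extend a b-bit mantissa and scale by the super block
    scale factor 2**B (FIG. 19 sketch)."""
    if mantissa & (1 << (b - 1)):    # mantissa width given by b
        mantissa -= 1 << b           # twos-complement sign extension
    return mantissa << B
```

Note the two-level structure: the sub-block scale factor b only fixes the mantissa's width, while the super block scale factor B restores the dropped LSBs.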
  • FIG. 20 is a block diagram of an example FMCW radar system 2000 configured to perform compression of radar signals as described herein.
  • the radar system is a radar integrated circuit (IC) suitable for use in embedded applications.
  • the radar IC 2000 may include multiple transmit channels 2004 for transmitting FMCW signals and multiple receive channels 2002 for receiving the reflected transmitted signals. Any suitable number of receive channels and transmit channels may be provided, and the number of receive channels may differ from the number of transmit channels.
  • a transmit channel includes a suitable transmitter and antenna.
  • a receive channel includes a suitable receiver and antenna.
  • each of the receive channels 2002 is identical and includes a low-noise amplifier (LNA) 2005, 2007 to amplify the received radio frequency (RF) signal; a mixer 2006, 2008 to mix the transmitted signal with the received signal to generate an intermediate frequency (IF) signal (alternatively referred to as a dechirped signal, beat signal, or raw radar signal); a baseband bandpass filter 2010, 2012 for filtering the beat signal; a variable gain amplifier (VGA) 2014, 2016 for amplifying the filtered IF signal; and an analog-to-digital converter (ADC) 2018, 2020 for converting the analog IF signal to a digital IF signal.
  • the receive channels 2002 are coupled to a digital front end (DFE) component 2022 to provide the digital IF signals to the DFE 2022.
  • the DFE includes functionality to perform decimation filtering on the digital IF signals to reduce the sampling rate and bring the signal back to baseband.
  • the DFE 2022 may also perform other operations on the digital IF signals, such as DC offset removal.
  • the DFE 2022 is coupled to the signal processor component 2044 to transfer the output of the DFE 2022 to the signal processor component 2044.
  • the signal processor component 2044 is configured to perform signal processing on the digital IF signals of a frame of radar data to detect any objects in the FOV of the radar system 2000 and to identify the range, velocity and angle of arrival of detected objects.
  • the signal processor component 2044 is coupled to the radar data memory component 2024 via the direct memory access (DMA) component 2046 to read and write data to the radar data memory 2026 during the signal processing.
  • the signal processor component 2044 executes software instructions stored in the memory component 2048.
  • the signal processor component 2044 may include any suitable processor or combination of processors.
  • the signal processor component 2044 may be a digital signal processor, an MCU, an FFT engine, a DSP+MCU processor, a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC).
  • the radar data memory component 2024 provides storage for radar data during the signal processing performed by the signal processor component 2044.
  • the radar data storage component 2024 includes a compression management component 2025 and a radar data memory component 2026.
  • the radar data memory component 2026 may be any suitable random access memory (RAM), such as static RAM.
  • the radar data memory component 2026 includes sufficient memory to store radar data corresponding to the largest expected frame of chirps.
  • the compression management component 2025 implements compression and decompression of blocks of range values. More specifically, the compression management component 2025 is coupled to the DMA component 2046 to receive the results of the range FFTs performed by the signal processor component 2044.
  • the compression management component 2025 includes functionality to compress blocks of the range values (i.e., range samples) and to provide the compressed sample blocks to the DMA component 2046 for storage in the radar data memory component 2026.
  • the compression management component 2025 is coupled to the DMA component 2046 to receive compressed sample blocks from the radar data memory component 2026.
  • the compression management component 2025 includes functionality to decompress the compressed sample blocks and to provide the decompressed samples (range values) to the DMA component 2046 for storage in the memory 2048 for further processing by the signal processor component 2044.
  • the compression management component 2025 may include functionality to implement BFP compression/decompression, EG compression/decompression, PAC compression/decompression, and/or VBBFP compression/decompression as described herein.
  • the compression management component 2025 may have the architecture of the compression management component 400 of FIG. 4.
  • the on-chip memory component 2048 provides on-chip storage (such as a computer readable medium) that may be used to communicate data between the various components of the radar IC 2000, and to store software programs executed by processors on the radar IC 2000.
  • the on-chip memory component 2048 may include any suitable combination of read-only memory and/or random access memory (RAM), such as static RAM.
  • the direct memory access (DMA) component 2046 is coupled to the radar data storage component 2024 to perform data transfers between the radar data memory 2026 and the signal processor component 2044.
  • the control component 2027 includes functionality to control the operation of the radar IC 2000.
  • the control component 2027 may include an MCU that executes software to control the operation of the radar IC 2000.
  • the serial peripheral interface (SPI) 2028 provides an interface for external communication of the results of the radar signal processing.
  • the results of the signal processing performed by the signal processor component 2044 may be communicated to another processor for application specific processing, such as object tracking, rate of movement of objects and direction of movement.
  • the programmable timing engine 2042 includes functionality to receive chirp parameter values for a sequence of chirps in a radar frame from the control component 2027 and to generate chirp control signals that control the transmission and reception of the chirps in a frame based on the parameter values.
  • the chirp parameters are defined by the radar system architecture and may include a transmitter enable parameter for indicating which transmitters to enable, a chirp frequency start value, a chirp frequency slope, an analog-to-digital (ADC) sampling time, a ramp end time, and a transmitter start time.
  • the radio frequency synthesizer (RFSYNTH) 2030 includes functionality to generate FMCW signals for transmission based on chirp control signals from the timing engine 2042.
  • the RFSYNTH 2030 includes a phase locked loop (PLL) with a voltage controlled oscillator (VCO).
  • the multiplexer 2032 is coupled to the RFSYNTH 2030 and the input buffer 2036.
  • the multiplexer 2032 is configurable to select between signals received in the input buffer 2036 and signals generated by the RFSYNTH 2030.
  • the output buffer 2038 is coupled to the multiplexer 2032 and may be used to transmit signals selected by the multiplexer 2032 to the input buffer of another radar IC.
  • the clock multiplier 2040 increases the frequency of the transmission signal to the frequency of the mixers 2006, 2008.
  • the clean-up PLL (phase locked loop) 2034 operates to increase the frequency of the signal of an external low frequency reference clock (not shown) to the frequency of the RFSYNTH 2030 and to filter the reference clock phase noise out of the clock signal.
  • FIGS. 21-26 are block diagrams of example DMA architectures.
  • FIG. 21 is a block diagram illustrating the normal mode of operation without memory compression/decompression
  • FIGS. 22-26 illustrate modifications for inserting a compression management component between the DMA and the radar data memory storing the range values from the radar signal pre-processing, such as the radar data memory component 2026 of FIG. 20.
  • FIGS. 21-26 use the terms ACNT, BCNT, SRC BINDX, and DST BINDX, which are commonly understood in the context of a DMA.
  • ACNT refers to the number of bytes transferred in the first dimension
  • BCNT refers to the total number of such first dimension transfers that constitute a two dimension transfer.
  • SRC BINDX and DST BINDX refer to the amounts by which the source pointer and destination pointer, respectively, are incremented after the completion of each first dimension transfer.
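The ACNT/BCNT/BINDX semantics can be illustrated with a byte-array model of the normal 2D transfer of FIG. 21. This is a hypothetical helper for exposition: a real DMA moves bytes between memories, not Python buffers.

```python
def dma_2d_transfer(src, dst, acnt, bcnt, src_bindx, dst_bindx,
                    src_addr=0, dst_addr=0):
    """Normal 2D DMA sketch (FIG. 21): BCNT first-dimension transfers
    of ACNT bytes each, with the source and destination pointers
    advanced by SRC BINDX / DST BINDX after each one."""
    for j in range(bcnt):
        s = src_addr + j * src_bindx
        d = dst_addr + j * dst_bindx
        dst[d:d + acnt] = src[s:s + acnt]
```

With ACNT = 2, BCNT = 3, SRC BINDX = 4, and DST BINDX = 2, the transfer gathers three 2-byte chunks spaced 4 bytes apart at the source into a contiguous 6-byte destination region.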
  • in FIG. 22, an example of DMA for compression is illustrated.
  • a block of size ACNT bytes is input to the compression management component and the output of the compression management component is CB bytes.
  • the number of CB bytes is known by the compression management component. Everything else is the same as the normal DMA operation including the starting location of the destination.
  • the compression management component latches the write address from the DMA every ACNT input bytes and the compressed data of CB bytes is written contiguously into the radar data memory from the last latched address. This mode of operation is useful when the data to be compressed is available contiguously at the source.
  • in FIG. 23, another example of DMA for compression is illustrated.
  • a block of size ACNT x BCNT is input to the compression management component and the output of the compression management component is CB bytes.
  • the number of CB bytes is known by the compression management component. Everything else is the same as the normal DMA operation including the starting location of the destination.
  • the compression management component latches the write address from the DMA every ACNT x BCNT input bytes and the compressed data of CB bytes is written contiguously into the radar data memory from the last latched address. This mode of operation is useful when the data to be compressed is not available contiguously at the source, such as for compressing data across receive channels into a single block.
  • in FIG. 24, an example of DMA for decompression is illustrated.
  • a block of CB contiguous bytes is input to the compression management component, where the number of CB bytes is known by the compression management component.
  • the compression management component latches the read address every ACNT output bytes and reads CB bytes contiguously from radar data memory, starting from the last latched address.
  • in FIG. 25, another example of DMA for decompression is illustrated.
  • a block of CB contiguous bytes is input to the compression management component, where the number of CB bytes is known by the compression management component.
  • the compression management component latches the read address every ACNT x BCNT output bytes and reads CB bytes contiguously from radar data memory, starting from the last latched address.
  • in FIG. 26, another example of DMA for decompression is illustrated. This mode of operation is useful for decompressing variable length codes such as exponential Golomb codes.
  • multiple bins of range values are compressed as a single block where the compressed block size is fixed while each bin included in the block may have a variable number of bits.
  • each bin's size is ACNT bytes.
  • the first block of compressed data starts at a source address SRC ADDR, with each subsequent block being placed SRC BINDX bytes away.
  • a number BCNT of such compressed blocks exist.
  • the compression management component traverses the BCNT blocks in sequence, picking up the next bin from each block.
  • Each bin is decompressed to ACNT bytes that are passed to the DMA. This process is repeated CCNT number of times such that the CCNT bins in each of the BCNT blocks are decompressed.
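The traversal order of FIG. 26 can be modeled as follows. This is illustrative only: it shows the (block address, bin index) visit order, while the variable-length bin offsets inside each block are tracked by the compression management component itself.

```python
def interleaved_bin_order(src_addr, src_bindx, bcnt, ccnt):
    """Visit order for FIG. 26: BCNT compressed blocks are traversed
    in sequence CCNT times, picking up the next bin from each block
    on every pass."""
    order = []
    for c in range(ccnt):                 # bin index within each block
        for b in range(bcnt):             # block index
            order.append((src_addr + b * src_bindx, c))
    return order
```

So for two blocks spaced 100 bytes apart and two bins per block, bin 0 of each block is decompressed before bin 1 of either.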
  • FIG. 27 is a flow diagram of a method for compressing radar signals in a radar system. This method may be performed to compress blocks of range values generated by range FFTs applied to digitized intermediate frequency (IF) signals from receive channels of the radar system. The range values are generated during processing of reflected signals received by the receive channels as the result of transmitting a frame of chirps. Accordingly, blocks of range values corresponding to a transmitted frame of chirps are compressed.
  • the block is compressed 2702 to generate a compressed block of range values, and the compressed block of range values is stored 2704 in radar data memory.
  • the block is compressed using BFP compression as described herein.
  • the block is compressed using order K EG compression as described herein.
  • the block is compressed using PAC compression as described herein.
  • the block is compressed using VBBFP compression as described herein.
  • compression 2702 of the block includes selecting the optimal type of compression for the block from two or more compression types based on the quantization error of each type of compression.
  • the compression types may include two or more of the compression types mentioned hereinabove.
  • Embodiments have been described herein in which a single compression technique is used for compressing range values.
  • the best method is selected for a block of samples. For example, the size of the compressed block and the number of bits to drop (quantization error) using EG, BFP, and/or VBBFP can be computed. Then, the method that adds the least quantization error (i.e., the method that uses the smallest scale factor) can be selected to compress the block. In such embodiments, one or more bits may be added to the compressed output to indicate which compression technique was used.
  • the quantization introduced (i.e., the scale factor) may be made available to the user as an indicator of the quality of the compression.
  • the quality of the compression increases as the value of the scale factor decreases. A user can use this information to decide whether too much information is lost during compression and adjust compression parameters accordingly.
  • when a compressed block is decompressed into k samples, a user-specified subset of those k samples may be provided as the decompression output rather than all of the decompressed samples.
  • the size of the decompressed data may exceed the available memory as much of the available memory may be storing compressed sample blocks.
  • a user may configure the compression, such that different portions of the range values are compressed by different amounts. For example, if the output of the range FFT is N samples, the user may specify that the initial K samples are to be compressed using BFP compression and the remaining N-K samples are to be compressed using EG compression.
  • the samples are complex samples, and the real and imaginary parts of the sample values are compressed separately.
  • the radar system is an embedded radar system in a vehicle.
  • Embodiments are possible for other applications of embedded radar systems, such as surveillance and security applications, and maneuvering a robot in a factory or warehouse.
  • "Coupled" and derivatives thereof include an indirect, direct, optical and/or wireless electrical connection. For example, if a first device couples to a second device, then such connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, and/or through a wireless electrical connection.
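Selecting among compression types by quantization error, as described in the bullets above, can be sketched as follows. This is an illustrative model only: the function names are assumptions, the BFP scale factor follows the first-pass computation described herein, and the EG scale factor is assumed to have been computed separately (e.g., per Table 1).

```python
def bfp_scale_factor(block, mantissa_bw):
    # BFP first pass: bit width of the largest magnitude plus a sign bit,
    # less the desired mantissa bit width (floored at zero).
    bw = max(abs(v) for v in block).bit_length() + 1
    return max(bw - mantissa_bw, 0)

def select_compression(block, mantissa_bw, eg_scale):
    # The method with the smallest scale factor drops the fewest LSBs and
    # therefore adds the least quantization error.
    candidates = {"BFP": bfp_scale_factor(block, mantissa_bw), "EG": eg_scale}
    method = min(candidates, key=candidates.get)
    return method, candidates[method]

method, scale = select_compression([23, 127, 64, 124], mantissa_bw=7, eg_scale=3)
```

One or more bits identifying `method` would then be prepended to the compressed output, as described above.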


Abstract

In described examples, a radar system (2000) includes a compression component (2025) configured to compress blocks of range values to generate compressed blocks of range values, and a radar data memory (2026) configured to store compressed blocks of range values generated by the compression component (2025).

Description

METHOD AND SYSTEM FOR COMPRESSION OF RADAR SIGNALS
[0001] This relates generally to radar systems, and more particularly to compression of radar signals in radar systems.
BACKGROUND
[0002] The use of embedded frequency modulated continuous wave (FMCW) radar systems in applications such as automotive applications is evolving rapidly. For example, embedded FMCW radar systems are useful in a number of applications associated with a vehicle such as adaptive cruise control, collision warning, blind spot warning, lane change assist, parking assist and rear collision warning. Processing of radar signals in an FMCW radar system to obtain a three dimensional image (range, velocity, and angle) of objects in the field of view of the radar system includes multi-dimensional Fourier transform processing which requires a significant amount of memory to store the radar data. The amount of on-chip memory on radar transceiver integrated circuits (ICs) used in embedded FMCW radar systems constrains the amount of data that can be stored, and thus limits the capabilities of the radar transceiver ICs. Including larger memory capacity incurs an undesirable increase in both die size and cost of the IC.
SUMMARY
[0003] In described examples of methods and apparatus for compression of radar signals in radar systems, a radar system includes a compression component configured to compress blocks of range values to generate compressed blocks of range values, and a radar data memory configured to store compressed blocks of range values generated by the compression component.
[0004] For compression of radar signals in a radar system, an example method includes receiving blocks of range values generated from processing of digitized intermediate frequency (IF) signals, compressing each block of range values to generate a compressed block of range values, the compressing performed by a compression component of the radar system, and storing the compressed blocks of range values in radar data memory.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 is an example of binary floating point (BFP) compression.
[0006] FIG. 2 is an example of bit packing (PAC) compression.
[0007] FIG. 3 is an example of order k exponential Golomb (EG) compression.
[0008] FIGS. 4 and 5 are block diagrams of an example high-level architecture of a compression management component.
[0009] FIGS. 6, 7, 8 and 9 are flow diagrams of example methods for determining compression parameters.
[0010] FIGS. 10 and 11 are block diagrams of an example high-level architecture of a compression management component.
[0011] FIG. 12 illustrates an example format of a BFP compressed sample block.
[0012] FIG. 13 illustrates an example format of an EG compressed sample block.
[0013] FIG. 14 is a flow diagram of an example method for extracting the mantissas of samples for BFP compression.
[0014] FIG. 15 is a flow diagram of an example method for EG encoding of samples in a block of samples.
[0015] FIG. 16 illustrates an example format of a variable bit width BFP (VBBFP) compressed sample block.
[0016] FIG. 17 is a flow diagram of an example method for determining VBBFP compression parameters.
[0017] FIG. 18 is a flow diagram of an example method for extracting the mantissas of samples for VBBFP encoding.
[0018] FIG. 19 is a flow diagram of an example method for decompression of a VBBFP encoded sample block.
[0019] FIG. 20 is a block diagram of an example frequency modulated continuous wave (FMCW) radar system.
[0020] FIGS. 21-26 are block diagrams of example direct memory access architectures.
[0021] FIG. 27 is a flow diagram of a method for compressing radar signals in a radar system.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0022] Like elements in the various figures are denoted by like reference numerals for consistency.
[0023] A frequency modulated continuous wave (FMCW) radar transmits, via one or more transmit antennas, a radio frequency (RF) frequency ramp referred to as a chirp. Further, multiple chirps may be transmitted in a unit referred to as a frame. The transmitted chirps are reflected from any objects in the field of view (FOV) of the radar and are received by one or more receive antennas. The received signal for each receive antenna is down-converted to an intermediate frequency (IF) signal and then digitized. The digitized samples are pre-processed and stored in memory, which is referred to as radar data memory herein. After the data for an entire frame is stored in the radar data memory, the data is post-processed to detect any objects in the FOV and to identify the range, velocity and angle of arrival of detected objects.
[0024] The pre-processing may include performing a range fast Fourier transform (FFT) on the digitized samples of each reflected chirp to convert the data to the frequency domain. This range FFT may also be referred to as a one-dimensional (1D) FFT. Peak values correspond to ranges (distances) of objects. This processing is usually performed in-line, so the range FFT is performed on the digitized samples of a previous chirp while samples are being collected for the current chirp. The results of the range FFTs (i.e., range values) for each receive channel are saved in the radar data memory for further processing. Usually, the results of the range FFTs are stored row-wise in the radar data memory, forming an array of range values.
[0025] For each range, a Doppler FFT is performed over each of the corresponding range values of the chirps in the frame. Accordingly, a Doppler FFT is performed on each column of the array of range values stored in the radar data memory. This Doppler FFT may also be referred to as a two-dimensional (2D) FFT. The peaks in the resulting range-Doppler array correspond to the range and relative speed (velocity) of potential objects. To perform the Doppler FFTs, each column of range values is read from the radar data memory and a Doppler FFT is performed on the range values of the column. The column data access may be referred to as transpose access as the column data access is mathematically equivalent to a transpose operation on the data followed by a row access. The Doppler FFT values may be stored back in the same column memory locations.
[0026] After the Doppler FFTs, other post-processing (such as object detection and angle estimation) may be performed on the range-Doppler array stored in the radar data memory to detect objects in the FOV and to identify the range, velocity and angle of arrival of detected objects. After the post-processing is complete, the data in the radar data memory can be discarded.
[0027] All of the digitized data (corresponding to a frame of chirps) are required to be in the radar data memory before the post-processing (such as Doppler FFT, angle estimation or object detection) can begin. Further, resolution expectations, i.e., range resolution (which is controlled by the number of digitized samples per chirp), velocity resolution (which is controlled by the number of chirps per frame), and angle resolution (which is controlled by the number of receive antennas), directly impact the size of the radar data memory. In the automotive radar application space, the current radar data memory size needed to meet resolution expectations is on the order of one to two megabytes (MB) and is expected to increase in coming years as increased resolution is demanded.
[0028] Embodiments of the disclosure provide memory compression techniques for radar data that permit more radar data to be stored in radar data memory, thus allowing for increased resolution in the same amount of memory. Accordingly, the compression techniques are useful to reduce on-chip memory requirements while maintaining the capabilities of a larger device. The compression techniques are designed for radar signal processing and are performed after the range FFT when the samples output by the range FFT are stored in radar data memory.
[0029] In some embodiments, block floating point (BFP) compression of radar data is performed after the 1D FFT. Block floating point representations in signal processing increase the dynamic range that can be represented by a limited number of bits. Accordingly, a block floating point representation can cover a wide dynamic range while representing the signal data with reduced accuracy. In an example block floating point representation, a block of samples is represented as an exponent common to each sample and a mantissa for each sample. The common exponent is determined for the block of samples based on the largest magnitude sample in the block. In some instances, the mantissa for each sample in the group is represented by the number of bits that accommodates the mantissa of the largest sample. In other instances, the size of the mantissa is fixed based on the desired accuracy and compression size. In such instances, the mantissa for each sample is the k most significant bits of each sample beginning with the most significant one bit in the sample, where k is the desired mantissa size.
[0030] The bits representing the common exponent and the mantissas for the block may be packed consecutively to represent compressed samples for the block. Block floating point representations are useful for signal dynamics where the amplitudes fluctuate over time yet neighboring samples have similar amplitudes in a particular group. In the rest of this document, the term "common scale factor" is used in lieu of "common exponent". The common scale factor is closely related to the common exponent in BFP with a subtle difference. In generic BFP, the mantissa is considered to be a fraction between 0 and 1. The sample is regenerated by computing mantissa x 2^e, where e is the exponent. In the BFP described below, the mantissa is an integer between 0 and 2^mantissabw - 1, where mantissabw is the mantissa bit width.
[0031] In FMCW radar signal processing, a chirp after the 1D FFT may have a dynamic range as high as 90 dB. This high dynamic range is the result of the path loss difference between nearby targets and faraway targets. Such a high dynamic range may not be desirable for BFP representation as a 90 dB dynamic range would require approximately fifteen bits of mantissa as each bit provides approximately 6 dB of dynamic range. However, the dynamic range across antennas for a single range bin may be relatively small (such as less than 30 dB, which would require approximately five bits of mantissa). Further, the dynamic range for adjacent range bins and for the same range bin across different chirps may also be quite small. Thus, a block floating point compression technique is useful to compress samples after the 1D FFT that are either in the same or in adjacent range bins.
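The roughly 6 dB per bit figure used above follows directly from the decibel definition:

$$20\log_{10}\!\left(2^{N}\right) = 20\,N\log_{10}2 \approx 6.02\,N\ \mathrm{dB},$$

so fifteen mantissa bits cover approximately 90 dB and five bits approximately 30 dB, consistent with the figures above.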
[0032] For example, consider an FMCW radar system with two receive channels where the output of the 1D FFT is a complex sample of 32 bits, 16 bits for the in-phase (I) channel, and 16 bits for the quadrature (Q) channel of each receive channel. Using BFP compression, the output of the 1D FFT can be compressed by taking a block of two samples (2*2*16 = 64 bits) corresponding to the same range bin across the two receive channels, using a common scale factor of four bits for the block, and using a mantissa of seven bits for each of the four samples in the block. The total compressed block size is 32 bits (2*2*7+4 = 32 bits) resulting in a 50 percent compression. Because each mantissa occupies seven bits, the dynamic range possible per bin is approximately 42 dB. Recall that the per bin dynamic range requirement of 30 dB due to the dynamic range across antennas for a single range bin is also met with approximately 12 dB margin.
[0033] Generally, to perform the BFP compression of the 1D FFT samples, the common scale factor for a block of samples is determined and then the samples are compressed based on this common scale factor. The determination of the common scale factor is based on the absolute value of the largest sample in the block. FIG. 1 is an example of BFP compression for a block of eight bit samples [23 127 64 124]. For simplicity, this example uses a sample bit width of eight. A more typical sample bit width in current FMCW radar systems is sixteen, and may be larger in future systems. The sample values in the block are written in binary as [00010111b, 01111111b, 01000000b, 01111100b]. The original bit width of the block of samples is 32 bits.
[0034] To achieve fifty percent compression, the compressed block size should be 16 bits. Of the 16 bits, three bits are allocated for the scale factor as each sample is eight bits and twelve of the remaining thirteen bits are divided among the four mantissas, such that each is allocated three bits. The thirteenth bit is not used. The scale factor is based on the maximum value of the four samples, 127, which is seven bits wide. Therefore, the three bits of the mantissa for each sample will be bits [6, 5, 4], and the common scale factor will be four, because four bits [3, 2, 1, 0] per sample are dropped. The compressed block is then the three bit scale factor 100b followed by the four three bit mantissas, each of which is the three most significant bits (MSBs) of the respective sample.
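The FIG. 1 example can be reproduced with a short sketch. This is a simplified model assuming unsigned eight bit samples and plain truncation, as in the figure; the function name is illustrative.

```python
def bfp_compress(block, mantissa_bw):
    # First pass: common scale factor from the bit width of the largest
    # magnitude; second pass: keep the top mantissa_bw bits of each sample.
    bw = max(abs(v) for v in block).bit_length()
    scale = max(bw - mantissa_bw, 0)
    return scale, [v >> scale for v in block]

scale, mantissas = bfp_compress([23, 127, 64, 124], mantissa_bw=3)
# scale is 4 (bits [3, 2, 1, 0] dropped); each mantissa is bits [6, 5, 4]
decompressed = [m << scale for m in mantissas]
```

Decompression shifts each mantissa back up by the scale factor, so [23, 127, 64, 124] is recovered as [16, 112, 64, 112], showing the quantization introduced by dropping four bits.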
[0035] In some embodiments, sample values are rounded before truncation to reduce the effect of quantization. Mathematically, the rounding is as follows. If n bits are to be dropped from a sample value, 2^(n-1) is added to the value and the result is truncated by n bits. As explained in more detail herein, in some embodiments, dither may be added rather than 2^(n-1). In the example of FIG. 1, rounding and dither are not used.
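A minimal sketch of the rounding step (illustrative only; note that the addition can in principle carry into an extra bit, which a hardware implementation would need to saturate or otherwise account for):

```python
def round_and_truncate(v, n):
    # Add 2^(n-1) before dropping n LSBs so the truncation rounds to nearest.
    return (v + (1 << (n - 1))) >> n

# Dropping n = 3 bits from 21: plain truncation gives 21 >> 3 = 2,
# while rounding first gives (21 + 4) >> 3 = 3, the nearest multiple of 8.
```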
[0036] In some embodiments, a specialized type of BFP compression referred to as bit packing (PAC) is performed. In the PAC compression technique, the input samples are stored using a fixed scale factor and mantissa bit width. Storage of the scale factor is unnecessary, because the value is fixed. For example, assuming 32 bit samples, a common scale factor of fourteen, and mantissa bit widths of 18 bits, 32-bit I and 32-bit Q samples can be stored as 18-bit I and 18-bit Q samples. FIG. 2 illustrates PAC using the example of FIG. 1 with a fixed scale factor of four and a mantissa bit width of four bits.
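A sketch of PAC with the fixed parameters of the FIG. 2 example (scale factor four, four bit mantissas); the function names are illustrative:

```python
def pac_compress(block, scale, mantissa_bw):
    # Fixed scale factor and mantissa bit width, so no scale factor is stored.
    mask = (1 << mantissa_bw) - 1
    return [(v >> scale) & mask for v in block]

def pac_decompress(mantissas, scale):
    return [m << scale for m in mantissas]

packed = pac_compress([23, 127, 64, 124], scale=4, mantissa_bw=4)
restored = pac_decompress(packed, scale=4)
```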
[0037] In some embodiments, exponential Golomb (EG) compression is performed after the ID FFT. Radar data is expected to be sparse in the range dimension, because usually a few large samples correspond to object reflections, and the remaining samples are relatively small. Thus, the average bit width across the range dimension is small. Accordingly, a variable bit width compression technique in which each sample occupies a space proportional to the sample bit width can significantly reduce the average bit width (per sample) needed to store the data.
[0038] One such variable bit width technique is order k exponential Golomb (EG) coding. For example, a description of such coding is located in "Exponential-Golomb coding," Wikipedia, available at https://en.wikipedia.org/wiki/Exponential-Golomb_coding on January 22, 2016, which is incorporated by reference herein. Generally, order k exponential Golomb codes are parameterized by a value "k", which may be referred to as the Golomb parameter k herein. The Golomb parameter k represents the most common bit width in the input vector and is used to determine the boundary between the variable bit width quotient of the encoded value and the fixed bit width remainder. As explained in more detail herein, in some embodiments the value of k is selected by searching a list of possible values, allowing the value to be optimized based on input sample values.
[0039] FIG. 3 is an example of order k EG coding of a value x = 21 assuming k = 2 and a sample bit width of eight. Initially, the sample value is divided 300 into a quotient and a remainder, the remainder being the least significant k bits of the sample value. A value of 1 is then added 302 to the quotient, which is the equivalent of adding 2^k to sample value x. The bit width nextra of the incremented quotient is then determined 304 and the compressed sample is constructed 306. In the compressed sample, the first two bits are the unary representation of nextra - 1, the middle three bits are the binary representation of the incremented quotient, and the final two bits are the binary representation of the remainder.
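The FIG. 3 construction can be written out as follows (a sketch for non-negative samples; sign handling is omitted):

```python
def eg_encode(x, k):
    # Order k exponential Golomb code: unary prefix, incremented quotient,
    # then the k bit remainder.
    remainder = x & ((1 << k) - 1)   # least significant k bits
    q1 = (x >> k) + 1                # quotient + 1 (equivalent to adding 2^k to x)
    nextra = q1.bit_length()
    rem_bits = format(remainder, "b").zfill(k) if k else ""
    return "0" * (nextra - 1) + format(q1, "b") + rem_bits

code = eg_encode(21, 2)
```

For x = 21 and k = 2 this yields "0011001": the unary prefix "00" (nextra - 1 zeros), the incremented quotient "110", and the remainder "01", matching FIG. 3.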
[0040] As mentioned hereinabove, in FMCW radar signal processing, a range FFT is first performed on the digitized time domain samples corresponding to a chirp. The range FFT samples are then stored in an array in radar data memory. For purposes of this discussion, the storage is assumed to be row-wise. In some embodiments, the data is stored column-wise. To perform the subsequent 2D FFT (or any higher dimension FFT), the data in this array is accessed column wise, which requires a 'transpose access' operation. Further, when accessing samples in a column, these samples are not contiguous in memory. Thus, if a direct memory access (DMA) device is used, the DMA needs to be programmed to access each sample after the first sample using address offsets, commonly referred to as jumps. Such memory accesses are most efficient if the jumps are constant. If the jumps are not constant, the sequence of jumps needed to access a column would need to be remembered, which would require additional memory and add to the compression overhead.
[0041] In some embodiments, to ensure that the jumps are constant, both the BFP and EG schemes compress a fixed number of samples, which may be referred to as a block of samples herein, into a fixed number of bits. This is straightforward when using BFP compression, because the mantissa and common scale factor bit widths are fixed, so the compressed size is constant. However, the textbook EG encoding is a variable bit width technique with no guarantee of the bit width of the encoded output. Thus, quantization is performed if the desired bit width is not achieved using textbook EG encoding. In some embodiments, this quantization takes the form of dropping some of the least significant bits to guarantee the desired bit width. The number of bits to drop is referred to as a scale factor or EG scale factor herein.
[0042] FIGS. 4, 5, 10 and 11 are block diagrams of an example high-level architecture for a compression management component 400 implementing both BFP compression and order k EG compression. The particular type of compression to be used during operation is user configurable. In some embodiments, only one type of compression is implemented. The compression management component 400 is suitable for use in an embedded radar system and manages the compression and decompression of samples output by the 1D FFT of radar signal processing. As indicated in the block diagram of FIG. 4, the compression management component 400 is designed to interface with a direct memory access device (DMA).
[0043] Generally, the compression management component 400 ensures that the compressed output size in bits is less than or equal to a desired value to ensure a predictable and known usage of the available memory. The compression management component 400 provides two-pass compression for both BFP compression and EG compression in which the parameters for the compression operation are determined in the first pass and the actual compression is performed in the second pass according to the determined parameters. For BFP compression, the first pass determines the common scale factor for the block of samples to be compressed. For EG compression, the first pass determines: the optimal value of the Golomb parameter k for the block of samples to be compressed; and the scale factor to use to guarantee a desired compression ratio. This EG scale factor is also referred to as the number of least significant bits to drop.
[0044] Referring to FIG. 4, the compression management component 400 includes a parameter determination engine 402, a compression engine 418, a decompression engine 420, input ping/pong buffers 410, 412, output ping/pong buffers 414, 416, and a linear feedback shift register (LFSR) 408. As explained in more detail in reference to FIGS. 10 and 14, the LFSR 408 provides a dither signal that is used to add dither to encoded samples.
[0045] The input ping/pong buffers 410, 412 are coupled: between the DMA and the compression engine 418 to alternately receive blocks of samples to be compressed; and between the DMA and the decompression engine 420 to alternately receive compressed sample blocks to be decompressed. The output ping/pong buffers 414, 416 are coupled: between the compression engine 418 and the DMA to alternately receive compressed sample blocks to be stored by the DMA in the radar data memory; and between the decompression engine and the DMA to alternately receive decompressed blocks of samples to be stored in memory by the DMA. The ping/pong buffer mechanism is such that if the compression engine or the decompression engine is working on the input ping buffer, the DMA has access to the input pong buffer and vice-versa. Similarly, if the compression engine or the decompression engine is working on the output ping buffer, the DMA has access to the output pong buffer and vice-versa.
[0046] The parameter determination engine 402 implements the first pass of the compression process. The parameter determination engine 402 is coupled to receive a stream of input samples from the DMA as the samples are being stored in the input ping/pong buffers 410, 412. As described in more detail in reference to FIGS. 5-9, the parameter determination engine 402 includes functionality to compute the parameter values for the BFP compression and for the EG compression. Accordingly, the parameter determination engine 402 includes functionality to determine the common scale factor for a block of samples and functionality to determine the Golomb parameter k and the scale factor for a block of samples.
[0047] The compression engine 418 implements the second pass of the compression process. The compression engine 418 is coupled to the parameter determination engine 402 to receive the compression parameter or parameters to be used in compressing a block of samples. As described in more detail in reference to FIG. 10, the compression engine 418 includes functionality to perform BFP compression on a block of samples read from one of the input ping/pong buffers 410, 412 and to store the compressed sample block in one of the output ping/pong buffers 414, 416. The compression engine 418 also includes functionality to perform EG compression on a block of samples read from one of the input ping/pong buffers 410, 412 and to store the compressed sample block in one of the output ping/pong buffers 414, 416.
[0048] The decompression engine 420 reverses the compression performed by the compression engine 418. As described in more detail in reference to FIG. 11, the decompression engine 420 includes functionality to perform BFP decompression on a compressed sample block read from one of the input ping/pong buffers 410, 412 and to store the decompressed block of samples in one of the output ping/pong buffers 414, 416. The decompression engine 420 also includes functionality to perform EG decompression on a compressed sample block read from one of the input ping/pong buffers 410, 412 and to store the decompressed block of samples in one of the output ping/pong buffers 414, 416.
[0049] Referring to FIG. 5, the parameter determination engine 402 includes a sign extend component 502, a leading bits counter component 504, a BFP parameter determination component 506, and an EG parameter determination component 508. The sign extend component 502 sign extends each sample to 32 bits, if needed. The leading bits counter component 504 includes functionality to determine counts of consecutive leading zero bits and consecutive leading one bits following the leading zero bits as needed for the BFP parameter determination component 506 and the EG parameter determination component 508. More specifically, for the BFP parameter determination component 506, the leading bits counter component 504 includes functionality to determine the maximum of the absolute values of the samples in a block and to determine the number of consecutive leading zeros N0 in the most significant bits of the maximum. For example, if the maximum has the value 00000111111010101010101010101000b, then N0 = 5. In some embodiments, the leading bits counter component 504 determines the maximum by performing OR operations to combine the absolute values of the samples to create a sample with the maximum possible bit width. The leading bits counter component 504 is coupled to the BFP parameter determination component 506 to provide the value of N0.
[0050] For the EG parameter determination component 508, the leading bits counter component 504 includes functionality to determine the number of consecutive leading zero bits N0 in the most significant bits of the absolute value of each sample in a block and the number of consecutive one bits N1 following the consecutive leading zero bits in each sample. For example, if the input sample is 00000111111010101010101010101000b, then N0 = 5 and N1 = 6. The leading bits counter component 504 is coupled to the EG parameter determination component 508 to provide the values of both N0 and N1 for each sample.
[0051] The BFP parameter determination component 506 includes functionality to determine the common scale factor for a block of samples. The common scale factor for a block of samples is based on the bit width of the absolute value of the largest sample in the block. As mentioned hereinabove, the leading bits counter component 504 determines the maximum sample value and the number of consecutive leading zeros N0 in the most significant bits of the maximum.
[0052] FIG. 6 is a flow diagram of a method for determining the common scale factor that may be performed by the BFP parameter determination component 506 given the maximum sample value and No. Initially, the bit width bw of the maximum sample value is computed 600. The bit width bw is computed beginning with the first non-zero bit in the most significant bits of the maximum sample value, so the bit width is computed as bw = 32 - N0. The computed bit width bw is then incremented 602 by one to include the sign bit, so bw = bw + 1. The common scale factor b is then computed 604 as the bit width bw less the desired bit width of the mantissa, so b = bw - mantissabw. Finally, the common scale factor is output 606 to the compression engine 418.
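The FIG. 6 computation can be sketched as follows. This is an illustrative model: `bit_length()` plays the role of 32 - N0, and the OR-combining trick from the leading bits counter component is used in place of an explicit maximum.

```python
def common_scale_factor(samples, mantissa_bw):
    # OR-combine the magnitudes: the result has the same bit width as the
    # largest magnitude, i.e. bw = 32 - N0 for 32 bit samples.
    combined = 0
    for v in samples:
        combined |= abs(v)
    bw = combined.bit_length() + 1   # + 1 for the sign bit
    return bw - mantissa_bw

b = common_scale_factor([0b00000111111010101010101010101000], mantissa_bw=7)
```

For the example value above, N0 = 5, so bw = 32 - 5 + 1 = 28 and the common scale factor is b = 28 - 7 = 21.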
[0053] Referring again to FIG. 5, the EG parameter determination component 508 includes functionality to determine the Golomb parameter k and a scale factor b for a block of samples. In some embodiments, the value of the Golomb parameter k is selected from an array of predetermined values. In some such embodiments the values in the array are user-specified. Any suitable number of predetermined values may be in the array. In some embodiments, the number of predetermined values is less than or equal to sixteen. As mentioned hereinabove, the leading bits counter component 504 determines the number of consecutive leading zeros N0 in the most significant bits and the number of consecutive leading ones N1 following the N0 consecutive leading zeros for each sample. FIGS. 7-9 are flow diagrams of a method for determining the Golomb parameter k and a scale factor b for a block of samples given values of N0 and N1 for each sample, and an array of candidate values for k that may be performed by the EG parameter determination component 508.
[0054] Referring first to FIG. 7, initially an encoded block size Si in bits is computed 700 for each of the candidate Golomb parameter values ki in the array of candidate values. An example of computation of the encoded block sizes is described below in reference to FIG. 8. The optimal ki and scale factor b are then determined 702 for the sample block based on the encoded block sizes Si. Determination of the optimal ki and scale factor b is described below in reference to FIG. 9. The index i of the optimal ki and the scale factor b are then output 704 to the compression engine 418.
[0055] FIG. 8 is a flow diagram of an example method for computation of the encoded block size Si for each candidate Golomb parameter value ki. Generally, for each sample 818 in a block of samples, an encoded bit width is computed for each candidate Golomb parameter ki 804 - 816 and the corresponding encoded block size Si is updated 812. For a given sample, initially, the bit width bw1 of the sample without the leading consecutive zero bits is computed 800. The bit width bw2 of the sample without the leading consecutive zero bits N0 and the following consecutive one bits N1 is also computed 802. Then the bit width bw of the sample after the addition of the Golomb parameter 2^ki is computed 806 - 810 for the initial candidate Golomb parameter value ki. The corresponding block size accumulator Si is then updated 812 with the total encoded bit width, which is given by 2bw - (ki + 1). The steps of computing 806 - 810 the bit width bw and updating 812 the block size accumulator Si with the total encoded bit width are then repeated for the next candidate ki, if any 816.
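A minimal software sketch of this first pass might look as follows. Here the bit width of |x| + 2^ki is computed directly, rather than derived from the N0/N1 leading-bit counts as in the hardware, and the function name is illustrative:

```python
def eg_encoded_block_sizes(samples, k_candidates):
    """Estimate the EG-encoded block size Si for each candidate ki (FIG. 8)."""
    sizes = [0] * len(k_candidates)
    for x in samples:
        for i, k in enumerate(k_candidates):
            bw = (abs(x) + (1 << k)).bit_length()  # width of x + 2^ki
            sizes[i] += 2 * bw - (k + 1)           # encoded width per sample
    return sizes
```

For the single sample 21 with k = 3, the value 21 + 8 = 29 has bit width 5, giving an encoded width of 2*5 - 4 = 6 bits.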
[0056] FIG. 9 is a flow diagram of an example method for determination of the optimal ki and scale factor b given the encoded block sizes Si. Initially, a scale factor bi for each candidate Golomb parameter value ki is computed 900 - 908 based on the corresponding encoded block size Si. A scale factor bi is computed by first computing 902 the difference ei between the encoded block size Si and a desired encoded size. The computed difference ei is then used to compute 904 the number of bits that would need to be dropped to meet the desired size, which gives the scale factor bi. Pseudo code for computing the scale factor bi is shown in Table 1. In this table, log2_nsamps_blk is the bit width of the number of samples in the block.
Table 1
If ei ≤ 0
    bi = 0;
Elseif mod(ei, log2_nsamps_blk) == 0
    bi = (ei >> log2_nsamps_blk);
Else
    bi = (ei >> log2_nsamps_blk) + 1;
[0057] After a scale factor bi is computed for each ki, the minimum valid bi is selected 910 as the scale factor b for compressing the sample block and the corresponding candidate Golomb parameter value ki is selected as the Golomb parameter k. A scale factor bi is valid if bi < ki. The scale factor b and the index i of the corresponding ki are returned 912. If no valid scale factor exists, then an error may be signaled.
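The selection of FIG. 9 and Table 1 can be sketched as below. The function name is illustrative, and ties between equal scale factors are assumed to be broken by taking the first candidate, which the text does not specify:

```python
def eg_select_params(sizes, k_candidates, desired_size, log2_nsamps_blk):
    """Return (scale factor b, index i of the chosen Golomb parameter)."""
    best = None
    for i, (s, k) in enumerate(zip(sizes, k_candidates)):
        e = s - desired_size                    # excess over the desired size
        if e <= 0:
            b = 0
        elif e % log2_nsamps_blk == 0:          # per Table 1, as written
            b = e >> log2_nsamps_blk
        else:
            b = (e >> log2_nsamps_blk) + 1
        if b < k and (best is None or b < best[0]):
            best = (b, i)                       # keep the minimum valid bi
    if best is None:
        raise ValueError("no valid scale factor")  # error signaled per [0057]
    return best
```

For encoded sizes [84, 70] against a 60-bit budget with candidates k = [2, 4] and log2_nsamps_blk = 4, both candidates yield b = 1, so the first is selected.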
[0058] Referring to FIG. 10, the compression engine 418 includes an input formatting component 1002, a BFP encoder component 1004, an EG encoder component 1006, a bit packing component 1008, and a compression control component 1010. The input to the compression engine 418 is a block of samples and the output of the compression engine 418 is a compressed sample block in one of BFP format or EG format. In some embodiments, the desired size of the compressed sample block is user-specified and the compression engine operates to ensure that each compressed sample block fits within the desired size. In some embodiments, the desired size is a multiple of eight bits.
[0059] FIG. 12 illustrates an example format of a BFP compressed sample block. The compressed block begins with a header containing the scale factor for the samples in the block. The header is followed by a sequence of the mantissas of each sample in twos complement format. The bit width of the scale factor and the bit width of the mantissa are user-specified. Padding may exist at the end of the compressed block, if the bit width of the scale factor and the bit widths of the mantissas are less than the desired bit width. The number of padding bits depends on the desired bit width, the specified mantissa bit width, the specified scale factor bit width, and the number of samples per block.
[0060] FIG. 13 illustrates an example format of an EG compressed sample block. The compressed block begins with a header containing the index of the Golomb parameter k for the compressed block in the Golomb parameter array and the scale factor for the compressed block. The header is followed by the variable bit width EG compressed bit sequences for each sample in the block and the sign bits s for each sample. Because the EG encoding is performed on the absolute value of each sample, the sign of each encoded sample follows the encoded sample in the compressed sample block. Padding may exist at the end of the compressed block, if the bit width of the header and the bit widths of the compressed samples with appended sign bits are less than the desired bit width.
[0061] Referring again to FIG. 10, the input formatting component 1002 sign extends each I and Q sample to 32 bits, if needed. The compression control component 1010 controls the overall operation of the compression engine 418. The compression control component 1010 may include functionality to manage switching between the input ping/pong buffers 410, 412 and output ping/pong buffers 414, 416, to manage the address to which compressed data is written, to reset the compression engine 418 between input blocks, and to manage formatting of the compressed output. In some embodiments, the compression control component 1010 is implemented as a state machine.
[0062] The BFP encoder component 1004 uses the common scale factor b determined by the parameter determination engine 402 to extract a mantissa of the desired bit width from each sample in a sample block. The BFP encoder component 1004 is coupled to the bit packing component 1008 to provide the mantissa bits of each sample. FIG. 14 is a flow diagram of an example method for extracting the mantissa of each sample in a sample block that may be implemented by the BFP encoder component 1004. The steps 1400-1406 are repeated 1408 for each sample in a sample block. The method assumes that dither is added to the samples. In some embodiments, the addition of dither is optional, so it may be turned on or off by a user-specified parameter.
[0063] Initially, dither is added 1400 to the sample to prevent spurs. As mentioned in reference to FIG. 4, the dither signal to be added is provided by the LFSR 408. Any suitable number of dither bits may be added. The dither value may vary from sample to sample. Generally, dither is simply noise added before quantization to avoid patterns that could arise due to the quantization. Such patterns can result in spurs. In some embodiments, the dither signal is three bits because each bit of dither adds approximately 6 dB to the spur free dynamic range (SFDR) for a total SFDR protection of 18 dB. The detection signal-to-noise ratio after the 2D FFT processing of the radar signal is usually 15 to 18 dB. Thus, the 18 dB SFDR protection may be sufficient to prevent spurs from affecting measurement of the noise floor.
[0064] For simplicity of explanation of the addition of dither to a sample, a dither of three bits is assumed. In some embodiments, dither is added to each sample, even though dither is needed only when samples are to be quantized, i.e., b ≥ 1, to facilitate a simpler hardware design. In such embodiments, a number of zeros equal to the number of dither bits is appended to the end of the sample. For example, if the dither is three bits, three zeros are appended. More specifically, if the six bit sample x = 011101b, then x' = 011101000b. The three bits of dither are then added starting at the (b-1)th position. For example, if the dither is 101b and b = 0, then x' + dither = 011101101b. If the dither is 101b and b = 1, then x' + dither = 011101000b + 000001010b = 011110010b.
[0065] The sample with dither added is then right shifted 1402 by the sum of the scale factor b and the number of dither bits (such as 3) to generate the mantissa. Continuing the two examples above, for b = 0, the value 011101101b is right shifted by 3, resulting in a mantissa of 011101b, and for b = 1, the value 011110010b is right shifted by 4, resulting in a mantissa of 01111b. The resulting mantissa value is then saturated 1404 to the desired mantissa bit width if the bit width of the value is greater than the desired bit width. The mantissa is then output 1406 to the bit packing component 1008.
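The dither-and-shift steps of [0064]-[0065] can be sketched as follows. The function name is illustrative, and only non-negative samples are handled (the saturation shown clips at the largest positive mantissa value, an assumption):

```python
def bfp_mantissa(x, b, dither, dither_bits=3, mantissa_bw=6):
    """Extract a BFP mantissa from a non-negative sample, per FIG. 14."""
    xp = (x << dither_bits) + (dither << b)  # append zeros, add dither at bit b
    m = xp >> (b + dither_bits)              # drop scale-factor and dither bits
    limit = (1 << (mantissa_bw - 1)) - 1     # largest positive mantissa value
    return min(m, limit)                     # saturate to the desired width
```

For the two worked examples above, bfp_mantissa(0b011101, 0, 0b101) reproduces the mantissa 011101b and bfp_mantissa(0b011101, 1, 0b101) reproduces 01111b.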
[0066] Referring again to FIG. 10, the EG encoder component 1006 performs exp-Golomb encoding of a block of samples using the Golomb parameter k and the scale factor b determined by the parameter determination engine 402. The EG encoder component 1006 is coupled to the bit packing component 1008 to provide the EG encoded bits of each sample. FIG. 15 is a flow diagram of an example method for EG encoding of each sample of a block of samples that may be implemented by the EG encoder component 1006. The steps 1500 - 1518 are repeated 1520 for each sample of a sample block. The method assumes that dither is added to the samples. In some embodiments, the addition of dither is optional, so it may be turned on or off by a user-specified parameter. The method is explained using an example 8-bit sample x = 00010101b, k = 3, and b = 1.
[0067] Initially, the sign s of the sample x is extracted 1500 and x is set 1502 to the absolute value of x. Thus, s = 0 and x = |x| = 00010101b. The Golomb parameter is then added 1504 to x. Thus, x = x + 2^k = 00010101b + 1000b = 00011101b. The bit width bw of x is then computed 1506. The bit width of x is the number of bits in x after the leading sequential zero value bits in the MSBs of x are dropped. Thus, bw = 5.
[0068] Dither is then added 1508 to x. As mentioned in reference to FIG. 4, the dither signal to be added is provided by the LFSR 408. Any suitable number of dither bits may be added. The dither value may vary from sample to sample. Generally, dither is simply noise added before quantization to avoid patterns that could arise due to the quantization. Such patterns can result in spurs. In some embodiments, the dither signal is three bits because each bit of dither adds approximately 6 dB to the spur free dynamic range (SFDR) for a total SFDR protection of 18 dB. The detection signal-to-noise ratio after the 2D FFT processing of the radar signal is usually 15 to 18 dB. Thus, the 18 dB SFDR protection may be sufficient to prevent spurs from affecting measurement of the noise floor.
[0069] For simplicity of explanation of the addition of dither to a sample, a dither of three bits is assumed. In some embodiments, dither is added to each sample, even though dither is needed only when samples are to be quantized, i.e., b ≥ 1, to facilitate a simpler hardware design. In such embodiments, a number of zeros equal to the number of dither bits is appended to the end of the sample. For example, given a dither of three bits, three zeros are appended. As an example, if the sample x = 00011101b, then x' = 00011101000b. The dither is then added at the (b-1)th least significant bit position. Accordingly, if the dither is 111b and b = 1, then x' + dither = 00011101000b + 00000001110b = 00011110110b. The resulting value is then saturated 1510 to the bit width bw if the bit width of the value is greater than bw. In the example, adding the dither did not increase the bit width so saturation is not needed.
[0070] The unary part of the encoded sample is then computed 1512 as bw - (k + 1) = 1. Thus, the unary part of the encoded sample is a single 0. The binary part of the encoded sample is also computed 1514. The sample with dither added is right shifted 1514 by b + 3 and the binary part is the bw - b least significant bits of the result. Thus, x' + dither >> 4 = 0001111b and the binary part is 1111b. The unary and binary parts are combined and the sign s is appended 1516 to generate the compressed sample, and the compressed sample is output 1518 to the bit packing component 1008. Completing the example, the compressed sample is 011110b.
[0071] The bit packing component 1008, under control by the compression control component 1010, packs the bits of the header of a compressed sample block and the bits of the encoded samples received from one of the encoder components 1004, 1006 into output blocks. Generally, the bit packing component 1008 packs a set of (variable bit width or fixed bit width) data into known chunks of "memory words" to enable easy storing of the output in memory. In some embodiments, the bit packing component 1008 is a shift register that accepts a bit stream, demarcates chunks of bits matching the output memory word size, and writes the bit stream to one of the output ping/pong buffers as chunks are ready.
[0072] Referring to FIG. 11, the decompression engine 420 includes a bit unpacking component 1102, a BFP decoder component 1104, an EG decoder component 1106, an output formatting component 1108, and a decompression control component 1110. The output formatting component 1108 sign extends and saturates each decompressed sample to 16 bits or 32 bits as needed.
[0073] The decompression control component 1110 controls the overall operation of the decompression engine 420. The decompression control component 1110 may include functionality to manage switching between the input ping/pong buffers 410, 412 and output ping/pong buffers 414, 416, to manage the address to which the decompressed data is written, and to reset the decompression engine 420 between input compressed blocks. In some embodiments, the decompression control component 1110 is implemented as a state machine.
[0074] The BFP decoder component 1104 performs BFP decoding of a compressed sample block. The BFP decoder component 1104 is coupled to the output formatting component 1108 to provide the decoded samples. The BFP decoder component 1104 is coupled to the bit unpacking component 1102 to receive the scale factor b for a compressed sample block and the mantissas for each sample in the block. To decode each encoded sample, the BFP decoder component 1104 sign extends the corresponding mantissa to 32 bits and multiplies the result by 2^b to generate the output sample. Each output sample is output to the output formatting component 1108.
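In software, the BFP decode step can be sketched as follows. The function name is illustrative, and Python's unbounded integers make the 32-bit sign extension implicit:

```python
def bfp_decode(mantissa, mantissa_bw, b):
    """BFP decode one sample: sign extend the mantissa, multiply by 2^b."""
    if mantissa & (1 << (mantissa_bw - 1)):  # sign bit set: sign extend
        mantissa -= 1 << mantissa_bw
    return mantissa << b                     # multiply by 2^b
```

For example, a 6-bit mantissa of 011101b with b = 2 decodes to 116, while 111101b (a negative mantissa) with b = 0 decodes to -3.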
[0075] The EG decoder component 1106 performs exponential Golomb decoding of a compressed sample block. The EG decoder component 1106 is coupled to the output formatting component 1108 to provide the decoded samples. The EG decoder component 1106 is coupled to bit unpacking component 1102 to receive the index i of the Golomb parameter k for a compressed sample block, the scale factor b for the compressed sample block and each encoded sample in the block.
[0076] Given k and b, the decoding of each sample in a compressed sample block may be performed as follows. Initially, the bit unpacking component 1102 counts the number of leading zeros N0 in the sample and provides N0 to the EG decoder component 1106. The EG decoder component 1106 then computes the bit width bw of the sample using N0, so bw = (N0 + 2) + (k - b). To further explain the formula, (k - b) bits are in the remainder portion of the compressed sample, and (N0 + 1) bits are in the quotient portion of the compressed sample. The last bit is for the sign bit, which is appended to the end of the sample.
[0077] The EG decoder component 1106 communicates the bit width bw to the decompression control component 1110, which causes the bit unpacking component 1102 to provide the bw bits of the sample to the EG decoder component 1106. The EG decoder component 1106 then multiplies the bw bits by 2^b, removes the Golomb constant 2^k from the result, and applies the sign bit to generate the output sample. Each output sample is output to the output formatting component 1108.
[0078] For example, assume the bit unpacking component 1102 is a 16-bit long barrel shifter and the bits of the barrel shifter are given by 0111100101010101b. Let k = 3 and b = 1. The barrel shifter counts the number of leading zeros, so N0 = 1. The barrel shifter is then updated to 1111001010101010b, having ejected the first bit and taken in another bit. The sample bit width is then computed by the EG decoder component 1106 as 5 using the above formula. The first 5 bits of the barrel shifter, 11110b, are ejected to the EG decoder component 1106. The sign bit is the LSB, which in this example is 0, indicating a positive number. The compressed sample is thus 1111b, which is multiplied by 2^b to give 11110b. The Golomb constant (2^k, k = 3) is then subtracted from the result of the multiplication, yielding 10110b, which is the decoded output sample.
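The decode of [0076]-[0078] can be sketched with a bit string standing in for the barrel shifter hardware. The function name is illustrative; it returns the decoded sample and the bits remaining in the stream:

```python
def eg_decode(bits, k, b):
    """Decode the first EG compressed sample from a bit string."""
    n0 = len(bits) - len(bits.lstrip('0'))  # count leading zeros
    bw = (n0 + 2) + (k - b)                 # bits following the leading zeros
    sample = bits[n0:n0 + bw]               # quotient + remainder + sign bit
    value = (int(sample[:-1], 2) << b) - (1 << k)  # scale by 2^b, drop 2^k
    value = -value if sample[-1] == '1' else value # apply the sign bit
    return value, bits[n0 + bw:]            # decoded sample, remaining bits
```

For the barrel shifter contents of [0078], eg_decode('0111100101010101', 3, 1) returns the decoded sample 10110b (decimal 22) and the ten bits that follow it.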
[0079] The bit unpacking component 1102, under control of the decompression control component 1110, operates to read a compressed sample block and unpack the contents for decoding by the BFP decoder component 1104 or the EG decoder component 1106. The bit unpacking component 1102 implements two modes of operation: a leading zero count mode used to count the number of leading zeros in an EG encoded sample and extract the unary portion of the encoded sample, and a regular mode used to extract a specified number of bits from the input compressed sample block.
[0080] More specifically, when the input compressed block was compressed using BFP, the regular mode is used to extract the bits of the scale factor and the bits of each mantissa and provide these to the BFP decoder component 1104. The parameters determining the size of the block, the bit width of the scale factor, and the bit width of the mantissa are provided by the decompression control component 1110. When the input compressed block was compressed using EG, the regular mode is used to extract the bits of the index of the Golomb parameter k, the bits of the scale factor, and the bits of the binary portion of each EG encoded sample. The parameters determining the size of the block, the bit width of the index, the bit width of the scale factor, and the bit width of the encoded sample are provided by the decompression control component 1110. In some embodiments, the bit unpacking component 1102 is a shift register.
[0081] In some embodiments, the compression management component 400 is configurable to provide the PAC compression technique (which is a specialized type of BFP) described hereinabove. In such embodiments, the first pass of the BFP compression is skipped, because the scale factor and the mantissa bit width are known. Further, no header is included in the compressed sample block.
[0082] In some embodiments, the compression management component 400 is configurable to use user-specified values for the Golomb parameter k and the scale factor for EG compression. In such embodiments, the first pass of the EG compression is skipped, because the parameter values for the EG compression are known. Further, no header is included in the compressed sample block. In some such embodiments, the user-specified values for the Golomb parameter k may be an array of values. As blocks of samples are compressed by the EG encoder 1006, the values in the array are used in turn to encode a block of samples. For example, if 32 values are in the array, then the first value is used to encode a block of samples, the second value is used to encode the next block of samples, the third value is used to encode the next block of samples, etc., until all 32 values have been used. The cycle then repeats beginning with the first value in the array. The array may include any suitable number of values. In some embodiments, the size of the array is based on the maximum number of sample blocks that may be stored in an input ping/pong buffer.
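The cycling through a user-specified array of Golomb parameters described above amounts to a modular index. A minimal sketch, with an illustrative function name:

```python
def golomb_k_for_block(k_array, block_index):
    # Values in the user-specified array are used in turn, one per sample
    # block, and the cycle repeats after the last value.
    return k_array[block_index % len(k_array)]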
[0083] In some embodiments, a variant of BFP compression is performed, such as variable bit width block floating point (VBBFP) compression. In VBBFP compression, a block of samples to be compressed is divided into multiple equal sized sub-blocks of m samples. Any suitable value of m may be used. The block of samples is referred to as a super block herein. The number of samples in a super block and the number of samples m in a sub-block may be determined empirically. The mantissas for samples in a sub-block in a super block are determined using a super block scale factor computed for the super block in addition to a scale factor computed for the sub-block. As with BFP, the VBBFP compression is a two pass process in which the parameters for compression of a super block of samples are determined in the initial pass and the actual compression of the samples is performed in the second pass using the parameters. FIG. 17 is a flow diagram of an example method for determining the VBBFP compression parameters, and FIG. 18 is a flow diagram of an example method for performing the VBBFP compression using these parameters.
[0084] FIG. 16 illustrates the format of a VBBFP compressed sample super block assuming the super block includes two sub-blocks. The compressed block begins with a header containing the super block scale factor B for the samples in the super block followed by the BFP compressed sub-blocks of the super block, each of which has the format of FIG. 12. The bit width bwB of the super block scale factor B and the bit width bw of the common scale factors b1 and b2 may be user-specified. Although not specifically shown, padding may exist at the end of the compressed block, if the total bit width of the compressed super block is less than the desired bit width. The number of padding bits depends on the desired bit width, the specified mantissa bit width, the specified common scale factor bit width, the specified super block scale factor bit width, and the number of samples per super block.
[0085] FIG. 17 is a flow diagram of an example method for determining the VBBFP compression parameters of a super block of samples. Unlike BFP, the number of bits for the mantissas of the sub-blocks is not fixed. Instead, in the first pass, the mantissas are computed assuming no quantization is necessary. Initially, a sub-block scale factor is computed 1700 for each sub-block 1702. To compute the sub-block scale factor for a sub-block, the maximum of the absolute value of each sample in the sub-block is computed. The bit width of this maximum is the bit width for the mantissa of each sample in the sub-block, so the computed bit width is the sub-block scale factor b. After the sub-block scale factors are computed, the bit width of the compressed super block is estimated 1704. The bit width S of the super block may be estimated as S = Σ(i=1 to n) m * (bi + 1) + n * bw + bwB, where n is the number of sub-blocks in the super block, bi is the sub-block scale factor for sub-block i, bw is the bit width of each sub-block scale factor, and bwB is the bit width of the super block scale factor. One is added to each sub-block scale factor to accommodate storage of the sign bit.
[0086] If the estimated size S of the super block is greater than 1706 the desired compressed size, a scale factor B for the super block is determined 1708. Otherwise, the scale factor B is set 1710 to zero. The value of the scale factor B is the number of least significant bits to be dropped from each sample in the super block such that the compressed size of the super block will be less than or equal to the desired size. The values of the sub-block scale factors and the super block scale factor are then output 1712 for use in encoding the super block. The super block scale factor B may be computed as B = ceil((S - CZ) / (2 * m)) where CZ is the desired compressed size and the function ceil converts a real number to the nearest integer greater than or equal to the real number.
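The first pass of [0085]-[0086] can be sketched as follows. The function name is illustrative, and the 2 * m divisor follows the text's formula for B, which matches the two-sub-block example of FIG. 16:

```python
from math import ceil

def vbbfp_params(super_block, m, bw, bwB, desired_size):
    """Return the sub-block scale factors and super block scale factor B."""
    subs = [super_block[i:i + m] for i in range(0, len(super_block), m)]
    n = len(subs)
    # Sub-block scale factor: bit width of the largest |sample| in the sub-block.
    b = [max(abs(x) for x in sub).bit_length() for sub in subs]
    # Estimated size: +1 bit per sample for the sign, plus the header fields.
    S = sum(m * (bi + 1) for bi in b) + n * bw + bwB
    B = ceil((S - desired_size) / (2 * m)) if S > desired_size else 0
    return b, B
```

For a super block [3, 7, 1, 1] with m = 2 and 4-bit scale factor fields, the estimated size is 24 bits, so a 24-bit budget needs no truncation (B = 0) while a 20-bit budget gives B = 1.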
[0087] FIG. 18 is a flow diagram of an example method for performing the VBBFP compression of each sample in a super block using the sub-block scale factors and the super block scale factor. For each sample 1808 in each sub-block 1810, the sample value is truncated by dropping 1800 a number of least significant bits of the sample value as indicated by the super block scale factor B. The mantissa of the sample is then computed 1802 and output 1806. Computation of the mantissa is similar to steps 1400-1404 of FIG. 14. The super block scale factor is output before the encoded values of the sub-blocks, and the sub-block scale factor for a sub-block is output before the mantissas of the samples in the sub-blocks.
[0088] FIG. 19 is a flow diagram of an example method for VBBFP decoding of a compressed super block given the super block scale factor B and the sub-block scale factors. For each sample 1906 in each sub-block 1908 of the compressed super block, the mantissa is sign extended 1900 to 32 bits. The bit width of the mantissa is given by the sub-block scale factor b. The result is then multiplied 1902 by 2^B, where B is the super block scale factor, to generate the decoded sample, and the decoded sample is output 1904.
[0089] FIG. 20 is a block diagram of an example FMCW radar system 2000 configured to perform compression of radar signals as described herein. In this embodiment, the radar system is a radar integrated circuit (IC) suitable for use in embedded applications. The radar IC 2000 may include multiple transmit channels 2004 for transmitting FMCW signals and multiple receive channels 2002 for receiving the reflected transmitted signals. Any suitable number of receive channels and transmit channels may be used, and the number of receive channels may differ from the number of transmit channels.
[0090] A transmit channel includes a suitable transmitter and antenna. A receive channel includes a suitable receiver and antenna. Further, the receive channels 2002 are identical, each including a low-noise amplifier (LNA) 2005, 2007 to amplify the received radio frequency (RF) signal, a mixer 2006, 2008 to mix the transmitted signal with the received signal to generate an intermediate frequency (IF) signal (alternatively referred to as a dechirped signal, beat signal, or raw radar signal), a baseband bandpass filter 2010, 2012 for filtering the beat signal, a variable gain amplifier (VGA) 2014, 2016 for amplifying the filtered IF signal, and an analog-to-digital converter (ADC) 2018, 2020 for converting the analog IF signal to a digital IF signal.
[0091] The receive channels 2002 are coupled to a digital front end (DFE) component 2022 to provide the digital IF signals to the DFE 2022. The DFE includes functionality to perform decimation filtering on the digital IF signals to reduce the sampling rate and bring the signal back to baseband. The DFE 2022 may also perform other operations on the digital IF signals, such as DC offset removal. The DFE 2022 is coupled to the signal processor component 2044 to transfer the output of the DFE 2022 to the signal processor component 2044.
[0092] The signal processor component 2044 is configured to perform signal processing on the digital IF signals of a frame of radar data to detect any objects in the FOV of the radar system 2000 and to identify the range, velocity and angle of arrival of detected objects. The signal processor component 2044 is coupled to the radar data memory component 2024 via the direct memory access (DMA) component 2046 to read and write data to the radar data memory 2026 during the signal processing. To perform the signal processing (such as the pre-processing and post-processing described hereinabove), the signal processor component 2044 executes software instructions stored in the memory component 2048. The signal processor component 2044 may include any suitable processor or combination of processors. For example, the signal processor component 2044 may be a digital signal processor, an MCU, an FFT engine, a DSP+MCU processor, a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC).
[0093] The radar data memory component 2024 provides storage for radar data during the signal processing performed by the signal processor component 2044. The radar data storage component 2024 includes a compression management component 2025 and a radar data memory component 2026. The radar data memory component 2026 may be any suitable random access memory (RAM), such as static RAM. The radar data memory component 2026 includes sufficient memory to store radar data corresponding to the largest expected frame of chirps.
[0094] The compression management component 2025 implements compression and decompression of blocks of range values. More specifically, the compression management component 2025 is coupled to the DMA component 2046 to receive the results of the range FFTs performed by the signal processor component 2044. The compression management component 2025 includes functionality to compress blocks of the range values (i.e., range samples) and to provide the compressed sample blocks to the DMA component 2046 for storage in the radar data memory component 2026.
[0095] Further, the compression management component 2025 is coupled to the DMA component 2046 to receive compressed sample blocks from the radar data memory component 2026. The compression management component 2025 includes functionality to decompress the compressed sample blocks and to provide the decompressed samples (range values) to the DMA component 2046 for storage in the memory 2048 for further processing by the signal processor component 2044.
[0096] The compression management component 2025 may include functionality to implement BFP compression/decompression, EG compression/decompression, PAC compression/decompression, and/or VBBFP compression/decompression as described herein. In some embodiments, the compression management component 2025 may have the architecture of the compression management component 400 of FIG. 4.
[0097] The on-chip memory component 2048 provides on-chip storage (such as a computer readable medium) that may be used to communicate data between the various components of the radar IC 2000, and to store software programs executed by processors on the radar IC 2000. The on-chip memory component 2048 may include any suitable combination of read-only memory and/or random access memory (RAM), such as static RAM.
[0098] The direct memory access (DMA) component 2046 is coupled to the radar data storage component 2024 to perform data transfers between the radar data memory 2026 and the signal processor component 2044.
[0099] The control component 2027 includes functionality to control the operation of the radar IC 2000. For example, the control component 2027 may include an MCU that executes software to control the operation of the radar IC 2000.
[0100] The serial peripheral interface (SPI) 2028 provides an interface for external communication of the results of the radar signal processing. For example, the results of the signal processing performed by the signal processor component 2044 may be communicated to another processor for application specific processing, such as object tracking, rate of movement of objects and direction of movement.
[0101] The programmable timing engine 2042 includes functionality to receive chirp parameter values for a sequence of chirps in a radar frame from the control component 2027 and to generate chirp control signals that control the transmission and reception of the chirps in a frame based on the parameter values. For example, the chirp parameters are defined by the radar system architecture and may include a transmitter enable parameter for indicating which transmitters to enable, a chirp frequency start value, a chirp frequency slope, an analog-to-digital (ADC) sampling time, a ramp end time, and a transmitter start time.
[0102] The radio frequency synthesizer (RFSYNTH) 2030 includes functionality to generate FMCW signals for transmission based on chirp control signals from the timing engine 2042. In some embodiments, the RFSYNTH 2030 includes a phase locked loop (PLL) with a voltage controlled oscillator (VCO).
[0103] The multiplexer 2032 is coupled to the RFSYNTH 2030 and the input buffer 2036. The multiplexer 2032 is configurable to select between signals received in the input buffer 2036 and signals generated by the RFSYNTH 2030. For example, the output buffer 2038 is coupled to the multiplexer 2032 and may be used to transmit signals selected by the multiplexer 2032 to the input buffer of another radar IC.
[0104] The clock multiplier 2040 increases the frequency of the transmission signal to the frequency of the mixers 2006, 2008. The clean-up PLL (phase locked loop) 2034 operates to increase the frequency of the signal of an external low frequency reference clock (not shown) to the frequency of the RFSYNTH 2030 and to filter the reference clock phase noise out of the clock signal.
[0105] FIGS. 21-26 are block diagrams of example DMA architectures. FIG. 21 is a block diagram illustrating the normal mode of operation without memory compression/decompression, and FIGS. 22-26 illustrate modifications for inserting a compression management component between the DMA and the radar data memory storing the range values from the radar signal pre-processing, such as the radar memory component 2026 of FIG. 20. FIGS. 21-26 use the terms ACNT, BCNT, SRC BINDX, and DST BINDX, which are commonly understood in the context of a DMA. In multi-dimensional DMA transfers, ACNT refers to the number of bytes transferred in the first dimension and BCNT refers to the total number of such first dimension transfers that constitute a two-dimensional transfer. The terms SRC BINDX and DST BINDX refer to the increment, in bytes, applied to the source pointer or the destination pointer, respectively, after the completion of each first dimension transfer.
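A two-dimensional transfer under this addressing scheme can be sketched in Python. The function name and the flat-bytearray model of memory are illustrative assumptions, not part of the described hardware:

```python
def dma_2d_transfer(src, src_addr, dst, dst_addr, acnt, bcnt, src_bindx, dst_bindx):
    """Perform BCNT first-dimension transfers of ACNT bytes each.

    After each ACNT-byte transfer completes, the source and destination
    pointers advance by SRC_BINDX and DST_BINDX bytes, respectively.
    """
    for b in range(bcnt):
        s = src_addr + b * src_bindx
        d = dst_addr + b * dst_bindx
        dst[d:d + acnt] = src[s:s + acnt]
```

For example, with ACNT=4, BCNT=4, SRC BINDX=8, and DST BINDX=4, every other 4-byte sample of the source is gathered into a contiguous destination region.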
[0106] Referring first to FIG. 22, an example of DMA for compression is illustrated. In this example, a block of size ACNT bytes is input to the compression management component and the output of the compression management component is CB bytes. The number of CB bytes is known by the compression management component. Everything else is the same as the normal DMA operation including the starting location of the destination. The compression management component latches the write address from the DMA every ACNT input bytes and the compressed data of CB bytes is written contiguously into the radar data memory from the last latched address. This mode of operation is useful when the data to be compressed is available contiguously at the source.
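The address-latching behavior described above can be sketched as follows. The `compress` callable and the list of recorded offsets are illustrative assumptions; the hardware only needs the internal write pointer:

```python
def compress_dma_stream(blocks, dest_start, memory, compress):
    """Pack variable-size compressed blocks contiguously into radar data memory.

    Each input block is ACNT bytes; `compress` reduces it to CB bytes (a count
    known only to the compression management component). The destination
    address is latched from the DMA, and each compressed block is written
    starting where the previous one ended.
    """
    write_ptr = dest_start                  # latched write address from the DMA
    offsets = []
    for block in blocks:
        cb = compress(block)                # CB compressed bytes
        memory[write_ptr:write_ptr + len(cb)] = cb
        offsets.append(write_ptr)           # where this block landed
        write_ptr += len(cb)                # next block packs contiguously
    return offsets
```

With a toy compressor that keeps the first half of each 4-byte block, two input blocks land at offsets 0 and 2, back to back.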
[0107] Referring to FIG. 23, another example of DMA for compression is illustrated. In this example, a block of size ACNT x BCNT is input to the compression management component and the output of the compression management component is CB bytes. The number of CB bytes is known by the compression management component. Everything else is the same as the normal DMA operation including the starting location of the destination. The compression management component latches the write address from the DMA every ACNT x BCNT input bytes and the compressed data of CB bytes is written contiguously into the radar data memory from the last latched address. This mode of operation is useful when the data to be compressed is not available contiguously at the source, such as for compressing data across receive channels into a single block.
[0108] Referring to FIG. 24, an example of DMA for decompression is illustrated. In this example, a block of CB contiguous bytes is input to the compression management component, where the number of CB bytes is known by the compression management component. The compression management component latches the read address every ACNT output bytes and reads CB bytes contiguously from radar data memory, starting from the last latched address.
[0109] Referring to FIG. 25, another example of DMA for decompression is illustrated. In this example, a block of CB contiguous bytes is input to the compression management component, where the number of CB bytes is known by the compression management component. The compression management component latches the read address every ACNT x BCNT output bytes and reads CB bytes contiguously from radar data memory, starting from the last latched address.
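The read-side mirror of these decompression modes can be sketched the same way; the per-block CB sizes and the `decompress` callable are illustrative assumptions:

```python
def decompress_dma_stream(memory, src_start, cb_sizes, acnt, decompress):
    """Read compressed blocks packed contiguously in radar data memory.

    The read address is latched from the DMA; each block of CB bytes is read
    from where the previous one ended and expands to ACNT output bytes, which
    the DMA then scatters according to its normal (uncompressed) addressing.
    """
    read_ptr = src_start                    # latched read address
    out = bytearray()
    for cb in cb_sizes:                     # CB known to the management component
        block = decompress(memory[read_ptr:read_ptr + cb])
        assert len(block) == acnt           # each block expands to ACNT bytes
        out += block
        read_ptr += cb                      # compressed blocks are contiguous
    return bytes(out)
```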
[0110] Referring to FIG. 26, another example of DMA for decompression is illustrated. This mode of operation is useful for decompressing variable length codes such as exponential Golomb codes. In this example, multiple bins of range values are compressed as a single block where the compressed block size is fixed while each bin included in the block may have a variable number of bits. However, after being decompressed, each bin's size is ACNT bytes. The first block of compressed data starts at a source address SRC ADDR, with each subsequent block placed SRC BINDX bytes away. There are BCNT such compressed blocks. The compression management component traverses the BCNT blocks in sequence, picking up the next bin from each block. Each bin is decompressed to ACNT bytes that are passed to the DMA. This process is repeated CCNT times so that all CCNT bins in each of the BCNT blocks are decompressed.
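The round-robin traversal of FIG. 26 can be sketched as below. The length-prefixed bin format and the `decompress_next_bin` callable are illustrative assumptions standing in for a real variable-length (e.g., exponential Golomb) decoder:

```python
def decompress_interleaved(blocks, ccnt, decompress_next_bin):
    """Visit BCNT fixed-size compressed blocks round-robin, CCNT bins each.

    A per-block cursor remembers how far each block has been decoded, since
    bins are variable length. Bin 0 of every block is emitted first, then
    bin 1 of every block, and so on; each bin expands to ACNT bytes that
    would be handed to the DMA.
    """
    cursors = [0] * len(blocks)             # decode position within each block
    out = []
    for _ in range(ccnt):                   # repeat CCNT times
        for b, block in enumerate(blocks):  # next bin from each block in turn
            bin_bytes, cursors[b] = decompress_next_bin(block, cursors[b])
            out.append(bin_bytes)
    return out
```

With a toy length-prefixed format (one length byte followed by that many data bytes), two blocks of two bins each are emitted in the interleaved order bin 0 of block 0, bin 0 of block 1, bin 1 of block 0, bin 1 of block 1.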
[0111] FIG. 27 is a flow diagram of a method for compressing radar signals in a radar system. This method may be performed to compress blocks of range values generated by range FFTs applied to digitized intermediate frequency (IF) signals from receive channels of the radar system. The range values are generated during processing of reflected signals received by the receive channels as the result of transmitting a frame of chirps. Accordingly, blocks of range values corresponding to a transmitted frame of chirps are compressed.
[0112] For each block of range values 2706 corresponding to a transmitted frame of chirps, the block is compressed 2702 to generate a compressed block of range values, and the compressed block of range values is stored 2704 in radar data memory. In some embodiments, the block is compressed using BFP compression as described herein. In some embodiments, the block is compressed using order K EG compression as described herein. In some embodiments, the block is compressed using PAC compression as described herein. In some embodiments, the block is compressed using VBBFP compression as described herein.
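As a concrete illustration of the BFP variant, one hedged reading of block floating point compression — a single shared scale factor per block, with mantissas truncated to a fixed width — can be sketched as follows. The symmetric-magnitude handling of negative values is an illustrative assumption:

```python
def bfp_compress(block, mantissa_bits):
    """Share one scale factor (exponent) across the whole block.

    The scale factor is the smallest right shift that makes every value's
    magnitude fit in mantissa_bits - 1 bits (one bit is reserved for sign);
    the dropped low-order bits are the quantization error.
    """
    max_mag = max((abs(v) for v in block), default=0)
    scale = 0
    while (max_mag >> scale) >= (1 << (mantissa_bits - 1)):
        scale += 1
    # truncate magnitudes symmetrically so +v and -v quantize alike
    mantissas = [(v >> scale) if v >= 0 else -((-v) >> scale) for v in block]
    return scale, mantissas

def bfp_decompress(scale, mantissas):
    """Restore values by shifting mantissas back up; dropped low bits stay zero."""
    return [(m << scale) if m >= 0 else -((-m) << scale) for m in mantissas]
```

Each restored value differs from the original by less than 2 to the power of the scale factor, which is why a smaller scale factor means less quantization error.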
[0113] In some embodiments, compression 2702 of the block includes selecting the optimal type of compression for the block from two or more compression types based on the quantization error of each type of compression. The compression types may include two or more of the compression types mentioned hereinabove.
[0114] Embodiments have been described herein in which a single compression technique is used for compressing range values. In some embodiments, the best method for a block of samples is selected from among two or more of the compression methods described herein. For example, the size of the compressed block and the number of bits to drop (the quantization error) using EG, BFP, and/or VBBFP can be computed. Then, the method that adds the least quantization error (i.e., the method that uses the smallest scale factor) can be selected to compress the block. In such embodiments, one or more bits may be added to the compressed output to indicate which compression technique was used.
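One hedged sketch of such a selection step follows; the `(scale, payload)` return convention for each candidate compressor is an illustrative assumption:

```python
def select_compression(block, methods):
    """Pick the candidate compressor with the least quantization error.

    Each method returns (scale, payload); the scale factor is the number of
    bits dropped per value, so the method with the smallest scale adds the
    least quantization error. The winning method's index is returned with
    the output so the decompressor knows which technique was used.
    """
    results = [m(block) for m in methods]
    best = min(range(len(results)), key=lambda i: results[i][0])
    scale, payload = results[best]
    return best, scale, payload
```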
[0115] In another example, the quantization introduced (i.e., the scale factor) may be made available to the user as an indicator of the quality of the compression. The quality of the compression increases as the value of the scale factor decreases. A user can use this information to decide whether too much information is lost during compression and adjust compression parameters accordingly.
[0116] In another example, during decompression of a block of k samples, a user-specified subset of those k samples may be provided as the decompression output rather than all of the decompressed samples. This is useful because, in some embodiments, the size of the fully decompressed data may exceed the available memory, as much of the available memory may be occupied by compressed sample blocks.
[0117] In another example of some embodiments, a user may configure the compression, such that different portions of the range values are compressed by different amounts. For example, if the output of the range FFT is N samples, the user may specify that the initial K samples are to be compressed using BFP compression and the remaining N-K samples are to be compressed using EG compression.
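A minimal sketch of this user-configured split is below; the two compressor callables are illustrative assumptions:

```python
def split_compress(samples, k, compress_head, compress_tail):
    """Compress the first K samples with one method (e.g., BFP) and the
    remaining N - K samples with another (e.g., EG), per user configuration."""
    return compress_head(samples[:k]), compress_tail(samples[k:])
```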
[0118] In another example of some embodiments, the samples are complex samples, and the real and imaginary parts of the sample values are compressed separately.
[0119] Some embodiments have been described herein in which the radar system is an embedded radar system in a vehicle. Embodiments are possible for other applications of embedded radar systems, such as surveillance and security applications, and maneuvering a robot in a factory or warehouse.
[0120] Although method steps may be presented and described herein sequentially, one or more of the steps shown in the figures and described herein may be performed concurrently, may be combined, and/or may be performed in a different order than the order shown in the figures and/or described herein.
[0121] Components in radar systems may be referred to by different names and/or may be combined in ways not shown herein without departing from the described functionality. The term "couple" and derivatives thereof include an indirect, direct, optical and/or wireless electrical connection. For example, if a first device couples to a second device, then such connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, and/or through a wireless electrical connection.
[0122] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.

Claims

What is claimed is:
1. A radar system comprising:
a compression component configured to compress blocks of range values to generate compressed blocks of range values; and
a radar data memory configured to store compressed blocks of range values generated by the compression component.
2. The radar system of claim 1, wherein the compression component is configured to compress a block of range values using block floating point (BFP) compression.
3. The radar system of claim 1, wherein the compression component is configured to compress a block of range values using order k exponential Golomb (EG) compression.
4. The radar system of claim 3, wherein the compression component is configured to select an optimal Golomb parameter value for compressing a block of range values from a plurality of specified candidate Golomb parameter values.
5. The radar system of claim 3, wherein the compression component is configured to determine a scale factor for a block of range values and to use the scale factor to truncate each range value in the block of range values, the scale factor determined as a minimum number of bits to be dropped from each range value in order to generate a corresponding compressed block of range values of a size less than or equal to a specified size.
6. The radar system of claim 3, wherein the compression component is configured to estimate an encoded size of the block of range values based on a number of leading consecutive zero bits and a number of consecutive one bits following the leading consecutive zero bits in each range value.
7. The radar system of claim 1, wherein the compression component is configured to provide at least two types of compression.
8. The radar system of claim 7, wherein the compression component is configured to select a type of compression for a block of range values from the at least two types of compression based on quantization error of each type of compression.
9. The radar system of claim 7, wherein the at least two types of compression include block floating point (BFP) compression and order k exponential Golomb (EG) compression.
10. The radar system of claim 1, wherein the compression component is configured to add dither to a range value before quantization of the range value.
11. The radar system of claim 1, wherein the compression component is configured to generate compressed blocks of range values such that a size of each compressed block is less than or equal to a specified size.
12. The radar system of claim 1, wherein the compression component is configured to compress a block of range values using one selected from a group consisting of bit packing (PAC) compression and variable bit width block floating point (VBBFP) compression.
13. The radar system of claim 1, including:
a plurality of receive channels, each receive channel configured to generate a digitized intermediate frequency (IF) signal; and
a processor coupled to the plurality of receive channels to receive the digitized IF signals, the processor configured to process the IF signals to generate range values for each receive channel.
14. A method for compression of radar signals in a radar system, the method comprising: receiving blocks of range values generated from processing of digitized intermediate frequency (IF) signals;
compressing each block of range values to generate a compressed block of range values, the compressing performed by a compression component of the radar system; and
storing the compressed blocks of range values in radar data memory.
15. The method of claim 14, wherein compressing each block includes using block floating point (BFP) compression to compress the block.
16. The method of claim 14, wherein compressing each block includes using order k exponential Golomb (EG) compression to compress the block.
17. The method of claim 16, wherein compressing each block includes selecting an optimal Golomb parameter value for compressing the block of range values from a plurality of specified candidate Golomb parameter values.
18. The method of claim 16, wherein compressing each block includes determining a scale factor for the block of range values and using the scale factor to truncate each range value in the block of range values, the scale factor determined as a minimum number of bits to be dropped from each range value in order to generate a corresponding compressed block of range values of a size less than or equal to a specified size.
19. The method of claim 16, wherein compressing each block includes estimating an encoded size of the block of range values based on a number of leading consecutive zero bits and a number of consecutive one bits following the leading consecutive zero bits in each range value.
20. The method of claim 14, wherein the compression component is configured to provide at least two types of compression and compressing each block includes using one of the at least two types of compression to compress the block.
21. The method of claim 20, wherein compressing each block includes selecting the type of compression for the block from the at least two types of compression based on quantization error of each type of compression.
22. The method of claim 20, wherein the at least two types of compression include block floating point (BFP) compression and order k exponential Golomb (EG) compression.
23. The method of claim 14, wherein compressing each block includes adding dither to each range value in the block before quantization of the range value.
24. The method of claim 14, wherein compressing each block includes generating the compressed block of range values such that a size of the compressed block is less than or equal to a specified size.
25. The method of claim 14, wherein compressing each block includes using one selected from a group consisting of bit packing (PAC) compression and variable bit width block floating point (VBBFP) compression to compress the block.
PCT/US2016/047254 2015-08-19 2016-08-17 Method and system for compression of radar signals WO2017031149A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2018509614A JP7037028B2 (en) 2015-08-19 2016-08-17 Methods and systems for compression of radar signals
CN201680047315.9A CN107923971B (en) 2015-08-19 2016-08-17 Method and system for compressing radar signals
EP16837718.2A EP3338109B1 (en) 2015-08-19 2016-08-17 Method and system for compression of radar signals
JP2022002079A JP7379546B2 (en) 2015-08-19 2022-01-11 Method and system for compression of radar signals

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IN4339/CHE/2015 2015-08-19
IN4339CH2015 2015-08-19
US15/061,728 2016-03-04
US15/061,728 US20170054449A1 (en) 2015-08-19 2016-03-04 Method and System for Compression of Radar Signals

Publications (1)

Publication Number Publication Date
WO2017031149A1 true WO2017031149A1 (en) 2017-02-23

Family

ID=58051691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/047254 WO2017031149A1 (en) 2015-08-19 2016-08-17 Method and system for compression of radar signals

Country Status (1)

Country Link
WO (1) WO2017031149A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017126723A1 (en) * 2017-11-14 2019-05-16 Infineon Technologies Ag Apparatus and method for processing radar signals
WO2020180911A1 (en) 2019-03-06 2020-09-10 Texas Instruments Incorporated Dithering fmcw radar parameters to mitigate spurious signals
WO2023121804A1 (en) * 2021-12-22 2023-06-29 Intel Corporation Apparatus, system and method of radar information compression

Citations (9)

Publication number Priority date Publication date Assignee Title
US5043734A (en) * 1988-12-22 1991-08-27 Hughes Aircraft Company Discrete autofocus for ultra-high resolution synthetic aperture radar
EP0779742A2 (en) * 1995-12-12 1997-06-18 RCA Thomson Licensing Corporation Noise estimation and reduction apparatus for video signal processing
US6314192B1 (en) * 1998-05-21 2001-11-06 Massachusetts Institute Of Technology System, method, and product for information embedding using an ensemble of non-intersecting embedding generators
US20080304044A1 (en) * 2007-06-06 2008-12-11 California Institute Of Technology High-resolution three-dimensional imaging radar
WO2011041269A2 (en) * 2009-09-30 2011-04-07 Samplify Systems, Inc. Enhanced multi-processor waveform data exchange using compression and decompression
US20120319876A1 (en) * 2011-06-17 2012-12-20 Sap Ag Method and System for Data Compression
US20130054661A1 (en) * 2009-10-23 2013-02-28 Albert W. Wegener Block floating point compression with exponent difference and mantissa coding
CN103108182A (en) * 2013-01-18 2013-05-15 北京航空航天大学 Multi-source special unmanned plane reconnoitered image general compression method
US20130243083A1 (en) * 2012-03-16 2013-09-19 Texas Instruments Incorporated Low-Complexity Two-Dimensional (2D) Separable Transform Design with Transpose Buffer Management

Non-Patent Citations (1)

Title
See also references of EP3338109A4 *

Cited By (5)

Publication number Priority date Publication date Assignee Title
DE102017126723A1 (en) * 2017-11-14 2019-05-16 Infineon Technologies Ag Apparatus and method for processing radar signals
WO2020180911A1 (en) 2019-03-06 2020-09-10 Texas Instruments Incorporated Dithering fmcw radar parameters to mitigate spurious signals
EP3935406A4 (en) * 2019-03-06 2022-05-11 Texas Instruments Incorporated Dithering fmcw radar parameters to mitigate spurious signals
US11740345B2 (en) 2019-03-06 2023-08-29 Texas Instruments Incorporated Dithering FMCW radar parameters to mitigate spurious signals
WO2023121804A1 (en) * 2021-12-22 2023-06-29 Intel Corporation Apparatus, system and method of radar information compression

Similar Documents

Publication Publication Date Title
EP3338109B1 (en) Method and system for compression of radar signals
CN107064881B (en) Frequency modulation scheme for FMCW radar
EP2950451B1 (en) Signal-based data compression
US10520584B2 (en) Radar system with optimized storage of temporary data
CN102597948B (en) The method and apparatus of the block floating point compression of signal data
US8317706B2 (en) Post-beamforming compression in ultrasound systems
WO2017031149A1 (en) Method and system for compression of radar signals
KR20190019937A (en) Radar Hardware Accelerator
US20120157852A1 (en) Ultrasound signal compression
US7773031B2 (en) Signal acquisition and method for ultra-wideband (UWB) radar
EP0470773A2 (en) Orthogonal transform coding apparatus
EP0663762A2 (en) Quantising and dequantising circuit with reduced size
CN113820702A (en) Data compression with variable mantissa size
US20110096621A1 (en) System and method for imaging
US9111155B2 (en) RFID reader and method of controlling the same
US7145487B1 (en) Signal processing providing lower downsampling losses
EP4344069A2 (en) Data compression method and apparatus and data decompression method and apparatus
US11988737B2 (en) FMCW radar sensor including synchronized high frequency components
US20200292658A1 (en) Methods and apparatus for data compression and transmission
WO2006052122A1 (en) Method and system for data compression
WO2023070455A1 (en) Data processing method and radar chip
RU2765654C9 (en) Method and device for digital data compression
RU2765654C2 (en) Method and device for digital data compression
CN114839631A (en) Intelligent quantization compression method and system for satellite-borne SAR (synthetic aperture radar) original data
JPH09214968A (en) Method and device for encoding image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16837718

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018509614

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016837718

Country of ref document: EP