WO2014029260A1 - Data compression sending and decompression method and device - Google Patents

Data compression sending and decompression method and device

Info

Publication number
WO2014029260A1
WO2014029260A1 PCT/CN2013/080405 CN2013080405W WO2014029260A1 WO 2014029260 A1 WO2014029260 A1 WO 2014029260A1 CN 2013080405 W CN2013080405 W CN 2013080405W WO 2014029260 A1 WO2014029260 A1 WO 2014029260A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
bits
group
sample point
sample
Prior art date
Application number
PCT/CN2013/080405
Other languages
English (en)
French (fr)
Inventor
罗斐琼
任斌
李琼
Original Assignee
电信科学技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 电信科学技术研究院 filed Critical 电信科学技术研究院
Priority to EP13831399.4A priority Critical patent/EP2890076B1/en
Priority to US14/422,896 priority patent/US9515737B2/en
Publication of WO2014029260A1 publication Critical patent/WO2014029260A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/25Arrangements specific to fibre transmission
    • H04B10/2575Radio-over-fibre, e.g. radio frequency signal modulated onto an optical carrier
    • H04B10/25752Optical arrangements for wireless networks
    • H04B10/25753Distribution optical network, e.g. between a base station and a plurality of remote units
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/14Conversion to or from non-weighted codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/50Transmitters
    • H04B10/516Details of coding or modulation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/60Receivers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0686Hybrid systems, i.e. switching and simultaneous transmission
    • H04B7/0691Hybrid systems, i.e. switching and simultaneous transmission using subgroups of transmit antennas
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/10Polarisation diversity; Directional diversity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00Modulated-carrier systems
    • H04L27/0014Carrier regulation
    • H04L2027/0016Stabilisation of local oscillators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00Modulated-carrier systems
    • H04L27/32Carrier systems characterised by combinations of two or more of the types covered by groups H04L27/02, H04L27/10, H04L27/18 or H04L27/26
    • H04L27/34Amplitude- and phase-modulated carrier systems, e.g. quadrature-amplitude modulated carrier systems
    • H04L27/36Modulator circuits; Transmitter circuits
    • H04L27/366Arrangements for compensating undesirable properties of the transmission path between the modulator and the demodulator
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00Arrangements affording multiple use of the transmission path
    • H04L5/0001Arrangements for dividing the transmission path
    • H04L5/0014Three-dimensional division
    • H04L5/0023Time-frequency-space
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04Protocols for data compression, e.g. ROHC

Definitions

  • A base station (eNB) of a Long Term Evolution (LTE) / LTE-Advanced (LTE-A) system is a distributed base station device composed of a baseband unit (BBU) and a radio remote device (RRU); it is a base station combination that can be installed flexibly and in a distributed manner, as shown in Figure 1, in which the RRU is connected to the BBU through the Ir interface.
  • BBU baseband unit device
  • RRU radio remote device
  • The current Ir interface uses a transmission medium such as optical fiber. If the Ir interface data is compressed by effective technical means, the demand for the transmission medium can be greatly reduced, equipment cost can be lowered, and product competitiveness can be improved.
  • An existing data compression scheme for the Ir interface performs automatic gain adjustment on the input signal to control its dynamic range and reduce its quantization bit width, with uniform quantization as the quantization algorithm. While ensuring signal reliability, this scheme compresses 16-bit data to 12 bits, i.e. a compression ratio (the ratio of data size before to after compression) of 4:3.
  • a data compression sending method the method comprising:
  • the transmitting end groups the data to be sent, each group containing at least one sample data;
  • for each group, the transmitting end determines a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shifts the data bits of each sample data in the group according to the shift factor; quantizes each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and sends the shift factor and the quantized sample data to the receiving end.
  • a data decompression method comprising: The receiving end receives the shift factor sent by the transmitting end and the sampled data after the quantization process;
  • the receiving end separately dequantizes each sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • the receiving end right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
  • a data compression transmitting device comprising:
  • a grouping unit, configured to group the data to be sent, each group containing at least one sample data;
  • a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • a sending unit configured to transmit the shift factor and each sample data after the quantization process.
  • a data decompression device comprising:
  • a receiving unit configured to receive a shift factor and each sample data after the quantization process
  • a dequantization unit configured to perform dequantization processing on each of the sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • a shifting unit configured to perform right shifting of the data bits in each of the sample data after the dequantization processing according to the shift factor to obtain decompressed sample data.
  • An RRU where the RRU is used as a sender, includes:
  • a grouping unit, configured to group the data to be sent, each group containing at least one sample data;
  • a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • a sending unit configured to transmit the shift factor and each sample data after the quantization process.
  • a BBU as a sender, includes:
  • a grouping unit, configured to group the data to be sent, each group containing at least one sample data;
  • a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • a sending unit configured to transmit the shift factor and each sample data after the quantization process.
  • An RRU where the RRU is used as a receiving end, includes: a receiving unit, configured to receive a shift factor and each sample data after the quantization process;
  • a dequantization unit configured to perform dequantization processing on each of the sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • a shifting unit configured to perform right shifting of the data bits in each of the sample data after the dequantization processing according to the shift factor to obtain decompressed sample data.
  • a BBU as a receiving end, includes:
  • a receiving unit configured to receive a shift factor and each sample data after the quantization process
  • a dequantization unit configured to perform dequantization processing on each of the sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • a shifting unit configured to perform right shifting of the data bits in each of the sample data after the dequantization processing according to the shift factor to obtain decompressed sample data.
  • An RRU including a processor
  • when the RRU serves as the transmitting end, the processor is configured to group the data to be sent, each group containing at least one sample data; for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and send the shift factor and the quantized sample data to the receiving end;
  • when the RRU serves as the receiving end, the processor is configured to receive the shift factor and the quantized sample data sent by the transmitting end; dequantize each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization; and right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
  • a BBU including a processor
  • when the BBU serves as the transmitting end, the processor is configured to group the data to be sent, each group containing at least one sample data; for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and send the shift factor and the quantized sample data to the receiving end;
  • when the BBU serves as the receiving end, the processor is configured to receive the shift factor and the quantized sample data sent by the transmitting end; dequantize each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization; and right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
  • In the solution provided by the embodiments of the present application, the transmitting end groups the data to be sent, and for each group: a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are sent to the receiving end. It can be seen that, by grouping the data to be transmitted and shifting and compressing each group separately, the scheme realizes segmented-shift data compression and thus optimizes data compression performance.
  • FIG. 1 is a schematic diagram of a distributed base station device in the prior art
  • FIG. 2 is a schematic flowchart of a method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of another method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a data compression and decompression method in an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus having a data compression and decompression module according to an embodiment of the present application
  • FIG. 6A is a schematic diagram of data grouping according to Embodiment 1 of the present application.
  • FIG. 6B is a schematic diagram of data grouping according to Embodiment 2 of the present application.
  • FIG. 6C is a schematic diagram of data grouping according to Embodiment 3 of the present application;
  • FIG. 6D is a schematic diagram of data grouping according to Embodiment 4 of the present application;
  • FIG. 6E is a schematic diagram of data grouping according to Embodiment 5 of the present application;
  • FIG. 6F is a schematic diagram of data grouping according to Embodiment 6 of the present application;
  • FIG. 6G is a schematic diagram of the format of an IQ data frame before compression according to Embodiment 7 of the present application;
  • FIG. 6H is a schematic diagram of the format of a compressed data frame according to Embodiment 7 of the present application;
  • FIGS. 6I to 6N are schematic diagrams of sample data in the data compression and decompression process according to Embodiment 7 of the present application;
  • FIG. 7 is a schematic structural diagram of a device according to an embodiment of the present application
  • FIG. 8 is a schematic structural diagram of another device according to an embodiment of the present application.
  • DETAILED DESCRIPTION OF THE INVENTION: In order to optimize data compression performance, the embodiment of the present application provides a data compression and sending method.
  • In this method, the transmitting end groups the data to be sent; for each group, a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are transmitted to the receiving end.
  • Referring to FIG. 2, the data compression and sending method provided by the embodiment of the present application for the data transmitting end includes the following steps. Step 20: the transmitting end groups the data to be sent, each group containing at least one sample data. Step 21: for each group, the transmitting end determines a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shifts the data bits of each sample data in the group according to the shift factor; quantizes each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and sends the shift factor and the quantized sample data to the receiving end.
  • Here, the shift factor indicates the number of bit positions by which the data bits of the sample data are left-shifted. The data bits of a sample data are left-shifted as follows: the sign bit of the sample data is kept unchanged, the leftmost N bits of the data bits are deleted, and N zeros are appended on the right of the data bits to obtain the left-shifted data.
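  • As a minimal illustration of this left-shift rule (the helper name is ours, and samples are modelled as sign-magnitude bit strings: one sign bit followed by the data bits), the following Python sketch keeps the sign bit, drops the leftmost N data bits and appends N zeros on the right:

```python
def left_shift_sample(sample: str, n: int) -> str:
    """Left-shift the data bits of a sign-magnitude sample by n positions.

    sample[0] is the sign bit, sample[1:] are the data bits. The sign bit is
    kept, the leftmost n data bits are deleted, and n zeros are appended on
    the right, as described above.
    """
    sign, data = sample[0], sample[1:]
    return sign + data[n:] + "0" * n

# Example: shifting the data bits of 1|00100001 left by 2 gives 1|10000100.
assert left_shift_sample("100100001", 2) == "110000100"
```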
  • In step 20, when the data to be sent is real-part/imaginary-part (IQ) data, the data to be sent may be grouped according to the following two principles:
  • Principle 1: according to the correlation between the I-channel data and the Q-channel data; if the correlation is large, a scheme that groups I-channel and Q-channel data together may be considered; otherwise, a scheme that groups them independently is adopted.
  • Principle 2: according to the correlation between different antennas; if the correlation is large, a scheme that groups antennas together may be considered; otherwise, a scheme that groups each antenna independently is adopted. If the unified antenna grouping scheme is used, the antennas in a group are recommended to be co-polarized antennas.
  • the specific grouping method can use one of the following six methods:
  • Method 1: the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped independently, so that at least one consecutive I-channel sample of the same antenna forms one group, or at least one consecutive Q-channel sample forms one group;
  • Method 2: the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped together, so that at least one consecutive I-channel sample and at least one consecutive Q-channel sample of the same antenna form one group;
  • Method 3: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of the same sample at the same position of the multiple antennas forms one group, or the Q-channel data of the same sample forms one group; here, the same position specifically refers to the same time position, i.e. the same moment;
  • Method 4: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data at the same position of the multiple antennas form one group;
  • Method 5: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of at least one consecutive sample at the same position of the multiple antennas forms one group, or the Q-channel data of at least one consecutive sample forms one group;
  • Method 6: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data of at least one consecutive sample at the same position of the multiple antennas form one group.
  • the IQ data is data transmitted on the Ir interface between the RRU and the BBU.
  • the method for grouping data to be transmitted is not limited to the above six methods, and any method capable of grouping data is within the protection scope of the present application, for example, randomly grouping data to be transmitted according to samples.
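  • As an illustration only, the sketch below shows grouping Method 1 under assumed inputs (per-antenna lists of I and Q samples; the function and parameter names are ours): the IQ data of each antenna is grouped separately, and I-channel and Q-channel samples are grouped independently, so that every run of `group_size` consecutive I samples (or Q samples) of one antenna forms one group.

```python
from typing import Dict, List, Tuple

def group_method_1(iq_per_antenna: Dict[str, Tuple[List[int], List[int]]],
                   group_size: int) -> List[List[int]]:
    """Grouping Method 1: per antenna, I and Q samples grouped independently.

    iq_per_antenna maps an antenna name to (I-sample list, Q-sample list).
    Each run of group_size consecutive I samples of one antenna forms a group,
    and likewise for Q samples; I and Q samples are never mixed in a group.
    """
    groups = []
    for i_samples, q_samples in iq_per_antenna.values():
        for channel in (i_samples, q_samples):
            for start in range(0, len(channel), group_size):
                groups.append(channel[start:start + group_size])
    return groups

# Two antennas, 4 I and 4 Q samples each, 4 samples per group -> 4 groups.
example = {"ant0": ([1, 2, 3, 4], [5, 6, 7, 8]),
           "ant1": ([9, 10, 11, 12], [13, 14, 15, 16])}
assert len(group_method_1(example, 4)) == 4
```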
  • In step 21, the shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed; a specific implementation may be as follows:
  • First, determine the maximum shift count that the shift factor can represent. The maximum shift count C may be determined as C = 2^(n*k/2) - 1, where ^ denotes a power, n is the number of sample data contained in the group, and k is the number of bits of the control field used to transmit the shift factor (for example, k may be 1). The analysis is as follows: assuming the AGC control field is k bits (usually 1), if the I-channel data and Q-channel data are grouped independently and n sample data (I-channel or Q-channel) form one group, the shift factor may occupy n*k bits, the I-channel data and Q-channel data each occupy n*k/2 bits, and the maximum shift count that can be represented is 2^(n*k/2) - 1 (counting from 0); if the I-channel data and Q-channel data are grouped together and n/2 I-channel data plus n/2 Q-channel data form one group, the shift factor may occupy n*k/2 bits, and the maximum shift count that can be represented is 2^(n*k/2) - 1 (counting from 0).
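  • A quick numeric check of this formula, as a hedged sketch (the helper name is ours; the n = 4, k = 1 values are the ones used later in Embodiment 7):

```python
def max_shift_count(n: int, k: int) -> int:
    """Maximum shift count C = 2**(n*k/2) - 1 representable by the
    shift-factor field (assumes n*k is even, as in the grouped I/Q case)."""
    return 2 ** (n * k // 2) - 1

# With groups of n = 4 samples and k = 1 control bit per sample, the shift
# factor occupies 2 bits and can represent shift amounts 0..3.
assert max_shift_count(4, 1) == 3
```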
  • Then, determine the sample data with the largest value in the group after the sign bit is removed, and the bit position of the highest bit of that sample data after the sign bit is removed. Here, the highest bit of the sample data after the sign bit is removed refers to the first bit equal to 1, scanning from left to right, after the sign bit is removed; the bit positions are numbered sequentially from 0 starting at the rightmost bit.
  • For example, for the sample data 100100001, the data after removing the most significant sign bit is 00100001; the bit positions from right to left are 0, 1, 2, 3, 4, 5, 6, 7, and the position of the highest bit is 5.
  • Finally, if A is not greater than the maximum shift count, the shift factor is determined to be equal to A; otherwise, the shift factor is determined to be equal to the maximum shift count, where A = W - 1 - H, W is the number of bits of the sample data in the group after the sign bit is removed, and H is the position of the highest bit.
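  • Putting the above rules together, the following sketch (our own helper names; sign-magnitude bit strings as before) determines the shift factor of a group: it finds the highest set bit H of the largest-magnitude sample after the sign bit is removed, computes A = W - 1 - H, and clips A to the maximum representable shift count.

```python
def highest_bit_position(data_bits: str) -> int:
    """Position of the first 1 (scanning left to right) in the data bits,
    with positions numbered from 0 at the rightmost bit; -1 if all zero."""
    width = len(data_bits)
    for i, bit in enumerate(data_bits):
        if bit == "1":
            return width - 1 - i
    return -1

def shift_factor(group, max_shift):
    """Shift factor of a group of sign-magnitude samples (sample[0] is the
    sign bit): A = W - 1 - H for the largest-magnitude sample, clipped to
    the maximum shift count the shift-factor field can represent."""
    data_width = len(group[0]) - 1                      # W
    largest = max(group, key=lambda s: int(s[1:], 2))   # ignore the sign bit
    h = highest_bit_position(largest[1:])
    a = data_width - 1 - h
    return min(a, max_shift)

# The example above: data bits 00100001 -> highest bit position 5.
assert highest_bit_position("00100001") == 5
# A group whose largest magnitude is 00100001 (W = 8): A = 8 - 1 - 5 = 2.
assert shift_factor(["100100001", "000000011"], max_shift=3) == 2
```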
  • the shift factor can be used in the form of differential shift. That is, the shift factor of the first packet is an absolute value, i.e., calculated according to the above method, and the shift factor of the remaining packets may be equal to the difference between the shift factor of the packet and the shift factor of the first packet.
  • Preferably, in order to improve compression accuracy, after the maximum shift count that the shift factor can represent is determined, and before the sample data with the largest value in the group after the sign bit is removed and the position of its highest bit are determined, if the position of the highest bit (after the sign bit is removed) of the sample data with the largest value in the group is E, or the positions of the highest bits (after the sign bit is removed) of more than a set proportion of the sample data in the group are all less than or equal to E, each sample data in the group is saturated to E bits, where E is an integer greater than 0 and less than W.
  • The method of saturating a sample data to E bits is: the sign bit of the sample data is kept unchanged; if the sample data, after the sign bit is removed, is larger than the comparison data, the data of the sample after the sign bit is removed is updated to the comparison data; if it is not larger than the comparison data, the sample data is kept unchanged. The comparison data has W bits, its lowest E bits are all 1, and the remaining bits are all 0. For example, if the sample data is 100011001 and E = 3, the sample data after removing the most significant sign bit is larger than the comparison data 00000111, so the saturated sample data is 100000111; if the sample data is 100000011 and E = 3, the sample data after removing the sign bit is not larger than the comparison data 00000111, so the saturated sample data remains 100000011.
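  • A minimal sketch of this saturation rule under the same sign-magnitude string representation (the helper name is ours): the comparison data is W bits with the lowest E bits set to 1, and a sample whose magnitude exceeds it is clamped to it.

```python
def saturate_sample(sample: str, e: int) -> str:
    """Saturate a sign-magnitude sample to E bits: keep the sign bit and clamp
    the magnitude to the comparison data (W bits, lowest E bits all 1)."""
    sign, data = sample[0], sample[1:]
    width = len(data)                      # W
    comparison = (1 << e) - 1              # lowest E bits set to 1
    magnitude = int(data, 2)
    if magnitude > comparison:
        magnitude = comparison
    return sign + format(magnitude, f"0{width}b")

# The two examples above, with E = 3:
assert saturate_sample("100011001", 3) == "100000111"
assert saturate_sample("100000011", 3) == "100000011"
```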
  • In step 21, each left-shifted sample data is quantized separately; the following uniform quantization method may be used: for each left-shifted sample data, the sign bit is kept unchanged, V - 1 bits are taken from the data bits starting at the most significant bit, and the sign bit together with the extracted bits forms the quantized sample data, where V equals the target number of compressed bits.
  • For example, if the left-shifted sample data is 10001000 and V = 4, the most significant sign bit is 1, the 3 bits taken from the data bits starting at the most significant bit are 000, and the quantized sample data formed by the sign bit and the extracted bits is 1000.
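  • The uniform quantization step can be sketched as follows (assumed helper name; samples are sign-magnitude bit strings): the sign bit is kept and the top V - 1 data bits are retained, so the quantized sample has V bits in total.

```python
def quantize_sample(sample: str, v: int) -> str:
    """Uniformly quantize a left-shifted sign-magnitude sample to v bits:
    keep the sign bit and the v-1 most significant data bits."""
    sign, data = sample[0], sample[1:]
    return sign + data[:v - 1]

# The example above: left-shifted sample 10001000 with V = 4 becomes 1000.
assert quantize_sample("10001000", 4) == "1000"
```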
  • the transmitting end may be an RRU, and the receiving end is a BBU; or the transmitting end is a BBU, and the receiving end is an RRU.
  • the sender can be any other data transmitting device, and the receiving end can be any other data receiving device.
  • the embodiment of the present application provides the following data decompression method:
  • Step 30 The receiving end receives the shift factor sent by the transmitting end and the sampled data after the quantization process
  • Step 31: the receiving end dequantizes each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization. Here, the receiving end dequantizes each sample data according to the method the transmitting end used to quantize the corresponding sample data; a specific implementation may be: for each sample data, append B zeros on the right of the sample data to obtain the dequantized sample data, where B = W - V, W is the number of bits of each sample data before quantization after the sign bit is removed, and V is the number of bits of each quantized sample data.
  • Step 32 The receiving end shifts the data bits in each of the sample data after the dequantization processing to the right by the shift factor to obtain the decompressed sample data.
  • the number of bits of the right shift is equal to the value corresponding to the shift factor.
  • the method can be applied when the uniform quantization method is used at the transmitting end.
  • Preferably, to improve the accuracy of the decompression result, after the dequantized sample data is obtained and before the data bits of each dequantized sample data are right-shifted according to the shift factor, a set offset value may be added to each dequantized sample data; the set offset value takes a value in (0, 2^(B-1)], where ^ denotes a power, for example the set offset value equals 2^(B-1). Correspondingly, in step 32, the receiving end right-shifts, according to the shift factor, the data bits of each sample data to which the set offset value has been added.
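  • On the receiving side, the corresponding dequantization, optional offset compensation and right shift can be sketched like this (the helper names are ours; W and V as defined above, B = W - V; the sketch assumes the offset never overflows the W data bits, which holds because the low B bits are zero after dequantization):

```python
def dequantize_sample(sample: str, w: int) -> str:
    """Append B = W - V zeros on the right of a quantized sign-magnitude
    sample so that its data bits are W bits wide again."""
    sign, data = sample[0], sample[1:]
    return sign + data + "0" * (w - len(data))

def add_offset(sample: str, offset: int) -> str:
    """Error compensation: add a set offset (a value in (0, 2**(B-1)]) to the
    magnitude of the dequantized sample, keeping the sign bit."""
    sign, data = sample[0], sample[1:]
    return sign + format(int(data, 2) + offset, f"0{len(data)}b")

def right_shift_sample(sample: str, n: int) -> str:
    """Right-shift the data bits by n: keep the sign bit, drop the rightmost
    n data bits and prepend n zeros on the left."""
    sign, data = sample[0], sample[1:]
    return sign + "0" * n + data[:len(data) - n]

# A 7-bit quantized sample (1 sign + 6 data bits), W = 14, shift factor 1,
# offset 2**(14 - 6 - 1), as in the worked example later in this description:
s = dequantize_sample("1000110", w=14)
s = add_offset(s, offset=2 ** 7)
decompressed = right_shift_sample(s, 1)
assert len(decompressed) == 15  # 1 sign bit + W = 14 data bits
```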
  • the transmitting end may be an RRU, and the receiving end is a BBU; or the transmitting end is a BBU, and the receiving end is an RRU.
  • Step 201 The compression module at the transmitting end groups the data to be sent, and each packet includes at least one sample data.
  • Step 202 The compression module determines a shift factor according to a highest bit of the sample data with the largest value after removing the sign bit in the packet;
  • Step 203: the compression module left-shifts the data bits of each sample data in the group according to the shift factor;
  • Step 204: the compression module quantizes each left-shifted sample data separately, so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • Step 205 The compression module puts the shift factor and the quantized sample data into the transmission channel for transmission;
  • Step 303: the decompression module right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed data.
  • The locations of the compression module and the decompression module in the system are as shown in FIG. 5: in the uplink channel, the compression module is on the RRU side and the corresponding decompression module is on the BBU side; in the downlink channel, the compression module is on the BBU side and the corresponding decompression module is on the RRU side.
  • Embodiment 1 to Embodiment 6 show six methods of data grouping:
  • Embodiment 1: as shown in FIG. 6A, for grouping method 1, four I-channel samples or four Q-channel samples form one group, and each antenna is grouped separately.
  • Embodiment 2: as shown in FIG. 6B, for grouping method 2, two I-channel samples and two Q-channel samples form one group, and each antenna is grouped separately.
  • Embodiment 3: as shown in FIG. 6C, for grouping method 3, the same I-channel sample or the same Q-channel sample of four co-polarized antennas forms one group.
  • Embodiment 4: as shown in FIG. 6D, for grouping method 4, the I-channel and Q-channel samples of four co-polarized antennas form one group.
  • Embodiment 5: as shown in FIG. 6E, for grouping method 5, two consecutive I-channel samples of two co-polarized antennas form one group and two consecutive Q-channel samples of the two co-polarized antennas form another group; FIG. 6E only shows the grouping of the I-channel data.
  • Embodiment 6: as shown in FIG. 6F, for grouping method 6, two consecutive I-channel and Q-channel samples of two co-polarized antennas form one group.
  • Embodiment 7: for convenience of description, this embodiment only describes the scheme of compressing 15-bit IQ data to 7 bits in a current 3G/4G system.
  • the format of the IQ data frame before compression is shown in Figure 6G.
  • the format of the compressed data frame is shown in Figure 6H. It should be noted that the present application is not limited to the frame format mentioned herein.
  • In this embodiment, binary data is represented in sign-magnitude (true form) notation; before compression, the I channel and Q channel each occupy 15 bits (1 sign bit and 14 data bits), and after compression to 7 bits there are 1 sign bit and 6 data bits, i.e. the word length excluding the sign bit is W = 14 before compression and V = 6 after compression.
  • Step 1: the transmitting end groups the IQ data to be sent, as shown in Embodiment 1 to Embodiment 6; assume that four sample data form one group and the shift factor occupies 2 bits, representing shift amounts 0 to 3; for example, the four sample data in group i are as shown in FIG. 6I.
  • Step 2: calculate the shift factor for group i; the sample data with the largest value in group i after the sign bit is removed is D3, whose highest bit after the sign bit is removed is H = 12, so the shift factor equals (W - 1 - H) = 14 - 1 - 12 = 1.
  • Step 3 The data of each sample in the group i is shifted to the left by 1 bit (the sign bit is unchanged, and the data bit is shifted to the left by 1 bit) to obtain the sample data shown in Fig. 6J.
  • Step 4: each sample data shown in FIG. 6J is quantized to the target bit number of 7 bits, i.e. 6 bits of the data bits of the shifted sample data are taken from the most significant bit downward while the sign bit is kept unchanged, yielding the quantized sample data shown in FIG. 6K; the quantized sample data is sent to the receiving end.
  • Step 5: the receiving end dequantizes the received sample data, i.e. appends (W - V) = 14 - 6 = 8 zeros on the right of each 7-bit sample data, turning it into 15 bits, as shown in FIG. 6L.
  • Step 6: error compensation, i.e. an offset value a is added to D1 to D4 shown in FIG. 6L, where a = 2^(W-V-1) = 2^(14-6-1) = 2^7, yielding the sample data shown in FIG. 6M.
  • Step 7: each sample data shown in FIG. 6M is shifted right by 1 bit (the sign bit is unchanged and the data bits are shifted right by 1 bit), yielding the data shown in FIG. 6N, which is the data obtained by decompression at the receiving end.
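  • The seven steps above can be tied together in a compact end-to-end sketch; this is an illustration under the embodiment's stated numbers, not the patented implementation, and the function names and sample magnitudes are ours (sign bits are assumed to be carried alongside unchanged):

```python
# Embodiment 7 numbers: 15-bit sign-magnitude IQ samples (1 sign bit, W = 14
# data bits) compressed to 7 bits (1 sign bit, V = 6 data bits), 4 samples per
# group, 2-bit shift factor, offset 2**(W - V - 1) added on recovery.
W, V, MAX_SHIFT = 14, 6, 3

def compress_group(magnitudes):
    """Return (shift_factor, quantized magnitudes) for one group."""
    top = max(magnitudes)
    h = top.bit_length() - 1 if top else 0           # highest set bit H
    shift = min(W - 1 - h, MAX_SHIFT)                # A = W - 1 - H, clipped
    shifted = [(m << shift) & ((1 << W) - 1) for m in magnitudes]
    return shift, [m >> (W - V) for m in shifted]    # keep top V data bits

def decompress_group(shift, quantized):
    """Dequantize (pad W - V zero bits), add the offset, then right-shift."""
    offset = 1 << (W - V - 1)
    return [((q << (W - V)) + offset) >> shift for q in quantized]

group = [5000, 1200, 300, 4500]          # sample magnitudes (sign handled separately)
shift, q = compress_group(group)
recovered = decompress_group(shift, q)
print(shift, q, recovered)               # recovered values approximate the originals
```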
  • an embodiment of the present application provides a data compression sending device, where the device includes:
  • the grouping unit 70 groups the data to be sent, and each packet includes at least one sample data.
  • the compression unit 71 is configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • the transmitting unit 72 is configured to transmit the shift factor and each sample data after the quantization process.
  • grouping unit 70 is configured to:
  • when the data to be sent is real-part/imaginary-part IQ data, group the data to be sent as follows: the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped independently, so that at least one consecutive I-channel sample of the same antenna forms one group or at least one consecutive Q-channel sample forms one group; or,
  • the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped together, so that at least one consecutive I-channel sample and at least one consecutive Q-channel sample of the same antenna form one group; or,
  • the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of the same sample at the same position of the multiple antennas forms one group or the Q-channel data of the same sample forms one group; or,
  • the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data at the same position of the multiple antennas form one group; or,
  • the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of at least one consecutive sample at the same position of the multiple antennas forms one group or the Q-channel data of at least one consecutive sample forms one group; or,
  • the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data of at least one consecutive sample at the same position of the multiple antennas form one group.
  • Further, the compression unit 71 is configured to determine the shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, as follows: determine the maximum shift count that the shift factor can represent; determine the sample data with the largest value in the group after the sign bit is removed, and the position of the highest bit of that sample data after the sign bit is removed; if A is not greater than the maximum shift count, determine the shift factor to be equal to A, otherwise determine the shift factor to be equal to the maximum shift count, where A = W - 1 - H, W is the number of bits of the sample data in the group after the sign bit is removed, and H is the position of the highest bit.
  • Further, the compression unit 71 is configured to determine the maximum shift count C that the shift factor can represent according to the following formula: C = 2^(n*k/2) - 1;
  • where ^ denotes a power, n is the number of sample data contained in the group, and k is the number of bits of the control field used to transmit the shift factor.
  • the compression unit 71 is further configured to:
  • after the maximum shift count that the shift factor can represent is determined, and before the sample data with the largest value in the group after the sign bit is removed and the position of its highest bit are determined, if the position of the highest bit (after the sign bit is removed) of the sample data with the largest value in the group is E, or the positions of the highest bits (after the sign bit is removed) of more than a set proportion of the sample data in the group are all less than or equal to E, saturate each sample data in the group to E bits, where E is an integer greater than 0.
  • the compression unit 71 is configured to: separately quantize the left-shifted sample data according to the following method:
  • take V bits from each left-shifted sample data from the most significant bit downward to obtain the quantized sample data.
  • an embodiment of the present application provides a data decompression device, where the device includes:
  • a receiving unit 80 configured to receive a shift factor and each sample data after the quantization process
  • the dequantization unit 81 is configured to perform dequantization processing on each of the sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • the shifting unit 82 is configured to perform right shifting of the data bits in each of the sample data after the dequantization processing according to the shift factor to obtain decompressed sample data.
  • the dequantization unit 81 is configured to:
  • for each sample data, append B zeros on the right of the sample data to obtain the dequantized sample data, where B = W - V, W is the number of bits of each sample data before quantization after the sign bit is removed, and V is the number of bits of each quantized sample data.
  • the dequantization unit 81 is further configured to:
  • after the dequantized sample data is obtained, and before the shifting unit right-shifts the data bits of each dequantized sample data according to the shift factor, add a set offset value to each dequantized sample data, the set offset value taking a value in (0, 2^(B-1)], where ^ denotes a power;
  • the shifting unit 82 is configured to:
  • the data bits in each of the sample data after the offset value is added are respectively shifted right according to the shift factor.
  • the beneficial effects of the application include:
  • In the solution provided by the embodiments of the present application, the transmitting end groups the data to be sent, and for each group: a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are sent to the receiving end. It can be seen that, by grouping the data to be transmitted and shifting and compressing each group separately, the scheme realizes segmented-shift data compression and thus optimizes data compression performance.
  • After receiving the shift factor and the quantized sample data sent by the transmitting end, the receiving end dequantizes each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization, and then right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data, thereby realizing a data decompression scheme corresponding to the above data compression scheme.
  • the embodiment of the present application further provides an RRU, where the RRU serves as the transmitting end and includes:
  • a grouping unit, configured to group the data to be sent, each group containing at least one sample data;
  • a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • a sending unit configured to transmit the shift factor and each sample data after the quantization process.
  • the embodiment of the present application further provides a BBU, where the BBU is used as a sending end, and includes:
  • a grouping unit, configured to group the data to be sent, each group containing at least one sample data;
  • a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;
  • a sending unit configured to transmit the shift factor and each sample data after the quantization process.
  • the embodiment of the present application further provides an RRU, where the RRU is used as a receiving end, and includes:
  • a receiving unit configured to receive a shift factor and each sample data after the quantization process
  • a dequantization unit configured to perform dequantization processing on each of the sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • a shifting unit configured to perform right shifting of the data bits in each of the sample data after the dequantization processing according to the shift factor to obtain decompressed sample data.
  • the embodiment of the present application further provides a BBU, where the BBU serves as a receiving end, and includes:
  • a receiving unit configured to receive a shift factor and each sample data after the quantization process
  • a dequantization unit configured to perform dequantization processing on each of the sample data, so that the number of bits of each sample data after the dequantization process is equal to the original number of bits before the quantization process;
  • a shifting unit configured to perform right shifting of the data bits in each of the sample data after the dequantization processing according to the shift factor to obtain decompressed sample data.
  • the embodiment of the present application further provides an RRU, including a processor.
  • when the RRU serves as the transmitting end, the processor is configured to group the data to be sent, each group containing at least one sample data; for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and send the shift factor and the quantized sample data to the receiving end;
  • when the RRU serves as the receiving end, the processor is configured to receive the shift factor and the quantized sample data sent by the transmitting end; dequantize each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization; and right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
  • the embodiment of the present application further provides a BBU, including a processor.
  • when the BBU serves as the transmitting end, the processor is configured to group the data to be sent, each group containing at least one sample data; for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and send the shift factor and the quantized sample data to the receiving end;
  • when the BBU serves as the receiving end, the processor is configured to receive the shift factor and the quantized sample data sent by the transmitting end; dequantize each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization; and right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The embodiments of the present application disclose a data compression sending and decompression method and device, relating to the field of wireless communications and used to optimize data compression schemes. In the method, the transmitting end groups the data to be sent; for each group, a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are sent to the receiving end. The receiving end dequantizes each sample data separately and right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data. The present application optimizes data compression performance.

Description

Data compression sending and decompression method and device

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on August 21, 2012, with application number 201210298915.6 and invention title "Data compression method and decompression method and device", the entire contents of which are incorporated into this application by reference.

TECHNICAL FIELD

The present application relates to the field of wireless communications, and in particular to a data compression sending and decompression method and device.

BACKGROUND

A base station (eNB) of a Long Term Evolution (LTE) / LTE-Advanced (LTE-A) system is a distributed base station device. It is composed of a baseband unit (BBU) and a radio remote device (RRU), and is a base station combination that can be installed flexibly and in a distributed manner, as shown in FIG. 1, in which the RRU is connected to the BBU through the Ir interface.

The current Ir interface uses a transmission medium such as optical fiber. If the Ir interface data is compressed by effective technical means, the demand for the transmission medium can be greatly reduced, equipment cost can be lowered, and product competitiveness can be improved.

In the LTE system, an existing Ir interface data compression scheme performs automatic gain adjustment on the input signal to control the dynamic range of the signal and reduce its quantization bit width, with uniform quantization as the quantization algorithm. While ensuring signal reliability, this scheme compresses 16-bit data to 12 bits, i.e. a compression ratio (the ratio of data size before to after compression) of 4:3.

It can be seen that the compression ratio of the prior art is relatively low and the algorithm is not very general: when the input signal is uniformly distributed, uniform quantization is the optimal quantizer, but when the input signal is non-uniformly distributed, simple uniform quantization allocates quantization levels unreasonably and cannot fully remove the redundant components of the signal.

SUMMARY

The embodiments of the present application provide a data compression sending and decompression method and device, used to optimize the data compression scheme.

A data compression sending method, the method comprising:
the transmitting end groups the data to be sent, each group containing at least one sample data;

for each group, the transmitting end determines a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shifts the data bits of each sample data in the group according to the shift factor; quantizes each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and sends the shift factor and the quantized sample data to the receiving end.
A data decompression method, the method comprising: the receiving end receives the shift factor and the quantized sample data sent by the transmitting end;

the receiving end dequantizes each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization;

the receiving end right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
A data compression sending device, the device comprising:

a grouping unit, configured to group the data to be sent, each group containing at least one sample data;

a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;

a sending unit, configured to transmit the shift factor and the quantized sample data.
A data decompression device, the device comprising:

a receiving unit, configured to receive the shift factor and the quantized sample data;

a dequantization unit, configured to dequantize each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization;

a shifting unit, configured to right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
An RRU, the RRU serving as the transmitting end and comprising:

a grouping unit, configured to group the data to be sent, each group containing at least one sample data;

a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;

a sending unit, configured to transmit the shift factor and the quantized sample data.
A BBU, the BBU serving as the transmitting end and comprising:

a grouping unit, configured to group the data to be sent, each group containing at least one sample data;

a compression unit, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;

a sending unit, configured to transmit the shift factor and the quantized sample data.
An RRU, the RRU serving as the receiving end and comprising: a receiving unit, configured to receive the shift factor and the quantized sample data;

a dequantization unit, configured to dequantize each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization;

a shifting unit, configured to right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
A BBU, the BBU serving as the receiving end and comprising:

a receiving unit, configured to receive the shift factor and the quantized sample data;

a dequantization unit, configured to dequantize each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization;

a shifting unit, configured to right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
An RRU, comprising a processor;

when the RRU serves as the transmitting end, the processor is configured to group the data to be sent, each group containing at least one sample data; for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and send the shift factor and the quantized sample data to the receiving end;

when the RRU serves as the receiving end, the processor is configured to receive the shift factor and the quantized sample data sent by the transmitting end; dequantize each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization; and right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
A BBU, comprising a processor;

when the BBU serves as the transmitting end, the processor is configured to group the data to be sent, each group containing at least one sample data; for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and send the shift factor and the quantized sample data to the receiving end;

when the BBU serves as the receiving end, the processor is configured to receive the shift factor and the quantized sample data sent by the transmitting end; dequantize each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization; and right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.
In the solution provided by the embodiments of the present application, the transmitting end groups the data to be sent, and for each group: a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are sent to the receiving end. It can be seen that, by grouping the data to be transmitted and shifting and compressing each group separately, the scheme realizes segmented-shift data compression and thus optimizes data compression performance.

After receiving the shift factor and the quantized sample data sent by the transmitting end, the receiving end dequantizes each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization, and then right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data, thereby realizing a data decompression scheme corresponding to the above data compression scheme.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a distributed base station device in the prior art;
FIG. 2 is a schematic flowchart of a method provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of another method provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of the data compression and decompression method in an embodiment of the present application;

FIG. 5 is a schematic diagram of the architecture of a device having data compression and decompression modules in an embodiment of the present application;

FIG. 6A is a schematic diagram of data grouping according to Embodiment 1 of the present application;

FIG. 6B is a schematic diagram of data grouping according to Embodiment 2 of the present application;

FIG. 6C is a schematic diagram of data grouping according to Embodiment 3 of the present application;

FIG. 6D is a schematic diagram of data grouping according to Embodiment 4 of the present application;

FIG. 6E is a schematic diagram of data grouping according to Embodiment 5 of the present application;

FIG. 6F is a schematic diagram of data grouping according to Embodiment 6 of the present application;

FIG. 6G is a schematic diagram of the format of an IQ data frame before compression according to Embodiment 7 of the present application;

FIG. 6H is a schematic diagram of the format of a compressed data frame according to Embodiment 7 of the present application;

FIGS. 6I to 6N are schematic diagrams of sample data during the data compression and decompression process according to Embodiment 7 of the present application;

FIG. 7 is a schematic structural diagram of a device provided by an embodiment of the present application;

FIG. 8 is a schematic structural diagram of another device provided by an embodiment of the present application.

DETAILED DESCRIPTION OF THE EMBODIMENTS

In order to optimize data compression performance, an embodiment of the present application provides a data compression and sending method. In this method, the transmitting end groups the data to be sent; for each group, a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are sent to the receiving end.
Referring to FIG. 2, the data compression and sending method provided by the embodiment of the present application for the data transmitting end includes the following steps. Step 20: the transmitting end groups the data to be sent, each group containing at least one sample data. Step 21: for each group, the transmitting end determines a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shifts the data bits of each sample data in the group according to the shift factor; quantizes each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits; and sends the shift factor and the quantized sample data to the receiving end. Here, the shift factor indicates the number of bit positions by which the data bits of the sample data are left-shifted; the data bits of a sample data are left-shifted as follows: the sign bit of the sample data is kept unchanged, the leftmost N bits of the data bits are deleted, and N zeros are appended on the right of the data bits to obtain the left-shifted data.
In step 20, when the data to be sent is real-part/imaginary-part (IQ) data, the data to be sent may be grouped according to the following two principles:

Principle 1: according to the correlation between the I-channel data and the Q-channel data; if the correlation is large, a scheme that groups I-channel and Q-channel data together may be considered; otherwise, a scheme that groups them independently is adopted.

Principle 2: according to the correlation between different antennas; if the correlation is large, a scheme that groups antennas together may be considered; otherwise, a scheme that groups each antenna independently is adopted. If the unified antenna grouping scheme is used, the antennas in a group are recommended to be co-polarized antennas.
The specific grouping may use one of the following six methods:

Method 1: the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped independently, so that at least one consecutive I-channel sample of the same antenna forms one group, or at least one consecutive Q-channel sample forms one group;

Method 2: the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped together, so that at least one consecutive I-channel sample and at least one consecutive Q-channel sample of the same antenna form one group;

Method 3: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of the same sample at the same position of the multiple antennas forms one group, or the Q-channel data of the same sample forms one group; here, the same position specifically refers to the same time position, i.e. the same moment;

Method 4: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data at the same position of the multiple antennas form one group;

Method 5: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of at least one consecutive sample at the same position of the multiple antennas forms one group, or the Q-channel data of at least one consecutive sample forms one group;

Method 6: the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data of at least one consecutive sample at the same position of the multiple antennas form one group.
Here, the IQ data is the data transmitted on the Ir interface between the RRU and the BBU.

Of course, the method of grouping the data to be sent is not limited to the above six methods; any method capable of grouping the data falls within the protection scope of the present application, for example randomly grouping the data to be sent by samples.
In step 21, the shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed; a specific implementation may be as follows:

First, determine the maximum shift count that the shift factor can represent. Here, the maximum shift count C may be determined according to the formula C = 2^(n*k/2) - 1, where ^ denotes a power, n is the number of sample data contained in the group, and k is the number of bits of the control field used to transmit the shift factor, for example k may be 1. The analysis is as follows: assuming the AGC control field is k bits (usually 1), if the I-channel data and Q-channel data are grouped independently and n sample data (I-channel or Q-channel) form one group, the shift factor may occupy n*k bits, the I-channel data and Q-channel data each occupy n*k/2 bits, and the maximum shift count that can be represented is 2^(n*k/2) - 1 (counting from 0); if the I-channel data and Q-channel data are grouped together and n/2 I-channel data plus n/2 Q-channel data form one group, the shift factor may occupy n*k/2 bits, and the maximum shift count that can be represented is 2^(n*k/2) - 1 (counting from 0).

Then, determine the sample data with the largest value in the group after the sign bit is removed, and the position of the highest bit of that sample data after the sign bit is removed. Here, the highest bit of the sample data after the sign bit is removed refers to the first bit equal to 1, scanning from left to right, after the sign bit is removed; the bit positions are numbered sequentially from 0 starting at the rightmost bit. For example, for the sample data 100100001, the data after removing the most significant sign bit is 00100001, the bit positions from right to left are 0, 1, 2, 3, 4, 5, 6, 7, and the position of the highest bit is 5.

Finally, if A is not greater than the maximum shift count, the shift factor is determined to be equal to A; otherwise, the shift factor is determined to be equal to the maximum shift count, where A = W - 1 - H, W is the number of bits of the sample data in the group after the sign bit is removed, and H is the position of the highest bit.

Here, the shift factor may be expressed in differential form, i.e. the shift factor of the first group is an absolute value calculated according to the above method, and the shift factor of each remaining group may be equal to the difference between that group's shift factor and the first group's shift factor.

Preferably, in order to improve compression accuracy, after the maximum shift count that the shift factor can represent is determined, and before the sample data with the largest value in the group after the sign bit is removed and the position of its highest bit are determined, if the position of the highest bit (after the sign bit is removed) of the sample data with the largest value in the group is E, or the positions of the highest bits (after the sign bit is removed) of more than a set proportion of the sample data in the group are all less than or equal to E, each sample data in the group is saturated to E bits, where E is an integer greater than 0 and less than W. Here, a sample data is saturated to E bits as follows: the sign bit of the sample data is kept unchanged; if the sample data, after the sign bit is removed, is larger than the comparison data, the data of the sample after the sign bit is removed is updated to the comparison data; if it is not larger than the comparison data, the sample data is kept unchanged, where the comparison data has W bits, its lowest E bits are all 1 and the remaining bits are all 0. For example, if the sample data is 100011001 and E = 3, the sample data after removing the most significant sign bit is larger than the comparison data 00000111, so the saturated sample data is 100000111; as another example, if the sample data is 100000011 and E = 3, the sample data after removing the most significant sign bit is not larger than the comparison data 00000111, so the saturated sample data remains 100000011.
In step 21, each left-shifted sample data is quantized separately; a specific implementation may use the following uniform quantization method:

For each left-shifted sample data, the sign bit of the sample data is kept unchanged, V - 1 bits are taken from the data bits starting at the most significant bit, and the sign bit together with the extracted bits forms the quantized sample data, where V equals the target number of compressed bits. For example, if the left-shifted sample data is 10001000 and V = 4, the most significant sign bit is 1, the 3 bits taken from the data bits starting at the most significant bit are 000, and the quantized sample data formed by the sign bit and the extracted bits is 1000.

In this method, the transmitting end may be an RRU and the receiving end a BBU, or the transmitting end may be a BBU and the receiving end an RRU. Of course, the transmitting end may be any other data sending device and the receiving end any other data receiving device.
Referring to FIG. 3, in order to provide a data decompression method corresponding to the data compression method described in FIG. 2, the embodiment of the present application provides the following data decompression method:

Step 30: the receiving end receives the shift factor and the quantized sample data sent by the transmitting end;

Step 31: the receiving end dequantizes each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization; here, the receiving end dequantizes each sample data according to the method the transmitting end used to quantize the corresponding sample data.

Step 32: the receiving end right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data; here, the number of right-shifted bits equals the value corresponding to the shift factor.

In step 31, the receiving end dequantizes each sample data separately; a specific implementation may be as follows: for each sample data, append B zeros on the right of the sample data to obtain the dequantized sample data, where B = W - V, W is the number of bits of each sample data before quantization after the sign bit is removed, and V is the number of bits of each quantized sample data. This method is applicable when the transmitting end uses the above uniform quantization method.

Preferably, to improve the accuracy of the decompression result, after the dequantized sample data is obtained and before the data bits of each dequantized sample data are right-shifted according to the shift factor, a set offset value may be added to each dequantized sample data; the set offset value takes a value in (0, 2^(B-1)], where ^ denotes a power, for example the set offset value equals 2^(B-1). Correspondingly, in step 32, the receiving end right-shifts, according to the shift factor, the data bits of each sample data to which the set offset value has been added.

In this method, the transmitting end may be an RRU and the receiving end a BBU, or the transmitting end may be a BBU and the receiving end an RRU. The present application is described in detail below.
As shown in FIG. 4, the processing flow of the transmitting end and the receiving end in the present application is as follows:

Step 201: the compression module at the transmitting end groups the data to be sent, each group containing at least one sample data;

Step 202: the compression module determines a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed;

Step 203: the compression module left-shifts the data bits of each sample data in the group according to the shift factor;

Step 204: the compression module quantizes each left-shifted sample data separately, so that the number of bits of each quantized sample data equals the target number of compressed bits;

Step 205: the compression module puts the shift factor and the quantized sample data onto the transmission channel for transmission;

Step 301: the decompression module at the receiving end receives the shift factor and the quantized sample data;

Step 302: the decompression module dequantizes each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization;

Step 303: the decompression module right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed data.

In particular, when the present application is applied to Ir interface data compression of a 3G/4G system, the locations of the compression module and the decompression module in the system are as shown in FIG. 5: in the uplink channel, the compression module is on the RRU side and a corresponding decompression module is on the BBU side; in the downlink channel, the compression module is on the BBU side and a corresponding decompression module is on the RRU side.
Embodiments 1 to 6 below give six methods of data grouping:

Embodiment 1: as shown in FIG. 6A, for grouping method 1, four I-channel samples or four Q-channel samples form one group, and each antenna is grouped separately.

Embodiment 2: as shown in FIG. 6B, for grouping method 2, two I-channel samples and two Q-channel samples form one group, and each antenna is grouped separately.

Embodiment 3: as shown in FIG. 6C, for grouping method 3, the same I-channel sample or the same Q-channel sample of four co-polarized antennas forms one group.

Embodiment 4: as shown in FIG. 6D, for grouping method 4, the I-channel and Q-channel samples of four co-polarized antennas form one group.

Embodiment 5: as shown in FIG. 6E, for grouping method 5, two consecutive I-channel samples of two co-polarized antennas form one group, and two consecutive Q-channel samples of the two co-polarized antennas form another group; FIG. 6E only shows the grouping of the I-channel data.

Embodiment 6: as shown in FIG. 6F, for grouping method 6, two consecutive I-channel and Q-channel samples of two co-polarized antennas form one group.

Embodiment 7:
For convenience of description, this embodiment only describes the scheme of compressing 15-bit IQ data to 7 bits in a current 3G/4G system. The format of the IQ data frame before compression is shown in FIG. 6G, and the format of the compressed data frame is shown in FIG. 6H; it should be noted that the present application is not limited to the frame formats mentioned herein.

In this embodiment, binary data is represented in sign-magnitude (true form) notation. In the 15-bit IQ data before compression, the I channel and the Q channel each occupy 15 bits, of which 1 bit is the sign bit and 14 bits are data bits; after compression to 7 bits, 1 bit is the sign bit and 6 bits are data bits. That is, the word length excluding the sign bit is W = 14 before compression and V = 6 after compression.

Step 1: the transmitting end groups the IQ data to be sent, as detailed in Embodiments 1 to 6. Assume that four sample data form one group and the shift factor occupies 2 bits, representing shift amounts 0 to 3; for example, the four sample data in group i are as shown in FIG. 6I.

Step 2: calculate the shift factor for group i; the sample data with the largest value in group i after the sign bit is removed is D3, and the highest bit of D3 after the sign bit is removed is H = 12, so the shift factor equals (W - 1 - H) = 14 - 1 - 12 = 1.

Step 3: the data bits of each sample data in group i are shifted left by 1 bit (the sign bit is unchanged and the data bits are shifted left by 1 bit), yielding the sample data shown in FIG. 6J.

Step 4: each sample data shown in FIG. 6J is quantized to the target bit number of 7 bits, i.e. 6 bits of the data bits of the shifted sample data are taken from the most significant bit downward while the sign bit is kept unchanged, yielding the quantized sample data shown in FIG. 6K; the quantized sample data is sent to the receiving end.

Step 5: the receiving end dequantizes the received sample data, i.e. first appends (W - V) = 14 - 6 = 8 zeros on the right of each 7-bit sample data, turning it into 15 bits, as shown in FIG. 6L.

Step 6: error compensation, i.e. an offset value a is added to D1 to D4 shown in FIG. 6L, where a = 2^(W-V-1) = 2^(14-6-1) = 2^7, yielding the sample data shown in FIG. 6M.

Step 7: each sample data shown in FIG. 6M is shifted right by 1 bit (the sign bit is unchanged and the data bits are shifted right by 1 bit), yielding the data shown in FIG. 6N, which is the data obtained by decompression at the receiving end.
Referring to FIG. 7, an embodiment of the present application provides a data compression sending device, the device comprising:

a grouping unit 70, configured to group the data to be sent, each group containing at least one sample data; a compression unit 71, configured to: for each group, determine a shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and left-shift the data bits of each sample data in the group according to the shift factor; and quantize each left-shifted sample data so that the number of bits of each quantized sample data equals the target number of compressed bits;

a sending unit 72, configured to transmit the shift factor and the quantized sample data.

Further, the grouping unit 70 is configured to:

when the data to be sent is real-part/imaginary-part IQ data, group the data to be sent as follows: the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped independently, so that at least one consecutive I-channel sample of the same antenna forms one group or at least one consecutive Q-channel sample forms one group; or,

the IQ data of each antenna is grouped separately, and the I-channel data and Q-channel data are grouped together, so that at least one consecutive I-channel sample and at least one consecutive Q-channel sample of the same antenna form one group; or,

the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of the same sample at the same position of the multiple antennas forms one group or the Q-channel data of the same sample forms one group; or,

the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data at the same position of the multiple antennas form one group; or,

the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped separately, so that the I-channel data of at least one consecutive sample at the same position of the multiple antennas forms one group or the Q-channel data of at least one consecutive sample forms one group; or,

the IQ data of multiple antennas is grouped together, and the I-channel data and Q-channel data are grouped together, so that the I-channel data and Q-channel data of at least one consecutive sample at the same position of the multiple antennas form one group.

Further, the compression unit 71 is configured to determine the shift factor according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, as follows:

determine the maximum shift count that the shift factor can represent;

determine the sample data with the largest value in the group after the sign bit is removed, and the position of the highest bit of that sample data after the sign bit is removed;

if A is not greater than the maximum shift count, determine the shift factor to be equal to A; otherwise, determine the shift factor to be equal to the maximum shift count; where A = W - 1 - H, W is the number of bits of the sample data in the group after the sign bit is removed, and H is the position of the highest bit.

Further, the compression unit 71 is configured to determine the maximum shift count C that the shift factor can represent according to the following formula:

C = 2^(n*k/2) - 1;

where ^ denotes a power, n is the number of sample data contained in the group, and k is the number of bits of the control field used to transmit the shift factor.

Further, the compression unit 71 is further configured to:

after the maximum shift count that the shift factor can represent is determined, and before the sample data with the largest value in the group after the sign bit is removed and the position of its highest bit are determined, if the position of the highest bit (after the sign bit is removed) of the sample data with the largest value in the group is E, or the positions of the highest bits (after the sign bit is removed) of more than a set proportion of the sample data in the group are all less than or equal to E, saturate each sample data in the group to E bits, where E is an integer greater than 0.

Further, the compression unit 71 is configured to quantize each left-shifted sample data as follows:

take V bits from each left-shifted sample data from the most significant bit downward to obtain the quantized sample data.
Referring to FIG. 8, an embodiment of the present application provides a data decompression device, the device comprising:

a receiving unit 80, configured to receive the shift factor and the quantized sample data;

a dequantization unit 81, configured to dequantize each sample data separately, so that the number of bits of each dequantized sample data equals the original number of bits before quantization;

a shifting unit 82, configured to right-shift the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data.

Further, the dequantization unit 81 is configured to:

for each sample data, append B zeros on the right of the sample data to obtain the dequantized sample data, where B = W - V, W is the number of bits of each sample data before quantization after the sign bit is removed, and V is the number of bits of each quantized sample data.

Further, the dequantization unit 81 is further configured to:

after the dequantized sample data is obtained, and before the shifting unit right-shifts the data bits of each dequantized sample data according to the shift factor, add a set offset value to each dequantized sample data, the set offset value taking a value in (0, 2^(B-1)], where ^ denotes a power;

the shifting unit 82 is configured to:

right-shift, according to the shift factor, the data bits of each sample data to which the set offset value has been added.

In summary, the beneficial effects of the present application include:
In the solution provided by the embodiments of the present application, the transmitting end groups the data to be sent, and for each group: a shift factor is determined according to the highest bit of the sample data with the largest value in the group after the sign bit is removed, and the data bits of each sample data in the group are left-shifted according to the shift factor; each left-shifted sample data is quantized so that the number of bits of each quantized sample data equals the target number of compressed bits; and the shift factor and the quantized sample data are sent to the receiving end. It can be seen that, by grouping the data to be transmitted and shifting and compressing each group separately, the scheme realizes segmented-shift data compression and thus optimizes data compression performance.

After receiving the shift factor and the quantized sample data sent by the transmitting end, the receiving end dequantizes each sample data separately so that the number of bits of each dequantized sample data equals the original number of bits before quantization, and then right-shifts the data bits of each dequantized sample data according to the shift factor to obtain the decompressed sample data, thereby realizing a data decompression scheme corresponding to the above data compression scheme.
Based on the same inventive concept as the method, an embodiment of the present application further provides an RRU which, acting as the transmitting end, includes:

a grouping unit, configured to group the data to be transmitted, each group containing at least one sample point;

a compression unit, configured to, for each group, determine a shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, and shift the data bits of each sample point in the group to the left according to the shift factor; and quantize each left-shifted sample point so that the number of bits of each quantized sample point equals the target number of bits for compression; and

a sending unit, configured to transmit the shift factor and the quantized sample points.

Based on the same inventive concept as the method, an embodiment of the present application further provides a BBU which, acting as the transmitting end, includes:

a grouping unit, configured to group the data to be transmitted, each group containing at least one sample point;

a compression unit, configured to, for each group, determine a shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, and shift the data bits of each sample point in the group to the left according to the shift factor; and quantize each left-shifted sample point so that the number of bits of each quantized sample point equals the target number of bits for compression; and

a sending unit, configured to transmit the shift factor and the quantized sample points.
Based on the same inventive concept as the method, an embodiment of the present application further provides an RRU which, acting as the receiving end, includes:

a receiving unit, configured to receive the shift factor and the quantized sample points;

a dequantization unit, configured to dequantize each sample point so that the number of bits of each dequantized sample point equals the original number of bits before quantization; and

a shift unit, configured to shift the data bits of each dequantized sample point to the right according to the shift factor, obtaining the decompressed sample points.

Based on the same inventive concept as the method, an embodiment of the present application further provides a BBU which, acting as the receiving end, includes:

a receiving unit, configured to receive the shift factor and the quantized sample points;

a dequantization unit, configured to dequantize each sample point so that the number of bits of each dequantized sample point equals the original number of bits before quantization; and

a shift unit, configured to shift the data bits of each dequantized sample point to the right according to the shift factor, obtaining the decompressed sample points.
Based on the same inventive concept as the method, an embodiment of the present application further provides an RRU including a processor. When the RRU acts as the transmitting end, the processor is configured to group the data to be transmitted, each group containing at least one sample point; for each group, determine a shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, and shift the data bits of each sample point in the group to the left according to the shift factor; quantize each left-shifted sample point so that the number of bits of each quantized sample point equals the target number of bits for compression; and send the shift factor and the quantized sample points to the receiving end.

When the RRU acts as the receiving end, the processor is configured to receive the shift factor and the quantized sample points sent by the transmitting end; dequantize each sample point so that the number of bits of each dequantized sample point equals the original number of bits before quantization; and shift the data bits of each dequantized sample point to the right according to the shift factor, obtaining the decompressed sample points.

Based on the same inventive concept as the method, an embodiment of the present application further provides a BBU including a processor. When the BBU acts as the transmitting end, the processor is configured to group the data to be transmitted, each group containing at least one sample point; for each group, determine a shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, and shift the data bits of each sample point in the group to the left according to the shift factor; quantize each left-shifted sample point so that the number of bits of each quantized sample point equals the target number of bits for compression; and send the shift factor and the quantized sample points to the receiving end.

When the BBU acts as the receiving end, the processor is configured to receive the shift factor and the quantized sample points sent by the transmitting end; dequantize each sample point so that the number of bits of each dequantized sample point equals the original number of bits before quantization; and shift the data bits of each dequantized sample point to the right according to the shift factor, obtaining the decompressed sample points.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present application.

Obviously, those skilled in the art can make various changes and variations to the present application without departing from its spirit and scope. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include them.

Claims

1. A data compression and transmission method, comprising:

grouping, by a transmitting end, data to be transmitted, each group containing at least one sample point; and

for each group, determining, by the transmitting end, a shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, and shifting the data bits of each sample point in the group to the left according to the shift factor; quantizing each left-shifted sample point so that the number of bits of each quantized sample point equals the target number of bits for compression; and sending the shift factor and the quantized sample points to a receiving end.
2. The method according to claim 1, wherein, when the data to be transmitted are real-part and imaginary-part (IQ) data, the grouping of the data to be transmitted specifically comprises:

grouping the IQ data of each antenna separately, and grouping the I-channel data and the Q-channel data independently, so that at least one consecutive I-channel sample of the same antenna forms one group or at least one consecutive Q-channel sample forms one group; or

grouping the IQ data of each antenna separately, and grouping the I-channel data and the Q-channel data jointly, so that at least one consecutive I-channel sample and at least one consecutive Q-channel sample of the same antenna form one group; or

grouping the IQ data of multiple antennas jointly, and grouping the I-channel data and the Q-channel data separately, so that the I-channel data of the same sample point at the same position of the multiple antennas form one group or the Q-channel data of the same sample point form one group; or

grouping the IQ data of multiple antennas jointly, and grouping the I-channel data and the Q-channel data jointly, so that the I-channel data and the Q-channel data at the same position of the multiple antennas form one group; or

grouping the IQ data of multiple antennas jointly, and grouping the I-channel data and the Q-channel data separately, so that the I-channel data of at least one consecutive sample point at the same position of the multiple antennas form one group or the Q-channel data of at least one consecutive sample point form one group; or

grouping the IQ data of multiple antennas jointly, and grouping the I-channel data and the Q-channel data jointly, so that the I-channel data and the Q-channel data of at least one consecutive sample point at the same position of the multiple antennas form one group.
3. The method according to claim 1, wherein the determining of the shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group specifically comprises:

determining the maximum number of shifts that the shift factor can represent;

determining the sample point with the largest value after removing the sign bit in the group, and the position of the most significant bit of this sample point after removing the sign bit; and

if A is not greater than the maximum number of shifts, determining that the shift factor equals A; otherwise, determining that the shift factor equals the maximum number of shifts, where A = W-1-H, W is the number of bits of a sample point in the group after removing the sign bit, and H is the position of the most significant bit.
4. The method according to claim 3, wherein the maximum number of shifts C that the shift factor can represent is determined according to the following formula:

C = 2^(n*k/2) - 1;

where ^ denotes exponentiation, n is the number of sample points contained in the group, and k is the number of bits of the control bits used to transmit the shift factor.
5. The method according to claim 3, further comprising, after the maximum number of shifts that the shift factor can represent is determined and before the sample point with the largest value after removing the sign bit in the group and the position of the most significant bit of this sample point after removing the sign bit are determined:

if the position of the most significant bit, after removing the sign bit, of the sample point with the largest value in the group is E, or if the positions of the most significant bits, after removing the sign bit, of more than a set proportion of the sample points in the group are all less than or equal to E, saturating each sample point in the group to E bits, E being an integer greater than 0.
6. The method according to claim 1, wherein the quantizing of each left-shifted sample point specifically comprises:

for each left-shifted sample point, keeping the sign bit of the sample point unchanged, taking V-1 bits of bit data from the most significant bit towards the least significant bit of the data bits, and forming the quantized sample point from the sign bit and the taken bit data, V being equal to the target number of bits for compression.
7. The method according to any one of claims 1 to 6, wherein the transmitting end is a remote radio unit (RRU) and the receiving end is a baseband unit (BBU); or

the transmitting end is a BBU and the receiving end is an RRU.
8. A data decompression method, comprising:

receiving, by a receiving end, a shift factor and quantized sample points sent by a transmitting end;

dequantizing, by the receiving end, each sample point so that the number of bits of each dequantized sample point equals the original number of bits before quantization; and

shifting, by the receiving end, the data bits of each dequantized sample point to the right according to the shift factor, obtaining the decompressed sample points.
9. The method according to claim 8, wherein the dequantizing, by the receiving end, of each sample point specifically comprises:

for each sample point, padding B zeros on the right of the sample point to obtain the dequantized sample point, where B = W-V, W is the number of bits of each sample point before quantization after removing the sign bit, and V is the number of bits of each quantized sample point.
10. The method according to claim 9, further comprising, after the dequantized sample points are obtained and before the data bits of each dequantized sample point are shifted to the right according to the shift factor:

adding a set offset value to each dequantized sample point, the set offset value taking a value in (0, 2^(B-1)], where ^ denotes exponentiation;

wherein the shifting of the data bits of each dequantized sample point to the right according to the shift factor specifically comprises:

shifting the data bits of each sample point, after the set offset value has been added, to the right according to the shift factor.
11. The method according to any one of claims 8 to 10, wherein the transmitting end is an RRU and the receiving end is a BBU; or

the transmitting end is a BBU and the receiving end is an RRU.
12. A data compression and transmission device, comprising:

a grouping unit, configured to group data to be transmitted, each group containing at least one sample point;

a compression unit, configured to, for each group, determine a shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, and shift the data bits of each sample point in the group to the left according to the shift factor; and quantize each left-shifted sample point so that the number of bits of each quantized sample point equals the target number of bits for compression; and

a sending unit, configured to transmit the shift factor and the quantized sample points.
13. The device according to claim 12, wherein the compression unit is configured to:

when the data to be transmitted are real-part and imaginary-part (IQ) data, group the data to be transmitted as follows:

group the IQ data of each antenna separately, and group the I-channel data and the Q-channel data independently, so that at least one consecutive I-channel sample of the same antenna forms one group or at least one consecutive Q-channel sample forms one group; or

group the IQ data of each antenna separately, and group the I-channel data and the Q-channel data jointly, so that at least one consecutive I-channel sample and at least one consecutive Q-channel sample of the same antenna form one group; or

group the IQ data of multiple antennas jointly, and group the I-channel data and the Q-channel data separately, so that the I-channel data of the same sample point at the same position of the multiple antennas form one group or the Q-channel data of the same sample point form one group; or

group the IQ data of multiple antennas jointly, and group the I-channel data and the Q-channel data jointly, so that the I-channel data and the Q-channel data at the same position of the multiple antennas form one group; or

group the IQ data of multiple antennas jointly, and group the I-channel data and the Q-channel data separately, so that the I-channel data of at least one consecutive sample point at the same position of the multiple antennas form one group or the Q-channel data of at least one consecutive sample point form one group; or

group the IQ data of multiple antennas jointly, and group the I-channel data and the Q-channel data jointly, so that the I-channel data and the Q-channel data of at least one consecutive sample point at the same position of the multiple antennas form one group.
14. The device according to claim 12, wherein the compression unit is configured to determine the shift factor according to the most significant bit of the sample point with the largest value after removing the sign bit in the group, as follows:

determine the maximum number of shifts that the shift factor can represent;

determine the sample point with the largest value after removing the sign bit in the group, and the position of the most significant bit of this sample point after removing the sign bit; and

if A is not greater than the maximum number of shifts, determine that the shift factor equals A; otherwise, determine that the shift factor equals the maximum number of shifts, where A = W-1-H, W is the number of bits of a sample point in the group after removing the sign bit, and H is the position of the most significant bit.
15. The device according to claim 14, wherein the compression unit is configured to determine the maximum number of shifts C that the shift factor can represent according to the following formula:

C = 2^(n*k/2) - 1;

where ^ denotes exponentiation, n is the number of sample points contained in the group, and k is the number of bits of the control bits used to transmit the shift factor.
16. The device according to claim 14, wherein the compression unit is further configured to:

after the maximum number of shifts that the shift factor can represent is determined and before the sample point with the largest value after removing the sign bit in the group and the position of the most significant bit of this sample point after removing the sign bit are determined, if the position of the most significant bit, after removing the sign bit, of the sample point with the largest value in the group is E, or if the positions of the most significant bits, after removing the sign bit, of more than a set proportion of the sample points in the group are all less than or equal to E, saturate each sample point in the group to E bits, E being an integer greater than 0.
17. The device according to claim 12, wherein the compression unit is configured to quantize each left-shifted sample point as follows:

for each left-shifted sample point, keep the sign bit of the sample point unchanged, take V-1 bits of bit data from the most significant bit towards the least significant bit of the data bits, and form the quantized sample point from the sign bit and the taken bit data, V being equal to the target number of bits for compression.
18. A data decompression device, comprising:

a receiving unit, configured to receive a shift factor and quantized sample points;

a dequantization unit, configured to dequantize each sample point so that the number of bits of each dequantized sample point equals the original number of bits before quantization; and

a shift unit, configured to shift the data bits of each dequantized sample point to the right according to the shift factor, obtaining the decompressed sample points.
19. The device according to claim 18, wherein the dequantization unit is configured to:

for each sample point, pad B zeros on the right of the sample point to obtain the dequantized sample point, where B = W-V, W is the number of bits of each sample point before quantization after removing the sign bit, and V is the number of bits of each quantized sample point.
20. The device according to claim 19, wherein the dequantization unit is further configured to:

after the dequantized sample points are obtained and before the shift unit shifts the data bits of each dequantized sample point to the right according to the shift factor, add a set offset value to each dequantized sample point, the set offset value taking a value in (0, 2^(B-1)], where ^ denotes exponentiation;

and the shift unit is configured to:

shift the data bits of each sample point, after the set offset value has been added, to the right according to the shift factor.
PCT/CN2013/080405 2012-08-21 2013-07-30 数据压缩发送及解压缩方法和设备 WO2014029260A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP13831399.4A EP2890076B1 (en) 2012-08-21 2013-07-30 Method and device for data compression, transmission, and decompression
US14/422,896 US9515737B2 (en) 2012-08-21 2013-07-30 Method and device for data compression, transmission, and decompression

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210298915.6 2012-08-21
CN201210298915.6A CN103634273A (zh) 2012-08-21 2012-08-21 数据压缩发送及解压缩方法和设备

Publications (1)

Publication Number Publication Date
WO2014029260A1 true WO2014029260A1 (zh) 2014-02-27

Family

ID=50149408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/080405 WO2014029260A1 (zh) 2012-08-21 2013-07-30 数据压缩发送及解压缩方法和设备

Country Status (4)

Country Link
US (1) US9515737B2 (zh)
EP (1) EP2890076B1 (zh)
CN (1) CN103634273A (zh)
WO (1) WO2014029260A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015185058A1 (de) * 2014-06-05 2015-12-10 Conti Temic Microelectronic Gmbh Radar system with optimized storage of intermediate data

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19752598C1 (de) * 1997-11-27 1999-08-26 Riesinger Collection bag for connection to non-natural body openings (stomata) in humans
WO2015197104A1 (en) * 2014-06-23 2015-12-30 Telecom Italia S.P.A. Method for reducing fronthaul load in centralized radio access networks (c-ran)
WO2016015286A1 (en) * 2014-07-31 2016-02-04 Telefonaktiebolaget L M Ericsson (Publ) Methods and apparatuses for data compression and decompression
CN108134804B (zh) * 2014-08-08 2021-03-09 安科讯(福建)科技有限公司 Data compression algorithm and device
CN105763287B (zh) * 2014-12-16 2019-12-03 中兴通讯股份有限公司 Data transmission method and device
WO2016191987A1 (zh) 2015-05-29 2016-12-08 华为技术有限公司 I/Q signal transmission method, apparatus and system
US10135599B2 (en) * 2016-08-05 2018-11-20 Nokia Technologies Oy Frequency domain compression for fronthaul interface
GB2567149B (en) 2017-09-29 2021-11-03 Bridgeworks Ltd Managing data Compression
CN110852439B (zh) * 2019-11-20 2024-02-02 字节跳动有限公司 Data processing method and apparatus, and storage medium
US11184023B1 (en) * 2020-08-24 2021-11-23 Innogrit Technologies Co., Ltd. Hardware friendly data compression
CN112134568A (zh) * 2020-09-15 2020-12-25 广州市埃信电信有限公司 Lossy data compression and decompression method and system
CN112235828B (zh) * 2020-09-24 2022-06-28 杭州红岭通信息科技有限公司 Data compression method based on the CPRI protocol
CN116346939B (zh) * 2023-03-23 2024-04-02 上海毫微太科技有限公司 Data compression method and apparatus, electronic device and storage medium
CN117097346B (zh) * 2023-10-19 2024-03-19 深圳大普微电子股份有限公司 Decompressor and data decompression method, system, device and computer medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040136566A1 (en) * 2002-11-21 2004-07-15 Samsung Electronics Co., Ltd. Method and apparatus for encrypting and compressing multimedia data
CN101615911A (zh) * 2009-05-12 2009-12-30 华为技术有限公司 Encoding and decoding method and apparatus
CN101771416A (zh) * 2008-12-29 2010-07-07 华为技术有限公司 Bit-plane encoding and decoding method, communication system and related device
CN101980464A (zh) * 2010-09-30 2011-02-23 华为技术有限公司 Data encoding method, decoding method, encoder and decoder

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6931058B1 (en) 2000-05-19 2005-08-16 Scientific-Atlanta, Inc. Method and apparatus for the compression and/or transport and/or decompression of a digital signal
US8301803B2 (en) * 2009-10-23 2012-10-30 Samplify Systems, Inc. Block floating point compression of signal data
CN102065470B (zh) * 2009-11-18 2013-11-06 中兴通讯股份有限公司 Data transmission method, apparatus and distributed base station system
CN102244552A (zh) * 2010-05-13 2011-11-16 中兴通讯股份有限公司 Data transmitting and receiving method and apparatus
CN102075467B (zh) * 2010-12-17 2014-10-22 中兴通讯股份有限公司 In-phase/quadrature (IQ) signal data compression method and apparatus
US8923386B2 (en) * 2011-02-11 2014-12-30 Alcatel Lucent Method and apparatus for signal compression and decompression

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040136566A1 (en) * 2002-11-21 2004-07-15 Samsung Electronics Co., Ltd. Method and apparatus for encrypting and compressing multimedia data
CN101771416A (zh) * 2008-12-29 2010-07-07 华为技术有限公司 Bit-plane encoding and decoding method, communication system and related device
CN101615911A (zh) * 2009-05-12 2009-12-30 华为技术有限公司 Encoding and decoding method and apparatus
CN101980464A (zh) * 2010-09-30 2011-02-23 华为技术有限公司 Data encoding method, decoding method, encoder and decoder

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015185058A1 (de) * 2014-06-05 2015-12-10 Conti Temic Microelectronic Gmbh Radar system with optimized storage of intermediate data
CN106461758A (zh) * 2014-06-05 2017-02-22 康蒂-特米克微电子有限公司 Radar system with optimized intermediate data storage
US10520584B2 (en) 2014-06-05 2019-12-31 Continental Automotive Systems, Inc. Radar system with optimized storage of temporary data
CN106461758B (zh) * 2014-06-05 2020-01-21 康蒂-特米克微电子有限公司 Radar system with optimized intermediate data storage

Also Published As

Publication number Publication date
CN103634273A (zh) 2014-03-12
EP2890076A1 (en) 2015-07-01
EP2890076B1 (en) 2020-02-12
US20150295652A1 (en) 2015-10-15
US9515737B2 (en) 2016-12-06
EP2890076A4 (en) 2015-10-28

Similar Documents

Publication Publication Date Title
WO2014029260A1 (zh) Data compression, transmission and decompression method and device
US9794828B2 (en) Radio unit, baseband processing unit and base station system
JP6905066B2 (ja) Encoding and decoding method and device
Nanba et al. A new IQ data compression scheme for front-haul link in centralized RAN
CN102075467B (zh) In-phase/quadrature (IQ) signal data compression method and apparatus
US10230394B2 (en) Methods for compressing and decompressing IQ data, and associated devices
WO2012094517A1 (en) Frequency domain compression in a base transceiver system
CN103684680A (zh) Decoding encoded data blocks
WO2016095577A1 (zh) Data transmission method and device
US20170078916A1 (en) Data processing method and apparatus
CN107517503B (zh) Processing apparatus, BBU, RRU and antenna calibration method
WO2012155614A1 (zh) Data compression and decompression method, apparatus and system in a wireless communication system
US11050510B2 (en) Polar code transmission method and apparatus
EP3641173A1 (en) Polar code encoding and decoding method and device
WO2015094257A1 (en) Apparatus, system and method of communicating scrambled transmissions according to a retransmission scheme
CN102821072A (zh) In-phase/quadrature (IQ) signal data transmitting and receiving method, system and apparatus
CN110635867A (zh) Communication method, network device and terminal
WO2014136193A1 (ja) Base station apparatus, base station system and IQ data compression method
CN107493257B (zh) Frame data compression and transmission method and apparatus
CN105846828A (zh) IQ data compression and decompression method and apparatus, and IQ data transmission method and system
KR101869903B1 (ko) Physical layer data transmission method and data transmission apparatus
CN102821489A (zh) Base station and base-station-side data compression method
CN107615810B (zh) Header compression system and method for online network codes
CN114009080A (zh) Method, device and computer-readable medium for channel state information transmission
CN104868942A (zh) Communication device and communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13831399

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14422896

Country of ref document: US

Ref document number: 2013831399

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE