CN104918259A - Cache data scheduling method and device - Google Patents

Cache data scheduling method and device

Info

Publication number
CN104918259A
Authority
CN
China
Prior art keywords
sub
data segments
data
buffer unit
data segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510287722.4A
Other languages
Chinese (zh)
Other versions
CN104918259B (en)
Inventor
宋俊存
王一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201510287722.4A priority Critical patent/CN104918259B/en
Publication of CN104918259A publication Critical patent/CN104918259A/en
Application granted granted Critical
Publication of CN104918259B publication Critical patent/CN104918259B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W16/00: Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/02: Resource partitioning among network components, e.g. reuse partitioning
    • H04W16/12: Fixed resource partitioning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/90: Buffering arrangements
    • H04L49/9042: Separate storage for different parts of the packet, e.g. header and payload
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W52/00: Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02: Power saving arrangements
    • H04W52/0203: Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W52/0206: Power saving arrangements in the radio access network or backbone network of wireless communication networks in access points, e.g. base stations
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a cache data scheduling method and device. A bit-level processing accelerator splits one data segment into at least three sub-data segments according to the storage granularity of a buffer unit; the bit-level processing accelerator writes the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit in sequence; and after a symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, the bit-level processing accelerator writes the third sub-data segment of the at least three sub-data segments into the buffer unit, and so on until all sub-data segments of the at least three sub-data segments have been written into the buffer unit. As LTE chip specifications grow ever higher and cell specifications and layer counts increase proportionally, the scheme provides good scalability; and because the capacity of the buffer unit need not be continually expanded to provide that scalability, the power consumption of the buffer unit is reduced.

Description

Cache data scheduling method and device
Technical field
The present invention relates to communications technologies, and in particular, to a cache data scheduling method and device.
Background
In the downlink of Long Term Evolution (LTE) on the base station side, data needs to be converted from a bit (BIT)-level processing accelerator to a symbol-level processing accelerator. The prior art establishes a Transmission Time Interval (TTI)-level ping-pong buffer unit between the bit-level processing accelerator and the symbol-level processing accelerator (reading "pong" while writing "ping", and reading "ping" while writing "pong") to implement the interleaved storage conversion of data from bit-level processing to symbol-level processing. Here, interleaving means writing data into the TTI-level ping-pong buffer unit at codeword granularity, where each codeword contains all time-domain data of its frequency domain, and reading data out of the TTI-level ping-pong buffer unit at symbol granularity.
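As a rough illustration of this prior-art mechanism (not the patent's own code; all names here are invented for the sketch), a TTI-level ping-pong buffer alternates two banks so that the bit-level side fills one bank while the symbol-level side drains the other:

```python
class PingPongBuffer:
    """Two banks: the writer fills one TTI while the reader drains the other."""

    def __init__(self):
        self.banks = [None, None]  # bank 0 = "ping", bank 1 = "pong"
        self.write_bank = 0

    def write_tti(self, data):
        # Bit-level side: write one full TTI of data at codeword granularity.
        self.banks[self.write_bank] = data

    def swap(self):
        # At each TTI boundary the roles of ping and pong are exchanged.
        self.write_bank ^= 1

    def read_tti(self):
        # Symbol-level side: read the bank the writer is NOT currently filling.
        return self.banks[self.write_bank ^ 1]


buf = PingPongBuffer()
buf.write_tti("TTI-0 codewords")   # writer fills ping
buf.swap()
buf.write_tti("TTI-1 codewords")   # writer fills pong ...
assert buf.read_tti() == "TTI-0 codewords"  # ... while the reader drains ping
```

The cost of this scheme is that each bank must hold a full TTI of data, which is exactly the capacity pressure the background section describes next.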
Further, with this implementation each codeword needs to be scheduled only once, and the bit-level processing accelerator processes the data of each codeword only once per TTI, so this implementation places relatively low demands on the processing capability of the bit-level processing accelerator.
However, as cell specifications and layer counts keep rising, the prior-art practice of writing data into the TTI-level ping-pong buffer unit at codeword granularity forces the storage capacity of the TTI-level ping-pong buffer unit to keep expanding. Taking 6 cells with 20 MHz bandwidth and 8 antennas as an example, scheduling at TTI granularity requires a storage capacity of 2 (ping-pong) × 6 (cells) × 13 (symbols) × 100 (RBs) × 12 (REs) × 4 (layers) × 6 (bits for 64QAM) = 4.15 Mbit, which is clearly a very large storage overhead. Existing cache data scheduling mechanisms therefore drive up the power consumption of the system's buffer unit.
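The capacity figure can be checked with a quick product (an illustrative check only; note that the seven factors as listed multiply to 4,492,800 bits ≈ 4.49 Mbit, while the quoted 4.15 Mbit corresponds exactly to 12 symbols per TTI, so the two printed numbers appear slightly inconsistent with each other):

```python
def tti_capacity_bits(symbols_per_tti):
    """Storage needed for TTI-granularity ping-pong scheduling, in bits."""
    ping_pong = 2      # two banks (ping and pong)
    cells = 6          # 6 cells
    rbs = 100          # resource blocks per 20 MHz carrier
    res = 12           # resource elements per RB
    layers = 4         # spatial layers
    bits_per_re = 6    # 64QAM carries 6 bits per resource element
    return ping_pong * cells * symbols_per_tti * rbs * res * layers * bits_per_re

print(tti_capacity_bits(13))  # 4492800 bits, about 4.49 Mbit
print(tti_capacity_bits(12))  # 4147200 bits, exactly the quoted 4.15 Mbit
```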
Summary of the invention
The invention provides a cache data scheduling method and device for reducing the power consumption of a buffer unit.
A first aspect of the present invention provides a cache data scheduling method, including:
splitting, by a bit-level processing accelerator, one data segment into at least three sub-data segments according to the storage granularity of a buffer unit;
where the range of choices for the storage granularity of the buffer unit includes any one of the following: a time slot, or at least one symbol; and
writing, by the bit-level processing accelerator, the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit in sequence; and after a symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, writing, by the bit-level processing accelerator, the third sub-data segment of the at least three sub-data segments into the buffer unit, and so on until all sub-data segments of the at least three sub-data segments have been written into the buffer unit.
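The write-two-ahead discipline of the first aspect can be sketched as a small producer/consumer loop (an illustrative sketch only; the queue depth of 2 models the two sub-data segments held in the buffer unit, and all names are invented):

```python
from collections import deque

def schedule(sub_segments, reads):
    """Keep at most two sub-data segments buffered: write segments 1 and 2,
    then write segment k+2 only after segment k has been read out."""
    buffer_unit = deque()          # models the variable-granularity buffer unit
    pending = list(sub_segments)   # sub-data segments the bit-level side still holds
    log = []

    # Bit-level accelerator writes the first two sub-data segments in sequence.
    while pending and len(buffer_unit) < 2:
        seg = pending.pop(0)
        buffer_unit.append(seg)
        log.append(("write", seg))

    # Each read by the symbol-level accelerator frees room for the next write.
    for _ in range(reads):
        if not buffer_unit:
            break
        log.append(("read", buffer_unit.popleft()))
        if pending:
            seg = pending.pop(0)
            buffer_unit.append(seg)
            log.append(("write", seg))
    return log

events = schedule(["S1", "S2", "S3"], reads=3)
# Order: write S1, write S2, read S1, write S3, read S2, read S3
```

Because only two sub-data segments are ever resident, the buffer unit's capacity is bounded by two granularity units rather than by a full TTI per bank.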
With reference to the first aspect, in a first possible implementation of the first aspect, the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the buffer unit includes:
splitting, by the bit-level processing accelerator, one data segment into three sub-data segments according to the storage granularity of the buffer unit, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
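Splitting a data segment by storage granularity amounts to chunking it so that no sub-data segment exceeds the chosen granularity (an assumed sketch; the patent does not prescribe this exact chunking rule):

```python
def split_by_granularity(data_segment, granularity):
    """Chunk a data segment so each sub-data segment occupies at most
    `granularity` units of buffer storage."""
    return [data_segment[i:i + granularity]
            for i in range(0, len(data_segment), granularity)]

# A 12-unit data segment split at an assumed slot granularity of 4 units
# yields three sub-data segments, each no larger than one slot:
subs = split_by_granularity(list(range(12)), granularity=4)
assert len(subs) == 3 and all(len(s) <= 4 for s in subs)
```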
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, before the bit-level processing accelerator splits one data segment into at least three sub-data segments according to the storage granularity of the buffer unit, the method further includes:
obtaining, by the bit-level processing accelerator, the output length of the first sub-data segment of the at least three sub-data segments;
and the writing, by the bit-level processing accelerator, of the first and second sub-data segments of the at least three sub-data segments into the buffer unit in sequence, and, after the symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, the writing of the third sub-data segment into the buffer unit until all sub-data segments of the at least three sub-data segments have been written into the buffer unit, includes:
writing, by the bit-level processing accelerator, the first and second sub-data segments of the at least three sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processing accelerator reads the first sub-data segment of the at least three sub-data segments out of the buffer unit, obtaining, by the bit-level processing accelerator, the output length and start position of the third sub-data segment of the at least three sub-data segments; and
writing, by the bit-level processing accelerator, the third sub-data segment of the at least three sub-data segments into the buffer unit according to its output length and start position, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit.
With reference to the first aspect, in a third possible implementation of the first aspect, the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the buffer unit includes:
splitting, by the bit-level processing accelerator, one data segment into at least four sub-data segments according to the storage granularity of the buffer unit, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, before the bit-level processing accelerator splits one data segment into at least four sub-data segments according to the storage granularity of the buffer unit, the method further includes:
obtaining, by the bit-level processing accelerator, the output length of the first sub-data segment of the at least four sub-data segments;
and the writing, by the bit-level processing accelerator, of two of the sub-data segments into the buffer unit in sequence, and, after the symbol-level processing accelerator reads one of the two sub-data segments out of the buffer unit, the writing of the next sub-data segment into the buffer unit until all sub-data segments have been written into the buffer unit, includes:
writing, by the bit-level processing accelerator, the first and second sub-data segments of the at least four sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processing accelerator reads the first sub-data segment of the at least four sub-data segments out of the buffer unit, obtaining, by the bit-level processing accelerator, the output length and start position of the third sub-data segment of the at least four sub-data segments; and
writing, by the bit-level processing accelerator, the third sub-data segment into the buffer unit according to its output length and start position, until all sub-data segments of the at least four sub-data segments have been written into the buffer unit.
A second aspect of the present invention provides a cache data scheduling device, including: a bit-level processing accelerator, a buffer unit, and a symbol-level processing accelerator;
the bit-level processing accelerator is configured to split one data segment into at least three sub-data segments according to the storage granularity of the buffer unit; write the first and second sub-data segments of the at least three sub-data segments into the buffer unit in sequence; and, after the symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, write the third sub-data segment into the buffer unit, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit;
the range of choices for the storage granularity of the buffer unit includes any one of the following: a time slot, or at least one symbol;
the buffer unit is configured to cache all sub-data segments of the at least three sub-data segments; and
the symbol-level processing accelerator is configured to read all sub-data segments of the at least three sub-data segments out of the buffer unit in sequence.
With reference to the second aspect, in a first possible implementation of the second aspect, the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the buffer unit specifically includes:
splitting one data segment into three sub-data segments according to the storage granularity of the buffer unit, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, before splitting one data segment into at least three sub-data segments according to the storage granularity of the buffer unit, the bit-level processing accelerator is further configured to obtain the output length of the first sub-data segment of the at least three sub-data segments;
and the writing, by the bit-level processing accelerator, of the first and second sub-data segments of the at least three sub-data segments into the buffer unit in sequence, and, after the symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, the writing of the third sub-data segment into the buffer unit until all sub-data segments of the at least three sub-data segments have been written into the buffer unit, specifically includes:
writing the first and second sub-data segments of the at least three sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processing accelerator reads the first sub-data segment of the at least three sub-data segments out of the buffer unit, obtaining the output length and start position of the third sub-data segment of the at least three sub-data segments; and
writing the third sub-data segment into the buffer unit according to its output length and start position, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit.
With reference to the second aspect, in a third possible implementation of the second aspect, the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the buffer unit specifically includes:
splitting one data segment into at least four sub-data segments according to the storage granularity of the buffer unit, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, before splitting one data segment into at least four sub-data segments according to the storage granularity of the buffer unit, the bit-level processing accelerator is further configured to:
obtain the output length of the first sub-data segment of the at least four sub-data segments;
and the writing, by the bit-level processing accelerator, of two of the sub-data segments into the buffer unit in sequence, and, after the symbol-level processing accelerator reads one of the two sub-data segments out of the buffer unit, the writing of the next sub-data segment into the buffer unit until all sub-data segments have been written into the buffer unit, specifically includes:
writing the first and second sub-data segments of the at least four sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processing accelerator reads the first sub-data segment of the at least four sub-data segments out of the buffer unit, obtaining the output length and start position of the third sub-data segment of the at least four sub-data segments; and
writing the third sub-data segment into the buffer unit according to its output length and start position, until all sub-data segments of the at least four sub-data segments have been written into the buffer unit.
In the cache data scheduling method and device provided by the invention, the bit-level processing accelerator splits one data segment into at least three sub-data segments according to the storage granularity of the buffer unit; the bit-level processing accelerator writes the first and second sub-data segments of the at least three sub-data segments into the buffer unit in sequence; after the symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, the bit-level processing accelerator writes the third sub-data segment into the buffer unit, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit; the buffer unit caches all sub-data segments of the at least three sub-data segments; and the symbol-level processing accelerator reads all sub-data segments out of the buffer unit in sequence. As LTE chip specifications grow ever higher and cell specifications and layer counts increase proportionally, the scheme provides good scalability; and because the capacity of the buffer unit need not be continually expanded to provide that scalability, the power consumption of the buffer unit is reduced.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a cache data scheduling device according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a cache data scheduling method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another cache data scheduling method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of splitting a data segment according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of splitting a data segment into two sub-data segments according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of another cache data scheduling method according to an embodiment of the present invention.
Description of embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some but not all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of a cache data scheduling device according to an embodiment of the present invention. Referring to Fig. 1, the device may typically be deployed in a base station, an evolved base station (evolved Node B, eNB for short), a relay device, or similar equipment. The cache data scheduling device 10 includes: a bit-level processing accelerator 10-1, a buffer unit 10-2, and a symbol-level processing accelerator 10-3;
The bit-level processing accelerator 10-1 is configured to split one data segment into at least three sub-data segments according to the storage granularity of the buffer unit 10-2; write the first and second sub-data segments of the at least three sub-data segments into the buffer unit 10-2 in sequence; and, after the symbol-level processing accelerator 10-3 reads the first of these two sub-data segments out of the buffer unit 10-2, write the third sub-data segment into the buffer unit 10-2, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit 10-2;
The range of choices for the storage granularity of the buffer unit 10-2 includes any one of the following: a time slot, or at least one symbol;
The buffer unit 10-2 is configured to cache all sub-data segments of the at least three sub-data segments;
Specifically, the buffer unit 10-2 is a ping-pong buffer unit arranged between the bit-level processing accelerator 10-1 and the symbol-level processing accelerator 10-3, and its storage granularity is variable;
The symbol-level processing accelerator 10-3 is configured to read all sub-data segments of the at least three sub-data segments out of the buffer unit 10-2 in sequence.
In the cache data scheduling device provided by this embodiment of the present invention, the bit-level processing accelerator splits one data segment into at least three sub-data segments according to the storage granularity of the buffer unit; the bit-level processing accelerator writes the first and second sub-data segments of the at least three sub-data segments into the buffer unit in sequence; after the symbol-level processing accelerator reads the first of these two sub-data segments out of the buffer unit, the bit-level processing accelerator writes the third sub-data segment into the buffer unit, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit; the buffer unit caches all sub-data segments of the at least three sub-data segments; and the symbol-level processing accelerator reads all sub-data segments out of the buffer unit in sequence. As the specifications of Long Term Evolution (LTE) chips grow ever higher and cell specifications and layer counts increase proportionally, the device provides good scalability; and because the capacity of the buffer unit need not be continually expanded to provide that scalability, the power consumption of the buffer unit is reduced.
Further, the write operations of the bit-level processing accelerator 10-1 and the read operations of the symbol-level processing accelerator 10-3 adopt a staggered read-write mechanism: the bit-level processing accelerator 10-1 writes sub-data segment A and sub-data segment B; after the symbol-level processing accelerator 10-3 reads sub-data segment A, the bit-level processing accelerator 10-1 writes sub-data segment C;
Optionally, the variable granularity may specifically be set to: a Transmission Time Interval (TTI), a time slot (SLOT, 0.5 ms), or at least 1 symbol (1 TTI contains 14 symbols);
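To see why a finer, variable granularity saves buffer capacity, the per-granularity ping-pong capacity for the 6-cell example can be compared (a sketch under the same assumed factors as the background example; the slot and single-symbol figures are derived here and are not quoted by the patent):

```python
def pingpong_capacity_bits(symbols_buffered):
    """Ping-pong capacity for the assumed 6-cell / 20 MHz / 4-layer example,
    buffering `symbols_buffered` symbols per bank."""
    cells, rbs, res, layers, bits_per_re = 6, 100, 12, 4, 6
    return 2 * cells * symbols_buffered * rbs * res * layers * bits_per_re

tti_bits = pingpong_capacity_bits(14)   # full TTI (14 symbols)
slot_bits = pingpong_capacity_bits(7)   # one slot (0.5 ms, 7 symbols)
sym_bits = pingpong_capacity_bits(1)    # single-symbol granularity

# Slot granularity halves the capacity; symbol granularity divides it by 14.
assert slot_bits == tti_bits // 2 and sym_bits == tti_bits // 14
```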
Optionally, the splitting, by the bit-level processing accelerator 10-1, of one data segment into at least three sub-data segments according to the storage granularity of the buffer unit 10-2 specifically includes:
splitting one data segment into three sub-data segments according to the storage granularity of the buffer unit 10-2, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
Further, before splitting one data segment into at least three sub-data segments according to the storage granularity of the buffer unit 10-2, the bit-level processing accelerator 10-1 is further configured to obtain the output length of the first sub-data segment of the at least three sub-data segments;
Accordingly, the writing, by the bit-level processing accelerator 10-1, of the first and second sub-data segments of the at least three sub-data segments into the buffer unit 10-2 in sequence, and, after the symbol-level processing accelerator 10-3 reads the first of these two sub-data segments out of the buffer unit 10-2, the writing of the third sub-data segment into the buffer unit 10-2 until all sub-data segments of the at least three sub-data segments have been written into the buffer unit 10-2, specifically includes:
writing the first and second sub-data segments of the at least three sub-data segments into the buffer unit 10-2 according to the output length of the first sub-data segment;
after the symbol-level processing accelerator 10-3 reads the first sub-data segment of the at least three sub-data segments out of the buffer unit 10-2, obtaining the output length and start position of the third sub-data segment of the at least three sub-data segments; and
writing the third sub-data segment into the buffer unit 10-2 according to its output length and start position, until all sub-data segments of the at least three sub-data segments have been written into the buffer unit 10-2.
Optionally, the splitting, by the bit-level processing accelerator 10-1, of one data segment into at least three sub-data segments according to the storage granularity of the buffer unit 10-2 specifically includes:
splitting one data segment into at least four sub-data segments according to the storage granularity of the buffer unit 10-2, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
Further, before splitting one data segment into at least four sub-data segments according to the storage granularity of the buffer unit 10-2, the bit-level processing accelerator 10-1 is further configured to:
obtain the output length of the first sub-data segment of the at least four sub-data segments;
Accordingly, the writing, by the bit-level processing accelerator 10-1, of two of the sub-data segments into the buffer unit 10-2 in sequence, and, after the symbol-level processing accelerator 10-3 reads one of the two sub-data segments out of the buffer unit 10-2, the writing of the next sub-data segment into the buffer unit 10-2 until all sub-data segments have been written into the buffer unit 10-2, specifically includes:
writing the first and second sub-data segments of the at least four sub-data segments into the buffer unit 10-2 according to the output length of the first sub-data segment;
after the symbol-level processing accelerator 10-3 reads the first sub-data segment of the at least four sub-data segments out of the buffer unit 10-2, obtaining the output length and start position of the third sub-data segment of the at least four sub-data segments; and
writing the third sub-data segment into the buffer unit 10-2 according to its output length and start position, until all sub-data segments of the at least four sub-data segments have been written into the buffer unit 10-2.
In the prior art, by contrast, the buffer unit arranged between the bit-level processor accelerator and the symbol-level processor accelerator can only use the TTI as its storage granularity. As LTE chip specifications grow ever higher, with cell specifications and the number of layers increasing proportionally, this fixed storage granularity forces the buffer unit's resources, and hence its power consumption, to keep rising. The cache data scheduling device provided by this embodiment of the present invention offers a variable storage granularity, thereby reducing the power consumption of the storage unit while improving the scalability of a system equipped with this device.
Fig. 2 is a schematic flowchart of a cache data scheduling method provided by an embodiment of the present invention. The method is executed by the cache data scheduling device shown in Fig. 1. With reference to Fig. 2, the method comprises:
Step 101: the bit-level processor accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the buffer unit.
Specifically, the selectable range of the storage granularity of the buffer unit comprises any one of the following: a time slot, or at least one symbol.
Step 102: the bit-level processor accelerator writes the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit in sequence; after the symbol-level processor accelerator reads the first of these two sub-data segments from the buffer unit, the bit-level processor accelerator writes the third sub-data segment of the at least three sub-data segments into the buffer unit, and so on until all of the at least three sub-data segments have been written into the buffer unit.
In the cache data scheduling method provided by this embodiment of the present invention, the bit-level processor accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the buffer unit, and this storage granularity may be selected as a time slot or as at least one symbol, so data segments can be split at different granularities. The bit-level processor accelerator writes the first and second sub-data segments into the buffer unit in sequence and, after the symbol-level processor accelerator reads the first of them from the buffer unit, writes the third sub-data segment, until all of the sub-data segments have been written into the buffer unit. The method thereby adapts to ever-higher LTE chip specifications, with cell specifications and the number of layers increasing proportionally, and provides good scalability; because scalability no longer requires constantly expanding the capacity of the buffer unit, the power consumption of the buffer unit is reduced.
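The write-two-then-refill hand-off of steps 101 and 102 can be sketched as a small model. This is illustrative only, not part of the patent: `write` and `read` stand in for the bit-level accelerator's write into the buffer unit and the symbol-level accelerator's read from it.

```python
from collections import deque

def schedule(segment, granularity, write, read):
    """Split `segment` into sub-segments of at most `granularity` symbols,
    pre-write the first two into the buffer, then write the next sub-segment
    each time the reader frees a slot (the write-two-then-refill hand-off)."""
    pending = deque(segment[i:i + granularity]
                    for i in range(0, len(segment), granularity))
    in_buffer = deque()
    # Step 102 begins with the first two sub-segments written up front.
    for _ in range(min(2, len(pending))):
        sub = pending.popleft()
        write(sub)
        in_buffer.append(sub)
    consumed = []
    # Each read by the symbol-level side frees room for one more write.
    while in_buffer:
        consumed.append(read(in_buffer.popleft()))
        if pending:
            sub = pending.popleft()
            write(sub)
            in_buffer.append(sub)
    return consumed
```

With 13 symbols and a granularity of 4, the model produces the four sub-segments of the later Table 1 style split, consumed in order while at most two sit in the buffer at once.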
Building on Fig. 2, Fig. 3 is a schematic flowchart of another cache data scheduling method provided by an embodiment of the present invention. With reference to Fig. 3, one possible implementation of step 101 is:
Step 101a: the bit-level processor accelerator splits a data segment into three sub-data segments according to the storage granularity of the buffer unit, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
Further, with reference to Fig. 3, before step 101a the method also comprises:
Step 100a: the bit-level processor accelerator obtains the output length of the first sub-data segment of the at least three sub-data segments.
One possible implementation of step 102 is:
Step 102a: the bit-level processor accelerator writes the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit according to the output length of the first sub-data segment.
Step 102b: after the symbol-level processor accelerator reads the first sub-data segment from the buffer unit, the bit-level processor accelerator obtains the output length and the start position of the third sub-data segment of the at least three sub-data segments.
Step 102c: the bit-level processor accelerator writes the third sub-data segment into the buffer unit according to its output length and start position, until all of the at least three sub-data segments have been written into the buffer unit.
Analysis of the protocol shows that the bits of one code word are mapped onto the time domain from front to back, so each segment of a code word can be output at a granularity that follows this time sequence. Take as an example the case where, within 1 TTI, the Physical Downlink Control Channel (PDCCH) occupies 1 symbol (so the Physical Downlink Shared Channel (PDSCH) starts from S2), the PDSCH occupies 13 symbols, the data segment is divided into 2 segments, and the storage granularity of each sub-data segment is a time slot (SLOT). Fig. 4 is a schematic diagram of splitting a data segment provided by an embodiment of the present invention; with reference to Fig. 4, the data segment of one TTI is split into two sub-data segments as shown.
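Assuming the S1~S14 symbol numbering implied by the text (PDCCH on S1, PDSCH on S2~S14, 7 symbols per slot — an inference, since the figure itself is not reproduced here), the slot-granularity split of Fig. 4 can be sketched as:

```python
def split_by_slot(pdsch_start=2, last_symbol=14, symbols_per_slot=7):
    """Split the PDSCH symbols of one TTI at the slot boundary.
    Symbols are assumed numbered S1..S14: SLOT 0 covers S1..S7,
    SLOT 1 covers S8..S14."""
    boundary = symbols_per_slot  # last symbol index belonging to SLOT 0
    slot0 = list(range(pdsch_start, boundary + 1))      # PDSCH part of SLOT 0
    slot1 = list(range(boundary + 1, last_symbol + 1))  # all of SLOT 1
    return slot0, slot1
```

The first sub-data segment then holds the 6 PDSCH symbols of SLOT 0 (S2~S7) and the second holds the 7 symbols of SLOT 1 (S8~S14), together covering the 13 PDSCH symbols.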
According to the protocol, a digital signal processor (DSP) can calculate the data length that each data segment requires for quadrature amplitude modulation (QAM) in the two time slots (SLOT 0, SLOT 1), schedule tasks per time slot, and output the SLOT 0 and SLOT 1 sub-data segments through two task schedulings, thereby implementing the ping-pong switching of the bit-to-symbol data buffer (BIT To Symbol Data Buffer PING/PONG Buffer, B2S DB PING/PONG Buffer for short). For 6 cells at 20 MHz with 8 antennas, scheduling with a split into 2 segments requires a buffer unit of size: 2 (ping-pong) × 6 (cells) × 7 (symbols) × 100 (RB) × 12 (RE) × 4 (layers) × 6 (64QAM) = 2.42 Mbit. In the prior art, with an invariable storage granularity, i.e. scheduling at TTI granularity, the buffer unit requires a storage capacity of: 2 (ping-pong) × 6 (cells) × 13 (symbols) × 100 (RB) × 12 (RE) × 4 (layers) × 6 (64QAM) = 4.15 Mbit. By comparison, the cache data scheduling method provided by this embodiment of the present invention saves 42% of the storage resources.
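The storage-size arithmetic above can be checked with a short helper; the function and parameter names are illustrative (not from the patent), and the factors follow the formula quoted in the text:

```python
def pingpong_buffer_bits(cells, symbols, rbs=100, res_per_rb=12,
                         layers=4, bits_per_re=6):
    """Size in bits of the ping-pong (x2) bit-to-symbol buffer, per the
    factors quoted in the text: 2 x cells x symbols x RB x RE x layers x
    bits per RE (6 for 64QAM). Parameter names are illustrative."""
    return 2 * cells * symbols * rbs * res_per_rb * layers * bits_per_re

# 6 cells, slot granularity (7 symbols per sub-segment):
slot_bits = pingpong_buffer_bits(cells=6, symbols=7)  # 2,419,200 bits ~ 2.42 Mbit
# 6 cells, four-segment split (4 symbols per sub-segment, cf. Table 1 below):
quad_bits = pingpong_buffer_bits(cells=6, symbols=4)  # 1,382,400 bits ~ 1.38 Mbit
```

The 7-symbol case reproduces the 2.42 Mbit figure above, and the 4-symbol case reproduces the 1.3824 Mbit figure given later for the four-segment split.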
Further, the above DSP may be connected to the bit-level processor accelerator and the symbol-level processor accelerator through a bus, so as to send tasks to the bit-level processor accelerator and the symbol-level processor accelerator.
Further, for a scenario in which the storage granularity is a time slot, Fig. 5 is a schematic flowchart, provided by an embodiment of the present invention, of splitting a data segment into two sub-data segments. With reference to Fig. 5, the flow comprises the following steps:
Step 10: the common public radio interface (CPRI) clock triggers the DSP to calculate the output length and start position of the first sub-data segment in the first time slot.
Specifically, the DSP issues a task to the bit-level processor accelerator each time the SLOT timing arrives (the CPRI timing triggers once per symbol; the SLOT timing triggers once every 7 symbols).
Step 11: the DSP issues the task to the bit-level processor accelerator.
Specifically, this task carries the output length and start position of the first sub-data segment in the first time slot; the bit-level processor accelerator supports a configurable start position and output length for each code word it outputs. Moreover, the bit-level processor accelerator needs only 0.2 ms to process all the code words in one TTI, so it can support scheduling at a storage granularity smaller than one TTI. This makes full use of the idle processing capacity of the bit-level processor accelerator, saving the storage resources of the buffer unit without wasting the accelerator's logic resources.
Step 12: the bit-level processor accelerator outputs the first bit-level sub-data segment to the buffer unit according to the output length of the first sub-data segment.
Step 13: the DSP determines whether the CPRI timing has arrived.
Specifically, if the CPRI timing has arrived, step 14 is performed; otherwise step 13 is repeated until the CPRI timing arrives.
Step 14: the DSP calculates the output length and start position of the second sub-data segment in the second time slot.
Step 15: the DSP issues a task to the bit-level processor accelerator.
This task carries the output length and start position of the second sub-data segment in the second time slot.
Step 16: the bit-level processor accelerator outputs the second bit-level sub-data segment to the buffer unit according to the output length and start position of the second sub-data segment.
Step 17: the DSP determines whether the CPRI timing has arrived.
Specifically, if the CPRI timing has arrived, step 10 is performed; otherwise step 17 is repeated until the CPRI timing arrives.
Building on Fig. 2, Fig. 6 is a schematic flowchart of another cache data scheduling method provided by an embodiment of the present invention. With reference to Fig. 6, one possible implementation of step 101 is:
Step 101b: the bit-level processor accelerator splits a data segment into at least four sub-data segments according to the storage granularity of the buffer unit, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
Further, with reference to Fig. 6, before step 101b the method also comprises:
Step 100b: the bit-level processor accelerator obtains the output length of the first sub-data segment of the at least four sub-data segments.
One possible implementation of step 102 is:
Step 102d: the bit-level processor accelerator writes the first sub-data segment and the second sub-data segment of the at least four sub-data segments into the buffer unit according to the output length of the first sub-data segment.
Step 102e: after the symbol-level processor accelerator reads the first sub-data segment from the buffer unit, the bit-level processor accelerator obtains the output length and the start position of the third sub-data segment of the at least four sub-data segments.
Step 102f: the bit-level processor accelerator writes the third sub-data segment into the buffer unit according to its output length and start position, until all of the at least four sub-data segments have been written into the buffer unit.
The case in which the bit-level processor accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the buffer unit is described below by splitting one data segment into four sub-data segments. Specifically, suppose that within 1 TTI the PDCCH channel occupies 1 symbol (PDSCH scheduling starts from S2, i.e. from symbol 2) and the PDSCH occupies 13 symbols, divided into 4 segments as shown in Table 1:
Table 1
Implementation Segmentation 0 Segmentation 1 Segmentation 2 Segmentation 3
1 S2~S5 S6~S8 S9~S11 S12~S14
2 S2~S4 S5~S8 S9~S11 S12~S14
3 S2~S4 S5~S7 S8~S11 S12~S14
4 S2~S4 S5~S7 S8~S11 S12~S14
With reference to Table 1, the data segment of one TTI is split into 4 sub-data segments for scheduling, where S2~S14 denote symbol 2 through symbol 14. Preferably, to save the storage resources of the buffer unit, the symbols corresponding to the PDSCH data segment are allocated as evenly as possible among the segments. For 6 cells at 20 MHz with 8 antennas, scheduling with a split into 4 segments requires a buffer size of: 2 (ping-pong) × 6 (cells) × 4 (symbols) × 100 (RB) × 12 (RE) × 4 (layers) × 6 (64QAM) = 1.3824 Mbit, whereas in the prior art, with an invariable storage granularity, i.e. scheduling at TTI granularity, the buffer unit requires a storage capacity of: 2 (ping-pong) × 6 (cells) × 13 (symbols) × 100 (RB) × 12 (RE) × 4 (layers) × 6 (64QAM) = 4.15 Mbit. By comparison, the cache data scheduling method provided by this embodiment of the present invention saves 66% of the storage resources.
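The "as even as possible" allocation (implementation 1 in Table 1) can be reproduced with a small helper. This is an illustrative sketch, not taken from the patent:

```python
def split_even(first, last, parts):
    """Split symbols S<first>..S<last> into `parts` contiguous ranges whose
    sizes differ by at most one symbol; earlier ranges take the extra symbol."""
    total = last - first + 1
    base, extra = divmod(total, parts)
    ranges, start = [], first
    for i in range(parts):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

For S2~S14 in four parts this yields (S2~S5, S6~S8, S9~S11, S12~S14), matching implementation 1 of Table 1; for S3~S14 it yields (S3~S5, S6~S8, S9~S11, S12~S14), matching the Table 2 case discussed next.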
As a further example, according to the processing capability of the bit-level processor accelerator and the load demands of the DSP, this embodiment of the present invention can flexibly split the data segment carried on the PDSCH channel. Suppose that within 1 TTI the PDCCH channel occupies 2 symbols (so the PDSCH starts from S3) and the PDSCH occupies 12 symbols, divided into 4 segments as shown in Table 2:
Table 2
Implementation Segmentation 0 Segmentation 1 Segmentation 2 Segmentation 3
1 S3~S5 S6~S8 S9~S11 S12~S14
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A cache data scheduling method, characterized by comprising:
splitting, by a bit-level processor accelerator, a data segment into at least three sub-data segments according to a storage granularity of a buffer unit, wherein a selectable range of the storage granularity of the buffer unit comprises any one of the following: a time slot, or at least one symbol; and
writing, by the bit-level processor accelerator, a first sub-data segment and a second sub-data segment of the at least three sub-data segments into the buffer unit in sequence, and, after a symbol-level processor accelerator reads the first of these two sub-data segments from the buffer unit, writing, by the bit-level processor accelerator, a third sub-data segment of the at least three sub-data segments into the buffer unit, until all of the at least three sub-data segments have been written into the buffer unit.
2. The method according to claim 1, characterized in that the splitting, by the bit-level processor accelerator, of a data segment into at least three sub-data segments according to the storage granularity of the buffer unit comprises:
splitting, by the bit-level processor accelerator, a data segment into three sub-data segments according to the storage granularity of the buffer unit, wherein the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
3. The method according to claim 2, characterized in that, before the bit-level processor accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the buffer unit, the method further comprises:
obtaining, by the bit-level processor accelerator, an output length of the first sub-data segment of the at least three sub-data segments;
and in that the writing, by the bit-level processor accelerator, of the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit in sequence, and, after the symbol-level processor accelerator reads the first of these two sub-data segments from the buffer unit, the writing of the third sub-data segment of the at least three sub-data segments into the buffer unit, until all of the at least three sub-data segments have been written into the buffer unit, comprises:
writing, by the bit-level processor accelerator, the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processor accelerator reads the first sub-data segment from the buffer unit, obtaining, by the bit-level processor accelerator, an output length and a start position of the third sub-data segment of the at least three sub-data segments; and
writing, by the bit-level processor accelerator, the third sub-data segment into the buffer unit according to its output length and start position, until all of the at least three sub-data segments have been written into the buffer unit.
4. The method according to claim 1, characterized in that the splitting, by the bit-level processor accelerator, of a data segment into at least three sub-data segments according to the storage granularity of the buffer unit comprises:
splitting, by the bit-level processor accelerator, a data segment into at least four sub-data segments according to the storage granularity of the buffer unit, wherein the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
5. The method according to claim 4, characterized in that, before the bit-level processor accelerator splits a data segment into at least four sub-data segments according to the storage granularity of the buffer unit, the method further comprises:
obtaining, by the bit-level processor accelerator, an output length of the first sub-data segment of the at least four sub-data segments;
and in that the writing, by the bit-level processor accelerator, of two of the sub-data segments into the buffer unit in sequence, and, after the symbol-level processor accelerator reads one of those two sub-data segments from the buffer unit, the writing of the next sub-data segment into the buffer unit, until all of the at least four sub-data segments have been written into the buffer unit, comprises:
writing, by the bit-level processor accelerator, the first sub-data segment and the second sub-data segment of the at least four sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processor accelerator reads the first sub-data segment from the buffer unit, obtaining, by the bit-level processor accelerator, an output length and a start position of the third sub-data segment of the at least four sub-data segments; and
writing, by the bit-level processor accelerator, the third sub-data segment into the buffer unit according to its output length and start position, until all of the at least four sub-data segments have been written into the buffer unit.
6. A cache data scheduling device, characterized by comprising: a bit-level processor accelerator, a buffer unit, and a symbol-level processor accelerator, wherein:
the bit-level processor accelerator is configured to split a data segment into at least three sub-data segments according to a storage granularity of the buffer unit; to write a first sub-data segment and a second sub-data segment of the at least three sub-data segments into the buffer unit in sequence; and, after the symbol-level processor accelerator reads the first of these two sub-data segments from the buffer unit, to write a third sub-data segment of the at least three sub-data segments into the buffer unit, until all of the at least three sub-data segments have been written into the buffer unit;
a selectable range of the storage granularity of the buffer unit comprises any one of the following: a time slot, or at least one symbol;
the buffer unit is configured to buffer all of the at least three sub-data segments; and
the symbol-level processor accelerator is configured to read all of the at least three sub-data segments from the buffer unit in sequence.
7. The cache data scheduling device according to claim 6, characterized in that the splitting, by the bit-level processor accelerator, of a data segment into at least three sub-data segments according to the storage granularity of the buffer unit specifically comprises:
splitting a data segment into three sub-data segments according to the storage granularity of the buffer unit, wherein the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
8. The cache data scheduling device according to claim 7, characterized in that, before splitting a data segment into at least three sub-data segments according to the storage granularity of the buffer unit, the bit-level processor accelerator is further configured to obtain an output length of the first sub-data segment of the at least three sub-data segments;
and in that the writing, by the bit-level processor accelerator, of the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit in sequence, and, after the symbol-level processor accelerator reads the first of these two sub-data segments from the buffer unit, the writing of the third sub-data segment into the buffer unit, until all of the at least three sub-data segments have been written into the buffer unit, specifically comprises:
writing the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processor accelerator reads the first sub-data segment from the buffer unit, obtaining an output length and a start position of the third sub-data segment of the at least three sub-data segments; and
writing the third sub-data segment into the buffer unit according to its output length and start position, until all of the at least three sub-data segments have been written into the buffer unit.
9. The cache data scheduling device according to claim 6, characterized in that the splitting, by the bit-level processor accelerator, of a data segment into at least three sub-data segments according to the storage granularity of the buffer unit specifically comprises:
splitting a data segment into at least four sub-data segments according to the storage granularity of the buffer unit, wherein the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
10. The cache data scheduling device according to claim 9, characterized in that, before splitting a data segment into at least four sub-data segments according to the storage granularity of the buffer unit, the bit-level processor accelerator is further configured to obtain an output length of the first sub-data segment of the at least four sub-data segments;
and in that the writing, by the bit-level processor accelerator, of two of the sub-data segments into the buffer unit in sequence, and, after the symbol-level processor accelerator reads one of those two sub-data segments from the buffer unit, the writing of the next sub-data segment into the buffer unit, until all of the at least four sub-data segments have been written into the buffer unit, specifically comprises:
writing the first sub-data segment and the second sub-data segment of the at least four sub-data segments into the buffer unit according to the output length of the first sub-data segment;
after the symbol-level processor accelerator reads the first sub-data segment from the buffer unit, obtaining an output length and a start position of the third sub-data segment of the at least four sub-data segments; and
writing the third sub-data segment into the buffer unit according to its output length and start position, until all of the at least four sub-data segments have been written into the buffer unit.