CN104918259B - Cached-data scheduling method and device - Google Patents
- Publication number
- CN104918259B CN104918259B CN201510287722.4A CN201510287722A CN104918259B CN 104918259 B CN104918259 B CN 104918259B CN 201510287722 A CN201510287722 A CN 201510287722A CN 104918259 B CN104918259 B CN 104918259B
- Authority
- CN
- China
- Prior art keywords
- section
- sub
- data segment
- subdatas
- cache unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W16/00—Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
- H04W16/02—Resource partitioning among network components, e.g. reuse partitioning
- H04W16/12—Fixed resource partitioning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9042—Separate storage for different parts of the packet, e.g. header and payload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W52/00—Power management, e.g. TPC [Transmission Power Control], power saving or power classes
- H04W52/02—Power saving arrangements
- H04W52/0203—Power saving arrangements in the radio access network or backbone network of wireless communication networks
- H04W52/0206—Power saving arrangements in the radio access network or backbone network of wireless communication networks in access points, e.g. base stations
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The present invention provides a cached-data scheduling method and device. A bit-level processing accelerator splits a data segment into at least three sub-data segments according to the storage granularity of a cache unit; it then successively writes the first and second of those sub-data segments into the cache unit; after a symbol-level processing accelerator has read the first sub-data segment from the cache unit, the bit-level processing accelerator writes the third sub-data segment, and so on until all of the sub-data segments have been written into the cache unit. The scheme adapts to ever-higher LTE chip specifications, where cell specifications and the number of layers grow proportionally, while offering good scalability; because the cache unit's capacity need not be extended continually to provide that scalability, the cache unit's power consumption is reduced.
Description
Technical field
The present invention relates to communication technologies, and in particular to a cached-data scheduling method and device.
Background technique
In the downlink at the base-station side of Long Term Evolution (LTE), data must be converted from a bit-level processing accelerator to a symbol-level processing accelerator. In the prior art, a Transmission Time Interval (TTI)-level ping-pong buffer unit is inserted between the bit-level processing accelerator and the symbol-level processing accelerator (the "pong" half is read while the "ping" half is written, and vice versa) to realize the storage interleaving that converts data from bit-level to symbol-level processing. Here, interleaving means that data are written into the TTI-level ping-pong buffer unit at code-word granularity, each code word comprising all the time-domain data of its frequency-domain position, while data are read from the TTI-level ping-pong buffer unit at symbol granularity.
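As a minimal illustration of the storage interleaving just described — data written at code-word granularity (one row per code word, spanning all of its symbols) and read back at symbol granularity (one column per symbol) — the buffer can be modeled as a transpose. The function name and the toy data below are illustrative assumptions, not taken from the patent:

```python
def read_by_symbol(codeword_rows):
    """Model of TTI-level interleaving: each row is one code word holding
    all of its time-domain (symbol) data; reading at symbol granularity
    yields one column per symbol, i.e. the transpose of the write order."""
    return [list(symbol_col) for symbol_col in zip(*codeword_rows)]

# Two code words, each spanning three symbols (toy values):
buffered = [["cw0_s0", "cw0_s1", "cw0_s2"],
            ["cw1_s0", "cw1_s1", "cw1_s2"]]
print(read_by_symbol(buffered))
# Each read now yields one symbol's data across all code words.
```

The whole TTI must be resident before the first symbol-granularity read can complete, which is why the prior-art buffer must hold a full TTI per ping-pong half.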
Further, with this implementation each code word needs to be scheduled only once, and the bit-level processing accelerator processes the data of each code word only once per TTI, so the implementation places relatively low demands on the processing capability of the bit-level processing accelerator.
However, as cell specifications and the number of layers keep increasing, the prior-art approach of writing data into the TTI-level ping-pong buffer unit at code-word granularity forces the storage capacity of that buffer unit to keep growing as well. Taking 6 cells at 20 MHz with 8 antennas as an example, the capacity required when scheduling at TTI granularity is: 2 (ping-pong) × 6 (cells) × 13 (symbols) × 100 (RB) × 12 (RE) × 4 (layers) × 6 (bits per RE, 64QAM) ≈ 4.49 Mbit, so the storage overhead is considerable. The existing cached-data scheduling mechanism therefore increases the power consumption of the system's cache unit.
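The capacity figure in the example above is simply the product of the listed factors; a quick check (the factor names come from the example, everything else is illustrative):

```python
from math import prod

# Factors from the 6-cell, 20 MHz, 8-antenna example above.
factors = {
    "ping-pong halves": 2,
    "cells": 6,
    "symbols": 13,
    "resource blocks (RB)": 100,
    "resource elements per RB (RE)": 12,
    "layers": 4,
    "bits per RE (64QAM)": 6,
}
bits = prod(factors.values())
print(f"{bits} bits = {bits / 1e6:.2f} Mbit")  # 4492800 bits = 4.49 Mbit
```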
Summary of the invention
The present invention provides a cached-data scheduling method and device for reducing the power consumption of a cache unit.
A first aspect of the invention provides a cached-data scheduling method, comprising:
a bit-level processing accelerator splitting a data segment into at least three sub-data segments according to the storage granularity of a cache unit, where the storage granularity of the cache unit is selected from any one of: a slot, or at least one symbol;
the bit-level processing accelerator successively writing the first and second of the at least three sub-data segments into the cache unit; and, after a symbol-level processing accelerator has read the first sub-data segment from the cache unit, the bit-level processing accelerator writing the third of the at least three sub-data segments into the cache unit, and so on until all of the at least three sub-data segments have been written into the cache unit.
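The splitting step of the first aspect can be sketched as follows. Assuming, purely for illustration, that a "data segment" is a sequence of symbols and that the storage granularity is a symbol count (neither the function name nor the parameters come from the patent):

```python
def split_segment(symbols, granularity):
    """Split one data segment into sub-data segments of at most
    `granularity` symbols each (the cache unit's storage granularity)."""
    return [symbols[i:i + granularity]
            for i in range(0, len(symbols), granularity)]

# A 14-symbol TTI split at a granularity of 4 symbols yields more than
# three sub-data segments, which can then be scheduled in and out of a
# cache unit that holds only two sub-data segments at a time.
subsegments = split_segment(list(range(14)), 4)
print(len(subsegments))  # 4
```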
With reference to the first aspect, in a first possible implementation of the first aspect, the bit-level processing accelerator splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit comprises:
the bit-level processing accelerator splitting a data segment into three sub-data segments according to the storage granularity of the cache unit, the storage granularity occupied by each of the three sub-data segments being less than or equal to the slot.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, before the bit-level processing accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the cache unit, the method further comprises:
the bit-level processing accelerator obtaining the output length of the first of the at least three sub-data segments.
Correspondingly, the writing and reading of the at least three sub-data segments comprise:
the bit-level processing accelerator writing, according to the output length of the first of the at least three sub-data segments, the first and second of the at least three sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first of the at least three sub-data segments from the cache unit, the bit-level processing accelerator obtaining the output length and start position of the third of the at least three sub-data segments; and
the bit-level processing accelerator writing, according to that output length and start position, the third of the at least three sub-data segments into the cache unit, and so on until all of the at least three sub-data segments have been written into the cache unit.
With reference to the first aspect, in a third possible implementation of the first aspect, the bit-level processing accelerator splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit comprises:
the bit-level processing accelerator splitting a data segment into at least four sub-data segments according to the storage granularity of the cache unit, the storage granularity occupied by each of the at least four sub-data segments being less than the slot.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, before the bit-level processing accelerator splits a data segment into at least four sub-data segments according to the storage granularity of the cache unit, the method further comprises:
the bit-level processing accelerator obtaining the output length of the first of the at least four sub-data segments.
Correspondingly, the writing and reading of the at least four sub-data segments comprise:
the bit-level processing accelerator writing, according to the output length of the first of the at least four sub-data segments, the first and second of the at least four sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first of the at least four sub-data segments from the cache unit, the bit-level processing accelerator obtaining the output length and start position of the third of the at least four sub-data segments; and
the bit-level processing accelerator writing, according to that output length and start position, the third of the at least four sub-data segments into the cache unit, and so on until all of the at least four sub-data segments have been written into the cache unit.
A second aspect of the invention provides a cached-data scheduling device, comprising: a bit-level processing accelerator, a cache unit and a symbol-level processing accelerator;
the bit-level processing accelerator being configured to split a data segment into at least three sub-data segments according to the storage granularity of the cache unit; to successively write the first and second of the at least three sub-data segments into the cache unit; and, after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, to write the third of the at least three sub-data segments into the cache unit, and so on until all of the at least three sub-data segments have been written into the cache unit;
the storage granularity of the cache unit being selected from any one of: a slot, or at least one symbol;
the cache unit being configured to cache all of the at least three sub-data segments; and
the symbol-level processing accelerator being configured to successively read all of the at least three sub-data segments from the cache unit.
With reference to the second aspect, in a first possible implementation of the second aspect, the bit-level processing accelerator splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit specifically comprises:
splitting a data segment into three sub-data segments according to the storage granularity of the cache unit, the storage granularity occupied by each of the three sub-data segments being less than or equal to the slot.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, before splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit, the bit-level processing accelerator is further configured to obtain the output length of the first of the at least three sub-data segments.
Correspondingly, the writing and reading performed by the bit-level processing accelerator specifically comprise:
writing, according to the output length of the first of the at least three sub-data segments, the first and second of the at least three sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first of the at least three sub-data segments from the cache unit, obtaining the output length and start position of the third of the at least three sub-data segments; and
writing, according to that output length and start position, the third of the at least three sub-data segments into the cache unit, and so on until all of the at least three sub-data segments have been written into the cache unit.
With reference to the second aspect, in a third possible implementation of the second aspect, the bit-level processing accelerator splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit specifically comprises:
splitting a data segment into at least four sub-data segments according to the storage granularity of the cache unit, the storage granularity occupied by each of the at least four sub-data segments being less than the slot.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, before splitting a data segment into at least four sub-data segments according to the storage granularity of the cache unit, the bit-level processing accelerator is further configured to obtain the output length of the first of the at least four sub-data segments.
Correspondingly, the writing and reading performed by the bit-level processing accelerator specifically comprise:
writing, according to the output length of the first of the at least four sub-data segments, the first and second of the at least four sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first of the at least four sub-data segments from the cache unit, obtaining the output length and start position of the third of the at least four sub-data segments; and
writing, according to that output length and start position, the third of the at least four sub-data segments into the cache unit, and so on until all of the at least four sub-data segments have been written into the cache unit.
In the cached-data scheduling method and device provided by the invention, the bit-level processing accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the cache unit; it successively writes the first and second of those sub-data segments into the cache unit; after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, it writes the third sub-data segment, and so on until all of the sub-data segments have been written into the cache unit; the cache unit caches all of the sub-data segments; and the symbol-level processing accelerator successively reads all of them from the cache unit. The scheme adapts to ever-higher LTE chip specifications, where cell specifications and the number of layers grow proportionally, while offering good scalability; because the cache unit's capacity need not be extended continually to provide that scalability, the cache unit's power consumption is reduced.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are introduced briefly below. Evidently, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a cached-data scheduling device provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a cached-data scheduling method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another cached-data scheduling method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of splitting a data segment provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of splitting a data segment into two sub-data segments provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of yet another cached-data scheduling method provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of a cached-data scheduling device provided by an embodiment of the present invention. Referring to Fig. 1, the device may typically be deployed in equipment such as a base station, an evolved base station (evolved Node B, eNB) or a relay. The cached-data scheduling device 10 comprises: a bit-level processing accelerator 10-1, a cache unit 10-2 and a symbol-level processing accelerator 10-3.
The bit-level processing accelerator 10-1 is configured to split a data segment into at least three sub-data segments according to the storage granularity of the cache unit 10-2; to successively write the first and second of the at least three sub-data segments into the cache unit 10-2; and, after the symbol-level processing accelerator 10-3 has read the first sub-data segment from the cache unit 10-2, to write the third of the at least three sub-data segments into the cache unit 10-2, and so on until all of the at least three sub-data segments have been written into the cache unit 10-2.
The storage granularity of the cache unit 10-2 is selected from any one of: a slot, or at least one symbol.
The cache unit 10-2 is configured to cache all of the at least three sub-data segments. Specifically, the cache unit 10-2 is a ping-pong buffer unit set between the bit-level processing accelerator 10-1 and the symbol-level processing accelerator 10-3, and its storage granularity is variable.
The symbol-level processing accelerator 10-3 is configured to successively read all of the at least three sub-data segments from the cache unit 10-2.
In the cached-data scheduling device provided by this embodiment of the present invention, the bit-level processing accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the cache unit; it successively writes the first and second of those sub-data segments into the cache unit; after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, it writes the third sub-data segment, and so on until all of the sub-data segments have been written into the cache unit; the cache unit caches all of the sub-data segments; and the symbol-level processing accelerator successively reads all of them from the cache unit. The device adapts to the ever-higher specifications of Long Term Evolution (LTE) chips, where cell specifications and the number of layers grow proportionally, while offering good scalability; because the cache unit's capacity need not be extended continually to provide that scalability, the cache unit's power consumption is reduced.
Further, the write operations of the bit-level processing accelerator 10-1 and the read operations of the symbol-level processing accelerator 10-3 use a ping-pong read-write mechanism: the bit-level processing accelerator 10-1 writes sub-data segment A and sub-data segment B; after the symbol-level processing accelerator 10-3 has read sub-data segment A, the bit-level processing accelerator 10-1 writes sub-data segment C; and so on.
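The ping-pong read-write pattern described here (write A and B; after A is read, write C; ...) keeps at most two sub-data segments resident in the cache unit at any time. A small simulation of that ordering, with all names hypothetical:

```python
from collections import deque

def pingpong_schedule(subsegments, depth=2):
    """Return the interleaved write/read order in which at most `depth`
    sub-data segments are resident in the cache unit at any time."""
    buffered = deque()
    ops = []
    for seg in subsegments:
        if len(buffered) == depth:                    # cache full: the reader
            ops.append(("read", buffered.popleft()))  # must free a slot first
        buffered.append(seg)
        ops.append(("write", seg))
    while buffered:                                   # drain remaining segments
        ops.append(("read", buffered.popleft()))
    return ops

print(pingpong_schedule(["A", "B", "C"]))
# [('write', 'A'), ('write', 'B'), ('read', 'A'), ('write', 'C'),
#  ('read', 'B'), ('read', 'C')]
```

The simulated order matches the patent's description: A and B are written, A is read, then C is written, regardless of how many sub-data segments follow.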
Optionally, the variable granularity may specifically be set to: a Transmission Time Interval (TTI), a slot (SLOT, 0.5 ms), or at least one symbol (1 TTI contains 14 symbols).
Optionally, the bit-level processing accelerator 10-1 splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit 10-2 specifically comprises:
splitting a data segment into three sub-data segments according to the storage granularity of the cache unit 10-2, the storage granularity occupied by each of the three sub-data segments being less than or equal to the slot.
Further, before splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit 10-2, the bit-level processing accelerator 10-1 is further configured to obtain the output length of the first of the at least three sub-data segments.
Correspondingly, the writing and reading performed by the bit-level processing accelerator 10-1 specifically comprise:
writing, according to the output length of the first of the at least three sub-data segments, the first and second of the at least three sub-data segments into the cache unit 10-2;
after the symbol-level processing accelerator 10-3 has read the first of the at least three sub-data segments from the cache unit 10-2, obtaining the output length and start position of the third of the at least three sub-data segments; and
writing, according to that output length and start position, the third of the at least three sub-data segments into the cache unit 10-2, and so on until all of the at least three sub-data segments have been written into the cache unit 10-2.
Optionally, the bit-level processing accelerator 10-1 splitting a data segment into at least three sub-data segments according to the storage granularity of the cache unit 10-2 specifically comprises:
splitting a data segment into at least four sub-data segments according to the storage granularity of the cache unit 10-2, the storage granularity occupied by each of the at least four sub-data segments being less than the slot.
Further, before splitting a data segment into at least four sub-data segments according to the storage granularity of the cache unit 10-2, the bit-level processing accelerator 10-1 is further configured to obtain the output length of the first of the at least four sub-data segments.
Correspondingly, the writing and reading performed by the bit-level processing accelerator 10-1 specifically comprise:
writing, according to the output length of the first of the at least four sub-data segments, the first and second of the at least four sub-data segments into the cache unit 10-2;
after the symbol-level processing accelerator 10-3 has read the first of the at least four sub-data segments from the cache unit 10-2, obtaining the output length and start position of the third of the at least four sub-data segments; and
writing, according to that output length and start position, the third of the at least four sub-data segments into the cache unit 10-2, and so on until all of the at least four sub-data segments have been written into the cache unit 10-2.
In the prior art, by contrast, the storage granularity of the cache unit set between the bit-level processing accelerator and the symbol-level processing accelerator can only be the TTI. As LTE chip specifications grow ever higher and cell specifications and the number of layers increase proportionally, the fixed storage granularity of the prior art makes the cache unit's resources ever larger and its power consumption ever higher. By providing a variable storage granularity, the cached-data scheduling device provided by this embodiment of the present invention reduces the power consumption of the storage unit and at the same time improves the scalability of a system equipped with the device.
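To see why a finer, variable granularity shrinks the buffer, compare the ping-pong capacity needed at each granularity using the factors from the earlier 6-cell example; the helper function and the per-granularity symbol counts below are illustrative assumptions, not figures from the patent:

```python
def pingpong_bits(symbols_buffered, cells=6, rbs=100, res=12,
                  layers=4, qam_bits=6):
    """Double-buffered (ping-pong) capacity when only `symbols_buffered`
    symbols' worth of data must be resident per half-buffer."""
    return 2 * symbols_buffered * cells * rbs * res * layers * qam_bits

tti_bits = pingpong_bits(13)     # whole TTI buffered (prior art)
slot_bits = pingpong_bits(7)     # one slot (7 symbols, normal CP)
symbol_bits = pingpong_bits(2)   # two symbols

print(tti_bits, slot_bits, symbol_bits)  # 4492800 2419200 691200
```

The required capacity scales linearly with the number of buffered symbols, so slot or symbol granularity keeps the cache unit small even as cell count and layers grow.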
Fig. 2 is a schematic flowchart of a cached-data scheduling method provided by an embodiment of the present invention. The method is executed by the cached-data scheduling device shown in Fig. 1. Referring to Fig. 2, the method comprises:
Step 101: the bit-level processing accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the cache unit. Specifically, the storage granularity of the cache unit is selected from any one of: a slot, or at least one symbol.
Step 102: the bit-level processing accelerator successively writes the first and second of the at least three sub-data segments into the cache unit; after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, the bit-level processing accelerator writes the third of the at least three sub-data segments into the cache unit, and so on until all of the at least three sub-data segments have been written into the cache unit.
In the cached-data scheduling method provided by this embodiment of the present invention, the bit-level processing accelerator splits a data segment into at least three sub-data segments according to the storage granularity of the cache unit, and that storage granularity is selected from any one of: a slot, or at least one symbol, so that different storage granularities can be used to split the data segment. The bit-level processing accelerator successively writes the first and second of the sub-data segments into the cache unit; after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, the bit-level processing accelerator writes the third sub-data segment, and so on until all of the sub-data segments have been written into the cache unit. The method adapts to ever-higher LTE chip specifications, where cell specifications and the number of layers grow proportionally, while offering good scalability; because the cache unit's capacity need not be extended continually to provide that scalability, the cache unit's power consumption is reduced.
On the basis of Fig. 2, Fig. 3 is a flow diagram of another cached-data scheduling method provided by an embodiment of the present invention. Referring to Fig. 3, a possible implementation of step 101 is:
Step 101a: the bit-level processing accelerator splits one data segment into three sub-data segments according to the storage granularity of the cache unit, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
Further, referring to Fig. 3, before step 101a, the method further includes:
Step 100a: the bit-level processing accelerator obtains the output length of the first sub-data segment of the at least three sub-data segments;
A possible implementation of step 102 is:
Step 102a: the bit-level processing accelerator writes, according to the output length of the first sub-data segment of the at least three sub-data segments, the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit;
Step 102b: after the symbol-level processing accelerator has read the first sub-data segment of the at least three sub-data segments from the cache unit, the bit-level processing accelerator obtains the output length and start position of the third sub-data segment of the at least three sub-data segments;
Step 102c: the bit-level processing accelerator writes, according to the output length and start position of the third sub-data segment of the at least three sub-data segments, the third sub-data segment into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit.
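The ordering of steps 102a to 102c — write two sub-data segments ahead, then let each read by the symbol-level accelerator free room for the next write — can be sketched as a small simulation. This is a hypothetical Python model, not the device itself; `cache_capacity` stands in for the ping-pong pair of buffers in the cache unit:

```python
from collections import deque

def schedule(segments, cache_capacity=2):
    """Simulate the write/read ordering of steps 102a-102c: the
    producer (bit-level accelerator) stays at most `cache_capacity`
    segments ahead of the consumer (symbol-level accelerator)."""
    events = []
    cache = deque()
    it = iter(segments)
    # Step 102a: pre-fill the cache with the first two sub-data segments.
    for _ in range(cache_capacity):
        seg = next(it, None)
        if seg is not None:
            cache.append(seg)
            events.append(("write", seg))
    # Steps 102b/102c: each completed read frees room for the next write.
    for seg in it:
        events.append(("read", cache.popleft()))
        cache.append(seg)
        events.append(("write", seg))
    # Drain the remaining cached sub-data segments.
    while cache:
        events.append(("read", cache.popleft()))
    return events
```

For three sub-data segments the model yields exactly the claimed order: write the first two, read the first, write the third, then read the rest.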
From analysis of the protocol, the bits of a code word are mapped from front to back in the time domain, so each segment of a code word can be output according to the timing granularity of the split. The following example assumes that, in one TTI, the Physical Downlink Control Channel (PDCCH) occupies 1 symbol (so the Physical Downlink Shared Channel (PDSCH) starts from S2), the PDSCH occupies 13 symbols, the data segment is split into 2 segments, and the storage granularity of each sub-data segment is a time slot (SLOT). Fig. 4 is a schematic diagram of splitting a data segment according to an embodiment of the present invention; referring to Fig. 4, the data segment of one TTI is split into two sub-data segments as shown.
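Under the slot storage granularity of Fig. 4, the split simply groups the PDSCH symbols by the slot they fall in. A minimal sketch, assuming the symbols of one TTI are numbered S1 to S14 with the slot boundary after S7 (a numbering inferred from the S2~S14 notation used below, not stated explicitly in the text):

```python
def split_by_slot(symbols, slot_len=7):
    """Group PDSCH symbol indices into per-slot sub-data segments.
    Assumes symbols S1..S7 fall in SLOT 0 and S8..S14 in SLOT 1."""
    segments = {}
    for s in symbols:
        slot = (s - 1) // slot_len   # S1..S7 -> slot 0, S8..S14 -> slot 1
        segments.setdefault(slot, []).append(s)
    return [segments[k] for k in sorted(segments)]
```

With the PDCCH on S1 and the PDSCH on S2 to S14, this yields a 6-symbol segment for SLOT 0 and a 7-symbol segment for SLOT 1, matching a two-way split at the slot boundary.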
According to the protocol, a Digital Signal Processor (DSP) can calculate the data length that each data segment needs for Quadrature Amplitude Modulation (QAM) in the two time slots (SLOT 0, SLOT 1), and perform task scheduling per time slot. Through two task-scheduled segmented outputs of the SLOT 0 and SLOT 1 sub-data segments, ping-pong switching of the bit-to-symbol data buffer (BIT To Symbol Data Buffer PING/PONG Buffer, B2S DB PING/PONG Buffer) is realized. Taking 6 cells of 20 MHz with 8 antennas as an example, when scheduling with a split into 2 segments, the storage capacity the cache unit needs is: 2 (ping-pong) × 6 (cells) × 7 (symbols) × 100 (RBs) × 12 (REs) × 4 (layers) × 6 (64QAM) = 2.42 Mbit. In the prior art, with an immutable storage granularity, i.e. when scheduling at TTI granularity, the cache unit needs a storage capacity of: 2 (ping-pong) × 6 (cells) × 13 (symbols) × 100 (RBs) × 12 (REs) × 4 (layers) × 6 (64QAM) = 4.15 Mbit. By comparison, the cached-data scheduling method provided by the embodiment of the present invention saves 42% of the storage resource.
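The capacity figures above follow from multiplying out the listed dimensions; the sketch below reproduces the arithmetic. Note one discrepancy in the text: the quoted 4.15 Mbit prior-art figure corresponds to 12 PDSCH symbols, while 13 symbols would give 4.49 Mbit; the 42% and later 66% savings are consistent with the 4.15 Mbit value.

```python
def cache_bits(max_segment_symbols, cells=6, rbs=100, res_per_rb=12,
               layers=4, bits_per_re=6, pingpong=2):
    """Bits the ping-pong cache must hold when the largest sub-data
    segment spans `max_segment_symbols` OFDM symbols (defaults match
    the example: 6 cells, 20 MHz / 100 RBs, 4 layers, 64QAM)."""
    return (pingpong * cells * max_segment_symbols
            * rbs * res_per_rb * layers * bits_per_re)

slot_split = cache_bits(7)    # 2-segment (per-slot) scheduling: 2.42 Mbit
tti_split  = cache_bits(12)   # TTI-granularity baseline, the quoted 4.15 Mbit
saving = 1 - slot_split / tti_split   # about 42%
```

The same function with a 4-symbol largest segment gives the 1.3824 Mbit figure of the four-way split discussed later.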
Further, the above DSP may be connected to the bit-level processing accelerator and the symbol-level processing accelerator through a bus, so as to send tasks to the bit-level processing accelerator and the symbol-level processing accelerator.
Further, for the scenario in which the time slot is the storage granularity, Fig. 5 is a flow diagram of splitting a data segment into two sub-data segments according to an embodiment of the present invention. Referring to Fig. 5, the method includes the following steps:
Step 10: the Common Public Radio Interface (CPRI) timing triggers the DSP to calculate the output length and start position of the first sub-data segment in the first time slot;
Specifically, at each slot timing (the CPRI timing triggers once per symbol, and the SLOT timing triggers once every 7 symbols), the DSP issues a task to the bit-level processing accelerator for processing.
Step 11: the DSP issues the task to the bit-level processing accelerator;
Specifically, the task contains the output length and start position of the first sub-data segment in the first time slot; the bit-level processing accelerator supports a configurable start position and output length for the output of each code word. The latency of the bit-level processing accelerator for processing all code words within one TTI is only 0.2 ms, so it can support scheduling at a storage granularity smaller than one TTI. This takes full advantage of the idle processing capacity of the bit-level processing accelerator, saving the storage resource of the cache unit without wasting the accelerator's logic resources.
Step 12: the bit-level processing accelerator outputs the first bit-level sub-data segment to the cache unit according to the output length of the first sub-data segment;
Step 13: the DSP checks whether the CPRI timing has arrived;
Specifically, when the CPRI timing arrives, step 14 is executed; otherwise step 13 is repeated until the CPRI timing arrives.
Step 14: the DSP calculates the output length and start position of the second sub-data segment in the second time slot;
Step 15: the DSP issues a task to the bit-level processing accelerator;
The task contains the output length and start position of the second sub-data segment in the second time slot.
Step 16: the bit-level processing accelerator outputs the second bit-level sub-data segment to the cache unit according to the output length and start position of the second sub-data segment;
Step 17: the DSP checks whether the CPRI timing has arrived;
Specifically, when the CPRI timing arrives, step 10 is executed; otherwise step 17 is repeated until the CPRI timing arrives.
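Steps 10 to 17 amount to a per-slot loop in which the DSP blocks on the CPRI timing, computes the slot's output length and start position, and hands the result to the bit-level processing accelerator as a task. A hedged sketch follows; `wait_for_cpri_tick` and `issue_task` are hypothetical placeholders for the real timing and bus interfaces, which the text does not specify:

```python
def dsp_slot_scheduler(tti_tasks, wait_for_cpri_tick, issue_task):
    """Sketch of steps 10-17: for each TTI, wait for the slot timing,
    compute each slot's (output length, start position), and issue it
    to the bit-level processing accelerator."""
    for task in tti_tasks:                   # one entry per TTI
        for slot in (0, 1):                  # SLOT 0, then SLOT 1
            wait_for_cpri_tick()             # steps 10/13/17: block on timing
            length, start = task[slot]       # steps 10/14: per-slot parameters
            issue_task(slot, start, length)  # steps 11/15: hand off the task
```

A usage example: recording the issued tasks for one TTI whose first slot outputs 100 bits from position 0 and whose second slot outputs 120 bits from position 100.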
On the basis of Fig. 2, Fig. 6 is a flow diagram of another cached-data scheduling method provided by an embodiment of the present invention. Referring to Fig. 6, a possible implementation of step 101 is:
Step 101b: the bit-level processing accelerator splits one data segment into at least four sub-data segments according to the storage granularity of the cache unit, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
Further, referring to Fig. 6, before step 101b, the method further includes:
Step 100b: the bit-level processing accelerator obtains the output length of the first sub-data segment of the at least four sub-data segments;
A possible implementation of step 102 is:
Step 102d: the bit-level processing accelerator writes, according to the output length of the first sub-data segment of the at least four sub-data segments, the first sub-data segment and the second sub-data segment of the at least four sub-data segments into the cache unit;
Step 102e: after the symbol-level processing accelerator has read the first sub-data segment of the at least four sub-data segments from the cache unit, the bit-level processing accelerator obtains the output length and start position of the third sub-data segment of the at least four sub-data segments;
Step 102f: the bit-level processing accelerator writes, according to the output length and start position of the third sub-data segment of the at least four sub-data segments, the third sub-data segment into the cache unit, until all sub-data segments of the at least four sub-data segments have been written into the cache unit.
For the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the cache unit, the following illustrates splitting one data segment into four sub-data segments. Specifically, assume that, in one TTI, the PDCCH occupies 1 symbol (the PDSCH is scheduled from S2, i.e. starting at symbol 2) and the PDSCH occupies 13 symbols, split into 4 segments as shown in Table 1:
Table 1
Implementation | Segmentation 0 | Segmentation 1 | Segmentation 2 | Segmentation 3 |
1 | S2~S5 | S6~S8 | S9~S11 | S12~S14 |
2 | S2~S4 | S5~S8 | S9~S11 | S12~S14 |
3 | S2~S4 | S5~S7 | S8~S11 | S12~S14 |
4 | S2~S4 | S5~S7 | S8~S11 | S12~S14 |
Referring to Table 1, the data segment of one TTI is split into 4 sub-data segments for scheduling, where S2~S14 denote symbol 2 to symbol 14, respectively. Preferably, to save the storage resource of the cache unit, the symbols corresponding to the PDSCH data segment are allocated as evenly as possible when segmenting. Taking 6 cells of 20 MHz with 8 antennas as an example, when scheduling with a split into 4 segments, the storage capacity the cache unit needs is: 2 (ping-pong) × 6 (cells) × 4 (symbols) × 100 (RBs) × 12 (REs) × 4 (layers) × 6 (64QAM) = 1.3824 Mbit, whereas in the prior art, with an immutable storage granularity, i.e. when scheduling at TTI granularity, the cache unit needs a storage capacity of: 2 (ping-pong) × 6 (cells) × 13 (symbols) × 100 (RBs) × 12 (REs) × 4 (layers) × 6 (64QAM) = 4.15 Mbit. By comparison, the cached-data scheduling method provided by the embodiment of the present invention saves 66% of the storage resource.
As another example, according to the processing capacity of the bit-level processing accelerator and the load of the DSP, the embodiment of the present invention can flexibly split the data segment carried on the PDSCH channel. Taking the case where, in one TTI, the PDCCH occupies 2 symbols (so the PDSCH starts from S3) and the PDSCH occupies 12 symbols, the split into 4 segments is as shown in Table 2:
Table 2
Implementation | Segmentation 0 | Segmentation 1 | Segmentation 2 | Segmentation 3 |
1 | S3~S5 | S6~S8 | S9~S11 | S12~S14 |
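The "as even as possible" allocation preferred above can be computed directly. The sketch below is an illustrative helper (the function name and interface are not from the patent); it reproduces implementation 1 of Table 1 (13 PDSCH symbols from S2) and the Table 2 split (12 symbols from S3):

```python
def even_split(first_symbol, n_symbols, n_segments):
    """Allocate `n_symbols` consecutive symbols to `n_segments`
    sub-data segments as evenly as possible; earlier segments
    absorb the remainder. Returns (first, last) symbol pairs."""
    base, extra = divmod(n_symbols, n_segments)
    segments, start = [], first_symbol
    for i in range(n_segments):
        size = base + (1 if i < extra else 0)
        segments.append((start, start + size - 1))
        start += size
    return segments
```

For example, `even_split(2, 13, 4)` yields segment sizes 4, 3, 3, 3 covering S2~S5, S6~S8, S9~S11, S12~S14, so the largest segment drives the 4-symbol term in the 1.3824 Mbit calculation above.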
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disk, or optical disc.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that it is still possible to modify the technical solutions described in the foregoing embodiments, or to make equivalent replacements of some or all of the technical features; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A cached-data scheduling method, characterized by comprising:
splitting, by a bit-level processing accelerator, one data segment into at least three sub-data segments according to a storage granularity of a cache unit;
wherein the selection range of the storage granularity of the cache unit includes any one of the following: a time slot, or at least one symbol;
successively writing, by the bit-level processing accelerator, the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit; and, after a symbol-level processing accelerator has read the first sub-data segment from the cache unit, writing, by the bit-level processing accelerator, the third sub-data segment of the at least three sub-data segments into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit.
2. The method according to claim 1, characterized in that the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the cache unit comprises:
splitting, by the bit-level processing accelerator, one data segment into three sub-data segments according to the storage granularity of the cache unit, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
3. The method according to claim 2, characterized in that, before the bit-level processing accelerator splits one data segment into at least three sub-data segments according to the storage granularity of the cache unit, the method further comprises:
obtaining, by the bit-level processing accelerator, the output length of the first sub-data segment of the at least three sub-data segments;
and the step in which the bit-level processing accelerator successively writes the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit, and, after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, writes the third sub-data segment of the at least three sub-data segments into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit, comprises:
writing, by the bit-level processing accelerator, according to the output length of the first sub-data segment of the at least three sub-data segments, the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first sub-data segment of the at least three sub-data segments from the cache unit, obtaining, by the bit-level processing accelerator, the output length and start position of the third sub-data segment of the at least three sub-data segments;
writing, by the bit-level processing accelerator, according to the output length and start position of the third sub-data segment of the at least three sub-data segments, the third sub-data segment into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit.
4. The method according to claim 1, characterized in that the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the cache unit comprises:
splitting, by the bit-level processing accelerator, one data segment into at least four sub-data segments according to the storage granularity of the cache unit, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
5. The method according to claim 4, characterized in that, before the bit-level processing accelerator splits one data segment into at least four sub-data segments according to the storage granularity of the cache unit, the method further comprises:
obtaining, by the bit-level processing accelerator, the output length of the first sub-data segment of the at least four sub-data segments;
and the step in which the bit-level processing accelerator successively writes two sub-data segments of the at least three sub-data segments into the cache unit, and, after the symbol-level processing accelerator has read one of the two sub-data segments from the cache unit, writes the next sub-data segment of the at least three sub-data segments into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit, comprises:
writing, by the bit-level processing accelerator, according to the output length of the first sub-data segment of the at least four sub-data segments, the first sub-data segment and the second sub-data segment of the at least four sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first sub-data segment of the at least four sub-data segments from the cache unit, obtaining, by the bit-level processing accelerator, the output length and start position of the third sub-data segment of the at least four sub-data segments;
writing, by the bit-level processing accelerator, according to the output length and start position of the third sub-data segment of the at least four sub-data segments, the third sub-data segment into the cache unit, until all sub-data segments of the at least four sub-data segments have been written into the cache unit.
6. A cached-data scheduling device, characterized by comprising: a bit-level processing accelerator, a cache unit, and a symbol-level processing accelerator;
the bit-level processing accelerator is configured to split one data segment into at least three sub-data segments according to a storage granularity of the cache unit; to successively write the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit; and, after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, to write the third sub-data segment of the at least three sub-data segments into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit;
the selection range of the storage granularity of the cache unit includes any one of the following: a time slot, or at least one symbol;
the cache unit is configured to cache all sub-data segments of the at least three sub-data segments;
the symbol-level processing accelerator is configured to successively read all sub-data segments of the at least three sub-data segments from the cache unit.
7. The cached-data scheduling device according to claim 6, characterized in that the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the cache unit specifically comprises:
splitting one data segment into three sub-data segments according to the storage granularity of the cache unit, where the storage granularity occupied by each of the three sub-data segments is less than or equal to the time slot.
8. The cached-data scheduling device according to claim 7, characterized in that, before splitting one data segment into at least three sub-data segments according to the storage granularity of the cache unit, the bit-level processing accelerator is further configured to obtain the output length of the first sub-data segment of the at least three sub-data segments;
and the successive writing, by the bit-level processing accelerator, of the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit, and, after the symbol-level processing accelerator has read the first sub-data segment from the cache unit, of the third sub-data segment of the at least three sub-data segments into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit, specifically comprises:
writing, according to the output length of the first sub-data segment of the at least three sub-data segments, the first sub-data segment and the second sub-data segment of the at least three sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first sub-data segment of the at least three sub-data segments from the cache unit, obtaining the output length and start position of the third sub-data segment of the at least three sub-data segments;
writing, according to the output length and start position of the third sub-data segment of the at least three sub-data segments, the third sub-data segment into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit.
9. The cached-data scheduling device according to claim 6, characterized in that the splitting, by the bit-level processing accelerator, of one data segment into at least three sub-data segments according to the storage granularity of the cache unit specifically comprises:
splitting one data segment into at least four sub-data segments according to the storage granularity of the cache unit, where the storage granularity occupied by each of the at least four sub-data segments is less than the time slot.
10. The cached-data scheduling device according to claim 9, characterized in that, before splitting one data segment into at least four sub-data segments according to the storage granularity of the cache unit, the bit-level processing accelerator is further configured to:
obtain the output length of the first sub-data segment of the at least four sub-data segments;
and the successive writing, by the bit-level processing accelerator, of two sub-data segments of the at least three sub-data segments into the cache unit, and, after the symbol-level processing accelerator has read one of the two sub-data segments from the cache unit, of the next sub-data segment of the at least three sub-data segments into the cache unit, until all sub-data segments of the at least three sub-data segments have been written into the cache unit, specifically comprises:
writing, according to the output length of the first sub-data segment of the at least four sub-data segments, the first sub-data segment and the second sub-data segment of the at least four sub-data segments into the cache unit;
after the symbol-level processing accelerator has read the first sub-data segment of the at least four sub-data segments from the cache unit, obtaining the output length and start position of the third sub-data segment of the at least four sub-data segments;
writing, according to the output length and start position of the third sub-data segment of the at least four sub-data segments, the third sub-data segment into the cache unit, until all sub-data segments of the at least four sub-data segments have been written into the cache unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510287722.4A CN104918259B (en) | 2015-05-29 | 2015-05-29 | Data cached dispatching method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104918259A CN104918259A (en) | 2015-09-16 |
CN104918259B true CN104918259B (en) | 2018-12-14 |
Family
ID=54086865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510287722.4A Active CN104918259B (en) | 2015-05-29 | 2015-05-29 | Data cached dispatching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104918259B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106603458B (en) * | 2016-12-13 | 2020-01-31 | 武汉虹信通信技术有限责任公司 | baseband processing method and device |
EP3834341A1 (en) * | 2018-08-10 | 2021-06-16 | Telefonaktiebolaget LM Ericsson (publ) | Physical shared channel splitting at slot boundaries |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101789846B (en) * | 2010-02-26 | 2012-10-17 | 联芯科技有限公司 | Dissociation rate matching method and device |
CN102340372B (en) * | 2010-07-28 | 2014-04-09 | 中兴通讯股份有限公司 | Method and device for increasing bit throughput at transmitting end of LTE (Long Term Evolution) base station |
CN102685810B (en) * | 2011-03-16 | 2015-01-28 | 中兴通讯股份有限公司 | Method and system for dynamic caching of user information |
CN103248465B (en) * | 2012-02-01 | 2016-06-08 | 联芯科技有限公司 | A kind of terminal processing device and terminal processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |