WO2014056405A1 - Data processing method and apparatus - Google Patents

Data processing method and apparatus

Info

Publication number
WO2014056405A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
ram
segment
shared cache
time slot
Prior art date
Application number
PCT/CN2013/084481
Other languages
English (en)
French (fr)
Inventor
石欣琢
Original Assignee
ZTE Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation
Priority to ES13844885T priority Critical patent/ES2749519T3/es
Priority to US14/434,564 priority patent/US9772946B2/en
Priority to EP13844885.7A priority patent/EP2908251B1/en
Priority to KR1020157011938A priority patent/KR20150067321A/ko
Priority to JP2015535968A priority patent/JP6077125B2/ja
Publication of WO2014056405A1 publication Critical patent/WO2014056405A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 — Addressing or allocation; Relocation
    • G06F 12/08 — Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 — Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 — Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/084 — Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 — Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 — Providing a specific technical effect
    • G06F 2212/1016 — Performance improvement
    • G06F 2212/1021 — Hit rate improvement
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 — Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/28 — Using a specific disk cache architecture
    • G06F 2212/281 — Single cache

Definitions

  • The present invention relates to the field of data communications, and in particular to a data processing method and apparatus. Background Art
  • In the uplink direction, data is input from the data bus on one side and transmitted to Y channels (Y is an integer greater than or equal to 1) on the other side.
  • In the downlink direction, data is aggregated from the Y channels according to scheduling indications and output onto the data bus.
  • Each beat of data transmitted on the data bus is destined for exactly one channel; to use the bandwidth of each channel effectively, only valid data is transmitted on the channels.
  • The data bus has a bit width of A and each channel has a bus width of B, where A is N times B (N is an integer greater than or equal to 1). Because the bandwidth of the data bus is not necessarily equal to the total bandwidth of the channels, and congestion may occur on the data bus or on any channel, buffers need to be provided on both the uplink and downlink transmission paths.
  • A logic circuit such as that shown in Figure 2 is usually used to implement the data buffering and bit-width conversion: FIFOs (first in, first out) buffer the data, and an independent bit-width conversion splitting circuit performs the bit-width conversion.
  • On the uplink side, a channel identification and distribution circuit distributes each beat of data, together with its valid bit segment indication, to the input FIFO corresponding to the channel indicated by the data's direction on the input data bus; each input FIFO has a bit width of "A + width of the data valid bit segment indication". When the corresponding channel can receive data, the data is read from the input FIFO, and the bit-width conversion splitting circuit converts the valid part of the data into a data stream of bit width B according to the valid bit segment indication and sends it to the channel.
  • On the downlink side, data from each channel is first converted by a bit-width conversion splicing circuit into data of bit width A and written into the output FIFO corresponding to that channel.
  • When the data bus can receive data, a data selection and aggregation circuit reads data from the output FIFOs in scheduling order and aggregates the output onto the output data bus.
  • The bit-width conversion splitting circuit that performs the data bit-width conversion consists mainly of a demultiplexer (DMUX) and works as follows:
  • For each channel, a beat of data of bit width A is read from the input FIFO together with its valid bit segment indication and first stored in a register.
  • In the first cycle, the splitting circuit selects and outputs the width-B data at the head of the beat; in the second cycle, it outputs the width-B data adjacent to the previous one, and so on until all the valid data has been scheduled out.
  • The splitting circuit then turns to the next beat of data read from the input FIFO and continues the bit-width conversion as described above.
  • The bit-width conversion splicing circuit is essentially the inverse of the splitting circuit; it consists mainly of a multiplexer (MUX) and works as follows:
  • For each channel, after width-B data is output from the channel, the splicing circuit splices it, in output order, into data of width A and writes it into the corresponding output FIFO.
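As a rough illustration of the split and splice operations described above (a sketch under assumed widths, not the patent's circuit), the header-first conversion between one width-A beat and a stream of width-B words can be modelled as:

```python
# Illustrative sketch of the DMUX/MUX bit-width conversion described above.
# A, B and the sample beat value are assumptions chosen for the example.

B = 8          # channel bus width in bits (assumed)
N = 4          # A = N * B
A = N * B      # data bus width

def split_beat(beat: int, valid_segments: int) -> list[int]:
    """DMUX direction: emit the valid width-B words of a width-A beat,
    header (most-significant) word first, one per cycle."""
    words = []
    for i in range(valid_segments):
        shift = A - (i + 1) * B          # header word comes out first
        words.append((beat >> shift) & ((1 << B) - 1))
    return words

def splice_words(words: list[int]) -> int:
    """MUX direction: splice width-B words back into one width-A beat,
    padding any unused low-order segments with zeros."""
    beat = 0
    for w in words:
        beat = (beat << B) | w
    return beat << (B * (N - len(words)))

beat = 0xAABBCCDD
words = split_beat(beat, 4)       # [0xAA, 0xBB, 0xCC, 0xDD]
assert splice_words(words) == beat
```

A beat with only two valid segments occupies only two cycles on the channel, which is the behaviour the valid bit segment indication enables.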
  • With this method, if only some bit segments of a beat (of width A) are valid — whether the beat comes from the uplink data bus or is spliced from a downlink channel — the beat still occupies the full "A + data valid bit segment indication width" when stored into its FIFO, exactly as if all its bit segments were valid, so cache utilization is low.
  • Implementation is also costly. On an FPGA, the Block RAM (block random access memory) used to implement FIFOs has a limited bit width and a depth far greater than its width; for a Xilinx Virtex-5 FPGA, for example, the maximum configured bit width of a 36 kb Block RAM is only 36 bits.
  • Because this method requires Y FIFOs of bit width "A + data valid bit segment indication width", when the data bus width A is large an FPGA implementation must splice multiple Block RAMs together to build each FIFO, consuming considerable Block RAM resources and design area, especially when the number of channels Y is also large.
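To make the cost concrete (the figures below are illustrative assumptions, not values from the patent), the Block RAM count of the per-channel-FIFO scheme can be compared with a shared cache built from N RAMs of width B:

```python
import math

# Rough resource comparison under assumed parameters:
# per-channel FIFOs of width A + indication bits, versus N shared RAMs
# of width B, on an FPGA whose block RAM configures at most 36 bits wide.

A = 512                                   # data bus width (assumed)
B = 64                                    # channel width (assumed); N = 8
Y = 16                                    # number of channels (assumed)
IND = math.ceil(math.log2(A // B)) + 1    # valid-bit-segment indication bits
MAX_BRAM_WIDTH = 36                       # e.g. a Virtex-5 36 kb block RAM

# One FIFO per channel, each spliced from ceil((A + IND) / 36) block RAMs.
fifo_scheme = Y * math.ceil((A + IND) / MAX_BRAM_WIDTH)
# Shared cache: N = A // B RAMs, each only B bits wide.
shared_scheme = (A // B) * math.ceil(B / MAX_BRAM_WIDTH)
print(fifo_scheme, shared_scheme)         # 240 vs 16 block RAMs
```

Under these assumed numbers the wide per-channel FIFOs need an order of magnitude more block RAMs, which is the resource pressure the shared-cache scheme addresses.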
  • the technical problem to be solved by the present invention is to provide a data processing method and apparatus, which can save cache resources and improve cache utilization while reliably implementing data buffering and bit width conversion.
  • The present invention provides a data processing method, including: after receiving data input from a data bus, writing the data input from the data bus into an uplink shared cache according to the data's direction indication and data valid bit segment indication;
  • polling the uplink shared cache in a fixed time slot order, reading out the data in the uplink shared cache, and outputting it to the corresponding channels.
  • Preferably, the uplink shared cache is composed of N blocks of random access memory (RAM) of a specified bit width, and each block of RAM is logically divided into Y RAM segments.
  • Polling the uplink shared cache in a fixed time slot order and reading out the data in the uplink shared cache includes the following:
  • when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • The foregoing method further has the following feature: the process of writing the data input from the data bus into the uplink shared cache further includes:
  • recording, per channel, the write position of the tail of the current beat of data.
  • The present invention further provides a data processing apparatus, including: an uplink side write control module configured to, after receiving data input from the data bus, write the data input from the data bus into an uplink shared cache according to the data's direction indication and data valid bit segment indication;
  • and an uplink side read control module configured to poll the uplink shared cache in a fixed time slot order, read out the data in the uplink shared cache, and output it to the corresponding channels.
  • the above device also has the following features:
  • The uplink shared cache is composed of N blocks of random access memory (RAM) of a specified bit width, and each block of RAM is logically divided into Y RAM segments.
  • The uplink side read control module is configured such that reading out the data of the uplink shared cache by polling in a fixed time slot order includes: when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • the above device also has the following features:
  • The uplink side write control module is further configured to record, per channel, the write position of the tail of the current beat while writing the data input from the data bus into the uplink shared cache.
  • the present invention further provides a data processing method, including: storing data outputted by each channel into a downlink shared buffer;
  • Data is read from the downstream shared buffer according to the scheduling sequence and output to the data bus.
  • the above method also has the following features:
  • The downlink shared cache is composed of N blocks of random access memory (RAM) of a specified bit width, and each block of RAM is logically divided into Y RAM segments.
  • the storing the data output by each channel into the downlink shared buffer includes:
  • The RAM segments in the RAM blocks of the downlink shared cache are polled in a fixed time slot order; if the currently polled RAM segment column has free space, the data to be output by the corresponding channel is stored into that RAM segment column.
  • The foregoing method further has the following feature: polling the RAM segments in the RAM blocks of the downlink shared cache in a fixed time slot order includes the following:
  • when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • Reading data out of the downlink shared cache according to the scheduling order includes: calculating the total amount of data buffered in each RAM segment column and, when the amount buffered in a column is greater than or equal to the amount required by the current scheduling indication, reading the data output by this schedule from the RAM segments according to the scheduling bit segment length indication.
  • the present invention further provides a data processing apparatus, including: a downlink side write control module, configured to: store data outputted by each channel into a downlink shared buffer;
  • the downlink read control module is configured to: read data from the downlink shared buffer according to a scheduling sequence, and output the data to the data bus.
  • the above device also has the following features:
  • The downlink shared cache is composed of N blocks of random access memory (RAM) of a specified bit width, and each block of RAM is logically divided into Y RAM segments.
  • The downlink side write control module is configured to poll the RAM segments in the RAM blocks of the downlink shared cache in a fixed time slot order and, if the currently polled RAM segment column has free space, store the data to be output by the corresponding channel into that RAM segment column.
  • the above device also has the following features:
  • The downlink side write control module is configured such that polling the RAM segments in the RAM blocks of the downlink shared cache in a fixed time slot order includes: when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • the above device also has the following features:
  • The downlink side read control module is configured to calculate the total amount of data buffered in each RAM segment column and, when the amount buffered in a column is greater than or equal to the amount required by the current scheduling indication, read the data output by this schedule from the RAM segments according to the scheduling bit segment length indication.
  • The embodiments of the present invention provide a data processing method and apparatus that reliably implement data buffering and bit-width conversion while effectively saving cache resources, reducing area and timing pressure, and improving cache utilization.
  • BRIEF DESCRIPTION OF THE DRAWINGS The drawings needed in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
  • FIG. 1 is a schematic diagram of an application scenario of a data buffer bit width conversion circuit
  • FIG. 2 is a schematic structural diagram of a conventional data buffer bit width conversion and buffer circuit
  • FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of an uplink transmission process according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a single-channel uplink transmission process according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a downlink side transmission process according to an embodiment of the present invention. Preferred Embodiments of the Invention
  • Step 101: In the uplink direction, after receiving data input from the data bus, write the data input from the data bus into the uplink shared cache according to the data's direction indication and data valid bit segment indication.
  • As the data is input from the data bus, the direction indication and the valid bit segment indication of the current beat are given at the same time.
  • According to the direction indication and the data valid bit segment indication, the data input from the data bus is stored into the RAM segments of the RAM blocks of the uplink shared cache.
  • Step 102: Poll the uplink shared cache in a fixed time slot order, read out the data in the uplink shared cache, and output it to the corresponding channels.
  • The RAM segments of the RAM blocks of the uplink shared cache are polled in a fixed time slot order; if a polled RAM segment is not empty and the corresponding output channel can receive data, the data in the currently polled RAM segment is read out and output to the corresponding channel.
  • In step 101, the "direction indication" synchronized with the data indicates the channel number to which the current beat of data goes, and the "valid bit segment indication" synchronized with the data indicates how much of the beat (usually measured in units of the channel bit width B, which equals the RAM block bit width) is valid.
  • While the data is stored into the uplink shared cache, the write control module needs to record, per channel, the write position of the tail of the current beat as the basis for writing the next beat destined for the same channel.
  • As shown in the figure, the method of this embodiment includes:
  • Step 103: In the downlink direction, the data output by each channel is stored into the downlink shared cache. When a channel has data to output and the corresponding RAM segment column of the downlink shared cache has free space, the downlink side write control module controls the channel to output the data and stores it into the RAM segments of the RAM blocks of the downlink shared cache.
  • Step 104 Read data from the downlink shared buffer according to a scheduling sequence, and output the data to a data bus.
  • Each RAM segment of each RAM block of the downstream shared buffer is accessed according to the scheduling sequence, and the data is aggregated and outputted onto the data bus.
  • FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention. As shown in FIG. 5, the following modules are included:
  • the uplink and downlink shared cache is used to implement data caching, and the uplink and downlink bit width read/write control module and the uplink and downlink shared cache jointly implement data bit width conversion.
  • the uplink side (data bus to each channel) includes the following parts:
  • The uplink side write control module is configured to receive the data input from the data bus and write it into the uplink shared cache according to the data's direction indication and data valid bit segment indication.
  • the uplink read control module is configured to poll the uplink shared buffer according to a fixed slot sequence, and read out data in the uplink shared buffer to output to each corresponding channel.
  • The uplink shared cache is composed of N blocks of RAM of bit width B, and each block of RAM is logically divided into Y RAM segments for storing the data to be output to each channel.
  • The uplink side read control module reads the data in the uplink shared cache in a fixed time slot order as follows: when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • While writing the data input from the data bus into the uplink shared cache, the uplink side write control module is further configured to record, per channel, the write position of the tail of the current beat.
  • the downlink side (each channel to the data bus) includes the following parts:
  • the downlink side write control module is configured to store the data outputted by each channel into the downlink shared buffer.
  • the downlink read control module is configured to read data from the downlink shared buffer according to a scheduling sequence, and output the data to the data bus.
  • The downlink shared cache is composed of N blocks of RAM of bit width B, and each block of RAM is logically divided into Y RAM segments for storing the data to be output to the data bus.
  • The downlink side write control module is specifically configured to poll the RAM segments in the RAM blocks of the downlink shared cache in a fixed time slot order and, if the currently polled RAM segment column has free space, store the data to be output by the corresponding channel into that RAM segment column.
  • The downlink side write control module polls the RAM segments in the RAM blocks of the downlink shared cache in a fixed time slot order as follows: when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • The downlink side read control module is specifically configured to calculate the total amount of data buffered in each RAM segment column and, when the amount buffered in a column is greater than or equal to the amount required by the current scheduling indication, read the data output by this schedule from the RAM segments according to the scheduling bit segment length indication.
  • The invention performs the bit-width conversion of data by operating on a shared cache in a fixed time slot order, and improves cache efficiency by splicing valid data together when storing it, thereby solving the prior-art problems of low cache utilization and of excessive logic resources and design area in FPGA implementations.
  • An implementation block diagram of the embodiment of the present invention is shown in FIG. 6, FIG. 7 and FIG. 8. The uplink and downlink shared caches are each composed of N simple dual-port RAMs, each of which is logically divided into Y RAM segments.
  • The RAM segments in the same column correspond to the same channel and are called a RAM segment column.
  • The shared cache assigns an address range to each logical RAM segment; during reads and writes to the shared cache, the RAM segments are distinguished by the read/write address. Each logical RAM segment is read and written in fixed time slots: when a segment is selected and the read enable is raised, the data at the head of the segment is output; when it is selected and the write enable is raised, data is written to the tail of the segment.
  • On the uplink side, the direction indication synchronized with the data transmitted on the data bus indicates the channel number the data goes to; the valid bit segment indication synchronized with the data indicates, measured in units of the RAM block width B, how much of the data, starting from the high-order end, is valid.
  • After each beat of data is written, the uplink side write control module records the write end position (the RAM segment in which the beat ended) for the RAM segment column that was written. On each write, the module selects the RAM segment column according to the direction indication and takes that column's "last write end position + 1" as the start position of the current write.
  • According to the data valid bit segment indication, the write enables of "valid bit segment count" RAM blocks are raised in sequence starting from the current write start position.
  • The valid data on the data bus likewise starts from this write start position and is sent in turn to each selected RAM segment.
  • For example, suppose the second and third beats destined for channel m# are selected in succession, with valid bit lengths of 4B and 8B respectively,
  • and the current write end position of RAM segment column m# (corresponding to channel m#) is RAM segment 1#.
  • The second beat is stored, head to tail, into RAM segment 2# through RAM segment 5#, and the write end position of column m# is updated to RAM segment 5# when the write completes.
  • The third beat is stored, head to tail, into RAM segment 5#, RAM segment 6#, RAM segment 7#, RAM segment 0#, RAM segment 1# and RAM segment 2#,
  • and the write end position of column m# is updated to RAM segment 2#.
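A minimal sketch of this per-column write splicing (a hypothetical helper, assuming Y = 8 segments; it is checked against the first beat of the example above):

```python
# Sketch of per-channel write splicing: each RAM segment column remembers
# the segment its last beat ended in, the next beat for that channel starts
# at the following segment and wraps around the Y segments, so no segment
# is wasted on a partially valid beat. Not the patent's RTL.

Y = 8  # number of RAM segments per column (assumed)

def write_beat(last_end: int, valid_segments: int) -> tuple[list[int], int]:
    """Return the segments a beat occupies and the updated end position."""
    start = (last_end + 1) % Y
    segs = [(start + i) % Y for i in range(valid_segments)]
    return segs, segs[-1]

# Second beat of the example: column m# ended at segment 1#, beat is 4B wide.
segs, end = write_beat(last_end=1, valid_segments=4)
assert segs == [2, 3, 4, 5] and end == 5
```

The wrap-around at segment Y-1 is what lets consecutive partially valid beats pack densely into the column.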
  • The uplink side read control module polls the RAM segments of the RAM blocks of the uplink shared cache in a fixed time slot order.
  • When N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • the polling order is as follows:
  • Read time slot 0 accesses: RAM segment 0, RAM block 0; RAM segment 1, RAM block 7; RAM segment 2, RAM block 6; RAM segment 3, RAM block 5; RAM segment 4, RAM block 4; RAM segment 5, RAM block 3; RAM segment 6, RAM block 2; RAM segment 7, RAM block 1.
  • Read time slot 1 accesses: RAM segment 0, RAM block 1; RAM segment 1, RAM block 0; RAM segment 2, RAM block 7; RAM segment 3, RAM block 6; RAM segment 4, RAM block 5; RAM segment 5, RAM block 4; RAM segment 6, RAM block 3; RAM segment 7, RAM block 2.
  • Read time slot 2 accesses: RAM segment 0, RAM block 2; RAM segment 1, RAM block 1; RAM segment 2, RAM block 0; RAM segment 3, RAM block 7; RAM segment 4, RAM block 6; RAM segment 5, RAM block 5; RAM segment 6, RAM block 4; RAM segment 7, RAM block 3.
  • Read time slot 3 accesses: RAM segment 0, RAM block 3; RAM segment 1, RAM block 2; RAM segment 2, RAM block 1; RAM segment 3, RAM block 0; RAM segment 4, RAM block 7; RAM segment 5, RAM block 6; RAM segment 6, RAM block 5; RAM segment 7, RAM block 4.
  • Read time slot 4 accesses: RAM segment 0, RAM block 4; RAM segment 1, RAM block 3; RAM segment 2, RAM block 2; RAM segment 3, RAM block 1; RAM segment 4, RAM block 0; RAM segment 5, RAM block 7; RAM segment 6, RAM block 6; RAM segment 7, RAM block 5.
  • Read time slot 5 accesses: RAM segment 0, RAM block 5; RAM segment 1, RAM block 4; RAM segment 2, RAM block 3; RAM segment 3, RAM block 2; RAM segment 4, RAM block 1; RAM segment 5, RAM block 0; RAM segment 6, RAM block 7; RAM segment 7, RAM block 6.
  • Read time slot 6 accesses: RAM segment 0, RAM block 6; RAM segment 1, RAM block 5; RAM segment 2, RAM block 4; RAM segment 3, RAM block 3; RAM segment 4, RAM block 2; RAM segment 5, RAM block 1; RAM segment 6, RAM block 0; RAM segment 7, RAM block 7.
  • Read time slot 7 accesses: RAM segment 0, RAM block 7; RAM segment 1, RAM block 6; RAM segment 2, RAM block 5; RAM segment 3, RAM block 4; RAM segment 4, RAM block 3; RAM segment 5, RAM block 2; RAM segment 6, RAM block 1; RAM segment 7, RAM block 0.
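The diagonal schedule listed above (with N = Y = 8) admits a compact description: in read time slot t, RAM segment s is served from RAM block (t - s) mod N, so every slot touches all Y segments while each RAM block receives exactly one access per slot. This closed form is an observation from the table, not a formula quoted from the patent:

```python
# Reconstructed closed form of the fixed time slot polling schedule above,
# for the N = Y = 8 example (an observation from the listed table).

N = Y = 8

def block_for(slot: int, segment: int) -> int:
    """RAM block accessed for a given segment in a given read time slot."""
    return (slot - segment) % N

# Spot-checks against the listed schedule.
assert block_for(0, 1) == 7   # slot 0: segment 1 -> RAM block 7
assert block_for(3, 3) == 0   # slot 3: segment 3 -> RAM block 0
assert block_for(7, 0) == 7   # slot 7: segment 0 -> RAM block 7
```

Because each block appears exactly once per slot, a single-port RAM per block suffices for the read side of the schedule.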
  • On the uplink side, each RAM segment column records its last read end position. If the currently polled RAM segment is at "this column's last read end position + 1", the segment is not empty, and the corresponding output channel can currently receive data, then the read enable of the RAM block containing the segment is raised and the data read out is sent to the channel corresponding to the column; in all other cases, no data is output.
  • The downlink side write control module polls the RAM segments in the RAM blocks of the downlink shared cache in a fixed time slot order, the same order as the uplink read slot polling.
  • When N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments;
  • when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments.
  • The downlink side write control module records the last write end position for each RAM segment column. If the currently polled RAM segment is at "this column's last write end position + 1", the segment has free space, and the corresponding channel has data to output, then the write enable of the RAM block containing the segment is raised and the data input from the channel is written into the segment; in all other cases, no write operation is performed.
  • The downlink side read control module calculates the total amount of data buffered in each RAM segment column. When the amount buffered in a column is greater than or equal to the amount required by the current scheduling indication, the read control module starts a read operation and at the same time gives a scheduling bit segment length indication (measured in units of the RAM block width).
  • The downlink side read control module records the last read end position for each RAM segment column.
  • Taking the RAM segment at "this column's last read end position + 1" as the start position, the module reads the data output by this schedule from the RAM segments according to the scheduling bit segment length and sends it to the data bus for output.
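The downlink read condition can be sketched as follows (a simplified model with hypothetical names, assuming Y = 8 and amounts counted in RAM segments, not the patent's RTL):

```python
# Sketch of the downlink scheduled read: a column is only read once it has
# buffered at least the amount the current scheduling indication requires,
# and the read then proceeds from "last read end position + 1" onwards.

Y = 8  # RAM segments per column (assumed)

def try_schedule(buffered: int, need: int, last_read_end: int):
    """Return the segments to read for this schedule, or None if the
    column has not yet buffered enough data."""
    if buffered < need:
        return None                       # not enough data buffered yet
    start = (last_read_end + 1) % Y
    return [(start + i) % Y for i in range(need)]

assert try_schedule(buffered=3, need=5, last_read_end=0) is None
assert try_schedule(buffered=6, need=5, last_read_end=0) == [1, 2, 3, 4, 5]
```

Waiting until the column holds a full schedule's worth of data is what guarantees the aggregated beat put onto the data bus is complete.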
  • Because valid data is stored by splicing during both the uplink and downlink write processes, cache utilization is high.
  • Compared with the solution described in the background art, because the cache is implemented with N blocks of RAM of width B, when the data bus width A is much larger than B the on-chip cache resources can be used effectively in a concrete implementation, reducing design area and timing delay; this is especially obvious when the number of channels Y is large.
  • In summary, the embodiments of the present invention have the following advantages:
  • The method and apparatus provided by the embodiments of the present invention buffer and bit-width convert only the valid bit segments of the data during data processing, giving high cache utilization and transmission efficiency.
  • Because the implementation uses N RAM blocks of bit width B, it can use on-chip RAM resources effectively and reduce design area when the data bus is wide and the design is implemented on an FPGA, overcoming the prior-art problems of low cache utilization, excessive cache resource consumption in concrete implementations, and pressure on area and timing.


Abstract

A data processing method and apparatus. The method includes: after receiving data input from a data bus, writing the data input from the data bus into an uplink shared cache according to the data's direction indication and data valid bit segment indication; polling the uplink shared cache in a fixed time slot order, reading out the data in the uplink shared cache, and outputting it to the corresponding channels. The embodiments of the present invention reliably implement data buffering and bit-width conversion while effectively saving cache resources, reducing area and timing pressure, and improving cache utilization.

Description

Data processing method and apparatus
Technical Field
The present invention relates to the field of data communications, and in particular to a data processing method and apparatus. Background Art
In large-scale digital logic design in the data communications field, the data path of a logic circuit sometimes needs bit-width conversion, either to increase the circuit's processing capability or to match the bus widths on its two sides. Because the total bandwidths on the two sides of the circuit are not necessarily equal, or congestion may occur on one side, the circuit also needs to be able to buffer the data to be transmitted.
Consider the application scenario shown in Figure 1. On the uplink side, data is input from the data bus on one side and transmitted to Y channels (Y is an integer greater than or equal to 1) on the other side; on the downlink side, data is aggregated from the Y channels according to scheduling indications and output onto the data bus. Each beat of data transmitted on the data bus is destined for exactly one channel; to use each channel's bandwidth effectively, only valid data is transmitted on the channels. The data bus has a bit width of A and each channel a bus width of B, where A is N times B (N is an integer greater than or equal to 1). Because the bandwidth of the data bus is not necessarily equal to the total bandwidth of the channels, and congestion may occur on the data bus or on any channel, buffers must be provided on both the uplink and downlink transmission paths.
A logic circuit such as that shown in Figure 2 is usually used to implement the data buffering and bit-width conversion: FIFOs (first in, first out) buffer the data and an independent bit-width conversion splitting circuit converts the bit width. On the uplink side, a channel identification and distribution circuit distributes each beat of data, together with its valid bit segment indication, to the input FIFO corresponding to the channel indicated by the data's direction on the input data bus; each input FIFO has a bit width of "A + width of the data valid bit segment indication". When the corresponding channel can receive data, the data is read from the input FIFO and the bit-width conversion splitting circuit converts its valid part, according to the valid bit segment indication, into a data stream of bit width B sent to the channel. On the downlink side, data from each channel is first converted by a bit-width conversion splicing circuit into data of bit width A and written into the output FIFO corresponding to that channel; when the data bus can receive data, a data selection and aggregation circuit reads data from the output FIFOs in scheduling order and aggregates the output onto the output data bus. The bit-width conversion splitting circuit that performs the conversion consists mainly of a demultiplexer (DMUX) and works as follows:
For each channel, a beat of data of bit width A is read from the input FIFO together with its valid bit segment indication and first stored in a register. In the first cycle the splitting circuit selects and outputs the width-B data at its head; in the second cycle it outputs the adjacent width-B data, and so on until all the valid data has been scheduled out; the circuit then turns to the next beat read from the input FIFO and continues the bit-width conversion in the same way.
The bit-width conversion splicing circuit is essentially the inverse of the splitting circuit; it consists mainly of a multiplexer (MUX) and works as follows:
For each channel, width-B data output from the channel is spliced by the splicing circuit, in output order, into data of width A and written into the corresponding output FIFO.
With this method, if only some bit segments of a beat (of width A) are valid — whether the beat comes from the uplink data bus or is spliced from a downlink channel by the splicing circuit — the beat still occupies the full "A + data valid bit segment indication width" when stored into its FIFO, exactly as when all its bit segments are valid, so cache utilization is low.
The concrete implementation is also costly. If the circuit is implemented on an FPGA (Field Programmable Gate Array), the Block RAM used to implement FIFOs has a limited bit width and a depth far greater than its width (for a Xilinx Virtex-5 FPGA, for example, the maximum configured bit width of a 36 kb Block RAM is only 36 bits). Since this method requires Y FIFOs of bit width "A + width of the data valid bit segment indication", when the data bus width A is large an FPGA implementation must splice multiple Block RAMs together to build each FIFO. This consumes considerable Block RAM resources and occupies a large design area, especially when the number of channels Y is also large. Even if an ASIC is used to implement the related logic, such wide FIFOs put pressure on back-end placement and timing delay and likewise occupy a large design area. Summary of the Invention
The technical problem to be solved by the present invention is to provide a data processing method and apparatus that save cache resources and improve cache utilization while reliably implementing data buffering and bit-width conversion. To solve the above technical problem, the present invention provides a data processing method, including: after receiving data input from a data bus, writing the data input from the data bus into an uplink shared cache according to the data's direction indication and data valid bit segment indication;
polling the uplink shared cache in a fixed time slot order, reading out the data in the uplink shared cache, and outputting it to the corresponding channels.
Preferably, the above method also has the following features: the uplink shared cache is composed of N blocks of random access memory (RAM) of a specified bit width, each block of RAM being logically divided into Y RAM segments; polling the uplink shared cache in a fixed time slot order and reading out the data in the uplink shared cache includes the following:
when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments. Preferably, the above method also has the following feature: the process of writing the data input from the data bus into the uplink shared cache further includes:
recording, per channel, the write position of the tail of the current beat of data.
To solve the above problem, the present invention further provides a data processing apparatus, including: an uplink side write control module configured to, after receiving data input from the data bus, write the data input from the data bus into an uplink shared cache according to the data's direction indication and data valid bit segment indication;
and an uplink side read control module configured to poll the uplink shared cache in a fixed time slot order, read out the data in the uplink shared cache, and output it to the corresponding channels.
Preferably, the above apparatus also has the following features:
the uplink shared cache is composed of N blocks of random access memory (RAM) of a specified bit width, each block of RAM being logically divided into Y RAM segments;
the uplink side read control module is configured such that reading out the data of the uplink shared cache by polling in a fixed time slot order includes: when N ≥ Y, each polling cycle is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling cycle is Y time slots and each time slot accesses N RAM segments. Preferably, the above apparatus also has the following feature:
the uplink side write control module is further configured to record, per channel, the write position of the tail of the current beat while writing the data input from the data bus into the uplink shared cache.
To solve the above problem, the present invention also provides a data processing method, including:

storing the data output by each channel into a downlink-side shared buffer;

reading the data out of the downlink-side shared buffer in scheduling order and outputting it onto a data bus.

Preferably, the above method also has the following features:

the downlink-side shared buffer consists of N random access memories (RAMs) of a specified width, each RAM being logically divided into Y RAM segments;

storing the data output by each channel into the downlink-side shared buffer includes:

polling each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order, and, if the currently polled RAM segment column has room, storing the data to be output by the corresponding channel into that RAM segment column.

Preferably, the above method also has the following feature: polling each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order includes:

when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.

Preferably, the above method also has the following feature: reading the data out of the downlink-side shared buffer in scheduling order includes:

calculating the total amount of data buffered in each RAM segment column, and, when the amount of data buffered in a RAM segment column is greater than or equal to the amount required by the current scheduling indication, reading the data to be output in this scheduling round out of the RAM segments according to a scheduled segment-length indication.
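The N-versus-Y polling rule that recurs throughout this document reduces to: a polling period lasts max(N, Y) time slots and each slot accesses min(N, Y) RAM segments, so all N × Y (RAM, segment) pairs are visited exactly once per period. A minimal sketch (the function name is ours, not the patent's):

```python
def polling_schedule(n_rams, y_segments):
    """Slots per polling period and RAM segments accessed per slot."""
    slots = max(n_rams, y_segments)      # N slots if N >= Y, else Y slots
    per_slot = min(n_rams, y_segments)   # Y segments if N >= Y, else N
    return slots, per_slot

assert polling_schedule(8, 8) == (8, 8)
assert polling_schedule(16, 8) == (16, 8)  # N >= Y: N slots, Y segments each
assert polling_schedule(4, 8) == (8, 4)    # N < Y: Y slots, N segments each
```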
To solve the above problem, the present invention also provides a data processing device, including:

a downlink-side write control module configured to: store the data output by each channel into a downlink-side shared buffer;

a downlink-side read control module configured to: read the data out of the downlink-side shared buffer in scheduling order and output it onto a data bus.

Preferably, the above device also has the following features:

the downlink-side shared buffer consists of N random access memories (RAMs) of a specified width, each RAM being logically divided into Y RAM segments;

the downlink-side write control module is configured to: poll each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order, and, if the currently polled RAM segment column has room, store the data to be output by the corresponding channel into that RAM segment column.

Preferably, the above device also has the following feature:

the downlink-side write control module is configured to poll each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order in the following manner: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.

Preferably, the above device also has the following feature:

the downlink-side read control module is configured to: calculate the total amount of data buffered in each RAM segment column, and, when the amount of data buffered in a RAM segment column is greater than or equal to the amount required by the current scheduling indication, read the data to be output in this scheduling round out of the RAM segments according to a scheduled segment-length indication.
In summary, the embodiments of the present invention provide a data processing method and device that reliably implement data buffering and bit-width conversion while effectively saving buffer resources, reducing area and timing pressure, and improving buffer utilization.

Brief Description of the Drawings

The drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a data buffering and bit-width conversion circuit;

Fig. 2 is a schematic structural diagram of an existing data bit-width conversion and buffering circuit;

Fig. 5 is a schematic diagram of a data processing device according to an embodiment of the present invention;

Fig. 6 is a schematic diagram of the uplink-side transmission process according to an embodiment of the present invention;

Fig. 7 is a schematic diagram of the single-channel uplink-side transmission process according to an embodiment of the present invention;

Fig. 8 is a schematic diagram of the downlink-side transmission process according to an embodiment of the present invention.

Preferred Embodiments of the Present Invention
The embodiments of the present invention are described in detail below with reference to the drawings. It should be noted that, provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another arbitrarily. The method of this embodiment includes:
Step 101: in the uplink direction, after data input from the data bus is received, the data input from the data bus is written into the uplink-side shared buffer according to the destination indication and the data valid-segment indication of the data.

While the data is input from the data bus, a destination indication and a valid-segment indication of the current beat are given along with it. According to the destination indication and the data valid-segment indication, the data input from the data bus is stored into the RAM segments of the RAMs of the uplink-side shared buffer.

Step 102: the uplink-side shared buffer is polled in a fixed time-slot order, and the data in the uplink-side shared buffer is read out and output to the corresponding channels.

The RAM segments of the RAMs of the uplink-side shared buffer are polled in a fixed time-slot order; if the polled RAM segment is not empty and the corresponding output channel can accept data, the data in the currently polled RAM segment is read out and output to the corresponding channel.

In Step 101, the "destination indication" synchronized with the data indicates the channel number to which the current beat of data goes, while the "valid-segment indication" synchronized with the data indicates how much of the current beat (usually measured in units of the channel width B, i.e. the RAM width) is valid. While the data is stored into the uplink-side shared buffer, the write control module needs to record, per channel, the write position of the tail of the current beat, as the basis for writing the next beat destined for the same channel. In the downlink direction, the method of this embodiment includes:
Step 103: in the downlink direction, the data output by each channel is stored into the downlink-side shared buffer.

When a channel has data to output and the corresponding RAM segment of the downlink-side shared buffer has room, the channel outputs the data under the control of the downlink-side write control module, and the data is stored into the RAM segments of the RAMs of the downlink-side shared buffer.

Step 104: the data is read out of the downlink-side shared buffer in scheduling order and output onto the data bus.

The RAM segments of the RAMs of the downlink-side shared buffer are accessed in scheduling order, and the data is aggregated and output onto the data bus.
Fig. 5 is a schematic diagram of a data processing device according to an embodiment of the present invention. As shown in Fig. 5, the device includes the following modules:

an uplink-side write control module, an uplink-side read control module, an uplink-side shared buffer, a downlink-side write control module, a downlink-side read control module, and a downlink-side shared buffer. The uplink and downlink shared buffers implement the data buffering function, while the uplink and downlink read/write control modules, together with the shared buffers, implement the bit-width conversion of the data.
Divided by data flow direction, the uplink side (from the data bus to the channels) includes the following parts:

an uplink-side write control module, used to write the data input from the data bus, after the data is received, into the uplink-side shared buffer according to the destination indication and the data valid-segment indication of the data;

an uplink-side read control module, used to poll the uplink-side shared buffer in a fixed time-slot order, read the data out of the uplink-side shared buffer, and output it to the corresponding channels;

an uplink-side shared buffer, consisting of N RAMs of width B, each RAM divided into Y RAM segments, used to store the data to be output to the channels.

The uplink-side read control module polls the uplink-side shared buffer in a fixed time-slot order and reads the data out in the following manner: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments. In the process of writing the data input from the data bus into the uplink-side shared buffer, the uplink-side write control module also records, per channel, the write position of the tail of the current beat of data.
Divided by data flow direction, the downlink side (from the channels to the data bus) includes the following parts:

a downlink-side write control module, used to store the data output by each channel into the downlink-side shared buffer;

a downlink-side read control module, used to read the data out of the downlink-side shared buffer in scheduling order and output it onto the data bus;

a downlink-side shared buffer, consisting of N RAMs of width B, each RAM logically divided into Y RAM segments, used to store the data to be output to the data bus.

The downlink-side write control module is specifically used to poll each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order and, if the currently polled RAM segment column has room, store the data to be output by the corresponding channel into that RAM segment column. It polls in the following manner: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.

The downlink-side read control module is specifically used to calculate the total amount of data buffered in each RAM segment column and, when the amount of data buffered in a RAM segment column is greater than or equal to the amount required by the current scheduling indication, read the data to be output in this scheduling round out of the RAM segments according to the scheduled segment-length indication.
The present invention is described in further detail below with reference to the drawings and specific embodiments. The described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.

The present invention implements the bit-width conversion of data by operating on a shared buffer in a fixed time-slot order, and improves buffer efficiency by splicing valid data together for storage, thereby solving the prior-art problems of low buffer utilization and of excessive logic resources and design area in FPGA implementations.

Block diagrams of an embodiment of the present invention are shown in Figs. 6, 7 and 8. The uplink and downlink shared buffers each consist of N simple dual-port RAMs, and each RAM is logically divided into Y RAM segments. The RAM segments in the same column correspond to the same channel and are called a RAM segment column. The shared buffer allocates an address range to each logical RAM segment, and during read and write operations on the shared buffer the individual RAM segments are distinguished by the read/write address. Each logical RAM segment is read/write controlled in fixed time slots: when it is selected and its read enable is asserted, the data at the head of the RAM segment is output; when it is selected and its write enable is asserted, the data is written to the tail of the RAM segment.
For simplicity, the working process of the present invention is described in detail below only for the case N = Y = 8. The whole uplink write and read process is shown in Fig. 6.

Uplink write process:

As shown in Fig. 6, on the uplink side the destination indication synchronized with the data on the data bus specifies the channel number to which the current beat goes, and the valid-segment indication synchronized with the data specifies how much of the current beat, starting from the high-order end and measured in units of the RAM width B, is valid.

After each beat of data is written, the uplink-side write control module records, for the RAM segment column it wrote to, the end position of this write (i.e. which RAM segment of that column was written last). On each write, the uplink-side write control module selects the corresponding RAM segment column according to the destination indication, takes "the previous write end position of that RAM segment column + 1" as the start position of this write, and, according to the data valid-segment indication, asserts the write enables of as many RAMs as there are valid segments, in turn, starting from the current write start position. The valid data on the data bus is likewise sent, starting from this write start position, to the currently selected RAM segments in turn.

In this example, as shown in Fig. 7, the second and third beats going to channel m# have valid-segment lengths of 4B and 5B respectively. Before the second beat is written, the write end position of RAM segment column m#, which corresponds to channel m#, is RAM segment 1#. When the first of these beats is written, it is stored head to tail into RAM segments 2# through 5#, and after the write the end position of RAM segment column m# is updated to RAM segment 5#. When the second of these beats is written, it is stored head to tail into RAM segment 6#, RAM segment 7#, RAM segment 0#, RAM segment 1# and RAM segment 2#, and after the write the end position of RAM segment column m# is updated to RAM segment 2#.
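The write-position bookkeeping in the example above can be checked with a small model. This is an illustrative sketch only (the patent describes write-enable logic on real RAMs; the function and argument names are ours):

```python
Y = 8  # RAM segments per column; N = Y = 8 in the example

def segments_for_beat(last_end, valid_segments, n_rams=Y):
    """RAM segments a beat occupies, plus the new write end position."""
    start = (last_end + 1) % n_rams          # "previous end position + 1"
    segs = [(start + i) % n_rams for i in range(valid_segments)]
    return segs, segs[-1]

segs, end = segments_for_beat(last_end=1, valid_segments=4)
assert segs == [2, 3, 4, 5] and end == 5     # first beat: segments 2#-5#
segs, end = segments_for_beat(last_end=5, valid_segments=5)
assert segs == [6, 7, 0, 1, 2] and end == 2  # second beat wraps past 7#
```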
Uplink read process:

The uplink-side read control module polls the RAM segments of the RAMs of the uplink-side shared buffer slot by slot in a fixed order. When N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments. For any single RAM segment column, two adjacent RAM segments are accessed in two consecutive time slots. In this example, the polling order is as follows:
Read slot 0: access segment 0 of RAM 0; segment 1 of RAM 7; segment 2 of RAM 6; segment 3 of RAM 5; segment 4 of RAM 4; segment 5 of RAM 3; segment 6 of RAM 2; segment 7 of RAM 1.

Read slot 1: access segment 0 of RAM 1; segment 1 of RAM 0; segment 2 of RAM 7; segment 3 of RAM 6; segment 4 of RAM 5; segment 5 of RAM 4; segment 6 of RAM 3; segment 7 of RAM 2.

Read slot 2: access segment 0 of RAM 2; segment 1 of RAM 1; segment 2 of RAM 0; segment 3 of RAM 7; segment 4 of RAM 6; segment 5 of RAM 5; segment 6 of RAM 4; segment 7 of RAM 3.

Read slot 3: access segment 0 of RAM 3; segment 1 of RAM 2; segment 2 of RAM 1; segment 3 of RAM 0; segment 4 of RAM 7; segment 5 of RAM 6; segment 6 of RAM 5; segment 7 of RAM 4.

Read slot 4: access segment 0 of RAM 4; segment 1 of RAM 3; segment 2 of RAM 2; segment 3 of RAM 1; segment 4 of RAM 0; segment 5 of RAM 7; segment 6 of RAM 6; segment 7 of RAM 5.

Read slot 5: access segment 0 of RAM 5; segment 1 of RAM 4; segment 2 of RAM 3; segment 3 of RAM 2; segment 4 of RAM 1; segment 5 of RAM 0; segment 6 of RAM 7; segment 7 of RAM 6.

Read slot 6: access segment 0 of RAM 6; segment 1 of RAM 5; segment 2 of RAM 4; segment 3 of RAM 3; segment 4 of RAM 2; segment 5 of RAM 1; segment 6 of RAM 0; segment 7 of RAM 7.

Read slot 7: access segment 0 of RAM 7; segment 1 of RAM 6; segment 2 of RAM 5; segment 3 of RAM 4; segment 4 of RAM 3; segment 5 of RAM 2; segment 6 of RAM 1; segment 7 of RAM 0.

Each uplink RAM segment column records the end position of its previous read. If the currently polled RAM segment position equals "the previous read end position of that RAM segment column + 1", the RAM segment is not empty, and its corresponding output channel can currently accept data, then the read enable of the RAM containing that segment is asserted and the data read out is sent to the corresponding channel; otherwise, no data is output.
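The read-slot table above follows a simple diagonal pattern: in time slot t, segment s is accessed in RAM (t - s) mod N, so within any one slot no two segments contend for the same RAM. A sketch that regenerates the table (illustrative only):

```python
N = 8  # number of RAMs, equal to the number of segments in this example

def read_slot(t, n=N):
    """RAM index accessed for each segment 0..n-1 during time slot t."""
    return [(t - s) % n for s in range(n)]

assert read_slot(0) == [0, 7, 6, 5, 4, 3, 2, 1]  # matches read slot 0 above
assert read_slot(1) == [1, 0, 7, 6, 5, 4, 3, 2]  # matches read slot 1 above
# Each slot touches every RAM exactly once, so the accesses never collide.
assert all(len(set(read_slot(t))) == N for t in range(N))
```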
The whole downlink write and read process is shown in Fig. 8.

Downlink write process:

The downlink-side write control module polls the RAM segments of the RAMs of the downlink-side shared buffer slot by slot in a fixed order, the same order as the uplink read-slot polling. When N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.

The downlink-side write control module records the previous write position for each RAM segment column. If the currently polled RAM segment position equals "the previous write position of that RAM segment column + 1", the RAM segment has room, and the corresponding channel has data to output, then the write enable of the RAM containing that segment is asserted and the data input from the corresponding channel is sent to that RAM segment; otherwise, no write operation is performed.
Downlink read process:

The downlink-side read control module calculates the total amount of buffered data for each RAM segment column. Only when the amount of data buffered in a RAM segment column is greater than or equal to the amount required by the current scheduling indication does the downlink-side read control module initiate a read operation, at the same time giving the scheduled segment-length indication (measured in units of the RAM width).

The downlink-side read control module records the end position of the previous read for each RAM segment column. When a read operation is initiated, the downlink-side read control module takes the RAM segment at "the previous read end position of that RAM segment column + 1" as the start position and, according to the scheduled segment-length indication, reads the data to be output in this scheduling round out of the RAM segments and sends it to the data bus for output.
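The threshold test and scheduled read above can be modeled with a queue per RAM segment column. A minimal sketch (the queue representation and names are our assumptions, not the patent's):

```python
from collections import deque

def scheduled_read(column, length):
    """Pop `length` words from a column, or nothing if it holds too few."""
    if len(column) < length:
        return []                # buffered amount below the scheduled need
    return [column.popleft() for _ in range(length)]

col = deque(["w0", "w1", "w2"])          # column currently buffers 3 words
assert scheduled_read(col, 4) == []      # 3 < 4: read is not initiated
col.append("w3")
assert scheduled_read(col, 4) == ["w0", "w1", "w2", "w3"]
```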
As can be seen from the above description of the embodiments, while the data bit-width conversion and buffering functions are implemented, the splicing used to buffer valid data during the uplink-side and downlink-side write processes gives a buffer utilization higher than that of the scheme described in the background art. In addition, because the implementation uses N buffers of width B, when the data bus width A is much larger than B the on-chip buffer resources can be used effectively and the design area and timing delay reduced, which is especially noticeable when the number of channels Y is large.
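The utilization comparison can be made concrete with a toy count. All numbers here are invented for illustration; only the counting rule comes from the text (a FIFO stores the full A width per beat, while the shared buffer stores only the valid words):

```python
A_SEGMENTS = 8           # bus width A = 8 words of channel width B
beats = [4, 8, 2, 6]     # invented valid-word counts for four beats

fifo_cells = len(beats) * A_SEGMENTS   # FIFO scheme: full width every beat
shared_cells = sum(beats)              # shared buffer: valid words only
assert fifo_cells == 32 and shared_cells == 20
# The FIFO scheme keeps 20 valid out of 32 stored words (62.5% utilization);
# the shared buffer stores only the 20 valid words.
```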
Those of ordinary skill in the art will understand that all or some of the steps of the above methods may be accomplished by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disc. Optionally, all or some of the steps of the above embodiments may also be implemented with one or more integrated circuits. Accordingly, the modules/units in the above embodiments may be implemented in the form of hardware or in the form of software function modules. The present invention is not limited to any specific combination of hardware and software.

The above are only preferred embodiments of the present invention. Of course, the present invention may have various other embodiments, and those familiar with the art can make corresponding changes and variations according to the present invention without departing from its spirit and essence; all such changes and variations shall fall within the protection scope of the appended claims of the present invention.
Industrial Applicability

Compared with the prior art, the embodiments of the present invention have the following advantages:

The method and device provided by the embodiments of the present invention buffer and bit-width-convert only the valid segments of the data, so buffer utilization and transmission efficiency are high. Moreover, because the implementation uses N RAM blocks of width B, when the data bus width is large and an FPGA implementation is used, the on-chip RAM resources can be used effectively and the design area reduced. This overcomes the problems and defects of the prior art, namely low buffer utilization, excessive buffer resource consumption in concrete implementations, and excessive area and timing pressure.

Claims

1. A data processing method, comprising:

after data input from a data bus is received, writing the data input from the data bus into an uplink-side shared buffer according to a destination indication and a data valid-segment indication of the data; and

polling the uplink-side shared buffer in a fixed time-slot order, reading the data out of the uplink-side shared buffer, and outputting it to the corresponding channels.
2. The method according to claim 1, wherein the uplink-side shared buffer consists of N random access memories (RAMs) of a specified width, each RAM being logically divided into Y RAM segments, and

polling the uplink-side shared buffer in a fixed time-slot order and reading the data out of the uplink-side shared buffer comprises:

when N ≥ Y, each polling period being N time slots, each time slot accessing Y RAM segments; and when N < Y, each polling period being Y time slots, each time slot accessing N RAM segments.

3. The method according to claim 1 or 2, wherein the process of writing the data input from the data bus into the uplink-side shared buffer further comprises:

recording, per channel, the write position of the tail of the current beat of data.
4. A data processing device, comprising:

an uplink-side write control module configured to: after data input from a data bus is received, write the data input from the data bus into an uplink-side shared buffer according to a destination indication and a data valid-segment indication of the data; and

an uplink-side read control module configured to: poll the uplink-side shared buffer in a fixed time-slot order, read the data out of the uplink-side shared buffer, and output it to the corresponding channels.

5. The device according to claim 4, wherein

the uplink-side shared buffer consists of N random access memories (RAMs) of a specified width, each RAM being logically divided into Y RAM segments, and

the uplink-side read control module is configured to poll the uplink-side shared buffer in a fixed time-slot order and read the data out in the following manner: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.

6. The device according to claim 4 or 5, wherein

the uplink-side write control module is further configured, in the process of writing the data input from the data bus into the uplink-side shared buffer, to record, per channel, the write position of the tail of the current beat of data.
7. A data processing method, comprising:

storing the data output by each channel into a downlink-side shared buffer; and

reading the data out of the downlink-side shared buffer in scheduling order and outputting it onto a data bus.

8. The method according to claim 7, wherein

the downlink-side shared buffer consists of N random access memories (RAMs) of a specified width, each RAM being logically divided into Y RAM segments, and

storing the data output by each channel into the downlink-side shared buffer comprises:

polling each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order, and, if the currently polled RAM segment column has room, storing the data to be output by the corresponding channel into that RAM segment column.

9. The method according to claim 8, wherein polling each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order comprises:

when N ≥ Y, each polling period being N time slots, each time slot accessing Y RAM segments; and when N < Y, each polling period being Y time slots, each time slot accessing N RAM segments.

10. The method according to any one of claims 7 to 9, wherein reading the data out of the downlink-side shared buffer in scheduling order comprises:

calculating the total amount of data buffered in each RAM segment column, and, when the amount of data buffered in a RAM segment column is greater than or equal to the amount required by the current scheduling indication, reading the data to be output in this scheduling round out of the RAM segments according to a scheduled segment-length indication.
11. A data processing device, comprising:

a downlink-side write control module configured to: store the data output by each channel into a downlink-side shared buffer; and

a downlink-side read control module configured to: read the data out of the downlink-side shared buffer in scheduling order and output it onto a data bus.

12. The device according to claim 11, wherein

the downlink-side shared buffer consists of N random access memories (RAMs) of a specified width, each RAM being logically divided into Y RAM segments, and

the downlink-side write control module is configured to: poll each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order, and, if the currently polled RAM segment column has room, store the data to be output by the corresponding channel into that RAM segment column.

13. The device according to claim 12, wherein

the downlink-side write control module is configured to poll each RAM segment in each RAM of the downlink-side shared buffer in a fixed time-slot order in the following manner: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.

14. The device according to any one of claims 11 to 13, wherein

the downlink-side read control module is configured to: calculate the total amount of data buffered in each RAM segment column, and, when the amount of data buffered in a RAM segment column is greater than or equal to the amount required by the current scheduling indication, read the data to be output in this scheduling round out of the RAM segments according to a scheduled segment-length indication.
PCT/CN2013/084481 2012-10-09 2013-09-27 一种数据处理方法和装置 WO2014056405A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
ES13844885T ES2749519T3 (es) 2012-10-09 2013-09-27 Método y dispositivo para procesar datos
US14/434,564 US9772946B2 (en) 2012-10-09 2013-09-27 Method and device for processing data
EP13844885.7A EP2908251B1 (en) 2012-10-09 2013-09-27 Method and device for processing data
KR1020157011938A KR20150067321A (ko) 2012-10-09 2013-09-27 데이터 처리 방법 및 장치
JP2015535968A JP6077125B2 (ja) 2012-10-09 2013-09-27 データ処理方法及び装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210379871.XA CN103714038B (zh) 2012-10-09 2012-10-09 一种数据处理方法和装置
CN201210379871.X 2012-10-09

Publications (1)

Publication Number Publication Date
WO2014056405A1 true WO2014056405A1 (zh) 2014-04-17

Family

ID=50407031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/084481 WO2014056405A1 (zh) 2012-10-09 2013-09-27 一种数据处理方法和装置

Country Status (7)

Country Link
US (1) US9772946B2 (zh)
EP (1) EP2908251B1 (zh)
JP (1) JP6077125B2 (zh)
KR (1) KR20150067321A (zh)
CN (1) CN103714038B (zh)
ES (1) ES2749519T3 (zh)
WO (1) WO2014056405A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105607888A (zh) * 2014-11-25 2016-05-25 中兴通讯股份有限公司 数据位宽转换方法及装置
CN104850515B (zh) * 2015-04-28 2018-03-06 华为技术有限公司 一种缓存信元数据的方法、装置和设备
US10205672B2 (en) * 2015-09-11 2019-02-12 Cirrus Logic, Inc. Multi-device synchronization of devices
CN107920258B (zh) * 2016-10-11 2020-09-08 中国移动通信有限公司研究院 一种数据处理方法及装置
CN109388590B (zh) * 2018-09-28 2021-02-26 中国电子科技集团公司第五十二研究所 提升多通道dma访问性能的动态缓存块管理方法和装置
CN109818603B (zh) * 2018-12-14 2023-04-28 深圳市紫光同创电子有限公司 一种位宽转换电路的复用方法及位宽转换电路
CN110134365B (zh) * 2019-05-21 2022-10-11 合肥工业大学 一种多通道并行读出fifo的方法及装置
CN115004654A (zh) * 2020-11-27 2022-09-02 西安诺瓦星云科技股份有限公司 数据传输方法、装置、通信系统、存储介质和处理器
CN114153763A (zh) * 2021-11-09 2022-03-08 中国船舶重工集团公司第七一五研究所 一种高带宽低延时算法处理的fpga硬件实现方法
CN115061959B (zh) * 2022-08-17 2022-10-25 深圳比特微电子科技有限公司 数据交互方法、装置、系统、电子设备和存储介质
WO2024040604A1 (zh) * 2022-08-26 2024-02-29 华为技术有限公司 一种数据传输方法及装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1477532A (zh) * 2002-08-20 2004-02-25 华为技术有限公司 控制芯片片内存储装置及其存储方法
CN1674477A (zh) * 2004-03-26 2005-09-28 华为技术有限公司 一种实现时分复用电路位宽转换的装置及方法
CN101291275A (zh) * 2008-06-02 2008-10-22 杭州华三通信技术有限公司 Spi4.2总线桥接实现方法及spi4.2总线桥接器件

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6754741B2 (en) * 2001-05-10 2004-06-22 Pmc-Sierra, Inc. Flexible FIFO system for interfacing between datapaths of variable length
US7613856B2 (en) * 2004-10-21 2009-11-03 Lsi Corporation Arbitrating access for a plurality of data channel inputs with different characteristics
WO2009089301A1 (en) * 2008-01-07 2009-07-16 Rambus Inc. Variable-width memory module and buffer
CN101789845B (zh) * 2010-02-22 2013-01-16 烽火通信科技股份有限公司 应用sfec的光传送网中总线位宽变换实现方法及电路
CN101894005A (zh) * 2010-05-26 2010-11-24 上海大学 高速接口向低速接口的异步fifo传输方法
CN102541506B (zh) * 2010-12-29 2014-02-26 深圳市恒扬科技有限公司 一种fifo数据缓存器、芯片以及设备

Also Published As

Publication number Publication date
EP2908251A1 (en) 2015-08-19
US20150301943A1 (en) 2015-10-22
EP2908251B1 (en) 2019-06-12
JP2016503526A (ja) 2016-02-04
EP2908251A4 (en) 2015-12-30
ES2749519T3 (es) 2020-03-20
CN103714038B (zh) 2019-02-15
KR20150067321A (ko) 2015-06-17
JP6077125B2 (ja) 2017-02-08
US9772946B2 (en) 2017-09-26
CN103714038A (zh) 2014-04-09


Legal Events

121 (Ep: the epo has been informed by wipo that ep was designated in this application). Ref document number: 13844885; Country of ref document: EP; Kind code of ref document: A1.
ENP (Entry into the national phase). Ref document number: 2015535968; Country of ref document: JP; Kind code of ref document: A.
NENP (Non-entry into the national phase). Ref country code: DE.
ENP (Entry into the national phase). Ref document number: 20157011938; Country of ref document: KR; Kind code of ref document: A.
WWE (Wipo information: entry into national phase). Ref document number: 2013844885; Country of ref document: EP.
WWE (Wipo information: entry into national phase). Ref document number: 14434564; Country of ref document: US.