WO2014056405A1 - Data processing method and apparatus - Google Patents
Data processing method and apparatus
- Publication number
- WO2014056405A1 WO2014056405A1 PCT/CN2013/084481 CN2013084481W WO2014056405A1 WO 2014056405 A1 WO2014056405 A1 WO 2014056405A1 CN 2013084481 W CN2013084481 W CN 2013084481W WO 2014056405 A1 WO2014056405 A1 WO 2014056405A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- ram
- segment
- shared cache
- time slot
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/084—Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1021—Hit rate improvement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/28—Using a specific disk cache architecture
- G06F2212/281—Single cache
Definitions
- the present invention relates to the field of data communications, and in particular, to a data processing method and apparatus.
Background Art
- on the uplink side, data enters from one side of the data bus and is distributed to Y channels (Y is an integer greater than or equal to 1) on the other side.
- on the downlink side, data from the Y channels is aggregated and output onto the data bus according to scheduling indications.
- each beat of data transmitted on the data bus is destined for only one channel; to use the bandwidth of each channel effectively, only valid data is transmitted on each channel.
- the data bus has a bit width of A, each channel has a bit width of B, and A is N times B (N is an integer greater than or equal to 1). Because the bandwidth of the data bus is not necessarily equal to the total bandwidth of the channels, and congestion may occur on both the data bus and the channels, buffers must be placed on both the uplink and downlink transmission paths.
- data buffering and bit width conversion are usually implemented with the logic circuit shown in Figure 2: FIFOs (first in, first out) provide the data buffering, and independent bit width conversion splitting and splicing circuits perform the bit width conversion of the data.
- the channel distribution circuit distributes the data and its valid bit segment indication, according to the direction indication of the data on the input data bus, into the input FIFO of the corresponding channel, whose bit width is "A + width of the data valid bit segment indication"; when the corresponding channel can receive data, the data is read from the input FIFO, and the bit width conversion splitting circuit converts the valid part of the data into a data stream of bit width B according to the data valid bit segment indication and sends it to the corresponding channel.
- in the opposite direction, the bit width conversion splicing circuit first converts the data from each channel into data of bit width A, which is then written into the output FIFO corresponding to each channel;
- the data selection convergence circuit reads data from each output FIFO in scheduling order and aggregates it onto the output data bus.
- the bit width conversion splitting circuit that realizes the data bit width conversion is mainly composed of a demultiplexer (DMUX); it works as follows:
- the data of bit width A, together with its data valid bit segment indication, is read out of the input FIFO and first stored in a register.
- in the first cycle, the bit width conversion splitting circuit selects and outputs the width-B segment at the head of the data; in the second cycle, it outputs the width-B segment adjacent to the previous one, and so on until all valid data has been scheduled out.
- the bit width conversion splitting circuit then turns to the next beat of data read from the input FIFO and continues the bit width conversion as described above.
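As a software analogy (illustrative only; the patent describes a hardware DMUX, and the names below are assumptions), the splitting step can be sketched as emitting the valid B-wide units of one A-wide beat, head first, one per cycle:

```python
def split_beat(beat_units, valid_count):
    """Emit the valid B-wide units of one A-wide beat, head first.

    beat_units: one beat as a list of N units, each B bits wide,
    highest-order unit first. valid_count: how many leading units
    are valid, per the data valid bit segment indication.
    """
    if not 0 < valid_count <= len(beat_units):
        raise ValueError("valid bit segment indication out of range")
    for unit in beat_units[:valid_count]:
        yield unit  # one width-B unit per cycle, until all valid data is out
```

For example, a beat of four units with a valid indication of 2 yields only the two head units.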
- the bit width conversion splicing circuit is essentially the inverse of the bit width conversion splitting circuit; it is mainly composed of a multiplexer (MUX) and works as follows:
- for each channel, as data of bit width B is output from the channel, the bit width conversion splicing circuit splices it, in output order, into data of width A and writes it into the corresponding output FIFO.
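The inverse splicing step can be sketched the same way (again an illustrative model, not the patent's circuit): width-B units arriving from a channel are grouped in output order into width-A beats of n = A / B units, with any trailing partial beat held back:

```python
def splice_stream(units, n):
    """Group B-wide units, in arrival order, into A-wide beats (A = n * B).
    Returns the complete beats; a trailing partial beat is held back,
    as the MUX-based splicing circuit waits for a full width-A word."""
    complete = len(units) - len(units) % n
    return [units[i:i + n] for i in range(0, complete, n)]
```

With n = 2, five arriving units produce two complete beats and hold one unit back.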
- when implemented on an FPGA, each FIFO is built from block RAM (block random access memory). The maximum configurable bit width of a 36 kb block RAM is only 36 bits, and this method requires Y FIFOs of bit width "A + width of the data valid bit segment indication"; therefore, when the data bus width A is large, the FPGA implementation must splice multiple block RAMs together in bit width to realize each FIFO.
- the technical problem to be solved by the present invention is to provide a data processing method and apparatus, which can save cache resources and improve cache utilization while reliably implementing data buffering and bit width conversion.
- the present invention provides a data processing method, including: after receiving data input by a data bus, writing the data input by the data bus into an uplink shared buffer according to the direction indication of the data and the data valid bit segment indication;
- the uplink shared buffer is polled in a fixed time slot sequence, and the data in the uplink shared buffer is read out and output to the corresponding channels.
- the uplink shared buffer is composed of N random access memory (RAM) blocks of a specified bit width, and each RAM block is logically divided into Y RAM segments.
- polling the uplink shared buffer in a fixed time slot sequence and reading out the data in the uplink shared buffer includes the following: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.
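The two cases above reduce to one rule: the polling period is max(N, Y) time slots and each slot accesses min(N, Y) RAM segments, so every period touches all N × Y segments exactly once. A minimal sketch of that rule (an illustration, not claim language):

```python
def polling_schedule(n_blocks, y_segments):
    """Period and per-slot access count for the fixed-slot polling rule:
    period = max(N, Y) time slots, min(N, Y) RAM segment accesses per
    slot, so one polling period covers all N * Y segments."""
    return max(n_blocks, y_segments), min(n_blocks, y_segments)
```

In every case, period × accesses-per-slot equals N × Y, the total number of segments.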
- the foregoing method further has the following feature: in the process of writing the data input by the data bus into the uplink shared buffer, the method further includes: recording, per channel, the position at which the tail of the data is written.
- the present invention further provides a data processing apparatus, including: an uplink side write control module, configured to: after receiving data input by the data bus, write the data input by the data bus into the uplink shared buffer according to the direction indication of the data and the data valid bit segment indication;
- the uplink side read control module is configured to: poll the uplink shared buffer according to a fixed time slot sequence, read out the data in the uplink shared buffer, and output it to the corresponding channels.
- the above device also has the following features:
- the uplink shared buffer is composed of N random access memory (RAM) blocks of a specified bit width, and each RAM block is logically divided into Y RAM segments.
- the uplink side read control module is configured to read data in the uplink shared buffer according to a fixed time slot sequence as follows: when N ≥ Y, each polling period is N time slots, and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots, and each time slot accesses N RAM segments.
- the above device also has the following features:
- in the process of writing the data input by the data bus into the uplink shared buffer, the uplink side write control module is further configured to: record, per channel, the write position of the tail of the data.
- the present invention further provides a data processing method, including: storing data outputted by each channel into a downlink shared buffer;
- Data is read from the downstream shared buffer according to the scheduling sequence and output to the data bus.
- the above method also has the following features:
- the downlink shared buffer is composed of N random access memory (RAM) blocks of a specified bit width, and each RAM block is logically divided into Y RAM segments.
- the storing the data output by each channel into the downlink shared buffer includes:
- the RAM segments in the RAM blocks of the downlink shared buffer are polled according to a fixed time slot sequence; if the currently polled RAM segment has spare space, the data to be output by the corresponding channel is stored into the RAM segment column.
- the foregoing method further has the following feature: polling each RAM segment in each RAM block of the downlink shared buffer according to a fixed time slot sequence includes the following: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.
- reading out data from the downlink shared buffer according to a scheduling sequence includes: calculating the total amount of data buffered in each RAM segment column, and when the buffered amount is greater than or equal to the amount of data required by the current scheduling indication, reading out the data of this schedule from each RAM segment according to the scheduling bit segment length indication.
- the present invention further provides a data processing apparatus, including: a downlink side write control module, configured to: store data outputted by each channel into a downlink shared buffer;
- the downlink read control module is configured to: read data from the downlink shared buffer according to a scheduling sequence, and output the data to the data bus.
- the above device also has the following features:
- the downlink shared buffer is composed of random blocks (RAM) of N bits specifying bit width, and each block of RAM is logically divided into Y RAM segments.
- the downlink side write control module is configured to: poll each RAM segment in each RAM block of the downlink shared buffer according to a fixed time slot sequence, and if the currently polled RAM segment has spare space, store the data to be output by the corresponding channel into the RAM segment column.
- the above device also has the following features:
- the downlink side write control module is configured to poll each RAM segment in each RAM block of the downlink shared buffer according to a fixed time slot sequence as follows: when N ≥ Y, each polling period is N time slots, and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots, and each time slot accesses N RAM segments.
- the above device also has the following features:
- the downlink side readout control module is configured to: calculate the total amount of data buffered in each RAM segment column, and when the buffered amount of a RAM segment column is greater than or equal to the amount of data required by the current scheduling indication, read the data output by this schedule out of each RAM segment according to the scheduling bit segment length indication.
- the embodiments of the present invention provide a data processing method and apparatus, which can effectively save cache resources, reduce area and timing pressure, and improve cache utilization while reliably implementing data buffering and bit width conversion.
- BRIEF DESCRIPTION OF THE DRAWINGS The drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
- FIG. 1 is a schematic diagram of an application scenario of a data buffer bit width conversion circuit
- FIG. 2 is a schematic structural diagram of a conventional data buffer bit width conversion and buffer circuit
- FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention
- FIG. 6 is a schematic diagram of an uplink transmission process according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram of a single-channel uplink transmission process according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram of a downlink side transmission process according to an embodiment of the present invention.
Preferred Embodiments of the Invention
- Step 101: In the uplink direction, after the data input by the data bus is received, the data is written into the uplink shared buffer according to its direction indication and data valid bit segment indication.
- the data bus provides the data together with the direction indication and valid bit segment indication of each beat of data.
- the data input by the data bus is stored into the RAM segments of the RAM blocks of the uplink shared buffer according to the direction indication and the data valid bit segment indication.
- Step 102: Poll the uplink shared buffer according to a fixed time slot sequence, and read out the data in the uplink shared buffer for output to each corresponding channel.
- the RAM segments of each RAM block of the uplink shared buffer are polled in a fixed time slot order; if the polled RAM segment is not empty and the corresponding output channel can receive data, the data in the currently polled RAM segment is read out and output to the corresponding channel.
- the "destination indication" synchronized with the data is used to indicate the channel number to which the beat data is sent, and the "effective bit segment indication” synchronized with the data is used to indicate how many parts are in the beat data (usually It is effective to measure the channel bit width B, which is the unit of the RAM block width.
- the write control module needs to record, per channel, the write position of the tail of each beat of data, as the basis for writing the next beat of data for the same channel.
- the method of this embodiment includes:
- Step 103: In the downlink direction, the data output by each channel is stored into the downlink shared buffer. When a channel has data to output and the corresponding RAM segment of the downlink shared buffer has spare space, the downlink side write control module controls the channel to output the data and stores it into the RAM segments of the RAM blocks of the downlink shared buffer.
- Step 104: Read data from the downlink shared buffer according to a scheduling sequence, and output the data onto the data bus.
- Each RAM segment of each RAM block of the downstream shared buffer is accessed according to the scheduling sequence, and the data is aggregated and outputted onto the data bus.
- FIG. 5 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention. As shown in FIG. 5, the following modules are included:
- the uplink and downlink shared caches implement the data buffering, and the uplink and downlink read/write control modules, together with the shared caches, implement the data bit width conversion.
- the uplink side (data bus to each channel) includes the following parts:
- the uplink side write control module is configured to receive the data input by the data bus and write it into the uplink shared buffer according to the direction indication of the data and the data valid bit segment indication.
- the uplink read control module is configured to poll the uplink shared buffer according to a fixed slot sequence, and read out data in the uplink shared buffer to output to each corresponding channel.
- the uplink shared buffer is composed of N RAM blocks of bit width B, and each RAM block is logically divided into Y RAM segments for storing data to be output to each channel.
- the uplink side read control module reads the data in the uplink shared buffer according to a fixed time slot sequence as follows: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.
- in the process of writing the data input by the data bus into the uplink shared buffer, the uplink side write control module is further configured to record, per channel, the write position of the tail of the data.
- the downlink side (each channel to the data bus) includes the following parts:
- the downlink side write control module is configured to store the data outputted by each channel into the downlink shared buffer.
- the downlink read control module is configured to read data from the downlink shared buffer according to a scheduling sequence, and output the data to the data bus.
- the downlink shared buffer is composed of N RAM blocks of bit width B, and each RAM block is logically divided into Y RAM segments for storing data to be output to the data bus.
- the downlink side write control module is specifically configured to poll each RAM segment in each RAM block of the downlink shared buffer according to a fixed time slot sequence, and if the currently polled RAM segment has spare space, store the data to be output by the corresponding channel into the RAM segment column.
- the downlink side write control module polls each RAM segment in each RAM block of the downlink shared buffer according to a fixed time slot sequence as follows: when N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.
- the downlink side readout control module is specifically configured to calculate the total amount of data buffered in each RAM segment column; when the buffered amount of a RAM segment column is greater than or equal to the amount of data required by the current scheduling indication, it reads the data output by this schedule out of the RAM segments according to the scheduling bit segment length indication.
- the present invention implements the bit width conversion of data by operating on a shared cache in a fixed time slot order, and improves cache utilization by splicing and storing only the valid data, thereby solving the prior art problems of low cache utilization and of excessive logic resource and design area consumption in FPGA implementations.
- the implementation block diagram of the embodiment of the present invention is shown in FIG. 6, FIG. 7 and FIG. 8. The uplink and downlink shared caches are each composed of N simple dual-port RAMs, and each RAM is logically divided into Y RAM segments.
- the RAM segments in the same column correspond to the same channel and are called a RAM segment column.
- the shared cache assigns an address range to each logical RAM segment, and reads and writes to the shared cache distinguish the RAM segments by read/write address. Each logical RAM segment is read and written according to a fixed time slot: when a segment is selected and the read enable is pulled high, the data at the head of the RAM segment is output; when it is selected and the write enable is pulled high, data is written to the tail of the RAM segment.
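One plausible address layout (an assumption for illustration; the patent only requires that each logical segment receive a distinct range) gives segment k the contiguous half-open range [k × depth, (k + 1) × depth) inside its RAM block:

```python
def segment_address_range(segment_index, segment_depth):
    """Address range owned by logical RAM segment `segment_index`
    inside a physical RAM block, assuming contiguous equal-depth
    segments (illustrative layout, not mandated by the patent)."""
    base = segment_index * segment_depth
    return base, base + segment_depth  # [base, end) half-open range
```

Read/write addresses falling in a segment's range then select that segment, as the description requires.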
- the direction indication synchronized with the data transmitted on the data bus indicates the channel number for which the data is destined; the valid bit segment indication synchronized with the data measures, in units of the RAM block width B, how many parts of the data are valid starting from the high-order end.
- after each beat of data is written, the uplink side write control module records, for the RAM segment column that was written, the write end position (the RAM segment into which the tail of the data was written). On each write, it selects the corresponding RAM segment column according to the direction indication, takes the RAM segment at "last write end position + 1" in that column as the start position of the current write, and, according to the data valid bit segment indication, pulls high in sequence the write enables of the RAM blocks covering the valid bit segments, starting from the current write start position.
- the valid data on the data bus likewise starts from this write start position and is sent in sequence to each selected RAM segment.
- suppose the second and third beats of data destined for channel m# are selected in succession, with valid bit segment lengths of 4B and 8B respectively.
- suppose the current write end position of RAM segment column m#, corresponding to channel m#, is RAM segment 1#.
- the 4B beat is stored head-to-tail in sequence into RAM segment 2# through RAM segment 5#, and the write end position of RAM segment column m# is updated to RAM segment 5# after the write completes.
- the next beat is stored head-to-tail in sequence into RAM segment 5#, RAM segment 6#, RAM segment 7#, RAM segment 0#, RAM segment 1#, and RAM segment 2#.
- the write end position of RAM segment column m# is updated to RAM segment 2#.
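The segment selection in the example above amounts to circular allocation within a RAM segment column: start at "last write end position + 1" and wrap around modulo Y. A sketch of that rule (illustrative names; Y = 8 in the example):

```python
def segments_for_write(last_end, valid_len, y):
    """Pick the RAM segments for one beat: start one past the column's
    last write end position and wrap around modulo Y.
    Returns (segments in write order, new write end position)."""
    start = (last_end + 1) % y
    segments = [(start + i) % y for i in range(valid_len)]
    return segments, segments[-1]
```

For the 4B beat written after end position 1#, this selects segments 2# through 5# and leaves the end position at 5#, matching the example.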
- the uplink side read control module polls each RAM segment in each RAM block of the uplink shared buffer in a fixed time slot order. When N ≥ Y, each polling period is N time slots and each time slot accesses Y RAM segments; when N < Y, each polling period is Y time slots and each time slot accesses N RAM segments.
- the polling order is as follows:
- Read time slot 0 accesses: RAM segment 0, RAM group 0; RAM segment 1, RAM group 7; RAM segment 2, RAM group 6; RAM segment 3, RAM group 5; RAM segment 4, RAM group 4; RAM segment 5, RAM group 3; RAM segment 6, RAM group 2; RAM segment 7, RAM group 1.
- Read time slot 1 accesses: RAM segment 0, RAM group 1; RAM segment 1, RAM group 0; RAM segment 2, RAM group 7; RAM segment 3, RAM group 6; RAM segment 4, RAM group 5; RAM segment 5, RAM group 4; RAM segment 6, RAM group 3; RAM segment 7, RAM group 2.
- Read time slot 2 accesses: RAM segment 0, RAM group 2; RAM segment 1, RAM group 1; RAM segment 2, RAM group 0; RAM segment 3, RAM group 7; RAM segment 4, RAM group 6; RAM segment 5, RAM group 5; RAM segment 6, RAM group 4; RAM segment 7, RAM group 3.
- Read time slot 3 accesses: RAM segment 0, RAM group 3; RAM segment 1, RAM group 2; RAM segment 2, RAM group 1; RAM segment 3, RAM group 0; RAM segment 4, RAM group 7; RAM segment 5, RAM group 6; RAM segment 6, RAM group 5; RAM segment 7, RAM group 4.
- Read time slot 4 accesses: RAM segment 0, RAM group 4; RAM segment 1, RAM group 3; RAM segment 2, RAM group 2; RAM segment 3, RAM group 1; RAM segment 4, RAM group 0; RAM segment 5, RAM group 7; RAM segment 6, RAM group 6; RAM segment 7, RAM group 5.
- Read time slot 5 accesses: RAM segment 0, RAM group 5; RAM segment 1, RAM group 4; RAM segment 2, RAM group 3; RAM segment 3, RAM group 2; RAM segment 4, RAM group 1; RAM segment 5, RAM group 0; RAM segment 6, RAM group 7; RAM segment 7, RAM group 6.
- Read time slot 6 accesses: RAM segment 0, RAM group 6; RAM segment 1, RAM group 5; RAM segment 2, RAM group 4; RAM segment 3, RAM group 3; RAM segment 4, RAM group 2; RAM segment 5, RAM group 1; RAM segment 6, RAM group 0; RAM segment 7, RAM group 7.
- Read time slot 7 accesses: RAM segment 0, RAM group 7; RAM segment 1, RAM group 6; RAM segment 2, RAM group 5; RAM segment 3, RAM group 4; RAM segment 4, RAM group 3; RAM segment 5, RAM group 2; RAM segment 6, RAM group 1; RAM segment 7, RAM group 0.
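The table above follows a single rotation pattern for N = Y = 8: in read time slot s, RAM segment k is served by RAM group (s − k) mod 8, so each slot touches exactly one segment per group. A small script reproducing the enumeration (an illustration derived from the table, not part of the patent):

```python
def read_slot_table(n=8):
    """(segment, group) pairs accessed in each read time slot for the
    N = Y = 8 polling order: slot s pairs segment k with group (s - k) mod n."""
    return {s: [(k, (s - k) % n) for k in range(n)] for s in range(n)}
```

Each polling period of 8 slots therefore visits every (segment, group) combination exactly once.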
- each RAM segment column on the uplink side records its last read end position. If the position of the currently polled RAM segment is "last read end position of this RAM segment column + 1", the RAM segment is not empty, and the corresponding output channel can currently receive data, then the read enable of the RAM block containing that RAM segment is pulled high and the data read out is sent to the corresponding channel; in all other cases, no data is output.
- the downlink side write control module polls each RAM segment in each RAM block of the downlink shared buffer in a fixed time slot order, the same as the uplink side read slot polling order.
- when N ≥ Y, each polling period is N time slots, and each time slot accesses Y RAM segments;
- when N < Y, each polling period is Y time slots, and each time slot accesses N RAM segments.
- the downlink side write control module records the last write position for each RAM segment column. If the position of the currently polled RAM segment is "last write position of this RAM segment column + 1", the RAM segment has spare space, and the corresponding channel has data to output, then the write enable of the RAM block containing that RAM segment is pulled high and the data input from the corresponding channel is sent to that RAM segment; in all other cases, no write operation is performed.
- the downlink side read control module calculates the total amount of data buffered in each RAM segment column. When the buffered amount of a RAM segment column is greater than or equal to the amount of data required by the current scheduling indication, the downlink side read control module initiates a read operation and at the same time gives the scheduling bit segment length indication (measured in units of the RAM block width).
- the downlink side read control module records the last read end position for each RAM segment column. It takes the RAM segment at "last read end position of the RAM segment column + 1" as the start position and, according to the scheduling bit segment length, reads the data output by this schedule out of the RAM segments and sends it onto the data bus.
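The downlink readout condition can be modeled as a credit check against the column's buffered amount (a behavioral sketch with assumed names, treating a RAM segment column as a FIFO of B-wide units):

```python
from collections import deque

def scheduled_read(column, sched_len):
    """Read one schedule's worth of B-wide units from a RAM segment
    column, but only when the column buffers at least the amount the
    current scheduling indication requires; otherwise read nothing."""
    if len(column) < sched_len:
        return []  # buffered amount below the scheduled amount: no read
    return [column.popleft() for _ in range(sched_len)]
```

Popping from the head mirrors reading from "last read end position + 1" onward in the real segment column.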
- because both the uplink side and the downlink side write processes buffer the valid data by splicing, cache utilization is high.
- compared with the solution described in the background art, because the implementation uses N cache blocks of width B, the on-chip cache resources can be used effectively when the data bus width A is much larger than B, reducing design area and timing delay; the benefit is especially obvious when the number of channels Y is large.
- the embodiment of the present invention has the following advantages:
- the method and apparatus provided by the embodiments of the present invention buffer and bit-width-convert only the valid bit segments of the data during data processing, giving high cache utilization and transmission efficiency. Because the implementation uses N RAM blocks of bit width B, it can effectively utilize on-chip RAM resources and reduce design area when the data bus bit width is large and the design is implemented on an FPGA, overcoming the prior art problems of low cache utilization, excessive cache resource consumption in specific implementations, and excessive area and timing pressure.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
ES13844885T ES2749519T3 (es) | 2012-10-09 | 2013-09-27 | Método y dispositivo para procesar datos |
US14/434,564 US9772946B2 (en) | 2012-10-09 | 2013-09-27 | Method and device for processing data |
EP13844885.7A EP2908251B1 (en) | 2012-10-09 | 2013-09-27 | Method and device for processing data |
KR1020157011938A KR20150067321A (ko) | 2012-10-09 | 2013-09-27 | 데이터 처리 방법 및 장치 |
JP2015535968A JP6077125B2 (ja) | 2012-10-09 | 2013-09-27 | データ処理方法及び装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210379871.XA CN103714038B (zh) | 2012-10-09 | 2012-10-09 | 一种数据处理方法和装置 |
CN201210379871.X | 2012-10-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014056405A1 true WO2014056405A1 (zh) | 2014-04-17 |
Family
ID=50407031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/084481 WO2014056405A1 (zh) | 2012-10-09 | 2013-09-27 | 一种数据处理方法和装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US9772946B2 (zh) |
EP (1) | EP2908251B1 (zh) |
JP (1) | JP6077125B2 (zh) |
KR (1) | KR20150067321A (zh) |
CN (1) | CN103714038B (zh) |
ES (1) | ES2749519T3 (zh) |
WO (1) | WO2014056405A1 (zh) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105607888A (zh) * | 2014-11-25 | 2016-05-25 | 中兴通讯股份有限公司 | 数据位宽转换方法及装置 |
CN104850515B (zh) * | 2015-04-28 | 2018-03-06 | 华为技术有限公司 | 一种缓存信元数据的方法、装置和设备 |
US10205672B2 (en) * | 2015-09-11 | 2019-02-12 | Cirrus Logic, Inc. | Multi-device synchronization of devices |
CN107920258B (zh) * | 2016-10-11 | 2020-09-08 | 中国移动通信有限公司研究院 | 一种数据处理方法及装置 |
CN109388590B (zh) * | 2018-09-28 | 2021-02-26 | 中国电子科技集团公司第五十二研究所 | 提升多通道dma访问性能的动态缓存块管理方法和装置 |
CN109818603B (zh) * | 2018-12-14 | 2023-04-28 | 深圳市紫光同创电子有限公司 | 一种位宽转换电路的复用方法及位宽转换电路 |
CN110134365B (zh) * | 2019-05-21 | 2022-10-11 | 合肥工业大学 | 一种多通道并行读出fifo的方法及装置 |
CN115004654A (zh) * | 2020-11-27 | 2022-09-02 | 西安诺瓦星云科技股份有限公司 | 数据传输方法、装置、通信系统、存储介质和处理器 |
CN114153763A (zh) * | 2021-11-09 | 2022-03-08 | 中国船舶重工集团公司第七一五研究所 | 一种高带宽低延时算法处理的fpga硬件实现方法 |
CN115061959B (zh) * | 2022-08-17 | 2022-10-25 | 深圳比特微电子科技有限公司 | 数据交互方法、装置、系统、电子设备和存储介质 |
WO2024040604A1 (zh) * | 2022-08-26 | 2024-02-29 | 华为技术有限公司 | 一种数据传输方法及装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1477532A (zh) * | 2002-08-20 | 2004-02-25 | 华为技术有限公司 | 控制芯片片内存储装置及其存储方法 |
CN1674477A (zh) * | 2004-03-26 | 2005-09-28 | 华为技术有限公司 | 一种实现时分复用电路位宽转换的装置及方法 |
CN101291275A (zh) * | 2008-06-02 | 2008-10-22 | 杭州华三通信技术有限公司 | Spi4.2总线桥接实现方法及spi4.2总线桥接器件 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6754741B2 (en) * | 2001-05-10 | 2004-06-22 | Pmc-Sierra, Inc. | Flexible FIFO system for interfacing between datapaths of variable length |
US7613856B2 (en) * | 2004-10-21 | 2009-11-03 | Lsi Corporation | Arbitrating access for a plurality of data channel inputs with different characteristics |
WO2009089301A1 (en) * | 2008-01-07 | 2009-07-16 | Rambus Inc. | Variable-width memory module and buffer |
CN101789845B (zh) * | 2010-02-22 | 2013-01-16 | FiberHome Telecommunication Technologies Co., Ltd. | Method and circuit for implementing bus bit-width conversion in an SFEC-based optical transport network |
CN101894005A (zh) * | 2010-05-26 | 2010-11-24 | Shanghai University | Asynchronous FIFO transfer method from a high-speed interface to a low-speed interface |
CN102541506B (zh) * | 2010-12-29 | 2014-02-26 | Shenzhen Semptian Technology Co., Ltd. | FIFO data buffer, chip, and device |
- 2012
  - 2012-10-09 CN CN201210379871.XA patent/CN103714038B/zh active Active
- 2013
  - 2013-09-27 JP JP2015535968A patent/JP6077125B2/ja active Active
  - 2013-09-27 EP EP13844885.7A patent/EP2908251B1/en active Active
  - 2013-09-27 WO PCT/CN2013/084481 patent/WO2014056405A1/zh active Application Filing
  - 2013-09-27 US US14/434,564 patent/US9772946B2/en active Active
  - 2013-09-27 KR KR1020157011938A patent/KR20150067321A/ko not_active Application Discontinuation
  - 2013-09-27 ES ES13844885T patent/ES2749519T3/es active Active
Also Published As
Publication number | Publication date |
---|---|
EP2908251A1 (en) | 2015-08-19 |
US20150301943A1 (en) | 2015-10-22 |
EP2908251B1 (en) | 2019-06-12 |
JP2016503526A (ja) | 2016-02-04 |
EP2908251A4 (en) | 2015-12-30 |
ES2749519T3 (es) | 2020-03-20 |
CN103714038B (zh) | 2019-02-15 |
KR20150067321A (ko) | 2015-06-17 |
JP6077125B2 (ja) | 2017-02-08 |
US9772946B2 (en) | 2017-09-26 |
CN103714038A (zh) | 2014-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014056405A1 (zh) | Data processing method and apparatus | |
KR101607180B1 (ko) | Packet reassembly and reordering method, apparatus and system | |
US8942248B1 (en) | Shared control logic for multiple queues | |
CN112084136B (zh) | Queue buffer management method, system, storage medium, computer device, and application | |
US7352836B1 (en) | System and method of cross-clock domain rate matching | |
JP2005518018A5 (zh) | ||
WO2014063599A1 (zh) | Data caching system and method for an Ethernet device | |
JP6340481B2 (ja) | Data caching method, device, and storage medium | |
WO2014131273A1 (zh) | Data transmission method and device, and direct memory access | |
WO2016070668A1 (zh) | Method and device for implementing data format conversion, and computer storage medium | |
CN108614792B (zh) | 1394 transaction layer packet storage management method and circuit | |
JP3437518B2 (ja) | Digital signal transmission method and device | |
CN115543882B (zh) | Data forwarding device between buses of different bit widths, and data transmission method | |
WO2012149742A1 (zh) | Signal order-preserving method and device | |
JP5360594B2 (ja) | DMA transfer device and method | |
WO2010020191A1 (zh) | Method and device for improving the buffering efficiency of virtual concatenation delay compensation in the synchronous digital hierarchy | |
JP5061688B2 (ja) | Data transfer control device and data transfer control method | |
CN102984088A (zh) | Method for ensuring frame forwarding order consistency in an AFDX switch | |
JP5750387B2 (ja) | Frame control device, transmission device, network system, and buffer read control method | |
JP2013162475A (ja) | Loopback circuit | |
JP4904136B2 (ja) | Single-port memory control device for bidirectional data communication and control method therefor | |
US20150095621A1 (en) | Arithmetic processing unit, and method of controlling arithmetic processing unit | |
JP2003318947A (ja) | Switching device using multi-layer memory | |
CN104850515B (zh) | Method, apparatus and device for caching cell data | |
JP2005504464A (ja) | Digital line delay using single-port memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13844885; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2015535968; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 20157011938; Country of ref document: KR; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 2013844885; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 14434564; Country of ref document: US |