EP1894108A2 - Memory controller - Google Patents

Memory controller

Info

Publication number
EP1894108A2
Authority
EP
European Patent Office
Prior art keywords
memory
buffer
stl
data streams
memory controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06765728A
Other languages
German (de)
English (en)
Inventor
Artur Burchard
Atul P. S. Chauhan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NXP BV
Original Assignee
NXP BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NXP BV filed Critical NXP BV
Priority to EP06765728A priority Critical patent/EP1894108A2/fr
Publication of EP1894108A2 publication Critical patent/EP1894108A2/fr
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668 Details of memory controller
    • G06F13/1689 Synchronisation and timing concerns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668 Details of memory controller
    • G06F13/1673 Details of memory controller using buffers

Definitions

  • the present invention relates to a memory controller and a method for coupling a network and a memory.
  • subsystems typically are implemented as separate ICs, each having a different internal architecture consisting of local processors, busses, memories, etc. Alternatively, various subsystems may be integrated on one IC. At system level, these subsystems communicate with each other via a top-level interconnect that provides certain services, often with real-time support.
  • as subsystems in a mobile phone architecture we can have, among others, a base-band processor, a display, a media processor, or a storage element.
  • Fig. 1 shows a basic representation of such a communication, which can be described as a graph of processes Pl -P4 connected via FIFO buffers B.
  • Such a representation is often referred to as a Kahn process network.
  • the Kahn process network can be mapped onto the system architecture, as described in E.A. de Kock et al., "YAPI: Application modeling for signal processing systems", in Proc. of the 37th Design Automation Conference.
  • Buffering is essential in a proper support of data streaming between the involved processes.
  • FIFO buffers are used for streaming, in accordance with (bounded) Kahn process network models of streaming applications.
  • With an increased number of multimedia applications running simultaneously, the number of processes and real-time streams, as well as the number of associated FIFOs, increases substantially.
  • the FIFO buffers can be implemented in a shared memory using an external DRAM memory technology.
  • SDRAM and DDR-SDRAM are the technologies that deliver large capacity external memory at low cost, with a very attractive cost to silicon area ratio.
  • Fig. 2 shows a basic architecture of a system on chip with a shared memory streaming framework.
  • the processing units C, S communicate with each other via the buffer B.
  • the processing units C, S as well as the buffer B are each associated with an interface unit IU for coupling them to an interconnect means IM.
  • the memory can also be used for other purposes.
  • the memory can for example also be used for code execution or dynamic memory allocation for the processing of a program running on a main processor.
  • Such a communication architecture or network, including the interconnect means, the interface units, the processing units C, S and the buffer B, may provide specific transport facilities and a respective infrastructure giving certain data transport guarantees, such as a guaranteed throughput, a guaranteed delivery for an error-free transport of data, or a synchronization service for synchronizing source and destination elements such that no data is lost due to the underflow or overflow of buffers. This becomes important if real-time streaming processing is to be performed by the system and real-time support is required for all of the components.
  • background memory DRAM
  • pre-fetch buffering can be used. This means that the data from the SDRAM is read beforehand and kept in a special (pre-fetch) buffer.
  • When a read request arrives it can be served from the local pre-fetch buffer, usually implemented in on-chip SRAM, without the latency otherwise introduced by the background memory (DRAM). This is similar to known caching techniques of random data for processors.
  • In a burst access, consecutive addresses (next or previous, depending on the burst policy) accessed in every next cycle of the memory can be served without any further delay (within 1 cycle), for a specified number of accesses (e.g. 2/4/8/full page). Therefore, for streaming accesses to memory, where addresses increase or decrease in the same way for every access (e.g. contiguous addressing), burst access provides the best performance at the lowest power dissipation.
  • For details of a DRAM memory please refer to Micron's 128-Mbit DDR-SDRAM specifications, http://download.micron.com/pdf/datasheets/dram/ddr/128MbDDRx4x8x16.pdf, which is incorporated by reference.
  • data to be written to the memory is first stored in a write-back buffer, while data read from the external memory is first stored in a pre-fetch buffer.
  • the requirements for such buffers are that they should be large enough to reduce the delay or latency as much as possible, while not being larger than required, so that no space is wasted and the remaining space can be used for other purposes.
  • a memory controller is provided for coupling a memory to a network.
  • the memory controller comprises a first interface, a streaming memory unit and a second interface.
  • the first interface is used for connecting the memory controller to the network for receiving and transmitting data streams.
  • the streaming memory unit is coupled to the first interface for controlling data streams between the network and the memory.
  • the streaming memory unit comprises a buffer for temporarily storing at least part of the data streams and a buffer managing unit for managing the temporary storage of the data streams in the buffer and for dynamically allocating buffers for at least one of the data streams.
  • the second interface is coupled to the streaming memory unit for connecting the memory controller to the memory in order to exchange data with the memory in bursts.
  • a buffer dimensioning unit is provided for dimensioning the buffer for at least one of the data streams. Accordingly, an exact and optimal size for the buffer can be calculated and allocated during runtime, thus increasing the capability of the overall system.
  • the first interface is implemented as a PCI-Express interface such that the properties and network services of a PCI-Express network can be implemented by the memory controller.
  • an arbiter is provided allowing each data stream to access the memory during a time slot which is sufficient to access at least one memory page of the memory.
  • Since a memory like a DRAM is best operated in bursts with regard to power dissipation, such a memory controller will allow intelligent arbitration with low power dissipation.
  • the invention also relates to a method for coupling a memory to a network.
  • Data streams are received and transmitted via a first interface for connecting the memory controller to the network.
  • the data streams are controlled between the network and the memory by a streaming memory unit.
  • At least part of the data streams are temporarily stored in a buffer.
  • the temporary storage of the data streams in the buffer is managed and buffers are dynamically allocated for at least one of the data streams.
  • the streaming memory controller is connected to the memory via a second interface and the data is exchanged with the memory in bursts.
  • the buffer for at least one of the data streams is dimensioned.
  • the invention relates to the idea of calculating the buffers, i.e. the pre-fetch and write-back buffers, of a memory controller during runtime, taking into account the stream bandwidth, the communication network behavior and the details of the DRAM arbitration.
  • the buffer dimensioning unit is provided for dimensioning the buffers in the memory controller.
  • the pre-fetch buffers and the write-back buffers are dimensioned by the buffer dimensioning unit.
  • the optimal buffer size of the pre-fetch buffers and write-back buffers enabling a zero delay access to the memory corresponds to one memory page of the memory and a number of bits that are transmitted during a worst case queue time. With such a buffer size, the delay introduced by the memory controller is significantly reduced to a minimum.
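To make this sizing rule concrete, the calculation the buffer dimensioning unit might perform can be sketched as follows. The function name and the example figures (a 1 Kbyte page, a 128 Mbit/s stream, and a worst-case queue of 7 slots of 3.894 µs) are illustrative assumptions, not values prescribed by the patent.

```python
def zero_delay_buffer_bits(page_bits, stream_bw_bit_s, worst_case_queue_s):
    # One full memory page, plus every bit that arrives (or must be
    # supplied) while the stream is queued waiting for its arbitration slot.
    return page_bits + stream_bw_bit_s * worst_case_queue_s

# Illustrative numbers: 1 Kbyte page, 128 Mbit/s stream, and a worst-case
# wait of 7 slots of 3.894 us before the stream gets its turn.
page_bits = 1024 * 8
queue_s = 7 * 3.894e-6
size_bits = zero_delay_buffer_bits(page_bits, 128e6, queue_s)
```

With these assumed numbers the result is a little under 12 kbit, i.e. somewhat more than one page, which is exactly the "one page plus the queue-time backlog" rule stated above.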
  • Fig. 1 shows a basic representation of a Kahn process network and mapping of it onto a shared memory architecture
  • Fig. 2 shows a basic architecture of a system on chip with a shared memory streaming framework
  • Fig. 3 shows a block diagram of a streaming memory controller SMC according to a first embodiment
  • Fig. 4 shows a block diagram of a logical view of the streaming memory controller SMC according to a second embodiment
  • Fig. 5 shows a graph illustrating the influence of the buffer size on the worst case delay
  • Fig. 6 shows a graph illustrating the influence of the latency versus the buffer size
  • Fig. 7 shows a graph illustrating the influence of the buffer size on the dissipated power.
  • Fig. 3 shows a block diagram of a streaming memory controller SMC according to a first embodiment.
  • the streaming memory controller SMC comprises a PCI-Express interface PI, a streaming memory unit SMU and a further interface MI which serves as the interface to an (external) SDRAM memory MEM.
  • the streaming memory unit SMU comprises a buffer manager unit BMU, a buffer B, which may be implemented as a SRAM memory, as well as an arbiter ARB.
  • the streaming memory unit SMU, which implements buffering in SRAM, is used together with the buffer manager BMU for buffering accesses to the SDRAM via the PCI-Express interface.
  • the buffer manager unit BMU serves to react to read or write accesses to the SDRAM from the PCI-Express interface, to manage the buffers (updating pointer registers) and to relay data from/to the buffers (SRAM) and from/to the SDRAM.
  • the buffer manager unit BMU may comprise a FIFO manager.
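A minimal sketch of the per-stream bookkeeping such a FIFO manager might perform, using explicit read/write pointer registers over a circular buffer; the class and method names are illustrative assumptions, not the patent's design.

```python
class StreamFifo:
    """Circular FIFO with explicit read/write pointer registers."""

    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.rd = 0      # read pointer register
        self.wr = 0      # write pointer register
        self.count = 0   # fill level, used for under/overflow checks

    def write(self, word):
        if self.count == self.size:
            return False  # full: the writer must be stalled (flow control)
        self.buf[self.wr] = word
        self.wr = (self.wr + 1) % self.size
        self.count += 1
        return True

    def read(self):
        if self.count == 0:
            return None   # empty: a read now would under-run the stream
        word = self.buf[self.rd]
        self.rd = (self.rd + 1) % self.size
        self.count -= 1
        return word
```

Rejecting a write when full and a read when empty is what ties this bookkeeping to the synchronization service mentioned earlier: no data is lost to buffer under- or overflow.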
  • the streaming memory unit furthermore comprises a buffer dimensioning unit BDU, which serves to dimension the buffers B.
  • the buffer dimensioning unit BDU serves to calculate the sizes of the buffers, i.e. the pre-fetch and write-back buffers in order to ensure a zero latency access to the SDRAM memory. This is performed based on the stream's bandwidth and the slot size of the arbitration. With such a buffer dimensioning unit BDU, an exact and optimal size of the pre-fetch and write-back buffers as well as other buffers can be calculated and allocated during runtime. Furthermore, a low delay access to the SDRAM is implemented. If required, a trade-off can be performed between the available memory space, the allocated memory and the actual delay.
  • the pre-fetch and write-back buffers may be implemented in a SRAM memory.
  • the streaming memory controller SMC adapts the traffic generated by the network (based on a PCI-Express network) to the specific behavior of the external memory MEM which may be implemented as a SDRAM.
  • the streaming memory controller SMC serves to provide a bandwidth guarantee for each of the streams, to provide for bounded delivery time and for an error free transport of data to and from the external memory MEM.
  • the bandwidth arbitration in the streaming memory controller SMC is based on the same concept as in the network arbitration, i.e. time slots and the time slot allocation, however, the sizes of the time slots have to be adapted in order to fit to the behavior of a SDRAM.
  • the streaming memory unit SMU implements the network services of the PCI-Express network to the external memory MEM. Accordingly, the streaming memory unit SMU translates the data streams from the PCI-Express network into bursts for accessing the external SDRAM memory in order to divide the total available bandwidth of the SDRAM into a number of burst accesses. The number of burst accesses can be assigned to streams from the network in order to fulfill their bandwidth requirements.
  • the streaming memory unit SMU also serves to implement a synchronization mechanism in order to comply with the flow control mechanism of the PCI-Express network. This synchronization mechanism may include a blocking of each request. As the streaming memory controller SMC is designed to handle several separate streams, the streaming memory unit SMU is designed to create, maintain and manage the required buffers.
  • the streaming memory controller SMC has two interfaces: one towards PCI Express fabric, and second towards the memory (i.e. the DRAM).
  • the PCI Express interface of the streaming memory controller SMC must perform the traffic shaping on the data retrieved from the SDRAM memory to comply with the traffic rules of the PCI Express.
  • the access to the DRAM memory can be performed in bursts, since this mode of accessing data stored in DRAM memory has the biggest advantage with respect to power consumption.
  • the streaming memory controller SMC itself must provide intelligent arbitration of access to the DRAM among different streams such that throughput and latency of access are guaranteed. Additionally, the SMC also provides functionality for smart FIFO buffer management.
  • The basic concept of a PCI-Express network is described in "PCI Express Base Specification, Revision 1.0", PCI-SIG, July 2002, www.pcisig.org, which is incorporated herein by reference.
  • the features of PCI Express which are taken into consideration in the design of the streaming memory controller, are: isochronous data transport support, flow control, and specific addressing scheme.
  • the isochronous support is primarily based on segregation of isochronous and non-isochronous traffic by means of Virtual Channels VCs. Consequently, network resources like bandwidth and buffers are explicitly reserved in the switch fabric for specific streams, so that freedom from interference between streams in different virtual channels VCs is guaranteed.
  • the isochronous traffic in the switch fabric is regulated by scheduling, namely admission control and service discipline.
  • a (DDR)SDRAM memory is used.
  • DDR: double data rate
  • the Micron 128-Mbit DDR-SDRAM as described in Micron's 128-Mbit DDR-SDRAM specifications, http://download.micron.com/pdf/datasheets/dram/ddr/128MbDDRx4x8x16.pdf, which is incorporated herein by reference.
  • Such technology is preferable since it provides desirable power consumption and timing behavior.
  • the design is parameterized, and the memory controller SMC can be configured to work also with single-data-rate memory. Since the DDR-SDRAM behaves similarly to SDRAM, except for the timing of the data lines, we explain the basics using SDRAM concepts.
  • the PCI Express network PCIE provides network services, e.g. guaranteed real-time data transport, through exclusive resource/bandwidth reservation in the devices that are traversed by the real-time streams.
  • network services e.g. guaranteed real-time data transport
  • bandwidth, delay, and other guarantees typically provided by PCI Express will not be fulfilled by the memory, since it does not give any guarantees and acts as a "slave" towards incoming traffic.
  • Fig. 4 shows a block diagram of a logical view of the streaming memory controller SMC according to a second embodiment. Here, a logical view of a multi-stream buffering is shown.
  • Each of the streams ST1-ST4 is associated with a separate buffer. These buffers may be divided into two parts when data access to the external SDRAM is required, i.e. a pre-fetch buffer PFB and a write-back buffer WBB are provided. As only one stream at a time can access the external SDRAM, an arbiter ARB is provided which performs the arbitration in combination with a multiplexer MUX in order to resolve conflicts between different streams accessing the memory buffers according to their bandwidth requirements.
  • the arbitration of the memory access between different real-time streams is essential for guaranteeing throughput and bounded access delay. Assume that whenever data is written to or read from the memory, preferably a full page (or a multiple thereof) is either written or read, i.e. the access is performed in bursts. The time needed to access one page (slightly different for read and write operations) is called a time slot TS.
  • Each stream has control of the memory MEM which may be implemented as a SDRAM for one time slot TS during which it can access for example a single page of the SDRAM.
  • a service cycle SC can consist of a fixed number of time slots. The access sequence repeats, and is reset every time a new service cycle is started.
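The slot-based arbitration described above can be sketched as a round-robin schedule over one service cycle. The stream names and the cycle length below are assumptions for illustration.

```python
def service_cycle(streams, slots_per_cycle):
    # Assign each time slot of one service cycle to a stream in round-robin
    # order; the sequence resets when the next service cycle starts.
    return [streams[i % len(streams)] for i in range(slots_per_cycle)]

# Four equal-priority streams sharing a service cycle of 8 page-access slots:
schedule = service_cycle(["ST1", "ST2", "ST3", "ST4"], 8)
```

With four streams and eight slots, each stream receives two page-access slots per service cycle, matching the equal-bandwidth round-robin case assumed later in the worst-case delay discussion.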
  • the time slot size should in principle be programmable.
  • the size of the slot should reflect the memory behavior as well as the desired size of data, i.e. the size of an internal memory controller buffer that is to be transferred between the SDRAM and the interconnect (e.g. PCI Express). Therefore, the time slot will be different for every system and every memory.
  • the time slot size can be adjusted at run-time. If there is a lack of internal memory (e.g. taken up by other stream buffers) for creating an optimal buffer for the current stream, a trade-off can be performed between the power dissipation for a given buffer size and an adjustment of the time slot to reflect the non-optimal buffering (e.g. a smaller buffer).
  • one time slot must be at least 3.894 µs (or 520 memory clock cycles). Hence, a maximum of 256,805 page accesses per second to the SDRAM can be achieved.
  • the maximum possible data rate (bandwidth) to the SDRAM in this case is 256.805 Mbytes/s. Please note that these values hold for the DDR-RAM described above; other DRAMs will lead to other values.
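The quoted figures can be checked with a few lines of arithmetic. Note that they only line up if one page is treated as 1000 bytes, which is an assumption made here purely so the numbers match the text.

```python
slot_s = 3.894e-6                    # one time slot = 520 memory clock cycles
page_accesses_per_s = 1.0 / slot_s   # maximum page accesses per second
page_bytes = 1000                    # assumed page size so the figures match
bandwidth_bytes_s = page_accesses_per_s * page_bytes

assert int(page_accesses_per_s) == 256805          # 256,805 accesses/s
assert abs(bandwidth_bytes_s - 256.805e6) < 0.5e6  # about 256.805 Mbytes/s
```

The check confirms the internal consistency of the slot length and bandwidth figures rather than deriving new ones.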
  • the total number of SDRAM FIFO buffers is chosen to be four for calculating the worst case delay of the packets introduced by the arbitration, i.e. the time difference between when a packet enters the memory controller SMC and when it leaves it. Furthermore, it is assumed that all data streams have the same priority, i.e. the arbitration is performed on a round-robin basis.
  • the buffer size in the memory controller SMC is 8 packets or 1 Kbyte. The streams read and write to the SDRAM at the same data rate. Accordingly, a window of 8 time slots is provided in which each FIFO buffer in the SDRAM is written and read once if sufficient data is present.
  • t1 corresponds to the time taken by PCI-Express to unpack data and send it to the memory controller SMC.
  • t2 corresponds to the time a packet remains in the write-back buffer of the memory controller SMC.
  • t3 corresponds to the time consumed in writing a packet to the SDRAM.
  • t4 corresponds to the time a packet remains in the FIFO buffer in the SDRAM.
  • t5 corresponds to the time consumed in reading a packet from the SDRAM.
  • t6 corresponds to the time a packet remains in the pre-fetch buffer of the memory controller.
  • t7 corresponds to the time taken by PCI-Express to receive data from the memory controller and form a packet.
  • t2, t3, t5, and t6 are calculated with some assumptions.
  • t2 + t6 = 16.6615 µs.
  • the worst case value of t3 and t5 is 8 page accesses to the SDRAM, because in the worst case a stream may have to wait for 7 time slots and then complete its page access. Hence, the total time is the waiting (7 time slots) plus the actual reading/writing (1 time slot).
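The worst-case t3/t5 reasoning above amounts to a one-line computation. The slot length is taken from the 3.894 µs figure given earlier, and the 8-slot window from the round-robin example; both are reused here as assumptions.

```python
SLOT_S = 3.894e-6  # one arbitration time slot, from the figures above

def worst_case_access_s(window_slots=8, slot_s=SLOT_S):
    # Worst case: the stream waits window_slots - 1 slots for its turn,
    # then spends one more slot on the actual page read or write.
    return ((window_slots - 1) + 1) * slot_s
```

With the default parameters this gives 8 slots, i.e. about 31.15 µs per worst-case page access.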
  • Fig. 5 shows a graph illustrating the influence of the buffer size on the worst case delay WCD.
  • the graph depicts the dependency between worst-case delay WCD and buffer sizes BS (SDRAM arbitration slot).
  • t1 is neglected as it does not depend upon the design.
  • the graph of Fig. 5 is computed for a 128 Mbit/s data rate, a 128 Mb DDR-RAM (4 Meg * 8 * 4 Banks) using Page Burst, and a worst case delay assuming all streams have the same bandwidth allotted.
  • the worst case delay WCD increases linearly with the buffer size BS.
  • Fig. 6 shows a graph illustrating the influence of the buffer size of a write-back buffer on the latency LT of the memory controller SMC.
  • As shown in Fig. 6, for high data rate streams the buffer size needs to be larger, because the buffer has to supply more data in order not to block accesses during the SDRAM access time. For high data rate streams more packets can be requested during the SDRAM access time.
  • Fig. 7 shows a graph illustrating the influence of the buffer size BS on the dissipated power P.
  • The SDRAM supports burst sizes of 1, 2, 4, 8 and page bursts. As the burst size is increased, the time for the SDRAM to be in ACTIVE mode reduces and the SDRAM can be put into a standby state (SELF REFRESH) earlier. Therefore, as the burst size increases, the power consumption in the SDRAM reduces.
  • the graph according to Fig. 7 is computed for a 10 Mbit/s data read from the SDRAM and a 128 Mb DDR-RAM (4 Meg * 8 * 4 Banks). This graph shows that the page burst consumes the lowest power. The more SDRAM bandwidth is requested, the bigger the difference in power consumption between different burst sizes.
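A toy model of why larger bursts cut SDRAM power: each burst pays a fixed row-activation overhead, so bigger bursts amortize it and let the device sit in SELF REFRESH longer. All constants below are assumed for illustration and are not taken from the patent or the datasheet.

```python
def active_fraction(bytes_per_s, burst_bytes, t_activate_s, t_per_byte_s):
    # Fraction of each second the SDRAM must stay ACTIVE: per-burst
    # activation overhead plus the data-transfer time of each burst.
    bursts_per_s = bytes_per_s / burst_bytes
    return bursts_per_s * (t_activate_s + burst_bytes * t_per_byte_s)

# Assumed constants, for illustration only.
rate = 10e6 / 8  # a 10 Mbit/s read stream, in bytes per second
short = active_fraction(rate, 8, 50e-9, 10e-9)     # 8-byte bursts
page = active_fraction(rate, 1024, 50e-9, 10e-9)   # full-page bursts
```

Under this model `short > page`: the same data rate keeps the device ACTIVE for a larger fraction of the time with short bursts, mirroring the result of Fig. 7 that page bursts dissipate the least power.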
  • the above described idea can also be implemented in systems that require real-time arbitration for SDRAM access (like a streaming access) while fulfilling low-power requirements.
  • One example thereof can be a mobile phone with audio/video capabilities.
  • The invention is applicable wherever an interconnect infrastructure such as a bus or a network supports specific services while other (external) devices do not implement such network services.
  • An example of such an interconnect infrastructure is a PCI-Express network, which can implement a bandwidth allocation service, a flow control service or the like, while an (external) SDRAM memory does not implement such services.
  • the above-mentioned scheme can be used for every PCI-Express streaming transaction, in particular for sequential addresses like a direct memory access DMA address, and the above principles of the invention may also be applied to physically distributed memory systems with two or more separate memories.
  • a separate memory controller should be provided for every memory, wherein every memory should comprise a separate device address.
  • the number of streaming buffers will not be limited to eight. By playing with the design and changing its parameters (e.g. buffer and burst sizes, arbitration strategies), it is possible to experiment and obtain results for trade-offs in the design of a real-time streaming memory controller for off-chip memories. Examples of such trade-offs, which can be visualized by exercising the design, are the relations between burst sizes and input/output buffer sizes versus the worst-case delay for data access, the external memory power dissipation, and the latency within the SMC.
  • the real-time streaming memory controller supports off-chip network services and real-time guarantees for accessing external DRAM in a streaming manner.
  • the memory controller SMC has been designed to allow accessing external DRAM from within a PCI Express network.
  • This memory controller SMC has been designed in VHDL, synthesized, and verified. The complexity figures in terms of consumed silicon and power are available.
  • a design space can be explored for a particular application, and certain trade-offs can be visualized by exercising the design with different parameters and arbitration policies.
  • a memory controller SMC is realized that gives bandwidth guarantees for SDRAM access in a low-power way.
  • the arbitration algorithms, though they always guarantee bandwidth, are still flexible enough to cope with network fluctuations and jitter.
  • PCI Express has a limitation of 8 streams that can be independently arbitrated.
  • increasing the I/O buffers relaxes the arbitration, lowers the access latency, and reduces the cumulative bandwidth required from the SDRAM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Systems (AREA)
  • Communication Control (AREA)

Abstract

A memory controller (SMC) for coupling a memory (MEM) to a network (N), comprising a first interface (PI), a streaming memory unit (SMU) and a second interface. The first interface (PI) serves to connect the memory controller (SMC) to the network (N) for receiving and transmitting data streams (ST1 - ST4). The streaming memory unit (SMU) is coupled to the first interface (PI) for controlling the data streams (ST1 - ST4) between the network (N) and the memory (MEM). The streaming memory unit (SMU) comprises a buffer (B) for temporarily storing at least part of the data streams (ST1 - ST4) and a buffer managing unit (BMU) for managing the temporary storage of the data streams (ST1 - ST4) in the buffer (B) and for dynamically allocating buffers (PFB, WBB) for at least one of the data streams (ST1 - ST4). The second interface is coupled to the streaming memory unit (SMU) for connecting the memory controller (SMC) to the memory (MEM) in order to exchange data with the memory (MEM) in bursts. Finally, a buffer dimensioning unit (BDU) dimensions the buffer (B) for at least one of the data streams (ST1 - ST4).
EP06765728A 2005-06-13 2006-06-13 Memory controller Withdrawn EP1894108A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP06765728A EP1894108A2 (fr) 2005-06-13 2006-06-13 Memory controller

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP05105145 2005-06-13
EP06765728A EP1894108A2 (fr) 2005-06-13 2006-06-13 Memory controller
PCT/IB2006/051876 WO2006134550A2 (fr) 2005-06-13 2006-06-13 Memory controller

Publications (1)

Publication Number Publication Date
EP1894108A2 (fr) 2008-03-05

Family

ID=37235997

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06765728A Withdrawn EP1894108A2 (fr) 2005-06-13 2006-06-13 Controleur de memoire

Country Status (4)

Country Link
EP (1) EP1894108A2 (fr)
JP (1) JP2008544359A (fr)
CN (1) CN101198941A (fr)
WO (1) WO2006134550A2 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008544348A (ja) 2005-06-09 2008-12-04 NXP B.V. Memory controller and method for coupling a network and a memory
WO2006131900A2 (fr) * 2005-06-09 2006-12-14 Nxp B.V. Memory controller and method for coupling a network and a memory
US20120066444A1 (en) 2010-09-14 2012-03-15 Advanced Micro Devices, Inc. Resolution Enhancement of Video Stream Based on Spatial and Temporal Correlation
US10691344B2 (en) 2013-05-30 2020-06-23 Hewlett Packard Enterprise Development Lp Separate memory controllers to access data in memory
CN105630714B (zh) * 2014-12-01 2018-12-18 MStar Semiconductor, Inc. Interface resource analysis apparatus and method thereof
CN109981620A (zh) * 2019-03-14 2019-07-05 Shandong Inspur Cloud Information Technology Co., Ltd. A back-end interface management system
KR20210066631A (ko) 2019-11-28 2021-06-07 Samsung Electronics Co., Ltd. Apparatus and method for writing data to memory

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6859454B1 (en) * 1999-06-30 2005-02-22 Broadcom Corporation Network switch with high-speed serializing/deserializing hazard-free double data rate switching
US6813701B1 (en) * 1999-08-17 2004-11-02 Nec Electronics America, Inc. Method and apparatus for transferring vector data between memory and a register file
US6553446B1 (en) * 1999-09-29 2003-04-22 Silicon Graphics Inc. Modular input/output controller capable of routing packets over busses operating at different speeds

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006134550A3 *

Also Published As

Publication number Publication date
CN101198941A (zh) 2008-06-11
WO2006134550A3 (fr) 2007-03-08
JP2008544359A (ja) 2008-12-04
WO2006134550A2 (fr) 2006-12-21

Similar Documents

Publication Publication Date Title
EP1820309B1 Streaming memory controller (fr)
US9141568B2 (en) Proportional memory operation throttling
US10783104B2 (en) Memory request management system
CN111742305A (zh) 调度具有不统一等待时间的存储器请求
US20120137090A1 (en) Programmable Interleave Select in Memory Controller
US8065493B2 (en) Memory controller and method for coupling a network and a memory
WO2005073864A1 (fr) Procédé et appareil pour la gestion de requêtes d'accès à la mémoire
EP1894108A2 (fr) Memory controller
CN111684430A (zh) 支持同一信道上对不统一等待时间的存储器类型的响应
Jang et al. Application-aware NoC design for efficient SDRAM access
US11994996B2 (en) Transmission of address translation type packets
US8037254B2 (en) Memory controller and method for coupling a network and a memory
US10740032B2 (en) Resource allocation for atomic data access requests
US11652761B2 (en) Switch for transmitting packet, network on chip having the same, and operating method thereof
JP5058116B2 (ja) DMAC issuance mechanism using streaming ID method
KR20140095399A (ko) Adaptive service controller, system on chip, and method of controlling a system on chip

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080114

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20080506

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20110101