Apparatus, system, and method for graphics memory hub

Info

Publication number
US7477257B2
Authority
US
Grant status
Grant
Prior art keywords
memory
hub
dram
gpu
interface
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11303187
Other versions
US20070139426A1 (en)
Inventor
Joseph David Greco
Jonah M. Alben
Barry A. Wagner
Anthony Michael Tamasi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NVidia Corp
Original Assignee
NVidia Corp

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING; COUNTING
            • G06F: ELECTRICAL DIGITAL DATA PROCESSING
                • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
                • G06F13/14: Handling requests for interconnection or transfer
                • G06F13/16: Handling requests for interconnection or transfer for access to memory bus
                • G06F13/1668: Details of memory controller
                • G06F13/1694: Configuration of memory controller to different memory types
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T1/00: General purpose image data processing
                • G06T1/60: Memory management
        • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
            • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
                • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
                • G09G5/003: Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
                • G09G5/006: Details of the interface to the display terminal
                • G09G5/36: Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
                • G09G5/363: Graphics controllers
                • G09G5/39: Control of the bit-mapped memory
                • G09G2360/00: Aspects of the architecture of display systems
                • G09G2360/12: Frame memory handling
                • G09G2360/128: Frame memory using a Synchronous Dynamic RAM [SDRAM]

Abstract

A memory hub permits a graphics processor to access random access memories, such as dynamic random access memories (DRAMs). In one implementation, the memory hub permits an increase in effective memory bandwidth by aggregating the bandwidth of two or more memories. In another implementation, the memory hub permits a graphics processor to offload memory access interfacing operations to the memory hub.

Description

FIELD OF THE INVENTION

The present invention is generally related to techniques to couple memories to a graphics processing unit. More particularly, the present invention is directed to techniques to increase memory bandwidth in a graphics processing system.

BACKGROUND OF THE INVENTION

FIG. 1 illustrates a conventional graphics processing system 100. Certain conventional components are omitted for the purposes of illustration. A graphics processing unit (GPU) 110 includes a memory controller 112 coupled to dynamic random access memories (DRAMs) 120 via DRAM buses 115. DRAM has specific interface protocols. Thus, GPU 110 requires DRAM memory bus interfaces 125 compatible with DRAM memory.

One drawback of graphics processing system 100 is that the memory bandwidth may be less than desired. The total memory bandwidth scales with the number of DRAM memories 120 that can be simultaneously utilized by GPU 110. The number of DRAMs 120 that can be coupled to GPU 110 is limited by several considerations. One consideration is that the total number of input/output (I/O) interfaces in GPU 110 is limited. For example, in a ball-grid array packaging scheme, there is a limited number of balls per unit area that can be fabricated to provide access to memory. Thus, for a given GPU chip area, only a limited number of balls can be allocated to servicing DRAM memory.
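The ball-budget limit described above can be made concrete with a back-of-the-envelope calculation. Every number below (balls per DRAM channel, ball budget, per-channel bandwidth) is a hypothetical illustration, not a value from the patent:

```python
# Back-of-the-envelope version of the ball-budget limit: the number of
# directly attachable DRAMs is capped by the I/O ball budget, and total
# memory bandwidth scales with that count. All figures are assumed.

PINS_PER_DRAM_CHANNEL = 60        # assumed balls consumed by one DRAM interface
GPU_MEMORY_IO_BALLS = 240         # assumed ball budget for memory access
PER_CHANNEL_BANDWIDTH_GBPS = 2.0  # assumed bandwidth of one DRAM channel

max_dram_channels = GPU_MEMORY_IO_BALLS // PINS_PER_DRAM_CHANNEL
total_bandwidth_gbps = max_dram_channels * PER_CHANNEL_BANDWIDTH_GBPS

print(max_dram_channels, total_bandwidth_gbps)  # 4 channels, 8.0 GB/s
```

Under these assumptions, adding bandwidth means adding balls, which is exactly the constraint the memory hub is designed to relax.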

Another drawback of graphics processing system 100 is that GPU 110 is required to have DRAM interfaces 125 compatible with different versions of DRAM memory. Double data rate (DDR) memory, for example, has several different protocols, such as DDR and DDR2. Designing GPU 110 to be compatible with different DRAM memory protocols increases the cost and complexity of GPU 110.

Another drawback of graphics processing system 100 relates to the potential patent licensing burden. Many memory protocols are set by standard setting bodies. Such standard setting bodies typically require that participants in the standard setting body license essential patents required to practice the standard on a reasonable and non-discriminatory basis (RAND). Additionally, third parties may also hold patents related to preferred techniques to implement the standard. Thus, designing a GPU 110 to support a number of different memory protocols increases the potential patent licensing burden on GPU 110.

In light of the problems described above, the apparatus, system, and method of the present invention were developed.

SUMMARY OF THE INVENTION

A graphics system includes a memory hub. The memory hub is utilized by a graphics processing unit to access a plurality of memories, such as DRAM memories. The memory hub performs memory access processes. Applications of the memory hub of the present invention include increasing memory bandwidth of a graphics system, offloading memory access processes from a graphics processing unit, and providing an adaptation functionality to make different types of memories compatible with a graphics processing unit.

One embodiment of the present invention is for a memory hub. The memory hub has a dynamic random access memory (DRAM) interface operative to access a plurality of DRAM memories utilizing at least one DRAM protocol. The memory hub includes a hub interface for accessing a graphics processing unit. Logic is included for bridging signals between the hub interface and the DRAM interface and performing translation of signals between the hub interface and the DRAM interface. The memory hub is operative for a graphics processing unit to utilize the hub interface to access two or more DRAMs.

BRIEF DESCRIPTION OF THE FIGURES

The invention is more fully appreciated in connection with the following detailed description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a block diagram of a prior art graphics processing unit directly coupled to random access memories;

FIG. 2 is a block diagram of a graphics system including a memory hub in accordance with one embodiment of the present invention; and

FIG. 3 is a block diagram of a memory hub including a raster operations module in accordance with one embodiment of the present invention.

Like reference numerals refer to corresponding parts throughout the several views of the drawings.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 2 illustrates a graphics system 200 in accordance with one embodiment of the present invention. Graphics system 200 includes a central processing unit (CPU) 205, interface 210 (e.g., one or more bridges and associated buses) coupling CPU 205 to a graphics processing unit (GPU) 220, a memory hub 230 coupled to GPU 220 via a hub bus 240, and random access memories (RAM) 250 coupled to memory hub 230 via memory input/output (I/O) buses 260.

Memory hub 230 includes a GPU connection portion 233 for accessing GPU 220 and a RAM connection portion 239 for accessing RAM memory. Hub logic 235 performs signal bridging and any necessary translation of signals between RAM connection portion 239 and GPU connection portion 233.

It is contemplated that memory hub 230 may be utilized with a variety of different RAM memory architectures. However, dynamic random access memory (DRAM) is commonly used in graphics systems, so in the following discussion the RAM memories are described in an exemplary embodiment as DRAMs. It is well known in the graphics art that there are a number of different DRAM protocols that may be used to implement a memory for a graphics system. It will therefore be understood that memory hub 230 may be adapted to support one or more different DRAM protocols, including variations of DRAM especially adapted for graphics systems. For example, DRAM memories 250 may be single data rate (SDR) synchronous dynamic random access memories (SDRAMs) or double data rate (DDR) SDRAMs, such as DDR and DDR2 memories. DRAM memories 250 may also be implemented using a variation of SDRAM specially adapted for graphics systems, such as a graphics double data rate (GDDR) type memory. Examples of GDDR type memories include GDDR, GDDR2, GDDR3, and GDDR4. The GDDR memory may further be a synchronous graphics random access memory (SGRAM) (e.g., GDDR SGRAM, GDDR2 SGRAM, GDDR3 SGRAM, and GDDR4 SGRAM). It will be understood, however, that the memory hub 230 of the present invention may be adapted to support DRAM standards and implementations other than those described above.
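One way to picture the adaptation function is a table of per-protocol interface parameters from which the hub configures its DRAM-side interface, leaving the GPU-facing side unchanged. The profile names and parameter values below are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of protocol adaptation in the hub: each supported
# DRAM protocol maps to the interface parameters the hub would apply.
# Values are illustrative (LVTTL/SSTL follow the description; the GDDR3
# entry is an assumption).

DRAM_PROFILES = {
    "SDR SDRAM": {"transfers_per_clock": 1, "signaling": "LVTTL"},
    "DDR":       {"transfers_per_clock": 2, "signaling": "SSTL"},
    "DDR2":      {"transfers_per_clock": 2, "signaling": "SSTL"},
    "GDDR3":     {"transfers_per_clock": 2, "signaling": "POD"},  # assumed
}

def select_profile(dram_type: str) -> dict:
    """Return the interface profile the hub would apply for a DRAM type."""
    if dram_type not in DRAM_PROFILES:
        raise ValueError(f"unsupported DRAM protocol: {dram_type}")
    return DRAM_PROFILES[dram_type]

print(select_profile("DDR2"))
```

A hub supporting only one such profile would correspond to the single-protocol versions of memory hub 230 discussed later.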

Memory hub 230 may be used to provide several different benefits, alone or in combination, depending upon the implementation. In an aggregation implementation, memory hub 230 is used to aggregate two or more DRAM memories 250 to improve memory bandwidth per I/O connection (e.g., physical pins or balls in a ball grid array package) on GPU 220. In an offload implementation, memory hub 230 is used to offload DRAM memory access protocols from GPU 220 to memory hub 230. In an adaptation implementation, enough DRAM interfacing is performed in memory hub 230 that memory hub 230 makes GPU 220 compatible with different types of DRAM memories 250.

In one embodiment, memory hub bus 240 is a high-speed bus, such as a bus communicating data and memory requests in data packets (a “packetized” bus). For example, high-speed I/O buses may be implemented using a low voltage differential signaling technique and interface logic to support a packet protocol to transmit and receive data as data packets. In one embodiment, GPU 220 includes a hub interface 227 and hub I/O module 229, and memory hub 230 includes a hub I/O module 231 and hub interface 232. For example, hub I/O modules 229 and 231 may include hardware and associated I/O pins for supporting an I/O bus, whereas hub interfaces 227 and 232 include interfacing logic to support a bus protocol. However, it will be understood that the I/O hardware and interface logic may be combined into single hub interfaces 227 and 232.

As one example, hub bus 240 may be implemented using a proprietary bus technology, such as a high-speed differential bus. Alternatively, a high-speed bus standard, such as the Peripheral Component Interconnect Express (PCI-E) bus standard, may be used. The Peripheral Component Interconnect Special Interest Group (PCI-SIG) publishes the “PCI-E Base Specification”, the contents of which are hereby incorporated by reference. An advantage of using a high-speed hub bus 240 is that it permits a reduction in pin count. Conventional DRAM memories 250 utilize comparatively slow non-packetized I/O buses 260. Thus, a single high-speed hub bus 240 may be used to access several conventional DRAM memories 250 using a comparatively small number of GPU I/O pins or balls devoted to memory accesses (not shown in FIG. 2). This permits the total memory bandwidth to be increased for a given number of I/O pins/balls of GPU 220 devoted to memory accesses. Or, in other words, for a given memory bandwidth, the required number of balls/pins on GPU 220 is reduced. The reduction in I/O pin/ball count on GPU 220 will scale with the ratio of the bus speed of hub bus 240 to that of DRAM I/O buses 260. As an illustrative example, if hub bus 240 provides approximately a factor of two faster (per pin/ball) transfer rate than conventional DRAM I/O buses, then the use of hub bus 240 will reduce the pin count on GPU 220 required to access DRAM memory by a factor of two. However, it is contemplated that memory hub 230 may also be utilized to couple a GPU 220 having a wide (but slow per pin) hub bus 240 to a fast differential memory (not shown).
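The pin-count scaling argument above reduces to simple arithmetic. The figures used here (32 data pins per DRAM, a 2x speed ratio) are illustrative assumptions, not values from the patent:

```python
import math

# Sketch of the pin-count scaling argument: the GPU-side pin reduction
# tracks the ratio of hub-bus speed to DRAM-bus speed. Figures assumed.

def gpu_pins_needed(dram_bus_pins: int, speed_ratio: float) -> int:
    """GPU-side pins needed to match the bandwidth of `dram_bus_pins`
    conventional DRAM pins, when the hub bus is `speed_ratio` times
    faster per pin."""
    return math.ceil(dram_bus_pins / speed_ratio)

direct_pins = 4 * 32                       # four DRAMs wired directly to the GPU
hub_pins = gpu_pins_needed(direct_pins, 2.0)
print(direct_pins, hub_pins)               # 128 pins direct vs. 64 via the hub
```

The same function also covers the converse case the paragraph ends with: a speed ratio below 1.0 models a wide-but-slow hub bus coupled to a faster memory.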

GPU 220 includes a graphics pipeline 222. A stage of graphics pipeline 222, such as a raster operations (ROP) stage 224, requires access to data from DRAM memories 250. GPU 220 includes a memory controller 226 that issues memory read and memory write requests on behalf of graphics clients, such as ROP stage 224. Memory controller 226 is coupled to hub interface 227. Hub interface 227 utilizes hub bus 240 to send memory read/write requests to DRAMs 250 and receive returned data.

Memory hub 230 includes a DRAM I/O module 238 to support input/output operations with DRAMs 250 via I/O buses 260. As is well known in the memory art, each type of DRAM memory typically has its own particular signaling protocol and bus terminations, such as low voltage transistor-to-transistor logic (LVTTL) for SDRAM and stub series terminated logic (SSTL) supporting low voltage differential signaling for DDR memory. A DRAM interface 236 supports DRAM memory access interface processes. Exemplary DRAM memory access interface processes which may be supported by DRAM interface 236 include: a dynamic random access memory (DRAM) initialization sequence; a mode register command sequence; support for transmitting data to dynamic random access memory over the rising and falling edges of a clock; support for supplying a data mask to dynamic random access memory over a falling edge of a clock; and a DRAM protocol for storing and retrieving DRAM cell contents, such as a read command, a write command, a refresh command, a bank/row command, and a pre-charge command. More generally, DRAM interface 236 may be used to offload DRAM interface processes mandated by memory standards such as those required by SDR SDRAM memory, GDDR SGRAM memory, DDR memory (i.e., the DDR and DDR2 standards promoted by the Joint Electron Device Engineering Council (JEDEC)), or any of the other previously described DRAM standards.
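A hypothetical sketch of one offloaded process, the initialization sequence, is shown below. Command names follow common SDRAM practice; the exact ordering, refresh count, and timings would come from the relevant JEDEC standard, not from this illustration:

```python
# Hypothetical sketch of a DRAM initialization sequence the hub could
# perform on the GPU's behalf. Command names and counts are illustrative
# of common SDRAM practice, not taken from the patent or a standard.

def dram_init_sequence(refresh_count: int = 2) -> list[str]:
    """Return an ordered list of commands the hub would issue at power-up."""
    seq = ["WAIT_POWER_STABLE",       # wait for stable power and clock
           "PRECHARGE_ALL"]           # close all banks
    seq += ["AUTO_REFRESH"] * refresh_count
    seq.append("LOAD_MODE_REGISTER")  # program CAS latency, burst length, etc.
    return seq

print(dram_init_sequence())
```

Because the hub issues this sequence itself, GPU 220 never needs logic for it, which is the offload benefit described above.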

Memory hub 230 also includes logic 235 to support signal bridging and any necessary translation between hub interface 232 and DRAM interface 236. This may include, for example, logic to support aggregation of several DRAM memories and perform any translations required to account for differences in the protocol of DRAM interface 236 and the protocol of hub interface 232.

Memory hub 230 is preferably designed so that any increase in total memory latency stays within a pre-selected latency budget. For example, conventional DRAM memory may ordinarily have about twenty clock cycles of latency. Memory hub 230 may be designed to add only a small number of additional clock cycles of latency to the conventional DRAM memory latency. In a GPU 220 performing parallel processing of graphics data, a small increase in memory latency will typically not significantly reduce total performance. This is due, in part, to the fact that a high performance GPU 220 may have hundreds or thousands of parallel processing threads that divide up the work performed to process graphics frames. As a result, small increases in memory latency do not significantly decrease performance in a massively parallel GPU architecture. In contrast, the performance of a CPU 205 having a small number of processing threads (e.g., one thread) is critically dependent on low latency to prevent idle clock cycles during which no processing is performed.
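The latency-hiding argument can be captured in a toy throughput model. The numbers are assumed for illustration (only the twenty-cycle DRAM latency comes from the text):

```python
# Toy throughput model of latency hiding: with many requests in flight,
# throughput is bounded by issue bandwidth rather than per-request
# latency (a Little's-law argument). Thread counts are assumed.

def effective_throughput(threads: int, latency_cycles: int,
                         issue_per_cycle: float = 1.0) -> float:
    """Memory requests retired per cycle, assuming each thread keeps
    one request in flight."""
    return min(threads / latency_cycles, issue_per_cycle)

print(effective_throughput(1000, 20))  # 1.0: latency fully hidden
print(effective_throughput(1000, 25))  # 1.0: five extra hub cycles, still hidden
print(effective_throughput(1, 20))     # 0.05: a single thread exposes latency
```

In this model the GPU case is insensitive to a few added hub cycles, while the single-thread CPU case degrades in direct proportion to latency, matching the contrast drawn above.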

FIG. 3 illustrates an embodiment of a memory hub 330 in which a ROP stage 224 is included in a memory hub 330. Placing a ROP stage 224 closer to physical memory may provide performance advantages for ROP processes that are latency sensitive. Additionally, placing ROP stage 224 closer to physical memory may be useful for a GPU that is not massively parallel. Thus, as an alternative to designing a low-latency memory hub, part or all of the functionality of ROP stage 224 may be moved into memory hub 330.

One application of the present invention is to increase the memory bandwidth of a graphics processor 220. Memory hub 230 permits an increase in the number of memories that can be coupled to GPU 220 using a fixed number of GPU I/O connections (e.g., balls of the GPU in a ball-grid-array package). Another application of the present invention is to provide an adaptation function. Memory hub 230 may be designed to make GPU 220 compatible with several different types of memories. Yet another application of the present invention is to offload one or more memory access protocols to memory hub 230. Memory access protocols may be subject to reasonable and non-discriminatory (RAND) licensing provisions set by standard setting organizations. By placing the functionality of memory access protocols in memory hub 230, the licensing burden associated with the memory access protocols lies with a comparatively low-cost memory hub 230, not within a high-value GPU 220. Additionally, different versions of memory hub 230 may be created to minimize the number of units requiring specific licenses for graphics system 200 to be manufactured with different types of memory. For example, one version of memory hub 230 may support DDR2 and another may support GDDR SGRAM. The memory hub 230 may, for example, be designed to minimize the number of licenses that must be procured, minimize the number of units requiring a license, or minimize the value of the units from which licensing royalties are calculated.

The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention.

Claims (19)

1. A memory hub for a graphics system, comprising:
a dynamic random access memory (DRAM) interface operative to access a plurality of different types of DRAM memories each requiring a different DRAM protocol;
a hub interface for accessing the memory hub via an Input/Output (I/O) bus; and
logic for bridging signals between said hub interface and said DRAM interface and performing translation of signals between said hub interface and said DRAM interface;
said memory hub operative for a graphics processing unit (GPU) to utilize said hub interface to access two or more DRAMs;
wherein said hub interface utilizes a high speed packetized bus protocol and said DRAM interface utilizes a non-packetized protocol, said high speed packetized bus protocol having at least a factor of two faster transfer rate than said non-packetized protocol such that said memory hub reduces an I/O pin count on said GPU.
2. The memory hub of claim 1, wherein said hub interface supports a hub bus having a bandwidth corresponding to two or more of said DRAM I/O bus interfaces.
3. The memory hub of claim 2, wherein said hub bus is a high speed bus having a data rate greater than two of said DRAM I/O bus interfaces.
4. The memory hub of claim 1, wherein said at least one DRAM memory access interfacing operation is offloaded from said graphics processing unit to said memory hub.
5. The memory hub of claim 4, wherein a DRAM initialization sequence is offloaded to said memory hub.
6. The memory hub of claim 4, wherein a mode register command sequence is offloaded to said memory hub.
7. The memory hub of claim 4, wherein said DRAM interface supports transmitting data to DRAMs over the rising and falling edge of a clock.
8. The memory hub of claim 4, wherein said DRAM interface supports supplying a data mask to dynamic random access memory over a falling edge of a clock.
9. The memory hub of claim 4, wherein a DRAM protocol for storing and retrieving DRAM cell contents is offloaded to said memory hub.
10. The memory hub of claim 9, wherein said DRAM protocol includes at least one protocol selected from the group consisting of a read command, a write command, a refresh command, a bank/row command, and a pre-charge command.
11. The memory hub of claim 1, wherein said DRAM interface supports at least one DRAM protocol from the group consisting of SDR SDRAM, DDR SDRAM, DDR2 SDRAM, GDDR SGRAM, GDDR2 SGRAM, GDDR3 SGRAM, and GDDR4 SGRAM.
12. The memory hub of claim 1, wherein said memory hub further comprises a raster operations module for performing at least one raster operation on the behalf of said GPU.
13. A graphics processing system, comprising:
a graphics processing unit (GPU), said GPU including:
a memory controller; and
a memory hub interface for accessing a memory hub via an Input/Output (I/O) bus wherein said I/O bus is a packetized high speed bus;
said GPU being operative to access a plurality of dynamic random access memories (DRAMs) via said memory hub, each DRAM utilizing a non-packetized I/O bus;
wherein said packetized high speed bus has at least a factor of two faster transfer rate than said non-packetized I/O bus such that said GPU is operative to utilize said memory hub to aggregate two or more DRAMs to increase memory bandwidth with a reduced pin count.
14. The graphics processing system of claim 13, wherein said GPU offloads DRAM interfacing to said memory hub.
15. The graphics processing system of claim 13, wherein said GPU offloads a plurality of DRAM memory access protocols to said memory hub to support a plurality of different types of DRAM memories.
16. The graphics processing system of claim 13, wherein said packetized high speed bus is a peripheral component interconnect express (PCI-E) bus.
17. A graphics system, comprising:
a graphics processing unit (GPU) including a memory controller and a first memory hub interface; and
a memory hub, comprising:
a dynamic random access memory (DRAM) interface operative to access a plurality of DRAM memories utilizing at least one DRAM protocol, each DRAM utilizing a non-packetized Input/Output (I/O) bus;
a second memory hub interface; and
logic for bridging signals between said second hub interface and said DRAM interface and performing translation of signals between said second memory hub interface and said DRAM interface;
said GPU communicatively coupled to said memory hub via an input/output (I/O) bus coupling said first memory hub interface and said second memory hub interface wherein said I/O bus is a packetized high speed bus;
said GPU accessing a plurality of DRAM memories via said memory hub and offloading at least one DRAM interfacing operation to said memory hub;
wherein said packetized high speed bus has at least a factor of two faster transfer rate than said non-packetized I/O bus such that said GPU is operative to utilize said memory hub to aggregate two or more DRAMs to increase memory bandwidth with a reduced pin count.
18. The graphics system of claim 17, wherein said memory hub supports a plurality of different types of DRAM memories each having a different DRAM protocol.
19. The graphics system of claim 17, wherein said packetized high speed bus is a peripheral component interconnect express (PCI-E) bus.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11303187 US7477257B2 (en) 2005-12-15 2005-12-15 Apparatus, system, and method for graphics memory hub

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US11303187 US7477257B2 (en) 2005-12-15 2005-12-15 Apparatus, system, and method for graphics memory hub
EP20060006270 EP1808772B1 (en) 2005-12-15 2006-03-27 Apparatus, system, and method for graphics memory hub
DE200660009212 DE602006009212D1 (en) 2005-12-15 2006-03-27 Device, system and method for a graphics memory hub
JP2006088642A JP2007164755A (en) 2005-12-15 2006-03-28 Apparatus, system and method for graphics memory hub
CN 200610066357 CN1983329B (en) 2005-12-15 2006-03-30 Apparatus, system, and method for graphics memory hub
US12327626 US8194085B2 (en) 2005-12-15 2008-12-03 Apparatus, system, and method for graphics memory hub

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12327626 Continuation US8194085B2 (en) 2005-12-15 2008-12-03 Apparatus, system, and method for graphics memory hub

Publications (2)

Publication Number Publication Date
US20070139426A1 (en) 2007-06-21
US7477257B2 (en) 2009-01-13

Family

ID=37719125

Family Applications (2)

Application Number Title Priority Date Filing Date
US11303187 Active 2026-11-24 US7477257B2 (en) 2005-12-15 2005-12-15 Apparatus, system, and method for graphics memory hub
US12327626 Active US8194085B2 (en) 2005-12-15 2008-12-03 Apparatus, system, and method for graphics memory hub

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12327626 Active US8194085B2 (en) 2005-12-15 2008-12-03 Apparatus, system, and method for graphics memory hub

Country Status (5)

Country Link
US (2) US7477257B2 (en)
JP (1) JP2007164755A (en)
CN (1) CN1983329B (en)
DE (1) DE602006009212D1 (en)
EP (1) EP1808772B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080036758A1 (en) * 2006-03-31 2008-02-14 Intelisum Inc. Systems and methods for determining a global or local position of a point of interest within a scene using a three-dimensional model of the scene
US20080225603A1 (en) * 2007-03-15 2008-09-18 Thomas Hein Circuit

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
US7412554B2 (en) * 2006-06-15 2008-08-12 Nvidia Corporation Bus interface controller for cost-effective high performance graphics system with two or more graphics processing units
ES2374656T3 (en) * 2007-10-24 2012-02-20 Agfa Graphics N.V. Inks and fluids radiation curable inkjet improved for photoyellowing.
US8391302B1 (en) * 2009-12-03 2013-03-05 Integrated Device Technology, Inc. High-performance ingress buffer for a packet switch
US20130027413A1 (en) * 2011-07-26 2013-01-31 Rajeev Jayavant System and method for entering and exiting sleep mode in a graphics subsystem
CN102522113B (en) * 2011-09-28 2014-09-17 华为技术有限公司 SDRAM bridge circuit
US9086881B2 (en) * 2012-06-29 2015-07-21 Intel Corporation Mechanism for facilitating write tracking for following data eye movements across changing thermal conditions in memory systems
US8959298B2 (en) 2012-12-10 2015-02-17 Qualcomm Incorporated System and method for managing performance of a computing device having dissimilar memory types
US9110795B2 (en) 2012-12-10 2015-08-18 Qualcomm Incorporated System and method for dynamically allocating memory in a memory subsystem having asymmetric memory components
US9092327B2 (en) 2012-12-10 2015-07-28 Qualcomm Incorporated System and method for allocating memory to dissimilar memory devices using quality of service
US20150035843A1 (en) * 2013-07-31 2015-02-05 Sandia Corporation Graphics processing unit management system for computed tomography
US9383809B2 (en) * 2013-11-13 2016-07-05 Qualcomm Incorporated System and method for reducing memory I/O power via data masking
CN104750632A (en) * 2013-12-26 2015-07-01 联芯科技有限公司 Data interaction method and system between baseband processor and application processor

Citations (9)

Publication number Priority date Publication date Assignee Title
US5301278A (en) 1988-04-29 1994-04-05 International Business Machines Corporation Flexible dynamic memory controller
US6260127B1 (en) 1998-07-13 2001-07-10 Compaq Computer Corporation Method and apparatus for supporting heterogeneous memory in computer systems
US6532525B1 (en) 2000-09-29 2003-03-11 Ati Technologies, Inc. Method and apparatus for accessing memory
US20030122834A1 (en) 2001-12-28 2003-07-03 Mastronarde Josh B. Memory arbiter with intelligent page gathering logic
US20030177303A1 (en) * 1994-12-23 2003-09-18 Intel Corporation Method and apparatus for controlling of a memory subsystem installed with standard page mode memory and an extended data out memory
US6630936B1 (en) * 2000-09-28 2003-10-07 Intel Corporation Mechanism and method for enabling two graphics controllers to each execute a portion of a single block transform (BLT) in parallel
US20040168020A1 (en) 1999-02-26 2004-08-26 Micron Technology, Inc. Memory device interface
US20040260864A1 (en) 2003-06-19 2004-12-23 Lee Terry R. Reconfigurable memory module and method
US20070067517A1 (en) * 2005-09-22 2007-03-22 Tzu-Jen Kuo Integrated physics engine and related graphics processing system

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JPH0212541A (en) * 1988-04-29 1990-01-17 Internatl Business Mach Corp <Ibm> Computing system and operating method therefor
US6335932B2 (en) * 1998-07-08 2002-01-01 Broadcom Corporation High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
US6792500B1 (en) * 1998-07-08 2004-09-14 Broadcom Corporation Apparatus and method for managing memory defects
GB9828144D0 (en) * 1998-12-22 1999-02-17 Power X Limited Data switching apparatus
US6567091B2 (en) * 2000-02-01 2003-05-20 Interactive Silicon, Inc. Video controller system with object display lists
US6954811B2 (en) * 2002-07-19 2005-10-11 Calix Networks, Inc. Arbiter for an input buffered communication switch
US6820181B2 (en) 2002-08-29 2004-11-16 Micron Technology, Inc. Method and system for controlling memory accesses to memory modules having a memory hub architecture
US7162567B2 (en) * 2004-05-14 2007-01-09 Micron Technology, Inc. Memory hub and method for memory sequencing

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301278A (en) 1988-04-29 1994-04-05 International Business Machines Corporation Flexible dynamic memory controller
US20030177303A1 (en) * 1994-12-23 2003-09-18 Intel Corporation Method and apparatus for controlling of a memory subsystem installed with standard page mode memory and an extended data out memory
US6260127B1 (en) 1998-07-13 2001-07-10 Compaq Computer Corporation Method and apparatus for supporting heterogeneous memory in computer systems
US6907493B2 (en) 1999-02-26 2005-06-14 Micron Technology, Inc. Memory device interface
US20040168020A1 (en) 1999-02-26 2004-08-26 Micron Technology, Inc. Memory device interface
US6630936B1 (en) * 2000-09-28 2003-10-07 Intel Corporation Mechanism and method for enabling two graphics controllers to each execute a portion of a single block transform (BLT) in parallel
US6532525B1 (en) 2000-09-29 2003-03-11 Ati Technologies, Inc. Method and apparatus for accessing memory
US6792516B2 (en) 2001-12-28 2004-09-14 Intel Corporation Memory arbiter with intelligent page gathering logic
US20030122834A1 (en) 2001-12-28 2003-07-03 Mastronarde Josh B. Memory arbiter with intelligent page gathering logic
US20040260864A1 (en) 2003-06-19 2004-12-23 Lee Terry R. Reconfigurable memory module and method
US7120727B2 (en) 2003-06-19 2006-10-10 Micron Technology, Inc. Reconfigurable memory module and method
US20070067517A1 (en) * 2005-09-22 2007-03-22 Tzu-Jen Kuo Integrated physics engine and related graphics processing system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"DDR2 SDRAM FBDIMM," Micron Technology, Inc., pp. 1-36, 2004.
"Intel Desktop Board CC820 Specification Update," pp. 1-122, Jun. 2000.
"Intel Desktop Board CC820 Technical Product Specification" pp. 1-122, Nov. 1999.
European Search Report from Application No. EP06006270.0-2212 dated Feb. 27, 2007.
Haas et al., "OSAP005 Advances In Server Memory Technology" pp. 1-35.
Hass et al., "Fully Buffered DIMM Technology Moves Enterprise Platforms to he Next Level," pp. 1-7, Mar. 2005.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080036758A1 (en) * 2006-03-31 2008-02-14 Intelisum Inc. Systems and methods for determining a global or local position of a point of interest within a scene using a three-dimensional model of the scene
US20080225603A1 (en) * 2007-03-15 2008-09-18 Thomas Hein Circuit
US8207976B2 (en) * 2007-03-15 2012-06-26 Qimonda Ag Circuit
US20120198265A1 (en) * 2007-03-15 2012-08-02 Qimonda Ag Circuit
US8674999B2 (en) * 2007-03-15 2014-03-18 Qimonda Ag Circuit

Also Published As

Publication number Publication date Type
JP2007164755A (en) 2007-06-28 application
CN1983329A (en) 2007-06-20 application
US8194085B2 (en) 2012-06-05 grant
EP1808772A1 (en) 2007-07-18 application
CN1983329B (en) 2011-10-19 grant
US20090079748A1 (en) 2009-03-26 application
DE602006009212D1 (en) 2009-10-29 grant
US20070139426A1 (en) 2007-06-21 application
EP1808772B1 (en) 2009-09-16 grant

Similar Documents

Publication Publication Date Title
US5319755A (en) Integrated circuit I/O using high performance bus interface
US5953263A (en) Synchronous memory device having a programmable register and method of controlling same
US5243703A (en) Apparatus for synchronously generating clock signals in a data processing system
US7024518B2 (en) Dual-port buffer-to-memory interface
US7254075B2 (en) Integrated circuit memory system having dynamic memory bank count and page size
US6721840B1 (en) Method and system for interfacing an integrated circuit to synchronous dynamic memory and static memory
US6742098B1 (en) Dual-port buffer-to-memory interface
US6295592B1 (en) Method of processing memory requests in a pipelined memory controller
US6324120B2 (en) Memory device having a variable data output length
US20080115006A1 (en) System and method for adjusting the timing of signals associated with a memory system
US6751696B2 (en) Memory device having a programmable register
US20060039227A1 (en) Memory device having staggered memory operations
US5729709A (en) Memory controller with burst addressing circuit
US20040006674A1 (en) System and method for multi-modal memory controller system operation
US5844856A (en) Dual port memories and systems and methods using the same
US7724589B2 (en) System and method for delaying a signal communicated from a system to at least one of a plurality of memory circuits
US7580312B2 (en) Power saving system and method for use with a plurality of memory circuits
US7609567B2 (en) System and method for simulating an aspect of a memory circuit
US6298426B1 (en) Controller configurable for use with multiple memory organizations
US7242635B2 (en) Semiconductor integrated circuit device, data processing system and memory system
US20040117566A1 (en) Reconfigurable memory controller
Kushiyama et al. A 500-megabyte/s data-rate 4.5 M DRAM
US20080123459A1 (en) Combined signal delay and power saving system and method for use with a plurality of memory circuits
US20060106951A1 (en) Command controlling different operations in different chips
US7979757B2 (en) Method and apparatus for testing high capacity/high bandwidth memory devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: NVIDIA CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GRECO, JOSEPH DAVID;ALBEN, JONAH M.;WAGNER, BARRY A.;AND OTHERS;REEL/FRAME:017382/0265;SIGNING DATES FROM 20051212 TO 20051214

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8