US20010019331A1 - Unified memory architecture for use in computer system - Google Patents

Unified memory architecture for use in computer system

Info

Publication number
US20010019331A1
US20010019331A1 (application US 09/137,067)
Authority
US
United States
Prior art keywords
memory
computer system
memory controller
implemented
coupled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/137,067
Inventor
Michael J. K. Nielsen
Zahid S. Hussain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Michael J. K. Nielsen
Zahid S. Hussain
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Michael J. K. Nielsen and Zahid S. Hussain
Priority to US 09/137,067
Publication of US20010019331A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignors: MICROSOFT CORPORATION
Current legal status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1081Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/12Frame memory handling
    • G09G2360/125Frame memory handling using unified memory architecture [UMA]
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory

Definitions

  • the present invention relates to the field of computer systems. Specifically, the present invention relates to a computer system architecture including dynamic memory allocation of pixel buffers for graphics and image processing.
  • Typical prior art computer systems often rely on peripheral processors and dedicated peripheral memory units to perform various computational operations. For example, peripheral graphics display processors are used to render graphics images (synthesis) and peripheral image processors are used to perform image processing (analysis).
  • CPU main memory is separate from peripheral memory units which can be dedicated to graphics rendering or image processing or other computational functions.
  • With reference to Prior Art FIG. 1, the prior art computer graphics system 100 includes three separate memory units: a main memory 102, a dedicated graphics memory 104, and a dedicated image processing memory (image processor memory) 105.
  • Main memory 102 provides fast access to data for a CPU 106 and an input/output device 108 .
  • the CPU 106 and input/output device 108 are connected to main memory 102 via a main memory controller 110 .
  • Dedicated graphics memory 104 provides fast access to graphics data for a graphics processor 112 via a graphics memory controller 114 .
  • Dedicated image processor memory 105 provides fast access to buffers of data used by an image processor 116 via an image processor memory controller 118 .
  • CPU 106 has read/write access to main memory 102 but not to dedicated graphics memory 104 or dedicated image processor memory 105 .
  • image processor 116 has read/write access to dedicated image processor memory 105 , but not to main memory 102 or dedicated graphics memory 104 .
  • graphics processor 112 has read/write access to dedicated graphics memory 104 but not to main memory 102 or dedicated image processor memory 105 .
  • Certain computer system applications require that data, stored in main memory 102 or in one of the dedicated memory units 104 , 105 , be operated upon by a processor other than the processor which has access to the memory unit in which the desired data is stored. Whenever data stored in one particular memory unit is to be processed by a designated processor other than the processor which has access to that particular memory unit, the data must be transferred to a memory unit for which the designated processor has access.
  • certain image processing applications require that data, stored in main memory 102 or dedicated graphics memory 104 , be processed by the image processor 116 .
  • Image processing is defined as any function(s) that apply to two dimensional blocks of pixels.
  • pixels may be in the format of file system images, fields, or frames of video entering the prior art computer system 100 through video ports, mass storage devices such as CD-ROMs, fixed-disk subsystems and Local or Wide Area network ports.
  • In order to enable image processor 116 to access data stored in main memory 102 or in dedicated graphics memory 104, the data must be transferred or copied to dedicated image processor memory 105.
  • One problem with the prior art computer graphics system 100 is the cost of high performance peripheral dedicated memory systems such as the dedicated graphics memory unit 104 and dedicated image processor memory 105. Another problem is the cost of high performance interconnects for multiple memory systems. A further problem is that the above discussed transfers of data between memory units require time and processing resources. Thus, what is needed is a computer system architecture with a single unified memory system which can be shared by multiple processors in the computer system without transferring data between multiple dedicated memory units.
  • the present invention pertains to a computer system providing dynamic memory allocation for graphics.
  • the computer system includes a memory controller, a unified system memory, and memory clients each having access to the system memory via the memory controller.
  • Memory clients can include a graphics rendering engine, a central processing unit (CPU), an image processor, a data compression/expansion device, an input/output device, and a graphics back end device.
  • the rendering engine and the memory controller are implemented on a first integrated circuit (first IC) and the image processor and the data compression/expansion device are implemented on a second IC.
  • the computer system provides read/write access to the unified system memory, through the memory controller, for each of the memory clients.
  • Translation hardware is included for mapping virtual addresses of pixel buffers to physical memory locations in the unified system memory. Pixel buffers are dynamically allocated as tiles of physically contiguous memory. Translation hardware, for mapping the virtual addresses of pixel buffers to physical memory locations in the unified system memory, is implemented in each of the computational devices which are included as memory clients in the computer system.
  • the unified system memory is implemented using synchronous DRAM.
  • tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of 128 pixels wherein each pixel is a 4 byte pixel.
  • the present invention is also well suited to using tiles of other sizes.
  • the dynamically allocated pixel buffers are comprised of n² tiles, where n is an integer.
  • the computer system of the present invention provides functional advantages for graphical display and image processing. There are no dedicated memory units in the computer system of the present invention aside from the unified system memory. Therefore, it is not necessary to transfer data from one dedicated memory unit to another when a peripheral processor is called upon to process data generated by the CPU or by another peripheral device.
  • FIG. 1A is a circuit block diagram of a typical prior art computer system including peripheral processors and associated dedicated memory units.
  • FIG. 2A is a circuit block diagram of an exemplary unified system memory computer architecture according to the present invention.
  • FIG. 2B is an internal circuit block diagram of a graphics rendering and memory controller IC including a memory controller (MC) and a graphics rendering engine integrated therein.
  • FIG. 3A is an illustration of an exemplary tile for dynamic allocation of pixel buffers according to the present invention.
  • FIG. 3B is an illustration of an exemplary pixel buffer comprised of n² tiles according to the present invention.
  • FIG. 3C is a block diagram of an address translation scheme according to the present invention.
  • FIG. 4 is a block diagram of a memory controller according to the present invention.
  • FIG. 5 is a timing diagram for memory client requests issued to the unified system memory according to the present invention.
  • FIG. 6 is a timing diagram for memory client write data according to the present invention.
  • FIG. 7 is a timing diagram for memory client read data according to the present invention.
  • FIG. 8 is a timing diagram for an exemplary write to a new page performed by the unified system memory according to the present invention.
  • FIG. 9 is a timing diagram for an exemplary read to a new page performed by the unified system memory according to the present invention.
  • FIG. 10 shows external banks of the memory controller according to the present invention.
  • FIG. 11 shows a flow diagram for bank state machines according to the present invention.
  • With reference to FIG. 2A, computer system 200 includes a unified system memory 202 which is shared by various memory system clients including a CPU 206, a graphics rendering engine 208, an input/output IC 210, a graphics back end IC 212, an image processor 214, and a memory controller 204.
  • With reference to FIG. 2B, computer system 201 includes the unified system memory 202 which is shared by various memory system clients including the CPU 206, the input/output IC 210, the graphics back end IC 212, an image processing and compression and expansion IC 216, and a graphics rendering and memory controller IC 218.
  • the image processing and compression and expansion IC 216 includes the image processor 214 , and a data compression and expansion unit 215 .
  • the graphics rendering and memory controller (GRMC) IC 218 includes the graphics rendering engine (rendering engine) 208 and the memory controller 204 integrated therein.
  • the graphics rendering and memory controller IC 218 is coupled to unified system memory 202 via a high bandwidth memory data bus (HBWMD BUS) 225 .
  • HBWMD BUS 225 includes a demultiplexer (SD-MUX) 220 , a first BUS 222 coupled between the graphics rendering and memory controller IC 218 and SD-MUX 220 , and a second bus 224 coupled between SD-MUX 220 and unified system memory 202 .
  • BUS 222 includes 144 lines cycled at 133 MHz and BUS 224 includes 288 lines cycled at 66 MHz.
  • SD-MUX 220 demultiplexes the 144 lines of BUS 222 , which are cycled at 133 MHz, to double the number of lines, 288, of BUS 224 , which are cycled at half the frequency, 66 MHz.
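  • As a sanity check on these figures, the signalling rate is conserved across SD-MUX 220, and the 256 data lines of BUS 224 account for the 2.133 GB/s peak bandwidth quoted later for unified system memory 202. The small calculation below is illustrative only; it treats 133 MHz as 2 x 66.67 MHz and assumes that 256 of the 288 lines of BUS 224 carry memory data:

```c
#include <stdio.h>

/* Illustrative bandwidth arithmetic for the HBWMD bus figures above. */
int main(void) {
    double fast_hz = 133.33e6, slow_hz = 66.67e6;
    int fast_lines = 144, slow_lines = 288;

    /* The raw signalling rate matches on both sides of SD-MUX 220. */
    printf("fast side: %.1f Gbit/s\n", fast_lines * fast_hz / 1e9);
    printf("slow side: %.1f Gbit/s\n", slow_lines * slow_hz / 1e9);

    /* Peak data bandwidth of the 256-bit data portion of BUS 224. */
    printf("peak data bandwidth: %.3f GB/s\n", (256 / 8) * slow_hz / 1e9);
    return 0;
}
```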
  • CPU 206 is coupled to the graphics rendering and memory controller IC 218 by a third bus 226 .
  • BUS 226 is 64 bits wide and carries signals cycled at 100 MHz.
  • the image processing and compression and expansion IC 216 is coupled to BUS 226 by a fourth bus 228.
  • BUS 228 is 64 bits wide and carries signals cycled at 100 MHz.
  • the graphics back end IC 212 is coupled to the graphics rendering and memory controller IC 218 by a fifth bus 230.
  • BUS 230 is 64 bits wide and carries signals cycled at 133 MHz.
  • the input/output IC 210 is coupled to the graphics rendering and memory controller IC 218 by a sixth bus 232.
  • BUS 232 is 32 bits wide and carries signals cycled at 133 MHz.
  • the input/output IC 210 of FIG. 2A contains all of the input/output interfaces including: keyboard & mouse, interval timers, serial, parallel, I2C, audio, video in & out, and fast Ethernet.
  • the input/output IC 210 also contains an interface to an external 64-bit PCI expansion bus, BUS 231 , that supports five masters (two SCSI controllers and three expansion slots).
  • With reference to FIG. 2B, an internal circuit block diagram is shown of the graphics rendering and memory controller IC 218 according to an embodiment of the present invention.
  • rendering engine 208 and memory controller 204 are integrated within the graphics rendering and memory controller IC 218.
  • the graphics rendering and memory controller IC 218 also includes a CPU/IPCE interface 234 , an input/output interface 236 , and a GBE interface 232 .
  • GBE interface 232 buffers and transfers display data from unified system memory 202 to the graphics back end IC 212 in 16×32-byte bursts.
  • GBE interface 232 buffers and transfers video capture data from the graphics back end IC 212 to unified system memory 202 in 16×32-byte bursts.
  • GBE interface 232 issues GBE interrupts to CPU/IPCE interface 234 .
  • BUS 230, shown in both FIG. 2A and FIG. 2B, couples GBE interface 232 to the graphics back end IC 212 (FIG. 2A).
  • the input/output interface 236 buffers and transfers data from unified system memory 202 to the input/output IC 210 in 8×32-byte bursts.
  • the input/output interface 236 buffers and transfers data from the input/output IC 210 to unified system memory 202 in 8×32-byte bursts.
  • the input/output interface 236 issues the input/output IC interrupts to CPU/IPCE interface 234.
  • BUS 232, shown in both FIG. 2A and FIG. 2B, couples the input/output interface 236 to the input/output IC 210 (FIG. 2A).
  • a bus, BUS 226, couples CPU/IPCE interface 234 to CPU 206 and, through BUS 228, to the image processing and compression and expansion IC 216.
  • the memory controller 204 is the interface between memory system clients (CPU 206, rendering engine 208, input/output IC 210, graphics back end IC 212, image processor 214, and data compression/expansion device 215) and the unified system memory 202.
  • the memory controller 204 is coupled to unified system memory 202 via HBWMD BUS 225, which allows fast transfer of large amounts of data to and from unified system memory 202.
  • Memory clients make read and write requests to unified system memory 202 through the memory controller 204.
  • the memory controller 204 converts requests into the appropriate control sequences and passes data between memory clients and unified system memory 202.
  • the memory controller 214 contains two pipeline structures, one for commands and another for data.
  • the request pipe has three stages, arbitration, decode and issue/state machine.
  • the data pipe has only one stage, ECC. Requests and data flow through the pipes in the following manner. Clients place their requests in a queue.
  • the arbitration logic looks at all of the requests at the top of the client queues and decides which request to start through the pipe. From the arbitration stage, the request flows to the decode stage. During the decode stage, information about the request is collected and passed onto an issue/state machine stage.
  • the rendering engine 208 is a 2-D and 3-D graphics coprocessor which can accelerate rasterization.
  • the rendering engine 208 is also cycled at 66 MHz and operates synchronously to the unified system memory 202 .
  • the rendering engine 208 receives rendering parameters from the CPU 206 and renders directly to frame buffers stored in the unified system memory 202 (FIG. 2A).
  • the rendering engine 208 issues memory access requests to the memory controller 204. Since the rendering engine 208 shares the unified system memory 202 with other memory clients, the performance of the rendering engine 208 will vary as a function of the load on the unified system memory 202.
  • the rendering engine 208 is logically partitioned into four major functional units: a host interface, a pixel pipeline, a memory transfer engine, and a memory request unit.
  • the host interface controls reading and writing from the host to programming interface registers.
  • the pixel pipeline implements a rasterization and rendering pipeline to a frame buffer.
  • the memory transfer engine performs byte-aligned clears and copies at memory bandwidth on both linear buffers and frame buffers.
  • the memory request unit arbitrates between requests from the pixel pipeline and queues up memory requests to be issued to the memory controller 204.
  • the computer system 200 includes dynamic memory allocation of virtual pixel buffers in the unified system memory 202 .
  • Pixel buffers include frame buffers, texture maps, video maps, image buffers, etc.
  • Each pixel buffer can include multiple color buffers, a depth buffer, and a stencil buffer.
  • pixel buffers are allocated in units of contiguous memory called tiles and address translation buffers are provided for dynamic allocation of pixel buffers.
  • each tile 300 includes 64 kilobytes of physically contiguous memory.
  • a 64 kilobyte tile can be comprised of 128×128 pixels for 32 bit pixels, 256×128 pixels for 16 bit pixels, or 512×128 pixels for 8 bit pixels.
  • tiles begin on 64 kilobyte aligned addresses.
  • An integer number of tiles can be allocated for each pixel buffer. For example, a 200×200 pixel buffer and a 256×256 pixel buffer would both require four (128×128) pixel tiles.
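  • The tile arithmetic described above can be captured in a short sketch (illustrative code, not part of the patent; the helper name and the round-up convention are assumptions):

```c
#include <stdio.h>

/* Tiles needed for a w x h pixel buffer.  Tile geometry follows the text:
 * 128x128 pixels for 32-bit, 256x128 for 16-bit, 512x128 for 8-bit pixels;
 * every tile is 64 KB of physically contiguous, 64 KB-aligned memory. */
static int tiles_needed(int w, int h, int bytes_per_pixel) {
    int tile_w = 128 * (4 / bytes_per_pixel);
    int tile_h = 128;
    int tiles_x = (w + tile_w - 1) / tile_w;   /* round up */
    int tiles_y = (h + tile_h - 1) / tile_h;
    return tiles_x * tiles_y;
}

int main(void) {
    /* Both examples from the text require four 128x128 tiles. */
    printf("200x200: %d tiles\n", tiles_needed(200, 200, 4));
    printf("256x256: %d tiles\n", tiles_needed(256, 256, 4));
    return 0;
}
```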
  • With reference to FIG. 3B, an illustration is shown of an exemplary pixel buffer 302 according to the present invention.
  • translation hardware maps virtual addresses of pixel buffers 302 to physical memory locations in unified system memory 202 .
  • Each of the computational units of the computer system 200 (the image processing and compression and expansion IC 216, the graphics back end IC 212, the input/output IC 210, and the rendering engine 208) includes translation hardware for mapping virtual addresses of pixel buffers 302 to physical memory locations in unified system memory 202.
  • the rendering engine 208 supports a frame buffer address translation buffer (TLB) to translate frame buffer (x,y) addresses into physical memory addresses.
  • This TLB is loaded by CPU 206 with the base physical memory addresses of the tiles which compose a color buffer and the stencil-depth buffer of a frame buffer.
  • the frame buffer TLB has enough entries to hold the tile base physical memory addresses of a 2048×2048 pixel color buffer and a 2048×2048 pixel stencil-depth buffer. Therefore, the TLB has 256 entries for color buffer tiles and 256 entries for stencil-depth buffer tiles.
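  • The translation itself is simple because tile dimensions are powers of two. The sketch below models the lookup for 32-bit pixels; the row-major ordering of TLB entries and the field names are assumptions for illustration:

```c
#include <stdint.h>

/* Model of the frame buffer TLB: each entry holds the 64 KB-aligned
 * physical base address of one 128x128 x 32-bit tile; 256 entries cover
 * a 2048x2048 pixel buffer (16 x 16 tiles). */
#define TILES_PER_ROW 16

typedef struct {
    uint32_t tile_base[256];   /* loaded by CPU 206 */
} fb_tlb_t;

/* Translate a frame buffer (x, y) address into a physical address. */
static uint32_t fb_translate(const fb_tlb_t *tlb, uint32_t x, uint32_t y) {
    uint32_t tile  = (y >> 7) * TILES_PER_ROW + (x >> 7);  /* which tile  */
    uint32_t off_x = x & 127, off_y = y & 127;             /* within tile */
    return tlb->tile_base[tile] + (off_y * 128 + off_x) * 4;
}
```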
  • Tiles provide a convenient unit for memory allocation. By allowing tiles to be scattered throughout memory, tiling makes the amount of memory which must be contiguously allocated manageable. Additionally, tiling provides a means of reducing the amount of system memory consumed by frame buffers. Rendering to tiles which do not contain any pixels pertinent for display, invisible tiles, can be easily clipped out, and hence no memory needs to be allocated for these tiles. For example, a 1024×1024 virtual frame buffer consisting of front and back RGBA buffers and a depth buffer would consume 12 MB of memory if fully resident. However, if each 1024×1024 buffer were partitioned into 64 (128×128) tiles of which only four tiles contained non-occluded pixels, only memory for those visible tiles would need to be allocated. In this case, only 3 MB would be consumed.
  • Each of the memory system clients (e.g., CPU 206, rendering engine 208, input/output IC 210, graphics back end IC 212, image processor 214, and data compression/expansion device 215) has read/write access to the unified system memory 202 through the memory controller 204.
  • Because each memory system client has access to memory shared by each of the other memory system clients, there is no need for transferring data from one dedicated memory unit to another.
  • data can be received by the input/output IC 210 , decompressed (or expanded) by the data compression/expansion device 215 , and stored in the unified system memory 202 .
  • This data can then be accessed by the CPU 206 , the rendering engine 208 , the input/output IC 210 , the graphics back end IC 212 , or the image processor 214 .
  • Any of these memory clients (the CPU 206, the rendering engine 208, the input/output IC 210, the graphics back end IC 212, or the image processor 214) can use data generated by any of the others.
  • Each of the computational units (CPU 206 , input/output IC 210 , the graphics back end IC 212 , the image processing and compression and expansion IC 216 , the graphics rendering and memory controller IC 218 , and the data compression/expansion device 215 ) has translation hardware for determining the physical addresses of pixel buffers as is discussed below.
  • input/output IC 210 can bring in a compressed stream of video data which can be stored into unified system memory 202 .
  • the data compression/expansion device 215 can access the compressed data stored in unified system memory 202, via a path through the graphics rendering and memory controller IC 218.
  • the data compression/expansion device 215 can then decompress the accessed data and store the decompressed data into unified system memory 202.
  • the stored image data can then be used, for example, as a texture map by rendering engine 208 for mapping the stored image onto another image.
  • the resultant image can then be stored into a pixel buffer which has been allocated dynamically in unified system memory 202 . If the resultant image is stored into a frame buffer, allocated dynamically in unified system memory 202 , then the resultant image can be displayed by the graphics back end IC 212 or the image can be captured by writing the image back to another pixel buffer which has been allocated dynamically in unified system memory 202 . Since there is no necessity of transferring data from one dedicated memory unit to another in computer system 200 , functionality is increased.
  • unified system memory 202 of FIG. 2A is implemented using synchronous DRAM (SDRAM) accessed via a 256-bit wide memory data bus cycled at 66 MHz.
  • a SDRAM is made up of rows and columns of memory cells.
  • a row of memory cells is referred to as a page.
  • a memory cell is accessed with a row address and column address.
  • unified system memory 202 provides a peak data bandwidth of 2.133 GB/s.
  • unified system memory 202 is made up of 8 slots. Each slot can hold one SDRAM DIMM.
  • a SDRAM DIMM is constructed from 1M ⁇ 16 or 4M ⁇ 16 SDRAM components and populated on the front only or the front and back side of the DIMM. Two DIMMs are required to make an external SDRAM bank. 1M ⁇ 16 SDRAM components construct a 32 Mbyte external bank, while 4M ⁇ 16 SDRAM components construct a 128 Mbyte external bank.
  • unified system memory 202 can range in size from 32 Mbytes to 1 Gbyte.
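  • The bank and memory-size figures follow directly from the part organization; the arithmetic below is an illustrative check (the sixteen-parts-per-external-bank organization is inferred from the 256-bit data bus and the two-DIMMs-per-bank rule):

```c
#include <stdio.h>

int main(void) {
    /* A 1M x 16 SDRAM part stores 1M * 16 bits = 2 MB; a 4M x 16 part, 8 MB.
     * Sixteen x16 parts in parallel span the 256-bit data bus, which is why
     * two DIMMs are required per external bank. */
    int parts_per_bank = 256 / 16;
    printf("1Mx16 external bank: %d MB\n", parts_per_bank * 2);  /* 32 MB  */
    printf("4Mx16 external bank: %d MB\n", parts_per_bank * 8);  /* 128 MB */

    /* From one 32 MB external bank up to eight 128 MB external banks. */
    printf("memory range: %d MB to %d MB\n", 1 * 32, 8 * 128);
    return 0;
}
```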
  • FIG. 3C shows a block diagram of an address translation scheme according to the present invention.
  • FIG. 4 shows a block diagram of the memory controller 204 of the present invention.
  • The memory client interface contains the signals listed in Table 1, below.

TABLE 1: Memory client interface signals

Signal           CRIME Pin Name  # of Bits  Dir.  Description
clientreq.cmd    internal only   3          in    type of request: 1 = read, 2 = write, 4 = rmw
clientreq.adr    internal only   25         in    address of request
clientreq.msg    internal only   7          in    message sent with request
clientreq.v      internal only   1          in    1 = valid, 0 = not valid
clientreq.ecc    internal only   1          in    1 = ecc is valid, 0 = ecc not valid
clientres.gnt    internal only   1          out   1 = room in client queue, 0 = no room
clientres.wrrdy  internal only   1          out   1 = MC is ready for write data, 0 = MC not ready
clientres.rdrdy  internal only   1          out   1 = valid read data, 0 = not valid read data
clientres.oe     internal only   1          out   1 = enable client driver, 0 = disable client driver
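  • In software terms, the interface of Table 1 can be modeled as the sketch below (field widths follow the table; the struct packaging is an assumption for illustration, not the hardware description). A request is latched when clientreq.v and clientres.gnt are both asserted, as described next.

```c
#include <stdbool.h>
#include <stdint.h>

/* Request commands, per Table 1. */
enum client_cmd { CMD_READ = 1, CMD_WRITE = 2, CMD_RMW = 4 };

/* Memory client -> memory controller request signals. */
struct clientreq {
    uint8_t  cmd;  /* 3 bits: read, write, or read-modify-write     */
    uint32_t adr;  /* 25-bit address of the request                 */
    uint8_t  msg;  /* 7-bit message (rendering engine and I/O only) */
    bool     v;    /* request valid                                 */
    bool     ecc;  /* ECC valid                                     */
};

/* Memory controller -> memory client response signals. */
struct clientres {
    bool gnt;    /* room in the client queue               */
    bool wrrdy;  /* controller is ready for write data     */
    bool rdrdy;  /* valid read data on the data bus        */
    bool oe;     /* enable the client's memory bus drivers */
};
```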
  • a memory client makes a request to the memory controller 204 by asserting clientreq.valid while setting the clientreq.adr, clientreq.msg, clientreq.cmd and clientreq.ecc lines to the appropriate values. If there is room in the queue, the request is latched into the memory client queue. Only two of the memory clients, the rendering engine 208 and the input/output IC 210 , use clientreq.msg. The message specifies which subsystem within the input/output IC 210 or the rendering engine 208 made the request. When an error occurs, this message is saved along with other pertinent information to aid in the debug process.
  • the message is passed through the request pipe and returned with the clientres.wrrdy signal for a write request or with the clientres.rdrdy signal for a read request.
  • the rendering engine 208 uses the information contained in the message to determine which rendering engine 208 queue to access.
  • With reference to FIG. 6, a timing diagram for memory client write data is shown.
  • the data for a write request is not latched with the address and request. Instead, the data, mask and message are latched when the memory controller 204 asserts clientres.wrrdy, indicating that the request has reached the decode stage of the request pipe. Because the memory client queues are in front of the request pipe, there is not a simple relationship between the assertion of clientres.gnt and clientres.wrrdy. Clientreq.msg is only valid for the rendering engine 208 and the input/output IC 210.
  • the memory controller 204 asserts the clientres.oe signal at least one cycle before the assertion of clientres.wrrdy. Clientres.oe is latched locally by the memory client and is used to turn on the memory client's memory data bus drivers.
  • With reference to FIG. 7, a timing diagram for memory client read data is shown.
  • the read data is sent to the memory client over the memdata2client_out bus.
  • When clientres.rdrdy is asserted, the data and message are valid.
  • The memory interface contains the signals listed in Table 2, below.

TABLE 2: Memory interface signals

Signal              Crime Pin Name  # of Bits  Dir.  Description
memwrite            mem_dir         1          out   controls direction of SDMUX chips (defaults to write)
memdata2mem_out     mem_data        256        out   memory data from client going to unified system memory
memdata2client_out  internal only   256        out   memory data from main memory going to the memory client
memmask_out         mem_mask        32         out   memory mask from client going to unified system memory
memdataoe           internal only   3          out   enable memory data bus drivers
ecc_out             mem_ecc         32         out   ecc going to unified system memory
eccmask             mem_eccmask     32         out   ecc mask going to main memory
mem_addr            mem_addr        14         out   memory address
ras_n               mem_ras_n       1          out   row address strobe
cas_n               mem_cas_n       1          out   column address strobe
we_n                mem_we_n        1          out   write enable
cs_n                mem_cs_n        8          out   chip select, one per external bank
  • the data and mask are latched in the data pipe and flow out to the unified system memory 202 on memmask_out and memdata2mem_out.
  • the ECC and ECC mask are generated and sent to the unified system memory 202 across eccmask and ecc_out.
  • the memdataoe signal is used to turn on the memory bus drivers.
  • Data and ECC from the unified system memory 202 come in on the memdata2client_in and ecc_in busses.
  • the ECC is used to determine if the incoming data is correct. If there is a one bit error in the data, the error is corrected, and the corrected data is sent to the memory client. If there is more than one bit in error, the CPU 206 is interrupted, and incorrect data is returned to the memory client.
  • Ras_n, cas_n, we_n and cs_n are control signals for the unified system memory 202 .
  • With reference to FIG. 8, a timing diagram is shown for an exemplary write to a new page performed by the unified system memory 202.
  • With reference to FIG. 9, a timing diagram is shown for an exemplary read to a new page performed by the unified system memory 202.
  • a read or write operation to the same SDRAM page is the same as the operation shown in FIGS. 8 and 9, except a same page operation does not need the precharge and activate cycles.
  • The request pipe is the control center for the memory controller 204.
  • Memory client requests are placed in one end of the pipe and come out the other side as memory commands.
  • the memory client queues are at the front of the pipe, followed by the arbitration, then the decode, and finally the issue/state machine. If there is room in its queue, a memory client can place a request in it.
  • the arbitration logic looks at all of the requests at the top of the memory client queues and decides which request to start through the request pipe. From the arbitration stage, the request flows to the decode stage. During the decode stage, information about the request is collected and passed onto the issue/state machine stage. Based on this information, a state machine determines the proper sequence of commands for the unified system memory 202 .
  • the latter portion of the issue stage decodes the state of the state machine into control signals that are latched and then sent across to the unified system memory 202.
  • a request can sit in the issue stage for more than one cycle. While a request sits in the issue/state machine stage, the rest of the request pipe is stalled. Each stage of the request pipe is discussed herein.
  • All of the memory clients have queues, except for refresh.
  • a refresh request is guaranteed to retire before another request is issued, so a queue is not necessary.
  • the five memory client queues are simple two-port structures with the memory clients on the write side and the arbitration logic on the read side. If there is space available in a memory client queue, indicated by the assertion of clientres.gnt, a memory client can place a request into its queue.
  • a memory client request consists of an address, a command (read, write or read-modify-write), a message, an ECC valid and a request valid indication. If both clientreq.valid and clientres.gnt are asserted, the request is latched into the memory client queue.
  • the arbitration logic looks at all of the requests at the top of the memory client queues and determines which request to pop off and pass to the decode stage of the request pipe.
  • the clientres.gnt signal does not indicate that the request has retired. The request still needs to go through the arbitration process. To put it another way, memory client A might receive the clientres.gnt signal before memory client B, but if memory client B has a higher priority, its request might retire before the request from memory client A.
  • the arbiter determines which memory client request to pass to the decode stage of the request pipe. This decision process has two steps. The first step is to determine if the arbitration slot for the current memory client is over or not. An arbitration slot is a series of requests from the same memory client. The number and type of requests allowed in one arbitration slot varies. Table 3, below, lists what each memory client can do in an arbitration slot.
  • the arbiter determines if the arbitration slot should end or not. If not, the request from the memory client who owns the current arbitration slot is passed to the decode stage. If the current arbitration slot is terminated, the arbiter uses the results from an arbitration algorithm to decide which request to pass to the decode stage.
  • the arbitration algorithm ensures that the graphics back end IC 212 gets 1/2 of the arbitration slots, the input/output IC 210 gets 1/4, the image processing and compression and expansion IC 216 gets 1/8, the rendering engine 208 gets 1/16, the CPU 206 gets 1/32, and the refresh gets 1/64.
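  • One simple mechanism that yields exactly these fractions is a six-bit slot counter whose lowest set bit picks the owner; the sketch below is an illustration of such a scheme, not the patent's stated implementation:

```c
#include <stdio.h>

/* Walk 64 slots; the lowest set bit of the slot number picks the owner,
 * giving the 1/2, 1/4, ... 1/64 shares quoted in the text. */
static const char *slot_owner(unsigned slot) {
    if (slot & 1)  return "graphics back end"; /* 1/2  */
    if (slot & 2)  return "input/output";      /* 1/4  */
    if (slot & 4)  return "IPCE";              /* 1/8  */
    if (slot & 8)  return "rendering engine";  /* 1/16 */
    if (slot & 16) return "CPU";               /* 1/32 */
    if (slot & 32) return "refresh";           /* 1/64 */
    return "unassigned";                       /* slot 0 of every 64 */
}

int main(void) {
    for (unsigned s = 0; s < 64; s++)
        printf("slot %2u -> %s\n", s, slot_owner(s));
    return 0;
}
```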
  • the first step is to determine the maximum number of cycles that each memory client can use during an arbitration slot.
  • Table 4, below shows the number of cycles associated with each type of operation. With reference to Table 4, below, “P” refers to precharge, “X” refers to a dead cycle, “A” refers to activate, “R0” refers to “read word 0”, “W0” refers to “write word 0”, and “Ref” refers to “refresh”.
  • Table 5 lists the maximum number of cycles for each of the memory clients.

TABLE 5: Maximum # cycles per slot

Memory Client                Operation                                      # of Cycles
Graphics Back End            16 memory word read or write                   20 cycles
CPU, Rendering Engine, MACE  8 memory word read or write                    12 cycles
IPCE                         8 memory word read or write, 1 page crossing   18 cycles
REFRESH                      refresh 2 rows                                 14 cycles
  • the decode logic receives requests from the arbiter. Based on state maintained from the previous requests and information contained in the current request, the decode logic determines which memory bank to select, which of the four state machines in the next stage will handle the request, and whether or not the current request is on the same page as the previous request. This information is passed to the issue/state machine stage.
  • Each SDRAM component has two internal banks, hence two possible open pages.
  • the maximum number of external banks is 8 and the maximum number of internal banks is 16.
  • the memory controller 204 only supports 4 open pages at a time. This issue will be discussed in detail later in this section.
  • the decode logic is explained below in more detail.
  • software probes the memory to determine how many banks of memory are present and the size of each bank. Based on this information, the software programs the 8 bank control registers.
  • Each bank control register (please refer to the register section) has one bit that indicates the size of the bank and 5 bits for the upper address bits of that bank.
  • Software must place the 64 Mbit external banks in the lower address range followed by any 16 Mbit external banks. This is to prevent gaps in the memory.
  • the decode logic compares the upper address bits of the incoming request to the 8 bank control registers to determine which external bank to select. The number of bits that are compared is dependent on the size of the bank.
  • the decode logic compares bits 24:22 of the request address to bits 4:2 of the bank control register. If there is a match, that bank is selected. Each external bank has a separate chip select. If an incoming address matches more than one bank's control register, the bank with the lowest number is selected. If an incoming address does not match any of the bank control registers, a memory address error occurs. When an error occurs, pertinent information about the request is captured in error registers and the processor is interrupted—if the memory controller 204 interrupt is enabled.
  • the request that caused the error is still sent to the next stage in the pipeline and is processed like a normal request, but the memory controller 204 deasserts all of the external bank selects so that the memory operation doesn't actually occur. Deasserting the external bank selects is also done when bit 6 of the rendering engine 208 message is set. The rendering engine 208 sets this bit when a request is generated using an invalid TLB entry.
  • Although the memory controller 204 can handle any physical external bank configuration, it is recommended that external bank 0 always be filled and that the external banks be placed in decreasing density order (for example, a 64 Mbit external bank in bank 0 and a 16 Mbit external bank in bank 2).
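  • The decode described above amounts to a masked compare per bank, sketched below. Only the 128 Mbyte case (address bits 24:22 against register bits 4:2) is given explicitly in the text; the bit positions used for the smaller bank size are an inference from the 5-bit control-register field and the 25-bit request address:

```c
#include <stdbool.h>
#include <stdint.h>

/* One bank control register: a size bit plus 5 upper address bits. */
struct bank_ctl {
    bool    big;    /* 1 = 128 MB external bank, 0 = 32 MB */
    uint8_t upper;  /* 5-bit upper address field           */
};

/* Return the selected external bank (lowest match wins), or -1 for a
 * memory address error. */
static int decode_bank(uint32_t adr, const struct bank_ctl ctl[8]) {
    for (int b = 0; b < 8; b++) {
        if (ctl[b].big) {
            /* 128 MB bank: compare address bits 24:22 with register 4:2. */
            if (((adr >> 22) & 0x7) == ((ctl[b].upper >> 2) & 0x7))
                return b;
        } else {
            /* 32 MB bank: compare bits 24:20 with all 5 register bits
             * (inferred; the text only says more bits are compared). */
            if (((adr >> 20) & 0x1f) == (ctl[b].upper & 0x1f))
                return b;
        }
    }
    return -1;  /* no match: error registers capture the request */
}
```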
  • The preceding paragraphs describe how the decode logic determines which external bank to select. The following paragraphs describe the method for determining page crossings and which bank state machine to use in the next stage of the pipeline.
  • the row address, along with the internal and external bank bits for previous requests, are kept in a set of registers which are referred to as the row registers.
  • Each row register corresponds to a bank state machine.
  • the decode logic compares the internal/external bank bits of the new request with the four row registers. If there is a match, then the bank state machine corresponding to that row register is selected.
  • the decode logic passes the request along with the external bank selects, state machine select and same page information to the issue/state machine stage.
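  • The page-crossing check can be sketched as a four-entry lookup keyed by the bank bits; the replacement policy on a miss is not specified in the text, so a round-robin victim is assumed here:

```c
#include <stdbool.h>
#include <stdint.h>

/* Row registers: one per bank state machine. */
struct row_reg { bool valid; uint8_t ext_bank; uint8_t int_bank; uint16_t row; };

static struct row_reg rows[4];
static unsigned next_victim;

/* Select a bank state machine for a request and report same-page status. */
static int select_sm(uint8_t ext, uint8_t in, uint16_t row, bool *same_page) {
    for (int i = 0; i < 4; i++) {
        if (rows[i].valid && rows[i].ext_bank == ext && rows[i].int_bank == in) {
            *same_page = (rows[i].row == row);  /* same bank: page hit? */
            rows[i].row = row;
            return i;
        }
    }
    int v = (int)(next_victim++ & 3);           /* assumed policy */
    rows[v] = (struct row_reg){ true, ext, in, row };
    *same_page = false;
    return v;
}
```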
  • the selected bank state machine sequences through the proper states, while the issue logic decodes the state of the bank state machine into commands that are sent to the SDRAM DIMMS.
  • the initialization/refresh state machine sequences through special states for initialization and refresh while the four bank state machines are forced to an idle state.
  • the bank state machines and the initialization/refresh state machine are discussed in more detail in the following sections.
  • the four bank state machines operate independently, subject only to conflicts for access to the control, address, and data signals.
  • the bank state machines default to page mode operation. That is, the autoprecharge commands are not used, and the SDRAM bank must be explicitly precharged whenever there is a non-page-mode random reference.
  • the decode stage passes the request along with the page information to the selected state machine, which sequences through the proper states. At certain states, interval timers are started that inhibit the state machine from advancing to the next state until the SDRAM minimum interval requirements have been met.
  • the bank state machines operate on one request at a time. That is, a request sequences through any required precharge and activation phases and then a read or write phase, at which point it is considered completed and the next request initiated. Finally, the state of the four bank state machines is decoded by the issue logic that generates the SDRAM control signals.
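  • A skeleton of one such state machine, inferred from the description above (the states and transitions are a sketch, not the patent's exact machine):

```c
#include <stdbool.h>

enum bank_state { B_IDLE, B_PRECHARGE, B_ACTIVATE, B_RW };

/* Advance one bank state machine.  A page hit proceeds straight to the
 * read/write phase; a page miss precharges the open page (if any) and
 * activates the new one.  Interval timers hold timers_ok false until the
 * SDRAM minimum intervals (Trp, Trcd, ...) have elapsed. */
enum bank_state bank_next(enum bank_state s, bool page_open,
                          bool same_page, bool timers_ok) {
    if (!timers_ok)
        return s;
    switch (s) {
    case B_IDLE:
        if (page_open && same_page) return B_RW;
        return page_open ? B_PRECHARGE : B_ACTIVATE;
    case B_PRECHARGE: return B_ACTIVATE;  /* close old page (Trp) */
    case B_ACTIVATE:  return B_RW;        /* open new page (Trcd) */
    case B_RW:        return B_IDLE;      /* request completed    */
    }
    return B_IDLE;
}
```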
  • Tr2rp and Tr2w are additional timing parameters that explicitly define the interval between successive read, write, and precharge commands. These parameters ensure that successive commands do not cause conflicts on the data signals. While these parameters could be derived internally by a state machine sequencer, they are made explicit to simplify the state machines and use the same timer paradigm as the SDRAM parameters.
  • The SDRAM timing parameters, with their values in cycles, are:

Parameter  Cycles  Definition
Trc        7       Activate bank A to Activate bank A
Tras       5       Activate bank A to Precharge bank A
Trp        2       Precharge bank A to Activate bank A
Trrd       2       Activate bank A to Activate bank B
Trcd       2       Activate bank A to Read bank A
Twp        1       Datain bank A to Precharge bank A
Tr2rp      2       Read bank A to Read or Precharge bank C
Tr2w       6       Read bank A to Write bank A
  • Banks A and B are in the same external bank while Bank C is in a different external bank.
  • Trp, Trrd and Trcd are enforced by design.
  • the Trc and Tras parameters have a timer for each of the four bank state machines.
  • the Tr2rp and Tr2w timers are common to all of the four bank state machines, because they are used to prevent conflicts on the shared data lines.
  • the initialization/refresh state machine has two functions, initialization and refresh.
  • the initialization procedure is discussed first, followed by the refresh.
  • the initialization/refresh state machine sequences through the SDRAM initialization procedure, which is a precharge to all banks, followed by a mode set.
  • the issue stage decodes the state of the initialization/refresh state machine into commands that are sent to the SDRAM.
  • the mode set command programs the SDRAM mode set register to a CAS latency of 2, burst length of 1 and a sequential operation type.
  • the SDRAM requires that 4096 refresh cycles occur every 64 ms.
  • the timer sends out a signal every 27 microseconds which causes the refresh memory client to make a request to the arbiter.
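  • The 27-microsecond figure follows from the refresh budget; the arithmetic below is a worked check, using the two-rows-per-request behavior of the refresh procedure described later:

```c
#include <stdio.h>

int main(void) {
    /* 4096 refresh cycles per 64 ms means one row every 15.625 us.  Each
     * refresh request retires two rows, so a request every 31.25 us would
     * just meet the requirement; a 27 us timer leaves margin for the
     * request to work its way through arbitration. */
    double per_row_us = 64000.0 / 4096.0;
    printf("required: one row every %.3f us\n", per_row_us);
    printf("two-row request deadline: %.2f us (timer: 27 us)\n", 2 * per_row_us);
    return 0;
}
```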
  • the arbiter treats refresh just like all the other memory clients.
  • When the arbiter determines that the time for the refresh slot has come, the arbiter passes the refresh request to the decode stage.
  • the decode stage invalidates all of the row registers and passes the request onto the state machine/issue stage.
  • When a bank state machine sees that it is a refresh request, it goes to its idle state.
  • the initialization/refresh state machine sequences through the refresh procedure which is a precharge to all banks followed by two refresh cycles.
  • a refresh command puts the SDRAM in the automatic refresh mode.
  • An address counter internal to the device increments the word and bank address during the refresh cycle.
  • After the refresh completes, the SDRAM is in the idle state, which means that all of the pages are closed. This is why it is important that the bank state machines are forced to the idle state and the row registers are invalidated during a refresh request.
  • the initialization/refresh state machine is very similar in structure to the bank state machines and has timers to enforce SDRAM parameters.
  • a Trc timer is used to enforce the Trc requirement between refresh cycles, and the outputs from the bank Tras timers are used to ensure that the “precharge all” command does not violate Tras for any of the active banks.
  • the main functions of the data pipe are to: (1) move data between a memory client and the unified system memory 202, (2) perform ECC operations, and (3) merge new bytes from a memory client with old data from memory during a read-modify-write operation (a sketch of this merge appears at the end of this section). Each of these functions is described below.
  • the data pipe has one stage which is in lock-step with the last stage of the request pipe.
  • When a write request reaches the decode stage, the request pipe asserts clientres.wrrdy.
  • the clientres.wrrdy signal indicates to the memory client that the data on the Memdata2mem_in bus has been latched into the ECC stage of the data pipe.
  • the data is held in the ECC stage and flows out to the unified system memory 202 until the request is retired in the request pipe.
  • Incoming read data is latched in the data pipe, flows through the ECC correction logic and then is latched again before going on the Memdata2client_out bus.
  • the request pipe knows how many cycles the unified system memory 202 takes to return read response data.
  • When the read data is valid, the request pipe asserts clientres.rdrdy.
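  • The read-modify-write merge, function (3) above, can be sketched as a per-byte-lane select (illustrative only; the 32-bit mask over a 32-byte memory word matches the memmask_out width in Table 2, and the mask polarity is an assumption):

```c
#include <stdint.h>

/* Merge new client bytes into old memory data: each set bit in the mask
 * selects the client's byte for that lane; otherwise the old byte read
 * from memory is kept.  ECC for the merged word is then regenerated
 * before the write is issued. */
static void rmw_merge(uint8_t old_word[32], const uint8_t new_bytes[32],
                      uint32_t mask) {
    for (int i = 0; i < 32; i++)
        if (mask & (1u << i))
            old_word[i] = new_bytes[i];
}
```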

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Input (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Digital Computer Display Output (AREA)
  • Memory System (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A computer system provides dynamic memory allocation for graphics. The computer system includes a memory controller, a unified system memory, and memory clients each having access to the system memory via the memory controller. Memory clients can include a graphics rendering engine, a CPU, an image processor, a data compression/expansion device, an input/output device, and a graphics back end device. The computer system provides read/write access to the unified system memory, through the memory controller, for each of the memory clients. Translation hardware is included for mapping virtual addresses of pixel buffers to physical memory locations in the unified system memory. Pixel buffers are dynamically allocated as tiles of physically contiguous memory. Translation hardware is implemented in each of the computational devices, which are included as memory clients in the computer system, including primarily the rendering engine.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to the field of computer systems. Specifically, the present invention relates to a computer system architecture including dynamic memory allocation of pixel buffers for graphics and image processing. [0001]
  • BACKGROUND OF THE INVENTION
  • Typical prior art computer systems often rely on peripheral processors and dedicated peripheral memory units to perform various computational operations. For example, peripheral graphics display processors are used to render graphics images (synthesis) and peripheral image processors are used to perform image processing (analysis). In typical prior art computer systems, CPU main memory is separate from peripheral memory units which can be dedicated to graphics rendering or image processing or other computational functions. [0002]
  • With reference to Prior Art FIG. 1, a prior art [0003] computer graphics system 100 is shown. The prior art computer graphics system 100 includes three separate memory units; a main memory 102, a dedicated graphics memory 104, and a dedicated image processing memory (image processor memory) 105. Main memory 102 provides fast access to data for a CPU 106 and an input/output device 108. The CPU 106 and input/output device 108 are connected to main memory 102 via a main memory controller 110. Dedicated graphics memory 104 provides fast access to graphics data for a graphics processor 112 via a graphics memory controller 114. Dedicated image processor memory 105 provides fast access to buffers of data used by an image processor 116 via an image processor memory controller 118. In the prior art computer graphics system 100, CPU 106 has read/write access to main memory 102 but not to dedicated graphics memory 104 or dedicated image processor memory 105. Likewise, the image processor 116 has read/write access to dedicated image processor memory 105, but not to main memory 102 or dedicated graphics memory 104. Similarly, graphics processor 112 has read/write access to dedicated graphics memory 104 but not to main memory 102 or dedicated image processor memory 105.
  • Certain computer system applications require that data, stored in [0004] main memory 102 or in one of the dedicated memory units 104, 105, be operated upon by a processor other than the processor which has access to the memory unit in which the desired data is stored. Whenever data stored in one particular memory unit is to be processed by a designated processor other than the processor which has access to that particular memory unit, the data must be transferred to a memory unit for which the designated processor has access. For example, certain image processing applications require that data, stored in main memory 102 or dedicated graphics memory 104, be processed by the image processor 116. Image processing is defined as any function(s) that apply to two dimensional blocks of pixels. These pixels may be in the format of file system images, fields, or frames of video entering the prior art computer system 100 through video ports, mass storage devices such as CD-ROMs, fixed-disk subsystems and Local or Wide Area network ports. In order to enable image processor 116 to access data stored in main memory 102 or in dedicated graphics memory 104, the data must be transferred or copied to dedicated image processor memory 105.
  • One problem with the prior art [0005] computer graphics system 100 is the cost of high performance peripheral dedicated memory systems such as the dedicated graphics memory unit 104 and dedicated image processor memory 105. Another problem with the prior art computer graphics system 100 is the cost of high performance interconnects for multiple memory systems. Another problem with the prior art computer graphics system 100 is that the above discussed transfers of data between memory units require time and processing resources.
  • Thus, what is needed is a computer system architecture with a single unified memory system which can be shared by multiple processors in the computer system without transferring data between multiple dedicated memory units. [0006]
  • SUMMARY OF THE INVENTION
  • The present invention pertains to a computer system providing dynamic memory allocation for graphics. The computer system includes a memory controller, a unified system memory, and memory clients each having access to the system memory via the memory controller. Memory clients can include a graphics rendering engine, a central processing unit (CPU), an image processor, a data compression/expansion device, an input/output device, and a graphics back end device. In a preferred embodiment, the rendering engine and the memory controller are implemented on a first integrated circuit (first IC) and the image processor and the data compression/expansion are implemented on a second IC. The computer system provides read/write access to the unified system memory, through the memory controller, for each of the memory clients. Translation hardware is included for mapping virtual addresses of pixel buffers to physical memory locations in the unified system memory. Pixel buffers are dynamically allocated as tiles of physically contiguous memory. Translation hardware, for mapping the virtual addresses of pixel buffers to physical memory locations in the unified system memory, is implemented in each of the computational devices which are included as memory clients in the computer system. [0007]
  • In a preferred embodiment, the unified system memory is implemented using synchronous DRAM. Also in the preferred embodiment, tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of 128 pixels wherein each pixel is a 4 byte pixel. However, the present invention is also well suited to using tiles of other sizes. Also in the preferred embodiment, the dynamically allocated pixel buffers are comprised of n[0008] 2 tiles where n is an integer.
  • The computer system of the present invention provides functional advantages for graphical display and image processing. There are no dedicated memory units in the computer system of the present invention aside from the unified system memory. Therefore, it is not necessary to transfer data from one dedicated memory unit to another when a peripheral processor is called upon to process data generated by the CPU or by another peripheral device. [0009]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which: [0010]
  • Prior Art FIG. 1A is a circuit block diagram of a typical prior art computer system including peripheral processors and associated dedicated memory units. [0011]
  • FIG. 2A is a circuit block diagram of an exemplary unified system memory computer architecture according to the present invention. [0012]
  • FIG. 2B is an internal circuit block diagram of a graphics rendering and memory controller IC including a memory controller (MC)and a graphics rendering engine integrated therein. [0013]
  • FIG. 3A is an illustration of an exemplary tile for dynamic allocation of pixel buffers according to the present invention. [0014]
  • FIG. 3B is an illustration of an exemplary pixel buffer comprised of n[0015] 2 tiles according to the present invention.
  • FIG. 3C is a block diagram of an address translation scheme according to the present invention. [0016]
  • FIG. 4 is a block diagram of a memory controller according to the present invention. [0017]
  • FIG. 5 is a timing diagram for memory client requests issued to the unified system memory according to the present invention. [0018]
  • FIG. 6 is a timing diagram for memory client write data according to the present invention. [0019]
  • FIG. 7 is a timing diagram for memory client read data according to the present invention. [0020]
  • FIG. 8 is a timing diagram for an exemplary write to a new page performed by the unified system memory according to the present invention. [0021]
  • FIG. 9 is a timing diagram for an exemplary read to a new page performed by the unified system memory according to the present invention. [0022]
  • FIG. 10 shows external banks of the memory controller according to the present invention. [0023]
  • FIG. 11 shows a flow diagram for bank state machines according to the present invention. [0024]
  • DETAILED DESCRIPTION OF THE INVENTION
  • In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one skilled in the art that the present invention may be practiced without these specific details. In other instances well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present invention. [0025]
  • Reference will now be made in detail to the preferred embodiments of the present invention, a computer system architecture having dynamic memory allocation for graphics, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present invention. [0026]
  • With reference to FIG. 2A, a computer system [0027] 200, according to the present invention, is shown. Computer system 200 includes a unified system memory 204 which is shared by various memory system clients including a CPU 206, a graphics rendering engine 208, an input/output IC 210, a graphics back end IC 212, an image processor 214, and a memory controller 204.
  • With reference to FIG. 2B, an [0028] exemplary computer system 201, according to the present invention, is shown. Computer system 201 includes the unified system memory 202 which is shared by various memory system clients including the CPU 206, the input/output IC 210, the graphics back end IC 212, an image processing and compression and expansion IC 216, and a graphics rendering and memory controller IC 218. The image processing and compression and expansion IC 216 includes the image processor 214, and a data compression and expansion unit 215. GRMC IC 218 includes the graphics rendering engine (rendering engine) 208 and the memory controller 204 integrated therein. The graphics rendering and memory controller IC 218 is coupled to unified system memory 202 via a high bandwidth memory data bus (HBWMD BUS) 225. In a preferred embodiment of the present invention, HBWMD BUS 225 includes a demultiplexer (SD-MUX) 220, a first BUS 222 coupled between the graphics rendering and memory controller IC 218 and SD-MUX 220, and a second bus 224 coupled between SD-MUX 220 and unified system memory 202. In the preferred embodiment of the present invention, BUS 222 includes 144 lines cycled at 133 MHz and BUS 224 includes 288 lines cycled at 66 MHz. SD-MUX 220 demultiplexes the 144 lines of BUS 222, which are cycled at 133 MHz, to double the number of lines, 288, of BUS 224, which are cycled at half the frequency, 66 MHz. CPU 206 is coupled to the graphics rendering and memory controller IC 218 by a third bus 226. In the preferred embodiment of the present invention, BUS 226 is 64 bits wide and carries signals cycled at 100 MHz. The image processing and compression and expansion IC 216 is coupled to BUS 226, by a third bus 228. In the preferred embodiment of the present invention, BUS 228 is 64 bits wide and carries signals cycled at 100 MHz. The graphics back end IC 212 is coupled to the graphics rendering and memory controller IC 218 by a fourth bus 230. In the preferred embodiment of the present invention, BUS 230 is 64 bits wide and carries signals cycled at 133 MHz. The input/output IC 210 is coupled to the graphics rendering and memory controller IC 218 by a fifth bus 232. In the preferred embodiment of the present invention, BUS 232 is 32 bits wide and carries signals cycled at 133 MHz.
  • The input/[0029] output IC 210 of FIG. 2A contains all of the input/output interfaces including: keyboard & mouse, interval timers, serial, parallel, I2C, audio, video in & out, and fast Ethernet. The input/output IC 210 also contains an interface to an external 64-bit PCI expansion bus, BUS 231, that supports five masters (two SCSI controllers and three expansion slots).
  • With reference to FIG. 2B, an internal circuit block diagram is shown of the graphics rendering and [0030] memory controller IC 218 according to an embodiment of the present invention. As previously mentioned, rendering engine 208 and memory controller 204 are integrated within the graphics rendering and memory controller IC 218. The graphics rendering and memory controller IC 218 also includes a CPU/IPCE interface 234, an input/output interface 236, and a GBE interface 232.
  • With reference to FIGS. 2A and 2B, [0031] GBE interface 232 buffers and transfers display data from unified system memory 202 to the graphics back end IC 212 in 16×32-byte bursts. GBE interface 232 buffers and transfers video capture data from the graphics back end IC 212 to unified system memory 202 in 16×32-byte bursts. GBE interface 232 issues GBE interrupts to CPU/IPCE interface 234. BUS 230, shown in both FIG. 2A and FIG. 2B, couples GBE interface 232 to the graphics back end IC 212 (FIG. 2A). The input/output interface 236 buffers and transfers data from unified system memory 202 to the input/output IC 210 in 8×32-byte bursts. The input/output interface 236 buffers and transfers data from the input/output IC 210 to unified system memory 202 in 8×32-byte bursts. The input/output interface 236 issues the input/output IC interrupts to CPU/IPCE interface 234. BUS 232, shown in both FIG. 2A and FIG. 2B, couples the input/output interface 236 to the input/output IC 210 (FIG. 2A). BUS 226 provides coupling between CPU/IPCE interface 234 and both CPU 206 and the image processing and compression and expansion IC 216.
  • With reference to FIG. 2A, the [0032] memory controller 204 is the interface between memory system clients (CPU 206, rendering engine 208, input/output IC 210, graphics back end IC 212, image processor 214, and data compression/expansion device 215) and the unified system memory 202. As previously mentioned, the memory controller 204 is coupled to unified system memory 202 via HBWMD BUS 225, which allows fast transfer of large amounts of data to and from unified system memory 202. Memory clients make read and write requests to unified system memory 202 through the memory controller 204. The memory controller 204 converts requests into the appropriate control sequences and passes data between memory clients and unified system memory 202. In the preferred embodiment of the present invention, the memory controller 204 contains two pipeline structures, one for commands and another for data. The request pipe has three stages: arbitration, decode, and issue/state machine. The data pipe has only one stage, ECC. Requests and data flow through the pipes in the following manner. Clients place their requests in a queue. The arbitration logic looks at all of the requests at the top of the client queues and decides which request to start through the pipe. From the arbitration stage, the request flows to the decode stage. During the decode stage, information about the request is collected and passed onto an issue/state machine stage.
  • With reference to FIG. 2A, the [0033] rendering engine 208 is a 2-D and 3-D graphics coprocessor which can accelerate rasterization. In a preferred embodiment of the present invention, the rendering engine 208 is also cycled at 66 MHz and operates synchronously to the unified system memory 202. The rendering engine 208 receives rendering parameters from the CPU 206 and renders directly to frame buffers stored in the unified system memory 202 (FIG. 2A). The rendering engine 208 issues memory access requests to the memory controller 204. Since the rendering engine 208 shares the unified system memory 202 with other memory clients, the performance of the rendering engine 208 will vary as a function of the load on the unified system memory 202. The rendering engine 208 is logically partitioned into four major functional units: a host interface, a pixel pipeline, a memory transfer engine, and a memory request unit. The host interface controls reading and writing from the host to programming interface registers. The pixel pipeline implements a rasterization and rendering pipeline to a frame buffer. The memory transfer engine performs byte-aligned clears and copies at memory bandwidth on both linear buffers and frame buffers. The memory request unit arbitrates between requests from the pixel pipeline and queues up memory requests to be issued to the memory controller 204.
  • The computer system [0034] 200 includes dynamic memory allocation of virtual pixel buffers in the unified system memory 202. Pixel buffers include frame buffers, texture maps, video maps, image buffers, etc. Each pixel buffer can include multiple color buffers, a depth buffer, and a stencil buffer. In the present invention, pixel buffers are allocated in units of contiguous memory called tiles and address translation buffers are provided for dynamic allocation of pixel buffers.
  • With reference to FIG. 3A, an illustration is shown of an [0035] exemplary tile 300 for dynamic allocation of pixel buffers according to the present invention. In a preferred embodiment of the present invention, each tile 300 includes 64 kilobytes of physically contiguous memory. A 64 kilobyte tile size can be comprised of 128×128 pixels for 32 bit pixels, 256×128 pixels for 16 bit pixels, or 512×128 pixels for 8 bit pixels. In the present invention, tiles begin on 64 kilobyte aligned addresses. An integer number of tiles can be allocated for each pixel buffer. For example, a 200×200 pixel buffer and a 256×256 pixel buffer would both require four (128×128) pixel tiles.
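Because tiles are fixed at 64 kilobytes spanning 128 rows, the number of tiles backing a buffer follows directly from the buffer dimensions and pixel depth. The following sketch (C, illustrative only) reproduces the 200×200 and 256×256 examples above:

    #include <stdio.h>

    enum { TILE_BYTES = 64 * 1024, TILE_ROWS = 128 };

    /* Tiles needed to back a w x h pixel buffer, assuming (as in FIG. 3A)
       that a tile always spans 128 rows and 65536/(128*bpp) columns. */
    static int tiles_needed(int w, int h, int bytes_per_pixel) {
        int tile_w = TILE_BYTES / (TILE_ROWS * bytes_per_pixel); /* 128, 256 or 512 */
        int tx = (w + tile_w - 1) / tile_w;        /* round up in x */
        int ty = (h + TILE_ROWS - 1) / TILE_ROWS;  /* round up in y */
        return tx * ty;
    }

    int main(void) {
        /* both examples from the text require four 128x128 tiles */
        printf("%d\n", tiles_needed(200, 200, 4));  /* -> 4 */
        printf("%d\n", tiles_needed(256, 256, 4));  /* -> 4 */
        return 0;
    }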
  • With reference to FIG. 3B, an illustration is shown of an [0036] exemplary pixel buffer 302 according to the present invention. In the computer system 200 of the present invention, translation hardware maps virtual addresses of pixel buffers 302 to physical memory locations in unified system memory 202. Each of the computational units of the computer system 200 (the image processing and compression and expansion IC 216, the graphics back end IC 212, the input/output IC 210, and the rendering engine 208) includes translation hardware for mapping virtual addresses of pixel buffers 302 to physical memory locations in unified system memory 202. Each pixel buffer 302 is partitioned into n² tiles 300, where n is an integer. In a preferred embodiment of the present invention, n=4.
  • The [0037] rendering engine 208 supports a frame buffer address translation buffer (TLB) to translate frame buffer (x,y) addresses into physical memory addresses. This TLB is loaded by CPU 206 with the base physical memory addresses of the tiles which compose a color buffer and the stencil-depth buffer of a frame buffer. In a preferred embodiment of the present invention, the frame buffer TLB has enough entries to hold the tile base physical memory addresses of a 2048×2048 pixel color buffer and a 2048×2048 pixel stencil-depth buffer. Therefore, the TLB has 256 entries for color buffer tiles and 256 entries for stencil-depth buffer tiles.
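The patent fixes the entry counts and the tile geometry but not the lookup hardware itself. The sketch below shows one plausible form such a translation could take for the 32-bit-pixel color buffer case (C; the array name, field layout, and the row-major layout within a tile are assumptions for illustration):

    #include <stdint.h>

    #define TILE_SHIFT    7      /* 128 pixels per tile edge            */
    #define TILES_PER_ROW 16     /* 2048 / 128 tiles across the buffer  */

    /* 256 tile base physical addresses, loaded by CPU 206 */
    static uint32_t color_tlb[256];

    /* Translate a frame buffer (x,y) into a physical byte address,
       assuming 32-bit pixels stored row-major within each 64 KB tile
       (128 pixels x 4 bytes = 512 bytes per tile row). */
    static uint32_t fb_translate(uint32_t x, uint32_t y) {
        uint32_t tile = (y >> TILE_SHIFT) * TILES_PER_ROW + (x >> TILE_SHIFT);
        uint32_t in_x = x & 127;    /* offset within the tile */
        uint32_t in_y = y & 127;
        return color_tlb[tile] + in_y * 512 + in_x * 4;
    }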
  • Tiles provide a convenient unit for memory allocation. By allowing tiles to be scattered throughout memory, tiling makes the amount of memory which must be contiguously allocated manageable. Additionally, tiling provides a means of reducing the amount of system memory consumed by frame buffers. Tiles which do not contain any pixels pertinent for display, invisible tiles, can easily be clipped out, and hence no memory needs to be allocated for these tiles. For example, a 1024×1024 virtual frame buffer consisting of front and back RGBA buffers and a depth buffer would consume 12 MB of memory if fully resident. However, if each 1024×1024 buffer were partitioned into 64 (128×128) tiles of which only four tiles contained non-occluded pixels, only memory for those visible tiles would need to be allocated. In this case, only 3/4 MB would be consumed. [0038]
  • In the present invention, memory system clients (e.g., [0039] CPU 206, rendering engine 208, input/output IC 210, graphics back end IC 212, image processor 214, and data compression/expansion device 215) share the unified system memory 202. Since each memory system client has access to memory shared by each of the other memory system clients, there is no need for transferring data from one dedicated memory unit to another. For example, data can be received by the input/output IC 210, decompressed (or expanded) by the data compression/expansion device 215, and stored in the unified system memory 202. This data can then be accessed by the CPU 206, the rendering engine 208, the input/output IC 210, the graphics back end IC 212, or the image processor 214. As a second example, any of these units can consume data generated by any other of them. Each of the computational units (CPU 206, input/output IC 210, the graphics back end IC 212, the image processing and compression and expansion IC 216, the graphics rendering and memory controller IC 218, and the data compression/expansion device 215) has translation hardware for determining the physical addresses of pixel buffers, as is discussed below.
  • There are numerous video applications for which the computer system [0040] 200 of the present invention provides functional advantages over prior art computer system architectures. These applications range from video conferencing to video editing. There is significant variation in the processing required for the various applications, but a few processing steps are common to all applications: capture, filtering, scaling, compression, blending, and display. In operation of computer system 200, input/output IC 210 can bring in a compressed stream of video data which can be stored into unified system memory 202. The input/output IC 210 can access the compressed data stored in unified system memory 202, via a path through the graphics rendering and memory controller IC 218. The input/output IC 210 can then decompress the accessed data and store the decompressed data into unified system memory 202. The stored image data can then be used, for example, as a texture map by rendering engine 208 for mapping the stored image onto another image. The resultant image can then be stored into a pixel buffer which has been allocated dynamically in unified system memory 202. If the resultant image is stored into a frame buffer, allocated dynamically in unified system memory 202, then the resultant image can be displayed by the graphics back end IC 212 or the image can be captured by writing the image back to another pixel buffer which has been allocated dynamically in unified system memory 202. Since there is no necessity of transferring data from one dedicated memory unit to another in computer system 200, functionality is increased.
  • In the preferred embodiment of the present invention, [0041] unified system memory 202 of FIG. 2A is implemented using synchronous DRAM (SDRAM) accessed via a 256-bit wide memory data bus cycled at 66 MHz. An SDRAM is made up of rows and columns of memory cells. A row of memory cells is referred to as a page. A memory cell is accessed with a row address and column address. When a row is accessed, the entire row is placed into latches, so that subsequent accesses to that row only require the column address. Accesses to the same row are referred to as page accesses. In a preferred embodiment of the present invention, unified system memory 202 provides a peak data bandwidth of 2.133 GB/s. Also, in a preferred embodiment of the present invention, unified system memory 202 is made up of 8 slots. Each slot can hold one SDRAM DIMM. An SDRAM DIMM is constructed from 1M×16 or 4M×16 SDRAM components and populated on the front only or the front and back side of the DIMM. Two DIMMs are required to make an external SDRAM bank. 1M×16 SDRAM components construct a 32 Mbyte external bank, while 4M×16 SDRAM components construct a 128 Mbyte external bank. Unified system memory 202 can range in size from 32 Mbytes to 1 Gbyte.
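Both headline numbers in this paragraph can be checked from the stated parameters: 256 bits per transfer at a nominal 66.67 MHz gives the 2.133 GB/s peak, and the DIMM/bank rules bound the memory size. A small illustrative check (C; the 66.67 MHz value is an assumption for the nominal 66 MHz clock):

    #include <stdio.h>

    int main(void) {
        /* peak bandwidth: 256-bit bus, one transfer per clock cycle */
        double bytes_per_cycle = 256 / 8.0;   /* 32 bytes            */
        double clock_hz        = 66.67e6;     /* nominal "66 MHz"    */
        printf("peak: %.3f GB/s\n", bytes_per_cycle * clock_hz / 1e9);

        /* size range: one front-only pair of 1Mx16 DIMMs (32 MB) up to
           8 external banks of 4Mx16 components (8 x 128 MB = 1 GB)   */
        printf("min: %d MB\n", 1 * 32);
        printf("max: %d MB\n", 8 * 128);
        return 0;
    }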
  • FIG. 3C shows a block diagram of an address translation scheme according to the present invention. FIG. 4 shows a block diagram of the [0042] memory controller 204 of the present invention.
  • A memory client interface contains the signals listed in Table 1, below. [0043]
    TABLE 1
    Memory client interface signals

    Signal              CRIME Pin Name  # of Bits  Dir.  Description
    clientreq.cmd       internal only   3          in    type of request: 1 - read, 2 - write, 4 - rmw
    clientreq.adr       internal only   25         in    address of request
    clientreq.msg       internal only   7          in    message sent with request
    clientreq.v         internal only   1          in    1 - valid, 0 - not valid
    clientreq.ecc       internal only   1          in    1 - ecc is valid, 0 - ecc not valid
    clientres.gnt       internal only   1          out   1 - room in client queue, 0 - no room
    clientres.wrrdy     internal only   1          out   1 - MC is ready for write data, 0 - MC not ready for write data
    clientres.rdrdy     internal only   1          out   1 - valid read data, 0 - not valid read data
    clientres.oe        internal only   1          out   1 - enable client driver, 0 - disable client driver
    clientres.rdmsg     internal only   7          out   read message sent with read data
    clientres.wrmsg     internal only   7          out   write message sent with wrrdy
    memdata2mem_in      internal only   256        out   memory data from client going to unified system memory
    memmask_in          internal only   32         in    memory mask from client going to unified system memory;
                                                         0 - write byte, 1 - don't write byte; memmask_in(0) is
                                                         matched with memdata2mem_in(7:0) and so on
    memdata2client_out  internal only   256        out   memory data from unified system memory going to the client
  • With reference to FIG. 5, a timing diagram for memory client requests is shown. A memory client makes a request to the [0044] memory controller 204 by asserting clientreq.valid while setting the clientreq.adr, clientreq.msg, clientreq.cmd and clientreq.ecc lines to the appropriate values. If there is room in the queue, the request is latched into the memory client queue. Only two of the memory clients, the rendering engine 208 and the input/output IC 210, use clientreq.msg. The message specifies which subsystem within the input/output IC 210 or the rendering engine 208 made the request. When an error occurs, this message is saved along with other pertinent information to aid in the debug process. For the rendering engine 208, the message is passed through the request pipe and returned with the clientres.wrrdy signal for a write request or with the clientres.rdrdy signal for a read request. The rendering engine 208 uses the information contained in the message to determine which rendering engine 208 queue to access.
  • With reference to FIG. 6, a timing diagram for memory client write data is shown. The data for a write request is not latched with the address and request. Instead, the data, mask and message are latched when the [0045] memory controller 204 asserts clientres.wrrdy, indicating that the request has reached the decode stage of the request pipe. Because the memory client queues are in front of the request pipe, there is not a simple relationship between the assertion of clientres.gnt and clientres.wrrdy. Clientreq.msg is only valid for the rendering engine 208 and the input/output IC 210. The memory controller 204 asserts the clientres.oe signal at least one cycle before the assertion of clientres.wrrdy. Clientres.oe is latched locally by the memory client and is used to turn on the memory client's memory data bus drivers.
  • With reference to FIG. 7, a timing diagram for memory client read data is shown. The read data is sent to the memory client over the memdata2client_out bus. When clientres.rdrdy is asserted, the data and message are valid. [0046]
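The handshake described in the last three paragraphs reduces to a small per-cycle model. The sketch below is an illustrative reduction of Table 1 (C; struct and field names follow the signal names, everything else is an assumption):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint8_t  cmd;   /* 1 = read, 2 = write, 4 = rmw      */
        uint32_t adr;   /* 25-bit request address            */
        uint8_t  msg;   /* 7-bit message (RE and I/O IC only) */
        bool     v;     /* request valid                     */
        bool     ecc;   /* ecc valid                         */
    } clientreq_t;

    typedef struct {
        bool gnt;       /* room in the client queue                    */
        bool wrrdy;     /* MC latches write data/mask/msg this cycle   */
        bool rdrdy;     /* read data valid on memdata2client_out       */
        bool oe;        /* turn on the client's data bus drivers       */
    } clientres_t;

    /* A request is latched into the client queue only in a cycle where
       valid and grant coincide. */
    static bool request_latched(const clientreq_t *req, const clientres_t *res) {
        return req->v && res->gnt;
    }

    /* Write data is sampled in the cycle wrrdy is asserted; read data is
       valid in the cycle rdrdy is asserted. */
    static bool write_data_taken(const clientres_t *res) { return res->wrrdy; }
    static bool read_data_valid(const clientres_t *res)  { return res->rdrdy; }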
  • The memory interface contains the signals listed in Table 2, below. [0047]
    TABLE 2
    Memory Interface Signals

    Signal              Crime Pin Name  # of Bits  Dir.  Description
    memwrite            mem_dir         1          out   controls direction of SDMUX chips - default to write
    memdata2mem_out     mem_data        256        out   memory data from client going to unified system memory
    memdata2client_out  internal only   256        out   memory data from main memory going to the memory client
    memmask_out         mem_mask        32         out   memory mask from client going to unified system memory
    memdataoe           internal only   3          out   enable memory data bus drivers
    ecc_out             mem_ecc         32         out   ecc going to unified system memory
    eccmask             mem_eccmask     32         out   ecc mask going to main memory
    mem_addr            mem_addr        14         out   memory address
    ras_n               mem_ras_n       1          out   row address strobe
    cas_n               mem_cas_n       1          out   column address strobe
    we_n                mem_we_n        1          out   write enable
    cs_n                mem_cs(3:0)_n   8          out   chip selects
  • The data and mask are latched in the data pipe and flow out to the [0048] unified system memory 202 on memmask_out and memdata2mem_out. From the data and mask, the ECC and ECC mask are generated and sent to the unified system memory 202 across eccmask and ecc_out. The memdataoe signal is used to turn on the memory bus drivers. Data and ECC from the unified system memory 202 come in on the memdata2client_in and ecc_in busses. The ECC is used to determine if the incoming data is correct. If there is a one bit error in the data, the error is corrected, and the corrected data is sent to the memory client. If there is more than one bit in error, the CPU 206 is interrupted, and incorrect data is returned to the memory client. Ras_n, cas_n, we_n and cs_n are control signals for the unified system memory 202.
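The text specifies the ECC behavior (correct single-bit errors, interrupt the CPU on multi-bit errors) but not the code itself. A single-error-correct, double-error-detect (SECDED) Hamming code is the textbook way to obtain exactly this behavior; the sketch below demonstrates it on 8-bit symbols purely for illustration, whereas the hardware protects each 256-bit word with 32 check bits:

    #include <stdint.h>

    /* Encode 8 data bits into a 13-bit SECDED codeword: bit 0 holds the
       overall parity, bits 1..12 are Hamming positions with check bits
       at the power-of-two slots (1, 2, 4, 8). */
    static uint16_t secded_encode(uint8_t data) {
        uint16_t cw = 0;
        for (int i = 3, bit = 0; i <= 12; i++) {
            if ((i & (i - 1)) == 0) continue;          /* skip check slots */
            if (data & (1u << bit)) cw |= (uint16_t)(1u << i);
            bit++;
        }
        for (int p = 1; p <= 8; p <<= 1) {             /* check bits */
            int parity = 0;
            for (int i = 3; i <= 12; i++) {
                if ((i & (i - 1)) == 0) continue;
                if ((i & p) && (cw & (1u << i))) parity ^= 1;
            }
            if (parity) cw |= (uint16_t)(1u << p);
        }
        int all = 0;                                   /* overall parity */
        for (int i = 1; i <= 12; i++) if (cw & (1u << i)) all ^= 1;
        if (all) cw |= 1u;
        return cw;
    }

    /* Decode; returns 0 on success (corrected data in *out), -1 on an
       uncorrectable double-bit error, mirroring "interrupt the CPU and
       return incorrect data to the memory client". */
    static int secded_decode(uint16_t cw, uint8_t *out) {
        int syn = 0, all = cw & 1;       /* start with the overall bit */
        for (int i = 1; i <= 12; i++)
            if (cw & (1u << i)) { syn ^= i; all ^= 1; }
        if (syn && all)  cw ^= (uint16_t)(1u << syn);  /* single error: fix */
        else if (syn && !all) return -1;               /* double error      */
        uint8_t data = 0;
        for (int i = 3, bit = 0; i <= 12; i++) {
            if ((i & (i - 1)) == 0) continue;
            if (cw & (1u << i)) data |= (uint8_t)(1u << bit);
            bit++;
        }
        *out = data;
        return 0;
    }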
  • With reference to FIG. 8, a timing diagram is shown for an exemplary write to a new page performed by the [0049] unified system memory 202. With reference to FIG. 9, a timing diagram is shown for an exemplary read to a new page performed by the unified system memory 202. A read or write operation to the same SDRAM page is the same as the operation shown in FIGS. 8 and 9, except a same page operation does not need the precharge and activate cycles.
  • A request pipe is the control center for the [0050] memory controller 204. Memory client requests are placed in one end of the pipe and come out the other side as memory commands. The memory client queues are at the front of the pipe, followed by the arbitration, then the decode, and finally the issue/state machine. If there is room in their queue, a memory client can place a request in it. The arbitration logic looks at all of the requests at the top of the memory client queues and decides which request to start through the request pipe. From the arbitration stage, the request flows to the decode stage. During the decode stage, information about the request is collected and passed onto the issue/state machine stage. Based on this information, a state machine determines the proper sequence of commands for the unified system memory 202. The later portion of the issue stage decodes the state of the state machine into control signals that are latched and then sent across to the unified system memory 202. A request can sit in the issue stage for more than one cycle. While a request sits in the issue/state machine stage, the rest of the request pipe is stalled. Each stage of the request pipe is discussed herein.
  • All of the memory clients have queues, except for refresh. A refresh request is guaranteed to retire before another request is issued, so a queue is not necessary. The five memory client queues are simple two-port structures with the memory clients on the write side and the arbitration logic on the read side. If there is space available in a memory client queue, indicated by the assertion of clientres.gnt, a memory client can place a request into its queue. A memory client request consists of an address, a command (read, write or read-modify-write), a message, an ECC valid and a request valid indication. If both clientreq.valid and clientres.gnt are asserted, the request is latched into the memory client queue. If the pipeline is not stalled, the arbitration logic looks at all of the requests at the top of the memory client queues and determines which request to pop off and pass to the decode stage of the request pipe. [0051]
  • Because there is a request queue between the memory client and the arbiter, the clientres.gnt signal does not indicate that the request has retired. The request still needs to go through the arbitration process. To put it another way, memory client A might receive the clientres.gnt signal before memory client B, but if memory client B has a higher priority, its request might retire before the request from memory client A. [0052]
  • Arbiter [0053]
  • As stated above, the arbiter determines which memory client request to pass to the decode stage of the request pipe. This decision process has two steps. The first step is to determine if the arbitration slot for the current memory client is over or not. An arbitration slot is a series of requests from the same memory client. The number and type of requests allowed in one arbitration slot varies. Table 3, below, lists what each memory client can do in an arbitration slot. [0054]
    TABLE 3
    Requests allowed in an Arbitration Slot

    Client                       Possible Operations
    Graphics Back End            <= 16 memory word read with no page crossings
                                 <= 16 memory word write with no page crossings
    IPCE IC                      <= 8 memory word read with 1 page crossing
                                 <= 8 memory word write with 1 page crossing
                                 1 read-modify-write operation
    rendering engine, CPU, GRMC  <= 8 memory word read with no page crossings
                                 <= 8 memory word write with no page crossings
                                 1 read-modify-write operation
    REFRESH                      refresh 2 rows
  • Based on a state for the current arbitration slot and the next request from the current slot owner, the arbiter determines if the arbitration slot should end or not. If not, the request from the memory client who owns the current arbitration slot is passed to the decode stage. If the current arbitration slot is terminated, the arbiter uses the results from an arbitration algorithm to decide which request to pass to the decode stage. The arbitration algorithm ensures that the graphics back [0055] end IC 212 gets 1/2 of the arbitration slots, the input/output IC 210 gets 1/4, the image processing and compression and expansion IC 216 gets 1/8, the rendering engine 208 gets 1/16, the CPU 206 gets 1/32, and the refresh gets 1/64.
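The patent states the slot fractions but not the circuit that produces them. One hypothetical realization that yields exactly those fractions over a 64-slot round is to pick the slot owner from the run of trailing ones in a 6-bit slot counter (C sketch; the names, the treatment of the one leftover slot per round, and the counter scheme itself are assumptions, and a real arbiter would also skip an owner with no pending request):

    /* Owner of slot s (0..63): per round, GBE gets 32 slots, MACE 16,
       VICE 8, RE 4, CPU 2, REFRESH 1, and one slot is left spare. */
    typedef enum { GBE, MACE, VICE, RE, CPU, REFRESH, SPARE } client_t;

    static client_t slot_owner(unsigned slot) {
        unsigned t = 0;
        while (slot & 1u) { slot >>= 1u; t++; }   /* count trailing ones */
        switch (t) {
        case 0:  return GBE;      /* 1/2 of all slots */
        case 1:  return MACE;     /* 1/4              */
        case 2:  return VICE;     /* 1/8              */
        case 3:  return RE;       /* 1/16             */
        case 4:  return CPU;      /* 1/32             */
        case 5:  return REFRESH;  /* 1/64             */
        default: return SPARE;    /* slot 63: the fractions sum to 63/64 */
        }
    }

Walking slots 0 through 63 through slot_owner() hands out 32, 16, 8, 4, 2 and 1 slots respectively, matching the stated split.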
  • Predicting the average bandwidth for each memory client is difficult, but the worst-case slot frequency per memory client can be calculated. The first step is to determine the maximum number of cycles that each memory client can use during an arbitration slot. Table 4, below, shows the number of cycles associated with each type of operation. With reference to Table 4, below, “P” refers to precharge, “X” refers to a dead cycle, “A” refers to activate, “R0” refers to “read [0056] word 0”, “W0” refers to “write word 0”, and “Ref” refers to “refresh”.
    TABLE 4
    Maximum Cycles for a Memory Operation

    Operation            Command Sequence                             # of Cycles
    8 Word Read          P X A X R0 R1 R2 R3 R4 R5 R6 R7              12
    8 Word Write         P X A X W0 W1 W2 W3 W4 W5 W6 W7              12
    Read-Modify-Write    P X A X R0 X X X X X X W0                    12
    8 Word VICE Read     P X A X R0 X X P X A X R1 R2 R3 R4 R5 R6 R7  18
    with page crossing
    2 Row Refresh        P X Ref X X X X X Ref X X X X X              14
  • Table 5, below, lists the maximum number of cycles for each of the memory clients. [0057]
    TABLE 5
    Maximum # Cycles per Slot

    Memory Client             Operation                      # of cycles
    Graphics Back End         16 memory word read or write   20 cycles
    CPU, Rendering Engine,    8 memory word read or write    12 cycles
    MACE
    IPCE                      8 memory word read or write,   18 cycles
                              1 page crossing
    REFRESH                   refresh 2 rows                 14 cycles
  • Finally, slots per second for each memory client can be calculated. If all of the memory clients are requesting all of the time, every memory client will get a turn after 64 slots. This is referred to as a "round". In that round, the graphics back end gets 32 out of the 64 slots, the input/[0058] output IC 210 gets 16 out of the 64 slots, etc., so a round takes 32*20 + 16*12 + 8*18 + 4*12 + 2*12 + 14 = 1062 cycles.
    TABLE 6
    Slot Frequency for Each Client

    Client   Slot Frequency       Bandwidth if slot is fully utilized
    GBE      32 slots/15.93 us    1 GB/sec
             (1 slot/0.50 us)
    MACE     1 slot/1.00 us       256 MB/sec
    VICE     1 slot/2.00 us       128 MB/sec
    RE       1 slot/4.00 us       64 MB/sec
    CPU      1 slot/8.00 us       32 MB/sec
    Refresh  1 slot/16.00 us      NA
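The Table 6 figures follow mechanically from Tables 3 and 5. The sketch below (C) redoes the arithmetic, using 66.67 MHz as an assumed exact value for the nominal 66 MHz memory clock:

    #include <stdio.h>

    int main(void) {
        /* worst-case cycles per 64-slot round (Table 5 cycle counts) */
        int round = 32 * 20   /* GBE                        */
                  + 16 * 12   /* MACE (input/output IC 210) */
                  +  8 * 18   /* VICE (IPCE IC 216)         */
                  +  4 * 12   /* rendering engine           */
                  +  2 * 12   /* CPU                        */
                  +  1 * 14;  /* refresh                    */
        printf("round: %d cycles\n", round);        /* 1062        */
        printf("round: %.2f us\n", round / 66.67);  /* ~15.93 us   */

        /* GBE worst case: 32 slots x 16 words x 32 bytes per round */
        double bytes = 32.0 * 16 * 32;
        printf("GBE: %.2f GB/s\n", bytes / (round / 66.67e6) / 1e9); /* ~1 */
        return 0;
    }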
  • Decode Logic [0059]
  • The decode logic receives requests from the arbiter. Based on state maintained from the previous requests and information contained in the current request, the decode logic determines which memory bank to select, which of the four state machines in the next stage will handle the request, and whether or not the current request is on the same page as the previous request. This information is passed to the issue/state machine stage. [0060]
  • The [0061] unified system memory 202 is made up of 8 slots. Each slot can hold one SDRAM DIMM. An SDRAM DIMM is constructed from 1M×16 or 4M×16 SDRAM components and populated on the front only or the front and back side of the DIMM. Two DIMMs are required to make an external SDRAM bank. 1M×16 SDRAM components construct a 32 Mbyte external bank, while 4M×16 SDRAM components construct a 128 Mbyte external bank. The memory system can range in size from 32 Mbytes to 1 Gbyte.
  • Each SDRAM component has two internal banks, hence two possible open pages. The maximum number of external banks is 8 and the maximum number of internal banks is 16. The [0062] memory controller 204 only supports 4 open pages at a time. This issue will be discussed in detail later in this section.
  • The decode logic is explained below in more detail. During initialization, software probes the memory to determine how many banks of memory are present and the size of each bank. Based on this information, the software programs the 8 bank control registers. Each bank control register (please refer to the register section) has one bit that indicates the size of the bank and 5 bits for the upper address bits of that bank. Software must place the 64 Mbit external banks in the lower address range followed by any 16 Mbit external banks. This is to prevent gaps in the memory. The decode logic compares the upper address bits of the incoming request to the 8 bank control registers to determine which external bank to select. The number of bits that are compared is dependent on the size of the bank. For example, if the bank size is 64 Mbit, the decode logic compares bits 24:22 of the request address to bits 4:2 of the bank control register. If there is a match, that bank is selected. Each external bank has a separate chip select. If an incoming address matches more than one bank's control register, the bank with the lowest number is selected. If an incoming address does not match any of the bank control registers, a memory address error occurs. When an error occurs, pertinent information about the request is captured in error registers and the processor is interrupted, if the [0063] memory controller 204 interrupt is enabled. The request that caused the error is still sent to the next stage in the pipeline and is processed like a normal request, but the memory controller 204 deasserts all of the external bank selects so that the memory operation doesn't actually occur. Deasserting the external bank selects is also done when bit 6 of the rendering engine 208 message is set. The rendering engine 208 sets this bit when a request is generated using an invalid TLB entry.
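The bank match just described can be sketched as follows (C; the 64 Mbit comparison width is given in the text, the 16 Mbit width of bits 24:20 is inferred from the 32 Mbyte bank size against a 25-bit word address, and all structure names are illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool    big;    /* 1 = 64 Mbit components (128 MB bank), 0 = 16 Mbit (32 MB) */
        uint8_t upper;  /* 5 upper address bits of the bank */
    } bank_ctl_t;

    static bank_ctl_t bank_ctl[8];   /* programmed by software at init */

    /* Returns the lowest matching external bank, or -1 for a memory
       address error (error registers captured, CPU interrupted). */
    static int decode_bank(uint32_t addr /* 25-bit word address */) {
        for (int b = 0; b < 8; b++) {
            if (bank_ctl[b].big) {
                /* 64 Mbit bank: addr bits 24:22 vs register bits 4:2 */
                if (((addr >> 22) & 0x7) == ((bank_ctl[b].upper >> 2) & 0x7))
                    return b;
            } else {
                /* 16 Mbit bank: assumed addr bits 24:20 vs bits 4:0 */
                if (((addr >> 20) & 0x1f) == (bank_ctl[b].upper & 0x1f))
                    return b;
            }
        }
        return -1;   /* no match: suppress the external bank selects */
    }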
  • With reference to FIG. 10, although the [0064] memory controller 204 can handle any physical external bank configuration, we recommend that external bank 0 always be filled and that the external banks be placed in decreasing density order (for example a 64 Mbit external bank in bank 0 and a 16 Mbit external bank in bank 2).
  • The previous paragraph describes how the decode logic determines what external bank to select. This paragraph describes the method for determining page crossings and which bank state machine to use in the next stage of the pipeline. The row address, along with the internal and external bank bits for previous requests, are kept in a set of registers which are referred to as the row registers. Each row register corresponds to a bank state machine. There are four row registers (hence four bank state machines), so the decode logic can keep track of up to four open pages. The decode logic compares the internal/external bank bits of the new request with the four row registers. If there is a match, then the bank state machine corresponding to that row register is selected. If the new request does not match any of the row registers, one of the row registers is selected and the register is updated with the new request information. If the internal/external bank bits match one of the row registers and the row bits of the new request match the row bits in that register, then the request is on the same page; otherwise it is not. [0065]
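A sketch of the row-register match (C, illustrative; the text does not say how a victim row register is chosen when no entry matches, so a simple rotating pointer stands in here, and the field widths are assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     valid;
        uint8_t  bank;   /* internal + external bank bits  */
        uint16_t row;    /* open row (page) in that bank   */
    } row_reg_t;

    static row_reg_t row_reg[4];     /* one per bank state machine */
    static unsigned  next_victim;    /* trivial replacement pointer */

    /* Returns the bank state machine to use; *page_hit tells the issue
       stage whether the precharge/activate cycles can be skipped. */
    static int select_state_machine(uint8_t bank, uint16_t row, bool *page_hit) {
        for (int i = 0; i < 4; i++) {
            if (row_reg[i].valid && row_reg[i].bank == bank) {
                *page_hit = (row_reg[i].row == row);
                row_reg[i].row = row;    /* register now tracks this page */
                return i;
            }
        }
        int v = next_victim;             /* no match: reuse a row register */
        next_victim = (next_victim + 1) & 3;
        row_reg[v] = (row_reg_t){ .valid = true, .bank = bank, .row = row };
        *page_hit = false;
        return v;
    }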
  • State Machines and Issue Logic [0066]
  • The decode logic passes the request along with the external bank selects, state machine select and same page information to the issue/state machine stage. The selected bank state machine sequences through the proper states, while the issue logic decodes the state of the bank state machine into commands that are sent to the SDRAM DIMMs. In addition to the four bank state machines, there is a state machine dedicated to refresh and initialization operations. The initialization/refresh state machine sequences through special states for initialization and refresh while the four bank state machines are forced to an idle state. The bank state machines and the initialization/refresh state machine are discussed in more detail in the following sections. [0067]
  • Bank State Machines [0068]
  • The four bank state machines operate independently, subject only to conflicts for access to the control, address, and data signals. The bank state machines default to page mode operation. That is, the autoprecharge commands are not used, and the SDRAM bank must be explicitly precharged whenever there is a non-page-mode random reference. The decode stage passes the request along with the page information to the selected state machine which sequences through the proper states. At certain states, interval timers are started that inhibit the state machine from advancing to the next state until the SDRAM minimum interval requirements have been met. The bank state machines operate on one request at a time. That is, a request sequences through any required precharge and activation phases and then a read or write phase, at which point it is considered completed and the next request initiated. Finally, the state of the four bank state machines is decoded by the issue logic that generates the SDRAM control signals. [0069]
  • There are several SDRAM parameters that the state machines must obey. These parameters vary slightly from vendor to vendor, but to simplify the state machines, the most common parameters were chosen and hard coded into the interval timers. Any SDRAM that is not compatible with the parameters listed in the following table is not supported. [0070]
  • Tr2rp and Tr2w are additional timing parameters that explicitly define the interval between successive read, write, and precharge commands. These parameters ensure that successive commands do not cause conflicts on the data signals. While these parameters could be derived internally by a state machine sequencer, they are made explicit to simplify the state machines and use the same timer paradigm as the SDRAM parameters. [0071]
    TABLE 7
    SDRAM Parameters

    Parameter  Value  Description
    Trc        7      Activate bank A to Activate bank A
    Tras       5      Activate bank A to Precharge bank A
    Trp        2      Precharge bank A to Activate bank A
    Trrd       2      Activate bank A to Activate bank B
    Trcd       2      Activate bank A to Read bank A
    Twp        1      Datain bank A to Precharge bank A
    Tr2rp      2      Read bank A to Read or Precharge bank C
    Tr2w       6      Read bank A to Write bank A
  • With reference to Table 7, above, Banks A and B are in the same external bank while Bank C is in a different external bank. [0072]
  • With reference to FIG. 11, a flow diagram for the bank state machines is shown. As shown in FIG. 11, Trp, Trrd and Trcd are enforced by design. The Trc and Tras parameters have a timer for each of the four bank state machines, while the Tr2rp and Tr2w timers are common to all four bank state machines, because they are used to prevent conflicts on the shared data lines. [0073]
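An illustrative reduction of the timer scheme to code (C): per-bank Trc/Tras timers plus the shared Tr2rp/Tr2w timers, decremented once per memory clock, with a state machine allowed to advance only when the relevant timers read zero. Function names and granularity are assumptions:

    #include <stdbool.h>

    enum { TRC = 7, TRAS = 5, TR2RP = 2, TR2W = 6 };  /* Table 7 values */

    typedef struct { int trc, tras; } bank_timers_t;
    static bank_timers_t bt[4];   /* one Trc/Tras pair per bank state machine */
    static int tr2rp, tr2w;       /* shared: guard the common data lines      */

    /* called once per memory clock cycle */
    static void tick(void) {
        for (int i = 0; i < 4; i++) {
            if (bt[i].trc)  bt[i].trc--;
            if (bt[i].tras) bt[i].tras--;
        }
        if (tr2rp) tr2rp--;
        if (tr2w)  tr2w--;
    }

    static bool can_activate(int bank)  { return bt[bank].trc == 0; }
    static bool can_precharge(int bank) { return bt[bank].tras == 0 && tr2rp == 0; }
    static bool can_write(void)         { return tr2w == 0; }

    static void did_activate(int bank)  { bt[bank].trc = TRC; bt[bank].tras = TRAS; }
    static void did_read(void)          { tr2rp = TR2RP; tr2w = TR2W; }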
  • The initialization/refresh state machine has two functions, initialization and refresh. The initialization procedure is discussed first, followed by the refresh. After a reset, the initialization/refresh state machine sequences through the SDRAM initialization procedure, which is a precharge to all banks, followed by a mode set. The issue stage decodes the state of the initialization/refresh state machine into commands that are sent to the SDRAM. The mode set command programs the SDRAM mode register to a CAS latency of 2, a burst length of 1 and a sequential operation type. [0074]
  • The SDRAM requires that 4096 refresh cycles occur every 64 ms, that is, one row refresh every 15.6 microseconds on average. In order to comply with this requirement, there is a refresh memory client with a timer. The timer sends out a signal every 27 microseconds which causes the refresh memory client to make a request to the arbiter; since each request refreshes two rows, this sustains one row refresh per 13.5 microseconds, which meets the requirement. The arbiter treats refresh just like all the other memory clients. When the arbiter determines that the time for the refresh slot has come, the arbiter passes the refresh request to the decode stage. The decode stage invalidates all of the row registers and passes the request onto the state machine/issue stage. When a bank state machine sees that it is a refresh request, it goes to its idle state. The initialization/refresh state machine sequences through the refresh procedure which is a precharge to all banks followed by two refresh cycles. A refresh command puts the SDRAM in the automatic refresh mode. An address counter, internal to the device, increments the word and bank address during the refresh cycle. After a refresh cycle, the SDRAM is in the idle state, which means that all the pages are closed. This is why it is important that the bank state machines are forced to the idle state and the row registers are invalidated during a refresh request. [0075]
  • The initialization/refresh state machine is very similar in structure to the bank state machines and has timers to enforce SDRAM parameters. A Trc timer is used to enforce the Trc requirement between refresh cycles, and the outputs from the bank Tras timers are used to ensure that the “precharge all” command does not violate Tras for any of the active banks. [0076]
  • Data Pipe: [0077]
  • The main functions of the data pipe are to: (1) move data between a memory client and the [0078] unified system memory 202, (2) perform ECC operations and (3) merge new bytes from a memory client with old data from memory during a read-modify-write operation. Each of these functions is described below.
  • Data Flow: [0079]
  • With reference to FIG. 4, the data pipe has one stage which is in lock-step with the last stage of the request pipe. When a write request reaches the decode stage, the request pipe asserts clientres.wrrdy. The clientres.wrrdy signal indicates to the memory client that the data on the Memdata2mem_in bus has been latched into the ECC stage of the data pipe. The data is held in the ECC stage and flows out to the [0080] unified system memory 202 until the request is retired in the request pipe.
  • Incoming read data is latched in the data pipe, flows through the ECC correction logic and then is latched again before going onto the Memdata2client_out bus. The request pipe knows how many cycles the [0081] unified system memory 202 takes to return read response data. When the read response data is on the Memdata2client_out bus, the request pipe asserts clientres.rdrdy.
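Of the three data pipe functions listed above, the read-modify-write merge is the one not spelled out by the flow discussion; the byte-mask semantics of Table 1 imply a merge like the following sketch (C, shown on one 64-bit lane of the 256-bit word; the function name is illustrative):

    #include <stdint.h>

    /* Table 1 mask polarity: mask bit = 0 means "write this byte",
       1 means "keep the old byte". The hardware merges a 256-bit word
       against a 32-bit mask the same way, lane by lane. */
    static uint64_t rmw_merge(uint64_t old_data, uint64_t new_data, uint8_t mask) {
        uint64_t keep = 0;
        for (int byte = 0; byte < 8; byte++)
            if (mask & (1u << byte))          /* 1 = don't write this byte */
                keep |= 0xffull << (8 * byte);
        return (old_data & keep) | (new_data & ~keep);
    }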
  • The preferred embodiment of the present invention, a computer system architecture featuring dynamic memory allocation for graphics, is thus described. While the present invention has been described in particular embodiments, it should be appreciated that the present invention should not be construed as limited by such embodiments, but rather construed according to the below claims. [0082]

Claims (29)

What is claimed is:
1. A computer system comprising:
a memory controller;
a graphics rendering engine coupled to said memory controller;
a CPU coupled to said memory controller;
an image processor coupled to said memory controller;
a data compression/expansion device coupled to said memory controller;
an input/output device coupled to said memory controller;
a graphics back end device coupled to said memory controller;
a system memory, coupled to said memory controller via a high bandwidth data bus, said system memory providing read/write access, through said memory controller, for memory clients including said CPU, said input/output device, said graphics back end device, said image processor, said data compression/expansion device, said rendering engine, and said memory controller, wherein said memory controller is the interface between said memory clients and said system memory; and
translation hardware for mapping virtual addresses of pixel buffers to physical memory locations in said system memory wherein said pixel buffers are dynamically allocated as tiles of physically contiguous memory.
2. The computer system of
claim 1
wherein said translation hardware is implemented in said rendering engine.
3. The computer system of
claim 1
wherein said translation hardware is implemented in each of said rendering engine, said memory controller, said image processor, said data compression/expansion device, said graphics back end device, and said input/output device.
4. The computer system of
claim 1
wherein said system memory is implemented using synchronous DRAM.
5. The computer system of
claim 1
wherein said system memory is implemented using synchronous DRAM (SDRAM) accessed via a 256-bit wide memory data bus cycled at 66 MHz.
6. The computer system of
claim 1
wherein said tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of pixels.
7. The computer system of
claim 1
wherein said tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of 128 pixels wherein each pixel is a 4 byte pixel.
8. The computer system of
claim 1
wherein said rendering engine and said memory controller are implemented on a first IC.
9. The computer system of
claim 1
wherein said rendering engine and said memory controller are implemented on a first IC and said image processor and said data compression/expansion device are implemented on a second IC.
10. The computer system of
claim 1
wherein said dynamically allocated pixel buffers are comprised of n² tiles where n is an integer.
11. A computer system comprising:
a graphics rendering engine and a memory controller implemented on a first IC;
a CPU coupled to said first IC;
an image processor coupled to said first IC;
a data compression/expansion device coupled to said first IC;
an input/output device coupled to said first IC;
a graphics back end device coupled to said first IC;
a system memory, coupled to said first IC via a high bandwidth data bus, said system memory providing read/write access, through said first IC, for memory clients including said CPU, said input/output device, said graphics back end device, said image processor, said data compression/expansion device, said rendering engine, and said memory controller, wherein said memory controller is the interface between said memory clients and said system memory; and
translation hardware for mapping virtual addresses of pixel buffers to physical memory locations in said system memory wherein said pixel buffers are dynamically allocated as tiles of physically contiguous memory.
12. The computer system of
claim 11
wherein said translation hardware is implemented in said rendering engine.
13. The computer system of
claim 11
wherein said translation hardware is implemented in each of said rendering engine, said memory controller, said image processor, said data compression/expansion device, said graphics back end device, and said input/output device.
14. The computer system of
claim 11
wherein said system memory is implemented using synchronous DRAM.
15. The computer system of
claim 11
wherein said system memory is implemented using synchronous DRAM (SDRAM) accessed via a 256-bit wide memory data bus cycled at 66 MHz.
16. The computer system of
claim 11
wherein said tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of pixels.
17. The computer system of
claim 11
wherein said tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of 128 pixels wherein each pixel is a 4 byte pixel.
18. The computer system of
claim 11
wherein said image processor and said data compression/expansion are implemented on a second IC.
19. The computer system of
claim 11
wherein said first IC is coupled to said system memory by a demultiplexing bus comprising a first bus, coupled to said first IC and having 144 lines cycled at 133 MHz, a second bus, coupled to said system memory and having 288 lines cycled at 66 MHz, and a demultiplexer for demultiplexing signals propagating between said first bus and said second bus.
20. The computer system of claim 11 wherein said dynamically allocated pixel buffers are comprised of n² tiles where n is an integer.
21. A computer system comprising:
a CPU;
an input/output device;
a graphics back end unit;
a first IC including an image processor and a data compression and expansion device integrated therein;
a second IC including a graphics rendering engine and a memory controller device integrated therein;
a system memory which allows read/write access for memory clients including said CPU, said input/output device, said graphics back end unit, said image processor, said data compression/expansion device, said rendering engine, and said memory controller, wherein said memory controller is the interface between said memory clients and said system memory;
a high bandwidth data bus for transferring data between said system memory and said second IC; and
translation hardware for mapping virtual addresses of pixel buffers to physical memory locations in said system memory wherein said pixel buffers are dynamically allocated as tiles of physically contiguous memory.
22. The computer system of
claim 21
wherein said translation hardware is implemented in said rendering engine.
23. The computer system of
claim 21
wherein said translation hardware is implemented in each of said rendering engine, said memory controller, said image processor, said data compression/expansion device, said graphics back end unit, and said input/output device.
24. The computer system of
claim 21
wherein said system memory is implemented using synchronous DRAM.
25. The computer system of
claim 21
wherein said system memory is implemented using synchronous DRAM (SDRAM) accessed via a 256-bit wide memory data bus cycled at 66 MHz.
26. The computer system of
claim 21
wherein said tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of pixels.
27. The computer system of
claim 21
wherein said tiles are comprised of 64 kilobytes of physically contiguous memory arranged as 128 rows of 128 pixels wherein each pixel is a 4 byte pixel.
28. The computer system of
claim 21
wherein said second IC is coupled to said system memory by a demultiplexing bus comprising a first bus, coupled to said second IC and having 144 lines cycled at 133 MHz, a second bus, coupled to said system memory and having 288 lines cycled at 66 MHz, and a demultiplexer for demultiplexing signals propagating between said first bus and said second bus.
29. The computer system of
claim 21
wherein said dynamically allocated pixel buffers are comprised of n² tiles where n is an integer.
US09/137,067 1996-09-13 1998-08-20 Unified memory architecture for use in computer system Abandoned US20010019331A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/137,067 US20010019331A1 (en) 1996-09-13 1998-08-20 Unified memory architecture for use in computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/713,779 US6104417A (en) 1996-09-13 1996-09-13 Unified memory computer architecture with dynamic graphics memory allocation
US09/137,067 US20010019331A1 (en) 1996-09-13 1998-08-20 Unified memory architecture for use in computer system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/713,779 Continuation US6104417A (en) 1996-09-13 1996-09-13 Unified memory computer architecture with dynamic graphics memory allocation

Publications (1)

Publication Number Publication Date
US20010019331A1 true US20010019331A1 (en) 2001-09-06

Family

ID=24867506

Family Applications (2)

Application Number Title Priority Date Filing Date
US08/713,779 Expired - Lifetime US6104417A (en) 1996-09-13 1996-09-13 Unified memory computer architecture with dynamic graphics memory allocation
US09/137,067 Abandoned US20010019331A1 (en) 1996-09-13 1998-08-20 Unified memory architecture for use in computer system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US08/713,779 Expired - Lifetime US6104417A (en) 1996-09-13 1996-09-13 Unified memory computer architecture with dynamic graphics memory allocation

Country Status (5)

Country Link
US (2) US6104417A (en)
EP (1) EP0829820B1 (en)
JP (1) JPH10247138A (en)
CA (1) CA2214868C (en)
DE (1) DE69722117T2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183808A1 (en) * 2001-10-09 2004-09-23 William Radke Embedded memory system and method including data error correction
US20050024367A1 (en) * 2000-12-13 2005-02-03 William Radke Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US20060098021A1 (en) * 2004-11-11 2006-05-11 Samsung Electronics Co., Ltd. Graphics system and memory device for three-dimensional graphics acceleration and method for three dimensional graphics processing
US20060103658A1 (en) * 2004-11-12 2006-05-18 Via Technologies, Inc. Color compression using multiple planes in a multi-sample anti-aliasing scheme
US20060177122A1 (en) * 2005-02-07 2006-08-10 Sony Computer Entertainment Inc. Method and apparatus for particle manipulation using graphics processing
US20080278513A1 (en) * 2004-04-15 2008-11-13 Junichi Naoi Plotting Apparatus, Plotting Method, Information Processing Apparatus, and Information Processing Method
US20090160857A1 (en) * 2007-12-20 2009-06-25 Jim Rasmusson Unified Compression/Decompression Graphics Architecture
US20100030980A1 (en) * 2006-12-25 2010-02-04 Panasonic Corporation Memory control device, memory device, and memory control method
US20100118935A1 (en) * 2004-04-23 2010-05-13 Sumitomo Electric Industries, Ltd. Coding method for motion-image data, decoding method, terminal equipment executing these, and two-way interactive system
US20110066815A1 (en) * 2009-09-15 2011-03-17 Olympus Corporation Memory access control device and memory access control method
US7928989B1 (en) * 2006-07-28 2011-04-19 Nvidia Corporation Feedback and record of transformed vertices in a graphics library
US20120162264A1 (en) * 2010-12-22 2012-06-28 Hughes Gregory F System level graphics manipulations on protected content
US8271746B1 (en) * 2006-11-03 2012-09-18 Nvidia Corporation Tiering of linear clients
US10692171B2 (en) 2015-11-17 2020-06-23 Samsung Electronics Co., Ltd. Method of operating virtual address generator and method of operating system including the same

Families Citing this family (146)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW360823B (en) * 1996-09-30 1999-06-11 Hitachi Ltd Data processor and graphic processor
US6308248B1 (en) * 1996-12-31 2001-10-23 Compaq Computer Corporation Method and system for allocating memory space using mapping controller, page table and frame numbers
US9098297B2 (en) * 1997-05-08 2015-08-04 Nvidia Corporation Hardware accelerator for an object-oriented programming language
US6118462A (en) 1997-07-01 2000-09-12 Memtrax Llc Computer system controller having internal memory and external memory control
US6266753B1 (en) * 1997-07-10 2001-07-24 Cirrus Logic, Inc. Memory manager for multi-media apparatus and method therefor
US6075546A (en) * 1997-11-10 2000-06-13 Silicon Grahphics, Inc. Packetized command interface to graphics processor
US6275243B1 (en) * 1998-04-08 2001-08-14 Nvidia Corporation Method and apparatus for accelerating the transfer of graphical images
US6480205B1 (en) 1998-07-22 2002-11-12 Nvidia Corporation Method and apparatus for occlusion culling in graphics systems
US6393543B1 (en) * 1998-11-12 2002-05-21 Acuid Corporation Limited System and a method for transformation of memory device addresses
US6381683B1 (en) * 1998-12-09 2002-04-30 Advanced Micro Devices, Inc. Method and system for destination-sensitive memory control and access in data processing systems
US6219769B1 (en) 1998-12-09 2001-04-17 Advanced Micro Devices, Inc. Method and system for origin-sensitive memory control and access in data processing systems
US6260123B1 (en) * 1998-12-09 2001-07-10 Advanced Micro Devices, Inc. Method and system for memory control and access in data processing systems
US6510497B1 (en) 1998-12-09 2003-01-21 Advanced Micro Devices, Inc. Method and system for page-state sensitive memory control and access in data processing systems
US6546439B1 (en) 1998-12-09 2003-04-08 Advanced Micro Devices, Inc. Method and system for improved data access
US6226721B1 (en) 1998-12-09 2001-05-01 Advanced Micro Devices, Inc. Method and system for generating and utilizing speculative memory access requests in data processing systems
US6563506B1 (en) * 1998-12-14 2003-05-13 Ati International Srl Method and apparatus for memory bandwith allocation and control in a video graphics system
US6362826B1 (en) * 1999-01-15 2002-03-26 Intel Corporation Method and apparatus for implementing dynamic display memory
US6414688B1 (en) * 1999-01-29 2002-07-02 Micron Technology, Inc. Programmable graphics memory method
US6189082B1 (en) * 1999-01-29 2001-02-13 Neomagic Corp. Burst access of registers at non-consecutive addresses using a mapping control word
US6377268B1 (en) * 1999-01-29 2002-04-23 Micron Technology, Inc. Programmable graphics memory apparatus
US6288729B1 (en) * 1999-02-26 2001-09-11 Ati International Srl Method and apparatus for a graphics controller to extend graphics memory
TW457430B (en) * 1999-03-02 2001-10-01 Via Tech Inc Memory access control device
US6526583B1 (en) * 1999-03-05 2003-02-25 Teralogic, Inc. Interactive set-top box having a unified memory architecture
US6433785B1 (en) * 1999-04-09 2002-08-13 Intel Corporation Method and apparatus for improving processor to graphics device throughput
US6504549B1 (en) * 1999-05-19 2003-01-07 Ati International Srl Apparatus to arbitrate among clients requesting memory access in a video system and method thereof
TW550956B (en) * 1999-05-26 2003-09-01 Koninkl Philips Electronics Nv Digital video-processing unit
US6469703B1 (en) 1999-07-02 2002-10-22 Ati International Srl System of accessing data in a graphics system and method thereof
US6496192B1 (en) * 1999-08-05 2002-12-17 Matsushita Electric Industrial Co., Ltd. Modular architecture for image transposition memory using synchronous DRAM
US6618048B1 (en) 1999-10-28 2003-09-09 Nintendo Co., Ltd. 3D graphics rendering system for performing Z value clamping in near-Z range to maximize scene resolution of visually important Z components
US6717577B1 (en) 1999-10-28 2004-04-06 Nintendo Co., Ltd. Vertex cache for 3D computer graphics
US6198488B1 (en) * 1999-12-06 2001-03-06 Nvidia Transform, lighting and rasterization system embodied on a single semiconductor platform
US6452595B1 (en) * 1999-12-06 2002-09-17 Nvidia Corporation Integrated graphics processing unit with antialiasing
US7209140B1 (en) 1999-12-06 2007-04-24 Nvidia Corporation System, method and article of manufacture for a programmable vertex processing model with instruction set
US6844880B1 (en) 1999-12-06 2005-01-18 Nvidia Corporation System, method and computer program product for an improved programmable vertex processing model with instruction set
US6643752B1 (en) * 1999-12-09 2003-11-04 Rambus Inc. Transceiver with latency alignment circuitry
US20050010737A1 (en) * 2000-01-05 2005-01-13 Fred Ware Configurable width buffered module having splitter elements
US7010642B2 (en) * 2000-01-05 2006-03-07 Rambus Inc. System featuring a controller device and a memory module that includes an integrated circuit buffer device and a plurality of integrated circuit memory devices
US6502161B1 (en) 2000-01-05 2002-12-31 Rambus Inc. Memory system including a point-to-point linked memory subsystem
US7266634B2 (en) * 2000-01-05 2007-09-04 Rambus Inc. Configurable width buffered module having flyby elements
US7356639B2 (en) * 2000-01-05 2008-04-08 Rambus Inc. Configurable width buffered module having a bypass circuit
US7404032B2 (en) * 2000-01-05 2008-07-22 Rambus Inc. Configurable width buffered module having switch elements
US7363422B2 (en) * 2000-01-05 2008-04-22 Rambus Inc. Configurable width buffered module
US7196710B1 (en) 2000-08-23 2007-03-27 Nintendo Co., Ltd. Method and apparatus for buffering graphics data in a graphics system
US6700586B1 (en) 2000-08-23 2004-03-02 Nintendo Co., Ltd. Low cost graphics with stitching processing hardware support for skeletal animation
US6707458B1 (en) 2000-08-23 2004-03-16 Nintendo Co., Ltd. Method and apparatus for texture tiling in a graphics system
US6811489B1 (en) 2000-08-23 2004-11-02 Nintendo Co., Ltd. Controller interface for a graphics system
US6636214B1 (en) 2000-08-23 2003-10-21 Nintendo Co., Ltd. Method and apparatus for dynamically reconfiguring the order of hidden surface processing based on rendering mode
US7538772B1 (en) 2000-08-23 2009-05-26 Nintendo Co., Ltd. Graphics processing system with enhanced memory controller
US7576748B2 (en) 2000-11-28 2009-08-18 Nintendo Co. Ltd. Graphics system with embedded frame butter having reconfigurable pixel formats
JP4042088B2 (en) * 2000-08-25 2008-02-06 株式会社ルネサステクノロジ Memory access method
US6885378B1 (en) * 2000-09-28 2005-04-26 Intel Corporation Method and apparatus for the implementation of full-scene anti-aliasing supersampling
US8692844B1 (en) * 2000-09-28 2014-04-08 Nvidia Corporation Method and system for efficient antialiased rendering
US6859208B1 (en) * 2000-09-29 2005-02-22 Intel Corporation Shared translation address caching
US6853382B1 (en) 2000-10-13 2005-02-08 Nvidia Corporation Controller for a memory system having multiple partitions
US6753873B2 (en) * 2001-01-31 2004-06-22 General Electric Company Shared memory control between detector framing node and processor
US6864896B2 (en) * 2001-05-15 2005-03-08 Rambus Inc. Scalable unified memory architecture
GB2378108B (en) 2001-07-24 2005-08-17 Imagination Tech Ltd Three dimensional graphics system
EP1296237A1 (en) * 2001-09-25 2003-03-26 Texas Instruments Incorporated Data transfer controlled by task attributes
US6906720B2 (en) * 2002-03-12 2005-06-14 Sun Microsystems, Inc. Multipurpose memory system for use in a graphics system
US7508398B1 (en) 2002-08-27 2009-03-24 Nvidia Corporation Transparent antialiased memory access
US7085910B2 (en) * 2003-08-27 2006-08-01 Lsi Logic Corporation Memory window manager for control structure access
US8775997B2 (en) 2003-09-15 2014-07-08 Nvidia Corporation System and method for testing and configuring semiconductor functional circuits
US8768642B2 (en) 2003-09-15 2014-07-01 Nvidia Corporation System and method for remotely configuring semiconductor functional circuits
US8732644B1 (en) 2003-09-15 2014-05-20 Nvidia Corporation Micro electro mechanical switch system and method for testing and configuring semiconductor functional circuits
US7286134B1 (en) 2003-12-17 2007-10-23 Nvidia Corporation System and method for packing data in a tiled graphics memory
US7420568B1 (en) 2003-12-17 2008-09-02 Nvidia Corporation System and method for packing data in different formats in a tiled graphics memory
US8711161B1 (en) 2003-12-18 2014-04-29 Nvidia Corporation Functional component compensation reconfiguration system and method
US6999088B1 (en) 2003-12-23 2006-02-14 Nvidia Corporation Memory system having multiple subpartitions
US7296139B1 (en) 2004-01-30 2007-11-13 Nvidia Corporation In-memory table structure for virtual address translation system with translation units of variable range size
US7334108B1 (en) * 2004-01-30 2008-02-19 Nvidia Corporation Multi-client virtual address translation system with translation units of variable-range size
US7278008B1 (en) 2004-01-30 2007-10-02 Nvidia Corporation Virtual address translation system with caching of variable-range translation clusters
US20050237329A1 (en) * 2004-04-27 2005-10-27 Nvidia Corporation GPU rendering to system memory
US20050246502A1 (en) * 2004-04-28 2005-11-03 Texas Instruments Incorporated Dynamic memory mapping
US7227548B2 (en) * 2004-05-07 2007-06-05 Valve Corporation Method and system for determining illumination of models using an ambient cube
US7221369B1 (en) 2004-07-29 2007-05-22 Nvidia Corporation Apparatus, system, and method for delivering data to multiple memory clients via a unitary buffer
US7277098B2 (en) * 2004-08-23 2007-10-02 Via Technologies, Inc. Apparatus and method of an improved stencil shadow volume operation
US8723231B1 (en) 2004-09-15 2014-05-13 Nvidia Corporation Semiconductor die micro electro-mechanical switch management system and method
US8711156B1 (en) 2004-09-30 2014-04-29 Nvidia Corporation Method and system for remapping processing elements in a pipeline of a graphics processing unit
US8427496B1 (en) 2005-05-13 2013-04-23 Nvidia Corporation Method and system for implementing compression across a graphics bus interconnect
US20060282604A1 (en) 2005-05-27 2006-12-14 Ati Technologies, Inc. Methods and apparatus for processing graphics data using multiple processing circuits
US7493520B2 (en) * 2005-06-07 2009-02-17 Microsoft Corporation System and method for validating the graphical output of an updated software module
US7562271B2 (en) 2005-09-26 2009-07-14 Rambus Inc. Memory system topologies including a buffer device and an integrated circuit memory device
US11328764B2 (en) 2005-09-26 2022-05-10 Rambus Inc. Memory system topologies including a memory die stack
US7464225B2 (en) * 2005-09-26 2008-12-09 Rambus Inc. Memory module including a plurality of integrated circuit memory devices and a plurality of buffer devices in a matrix topology
US8212832B2 (en) * 2005-12-08 2012-07-03 Ati Technologies Ulc Method and apparatus with dynamic graphics surface memory allocation
US8698811B1 (en) 2005-12-15 2014-04-15 Nvidia Corporation Nested boustrophedonic patterns for rasterization
US8390645B1 (en) 2005-12-19 2013-03-05 Nvidia Corporation Method and system for rendering connecting antialiased line segments
US9117309B1 (en) 2005-12-19 2015-08-25 Nvidia Corporation Method and system for rendering polygons with a bounding box in a graphics processor unit
US7791617B2 (en) * 2005-12-19 2010-09-07 Nvidia Corporation Method and system for rendering polygons having abutting edges
US8218091B2 (en) * 2006-04-18 2012-07-10 Marvell World Trade Ltd. Shared memory multi video channel display apparatus and methods
US8284322B2 (en) * 2006-04-18 2012-10-09 Marvell World Trade Ltd. Shared memory multi video channel display apparatus and methods
US8264610B2 (en) * 2006-04-18 2012-09-11 Marvell World Trade Ltd. Shared memory multi video channel display apparatus and methods
US8928676B2 (en) * 2006-06-23 2015-01-06 Nvidia Corporation Method for parallel fine rasterization in a raster stage of a graphics pipeline
US8477134B1 (en) 2006-06-30 2013-07-02 Nvidia Corporation Conservative triage of polygon status using low precision edge evaluation and high precision edge evaluation
US20080028181A1 (en) * 2006-07-31 2008-01-31 Nvidia Corporation Dedicated mechanism for page mapping in a gpu
GB2449399B (en) * 2006-09-29 2009-05-06 Imagination Tech Ltd Improvements in memory management for systems for generating 3-dimensional computer images
JP2008090673A (en) * 2006-10-03 2008-04-17 Mitsubishi Electric Corp Cache memory control device
US7986327B1 (en) * 2006-10-23 2011-07-26 Nvidia Corporation Systems for efficient retrieval from tiled memory surface to linear memory display
US7805587B1 (en) 2006-11-01 2010-09-28 Nvidia Corporation Memory addressing controlled by PTE fields
US8427487B1 (en) 2006-11-02 2013-04-23 Nvidia Corporation Multiple tile output using interface compression in a raster stage
US8482567B1 (en) 2006-11-03 2013-07-09 Nvidia Corporation Line rasterization techniques
US8120613B2 (en) * 2006-11-29 2012-02-21 Siemens Medical Solutions Usa, Inc. Method and apparatus for real-time digital image acquisition, storage, and retrieval
KR100867640B1 (en) * 2007-02-06 2008-11-10 Samsung Electronics Co., Ltd. System on chip including image processing memory with multiple access
US8724483B2 (en) * 2007-10-22 2014-05-13 Nvidia Corporation Loopback configuration for bi-directional interfaces
US8063903B2 (en) * 2007-11-09 2011-11-22 Nvidia Corporation Edge evaluation techniques for graphics hardware
US9064333B2 (en) 2007-12-17 2015-06-23 Nvidia Corporation Interrupt handling techniques in the rasterizer of a GPU
US8780123B2 (en) 2007-12-17 2014-07-15 Nvidia Corporation Interrupt handling techniques in the rasterizer of a GPU
US8255783B2 (en) * 2008-04-23 2012-08-28 International Business Machines Corporation Apparatus, system and method for providing error protection for data-masking bits
US8681861B2 (en) 2008-05-01 2014-03-25 Nvidia Corporation Multistandard hardware video encoder
US8923385B2 (en) 2008-05-01 2014-12-30 Nvidia Corporation Rewind-enabled hardware encoder
JP5658430B2 (en) * 2008-08-15 2015-01-28 Panasonic Intellectual Property Management Co., Ltd. Image processing device
US8392667B2 (en) * 2008-12-12 2013-03-05 Nvidia Corporation Deadlock avoidance by marking CPU traffic as special
GB0823254D0 (en) 2008-12-19 2009-01-28 Imagination Tech Ltd Multi level display control list in tile based 3D computer graphics system
US8330766B1 (en) 2008-12-19 2012-12-11 Nvidia Corporation Zero-bandwidth clears
US8319783B1 (en) 2008-12-19 2012-11-27 Nvidia Corporation Index-based zero-bandwidth clears
GB0823468D0 (en) 2008-12-23 2009-01-28 Imagination Tech Ltd Display list control stream grouping in tile based 3D computer graphics systems
US20110063309A1 (en) * 2009-09-16 2011-03-17 Nvidia Corporation User interface for co-processing techniques on heterogeneous graphics processing units
US10175990B2 (en) * 2009-12-22 2019-01-08 Intel Corporation Gathering and scattering multiple data elements
US9530189B2 (en) 2009-12-31 2016-12-27 Nvidia Corporation Alternate reduction ratios and threshold mechanisms for framebuffer compression
US9331869B2 (en) 2010-03-04 2016-05-03 Nvidia Corporation Input/output request packet handling techniques by a device specific kernel mode driver
US8495464B2 (en) * 2010-06-28 2013-07-23 Intel Corporation Reliability support in memory systems without error correcting code support
US9171350B2 (en) 2010-10-28 2015-10-27 Nvidia Corporation Adaptive resolution DGPU rendering to provide constant framerate with free IGPU scale up
US9477597B2 (en) 2011-03-25 2016-10-25 Nvidia Corporation Techniques for different memory depths on different partitions
US8701057B2 (en) 2011-04-11 2014-04-15 Nvidia Corporation Design, layout, and manufacturing techniques for multivariant integrated circuits
US9529712B2 (en) 2011-07-26 2016-12-27 Nvidia Corporation Techniques for balancing accesses to memory having different memory types
US9886312B2 (en) 2011-09-28 2018-02-06 Microsoft Technology Licensing, Llc Dynamic provisioning of virtual video memory based on virtual video controller configuration
JP2013242694A (en) * 2012-05-21 2013-12-05 Renesas Mobile Corp Semiconductor device, electronic device, electronic system, and method of controlling electronic device
US9373182B2 (en) 2012-08-17 2016-06-21 Intel Corporation Memory sharing via a unified memory architecture
KR20140060141A (en) * 2012-11-09 2014-05-19 Samsung Electronics Co., Ltd. Display apparatus and method for controlling the same
US9047198B2 (en) 2012-11-29 2015-06-02 Apple Inc. Prefetching across page boundaries in hierarchically cached processors
US9607407B2 (en) 2012-12-31 2017-03-28 Nvidia Corporation Variable-width differential memory compression
US9591309B2 (en) 2012-12-31 2017-03-07 Nvidia Corporation Progressive lossy memory compression
US9235905B2 (en) 2013-03-13 2016-01-12 Ologn Technologies Ag Efficient screen image transfer
US9607356B2 (en) 2013-05-02 2017-03-28 Arm Limited Graphics processing systems
US9070200B2 (en) * 2013-05-02 2015-06-30 Arm Limited Graphics processing systems
US9741089B2 (en) 2013-05-02 2017-08-22 Arm Limited Graphics processing systems
US9767595B2 (en) 2013-05-02 2017-09-19 Arm Limited Graphics processing systems
US9710894B2 (en) 2013-06-04 2017-07-18 Nvidia Corporation System and method for enhanced multi-sample anti-aliasing
US9098924B2 (en) * 2013-07-15 2015-08-04 Nvidia Corporation Techniques for optimizing stencil buffers
US9514563B2 (en) 2013-08-30 2016-12-06 Arm Limited Graphics processing systems
JP5928914B2 (en) * 2014-03-17 2016-06-01 Sony Interactive Entertainment Inc. Graphics processing apparatus and graphics processing method
JP6313632B2 (en) * 2014-03-31 2018-04-18 Canon Inc. Image processing device
US9832388B2 (en) 2014-08-04 2017-11-28 Nvidia Corporation Deinterleaving interleaved high dynamic range image by using YUV interpolation
US20180012327A1 (en) * 2016-07-05 2018-01-11 Ubitus Inc. Overlaying multi-source media in VRAM
US10438569B2 (en) 2017-04-17 2019-10-08 Intel Corporation Consolidation of data compression using common sectored cache for graphics streams
US11321804B1 (en) * 2020-10-15 2022-05-03 Qualcomm Incorporated Techniques for flexible rendering operations

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0814803B2 (en) * 1986-05-23 1996-02-14 Hitachi, Ltd. Address translation method
DE3852457T2 (en) * 1987-07-31 1995-06-14 QMS, Inc. Page printing system with virtual memory
US5640543A (en) * 1992-06-19 1997-06-17 Intel Corporation Scalable multimedia platform architecture
WO1994016391A1 (en) * 1992-12-31 1994-07-21 Intel Corporation Bus to bus interface with address translation
US5450542A (en) * 1993-11-30 1995-09-12 Vlsi Technology, Inc. Bus interface with graphics and system paths for an integrated memory system

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194086B2 (en) 2000-12-13 2012-06-05 Round Rock Research, Llc Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US7724262B2 (en) 2000-12-13 2010-05-25 Round Rock Research, Llc Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US20100220103A1 (en) * 2000-12-13 2010-09-02 Round Rock Research, Llc Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US8446420B2 (en) 2000-12-13 2013-05-21 Round Rock Research, Llc Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US20110169846A1 (en) * 2000-12-13 2011-07-14 Round Rock Research, Llc Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US7916148B2 (en) 2000-12-13 2011-03-29 Round Rock Research, Llc Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US7379068B2 (en) 2000-12-13 2008-05-27 Micron Technology, Inc. Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US20080218525A1 (en) * 2000-12-13 2008-09-11 William Radke Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US20050024367A1 (en) * 2000-12-13 2005-02-03 William Radke Memory system and method for improved utilization of read and write bandwidth of a graphics processing system
US6956577B2 (en) * 2001-10-09 2005-10-18 Micron Technology, Inc. Embedded memory system and method including data error correction
US20040183808A1 (en) * 2001-10-09 2004-09-23 William Radke Embedded memory system and method including data error correction
US8203569B2 (en) * 2004-04-15 2012-06-19 Sony Computer Entertainment Inc. Graphics processor, graphics processing method, information processor and information processing method
US20080278513A1 (en) * 2004-04-15 2008-11-13 Junichi Naoi Plotting Apparatus, Plotting Method, Information Processing Apparatus, and Information Processing Method
US7983497B2 (en) * 2004-04-23 2011-07-19 Sumitomo Electric Industries, Ltd. Coding method for motion-image data, decoding method, terminal equipment executing these, and two-way interactive system
US20100118935A1 (en) * 2004-04-23 2010-05-13 Sumitomo Electric Industries, Ltd. Coding method for motion-image data, decoding method, terminal equipment executing these, and two-way interactive system
US20060098021A1 (en) * 2004-11-11 2006-05-11 Samsung Electronics Co., Ltd. Graphics system and memory device for three-dimensional graphics acceleration and method for three dimensional graphics processing
US20060103658A1 (en) * 2004-11-12 2006-05-18 Via Technologies, Inc. Color compression using multiple planes in a multi-sample anti-aliasing scheme
CN100357972C (en) * 2004-11-12 2007-12-26 VIA Technologies, Inc. Systems and methods for compressing computer graphics color data
US7126615B2 (en) * 2004-11-12 2006-10-24 Via Technologies, Inc. Color compression using multiple planes in a multi-sample anti-aliasing scheme
US20060177122A1 (en) * 2005-02-07 2006-08-10 Sony Computer Entertainment Inc. Method and apparatus for particle manipulation using graphics processing
US7928989B1 (en) * 2006-07-28 2011-04-19 Nvidia Corporation Feedback and record of transformed vertices in a graphics library
US8271746B1 (en) * 2006-11-03 2012-09-18 Nvidia Corporation Tiering of linear clients
US20100030980A1 (en) * 2006-12-25 2010-02-04 Panasonic Corporation Memory control device, memory device, and memory control method
US8307190B2 (en) * 2006-12-25 2012-11-06 Panasonic Corporation Memory control device, memory device, and memory control method
US8738888B2 (en) 2006-12-25 2014-05-27 Panasonic Corporation Memory control device, memory device, and memory control method
US20090160857A1 (en) * 2007-12-20 2009-06-25 Jim Rasmusson Unified Compression/Decompression Graphics Architecture
US9665951B2 (en) * 2007-12-20 2017-05-30 Telefonaktiebolaget Lm Ericsson (Publ) Unified compression/decompression graphics architecture
US20110066815A1 (en) * 2009-09-15 2011-03-17 Olympus Corporation Memory access control device and memory access control method
US20120162264A1 (en) * 2010-12-22 2012-06-28 Hughes Gregory F System level graphics manipulations on protected content
US8836727B2 (en) * 2010-12-22 2014-09-16 Apple Inc. System level graphics manipulations on protected content
US10692171B2 (en) 2015-11-17 2020-06-23 Samsung Electronics Co., Ltd. Method of operating virtual address generator and method of operating system including the same

Also Published As

Publication number Publication date
MX9706495A (en) 1998-06-30
CA2214868C (en) 2006-06-06
JPH10247138A (en) 1998-09-14
EP0829820B1 (en) 2003-05-21
CA2214868A1 (en) 1998-03-13
EP0829820A2 (en) 1998-03-18
DE69722117T2 (en) 2004-02-05
EP0829820A3 (en) 1998-11-18
US6104417A (en) 2000-08-15
DE69722117D1 (en) 2003-07-03

Similar Documents

Publication Publication Date Title
US6104417A (en) Unified memory computer architecture with dynamic graphics memory allocation
US6721864B2 (en) Programmable memory controller
US6745309B2 (en) Pipelined memory controller
US6026464A (en) Memory control system and method utilizing distributed memory controllers for multibank memory
US6622228B2 (en) System and method of processing memory requests in a pipelined memory controller
US5911149A (en) Apparatus and method for implementing a programmable shared memory with dual bus architecture
JP3976342B2 (en) Method and apparatus for enabling simultaneous access to shared memory from multiple agents
US6330645B1 (en) Multi-stream coherent memory controller apparatus and method
US7707328B2 (en) Memory access control circuit
US5506968A (en) Terminating access of an agent to a shared resource when a timer, started after a low latency agent requests access, reaches a predetermined value
US6591323B2 (en) Memory controller with arbitration among several strobe requests
JP7384806B2 (en) Scheduling memory requests for ganged memory devices
EP1474747A1 (en) Address space, bus system, memory controller and device system
US5822768A (en) Dual ported memory for a unified memory architecture
US6272583B1 (en) Microprocessor having built-in DRAM and internal data transfer paths wider and faster than independent external transfer paths
US5802581A (en) SDRAM memory controller with multiple arbitration points during a memory cycle
US5802597A (en) SDRAM memory controller while in burst four mode supporting single data accesses
US11360897B1 (en) Adaptive memory access management
JP2003316642A (en) Memory control circuit, DMA request block and memory access system
US6425020B1 (en) Systems and methods for passively transferring data across a selected single bus line independent of a control circuitry
JPH10144073A (en) Access mechanism for synchronous DRAM
MXPA97006495A (en) A unified memory architecture with dynamic allocation of graphics memory
JP2000172553A (en) Data processor
JPH04199450A (en) Direct memory access control circuit
KR20070098352A (en) Apparatus for bus arbitration

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001

Effective date: 20141014